The Turing Test

In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence, introducing a practical test for computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer, “No,” in response to, “Are you a computer?” and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and, if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, the computer is judged to be an intelligent, thinking entity.
Chess

At Bletchley Park, Turing illustrated his ideas on machine intelligence by reference to chess—a useful source of challenging and clearly defined problems against which proposed methods for problem solving could be tested. In principle, a chess-playing computer could play by searching exhaustively through all the available moves, but in practice this is impossible because it would involve examining an astronomically large number of moves. Heuristics are necessary to guide a narrower, more discriminative search. Although Turing experimented with designing chess programs, he had to content himself with theory in the absence of a computer to run his chess program. The first true AI programs had to await the arrival of stored-program electronic digital computers. In 1945 Turing predicted that computers would one day play very good chess, and just over 50 years later, in 1997, Deep Blue, a chess computer built by the International Business Machines Corporation (IBM), beat the reigning world chess champion, Garry Kasparov, in a six-game match.
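To make the contrast between exhaustive and heuristic search concrete, here is a minimal Python sketch of depth-limited minimax applied to a toy take-away game (take 1 to 3 stones; whoever takes the last stone wins). The game, the depth limit, and the crude position-scoring function are illustrative assumptions, not Turing's design or Deep Blue's actual algorithm; a chess program applies the same pattern to a vastly larger move tree, relying on the heuristic to score positions it cannot search to the end.

# Minimal sketch: depth-limited minimax with a heuristic cutoff, shown
# on a toy take-away game (take 1-3 stones; taking the last stone wins).
# The game and the scoring function are illustrative assumptions only.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def heuristic(stones, maximizers_turn):
    # Crude stand-in for a chess evaluation function: score the
    # position itself instead of searching to the end of the game.
    if stones == 0:
        return -1 if maximizers_turn else 1  # previous player just won
    return 0  # too deep to tell

def minimax(stones, depth, maximizers_turn):
    moves = legal_moves(stones)
    if depth == 0 or not moves:
        return heuristic(stones, maximizers_turn)  # stop searching here
    scores = [minimax(stones - m, depth - 1, not maximizers_turn)
              for m in moves]
    return max(scores) if maximizers_turn else min(scores)

def best_move(stones, depth=4):
    # Pick the move whose subtree looks best for the side to move.
    return max(legal_moves(stones),
               key=lambda m: minimax(stones - m, depth - 1, False))

print(best_move(7))  # -> 3: leaving 4 stones puts the opponent in a lost position

Raising the depth limit trades time for accuracy; exhaustive search is simply the limiting case in which the heuristic is never needed.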
Theoretical work

The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing’s conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines. During World War II, Turing was a leading cryptanalyst at the Government Code and Cypher School in Bletchley Park, Buckinghamshire, England. Turing could not turn to the project of building a stored-program electronic computing machine until the cessation of hostilities in Europe in 1945.
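Turing's abstract machine is simple enough to mimic in a few lines. The Python sketch below is a toy illustration under stated assumptions: a dictionary stands in for the limitless memory, and an invented two-rule instruction table (which appends a 1 to a block of 1s) plays the role of the program. It illustrates the idea, not Turing's own formulation.

# Toy sketch of Turing's abstract machine: a limitless tape of symbols,
# a scanner that reads and writes one symbol at a time, and a table of
# instructions. The two-rule example program is an invented illustration.

def run(program, tape, state="start", head=0):
    """program maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))        # sparse dict = effectively limitless memory
    while state != "halt":
        symbol = cells.get(head, " ")    # unwritten cells read as blank
        write, move, state = program[(state, symbol)]
        cells[head] = write              # the scanner writes a symbol...
        head += {"L": -1, "R": 1}[move]  # ...then moves one cell left or right
    return "".join(cells[i] for i in sorted(cells)).strip()

# Instruction table: scan right past the 1s, append one more 1, halt.
program = {
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): ("1", "R", "halt"),
}
print(run(program, "111"))  # -> "1111"

Because the instruction table is itself just symbols, it too could be written onto the tape and interpreted by a fixed machine, which is the step that turns this sketch into the stored-program, universal machine described above.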
Strong AI, applied AI, and cognitive simulation

Employing the methods outlined above, AI research attempts to reach one of three goals: strong AI, applied AI, or cognitive simulation. Strong AI aims to build machines that think. (The term strong AI was introduced for this category of research in 1980 by the philosopher John Searle of the University of California at Berkeley.) The ultimate ambition of strong AI is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. As is described in the section Early milestones in AI, this goal generated great interest in the 1950s and ’60s, but such optimism has given way to an appreciation of the extreme difficulties involved. To date, progress has been meagre. Some critics doubt whether research will produce even a system with the overall intellectual ability of an ant in the foreseeable future. Indeed, some researchers working in AI’s other two branches view strong AI as not worth pursuing. Applied AI, also known as advanced information processing, aims to produce commercially viable “smart” systems, such as expert medical diagnosis systems and stock-trading systems.
Symbolic vs. connectionist approaches

AI research follows two distinct, and to some extent competing, methods: the symbolic (or “top-down”) approach, and the connectionist (or “bottom-up”) approach. The top-down approach seeks to replicate intelligence by analyzing cognition independent of the biological structure of the brain, in terms of the processing of symbols—whence the symbolic label. The bottom-up approach, on the other hand, involves creating artificial neural networks in imitation of the brain’s structure—hence the connectionist label. To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by “tuning” the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.) In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions of the letterforms.
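The following Python sketch illustrates the bottom-up "tuning" idea on the smallest possible scale: a single artificial neuron is shown two invented 3x3 letter bitmaps, and its connection weights are nudged after each mistake until it separates them. The bitmaps, labels, and learning rate are toy assumptions; a practical character recognizer uses many neurons and far richer data.

# Toy sketch of "tuning" a network: one neuron, two invented 3x3 bitmaps.
# A crude 'L' and a crude 'T', flattened to 9 pixels each.
L = [1, 0, 0, 1, 0, 0, 1, 1, 1]
T = [1, 1, 1, 0, 1, 0, 0, 1, 0]
examples = [(L, 1), (T, -1)]  # target output: +1 for 'L', -1 for 'T'

weights = [0.0] * 9  # responsiveness of each input "pathway"

def predict(pixels):
    total = sum(w * p for w, p in zip(weights, pixels))
    return 1 if total >= 0 else -1

for _ in range(10):  # a few passes over the examples
    for pixels, target in examples:
        error = target - predict(pixels)  # 0 if right, +/-2 if wrong
        for i in range(9):
            # Tune each pathway toward the desired response.
            weights[i] += 0.1 * error * pixels[i]

print(predict(L), predict(T))  # -> 1 -1 once tuning has converged

A top-down counterpart would skip the weights entirely and test each bitmap against hand-written geometric rules, for example "a vertical stroke meeting a horizontal stroke at the bottom left".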
Deep learning, big data and artificial general intelligence: 2011–present

In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers, and advanced machine learning techniques were successfully applied to many problems throughout the economy. In fact, McKinsey Global Institute estimated in their famous paper "Big data: The next frontier for innovation, competition, and productivity" that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data". By 2016, the market for AI-related products, hardware, and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a "frenzy". The applications of big data began to reach into other fields as well, such as training models in ecology and for various applications in economics. Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.
AI 1993–2011

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence". AI was both more cautious and more successful than it had ever been.

Milestones and Moore's law

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.
Bust: the second AI winter 1987–1993

The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors – the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

AI winter

The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow. Their fears were well founded: in the late 1980s and early 1990s, AI suffered a series of financial setbacks. The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power, and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others.
Boom 1980–1987

In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world, and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

The rise of expert systems

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach. Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem), and their simple design made it relatively easy for programs to be built and then modified once they were in place.
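The core of such a system is small enough to sketch. The Python fragment below applies if-then rules by forward chaining until no new conclusions appear; the animal-identification rules and facts are invented toy examples, not the actual knowledge bases of Dendral or MYCIN.

# Toy sketch of an expert system's core: if-then rules applied by
# forward chaining. The rules and facts are invented examples.

rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "swims", "cannot_fly"}, "is_penguin"),
    ({"has_fur", "eats_meat"}, "is_carnivore"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until nothing new is concluded
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule fires, adding a new fact
                changed = True
    return facts

print(forward_chain({"has_feathers", "swims", "cannot_fly"}))
# -> the input facts plus 'is_bird' and 'is_penguin'

Because the knowledge lives in the rule list rather than in the control loop, an expert can extend such a system simply by adding rules, which is part of what made the design easy to build and modify.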
The first AI winter 1974–1980

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[84] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons. Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.

The problems

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys". AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.
The golden years 1956–1974

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply "astonishing": computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years. Government agencies like DARPA poured money into the new field.

The work

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Reasoning as search

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction), as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search".
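That loop of stepping forward and backtracking is easy to show in miniature. The Python sketch below searches an invented three-by-three maze for a goal cell, retreating whenever a step leads out of bounds, into a wall, or back onto its own path; the early programs applied the same pattern to moves and deductions instead of maze steps.

# Toy sketch of "reasoning as search": step toward the goal, backtrack
# at dead ends. The maze is an invented example.
MAZE = ["S.#",
        ".##",
        "..G"]  # S = start, G = goal, # = wall, . = open floor

def solve(pos=(0, 0), path=()):
    r, c = pos
    if (not (0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]))
            or MAZE[r][c] == "#" or pos in path):
        return None  # dead end: the caller backtracks and tries elsewhere
    path = path + (pos,)
    if MAZE[r][c] == "G":
        return path  # goal reached
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # try each step in turn
        found = solve((r + dr, c + dc), path)
        if found:
            return found
    return None  # every step failed: backtrack

print(solve())  # -> ((0, 0), (1, 0), (2, 0), (2, 1), (2, 2))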
The birth of artificial intelligence 1952–1956

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

Cybernetics and early neural networks

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.
History of Artificial Intelligence

The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true.
Introduction

Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—as, for example, discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

Yellow Mustard

Yellow mustard is very beneficial if you have muscle cramps or these four other problems.

1. Yellow mustard contains nutrients like iron, carbohydrates, omega-3 and calcium, as well as phosphorus and zinc, which help in fighting bacteria and germs and prevent infections.

2. Yellow mustard is beneficial for muscle cramps, strain and pain. Applying yellow mustard paste on the affected area provides relief.

3. Consuming it can increase saliva production up to 8 times, which improves digestion and further increases metabolism. It is great for improving digestive power.

4. It reduces the risk of infection by increasing the body's immunity. Apart from this, it also works as an antifungal and antiseptic.

5. Yellow mustard is also beneficial in keeping blood pressure normal. But before consuming it, get your blood pressure checked by a doctor and take their advice.
Meningitis

Meningitis is also called brain fever. It is usually caused by infection with viruses, bacteria, fungi, parasites, and certain other organisms. Physical defects or a weakened immune system may be linked to recurrent bacterial meningitis. It is a type of infection that causes swelling in the membranes that protect the brain and spinal cord. In most cases the cause is a virus. However, some non-infectious causes of meningitis also exist.

Symptoms of meningitis:

• In young children, the following symptoms should be observed: crying loudly or moaning, fast or abnormal breathing, a pale face or rashes, and red or purple spots.

• In older children, the following symptoms should be observed: stiff neck, severe pain in the back and joints, throbbing headache, irritation by bright light, very cold hands and feet, trembling, and rapid breathing.

• Muscle pain
• Cold hands and feet
• Fever
• Headache
• Vomiting

Ayurvedic treatment:

• Give complete rest to the body.
• Keep the patie...

Bone Fracture

Fracture & Dislocation of Bone

If you have a fracture in any of your bones, you should seek medical help immediately. Once the broken bone is immobilized, you can start taking natural remedies to heal it. Although there is no magic potion to heal bone fractures in half the time, there are some measures you can take to minimize recovery time. According to the Center for Better Bones, these remedies can help you heal a broken bone quickly.

Ayurvedic treatment:

1. The major mineral in bones is calcium, so calcium is essential for bone health. To heal bone fractures quickly, you should include this important mineral in your diet. It is found in abundance in sea vegetables, green leafy vegetables, fish such as salmon and sardines, and dairy products such as yogurt, kefir, and amasai.

2. Vitamin K is essential for blood clotting and bone formation. Vitamin K1 is found in kale, broccoli, spinach and other green vegetables, whereas vitamin K2 is found in abundance in raw dairy products like cheese.

Lung Cancer

Cancer of the Lungs

Lung cancer occurs in the lining of the bronchial tubes, which work to provide oxygen to our lungs and blood. Chemotherapy and surgery are suggested by doctors to treat lung cancer, but there are some home remedies which can be helpful in treating this cancer without any side effects.

Ayurvedic treatment:

1. A healthy diet of raw fruits, vegetables, grains, etc. is beneficial in lung cancer. Consuming lots of salad and distilled water is essential for patients. However, consumption of peanuts is prohibited in the case of lung cancer. Citrus fruits and juices should also be taken by patients regularly in the afternoon.

2. Sugar and beef (cow or buffalo meat) should not be eaten with this cancer, because they increase the growth of cancer cells. Food containing sugar should also be avoided, and one should stay away from smoking and alcohol consumption.

3. Vitamin D is very important for patients suffering from lung cancer. Exposure to sunlight is the best natural source of vitamin D.

Astronomy

ASTRONOMY

Astronomy is defined as the study of the objects that lie beyond our planet Earth and the processes by which these objects interact with one another. We will see, though, that it is much more. It is also humanity's attempt to organize what we learn into a clear history of the universe, from the instant of its birth in the Big Bang to the present moment. In considering the history of the universe, we will see again and again that the cosmos evolves; it changes in profound ways over long periods of time. For example, the universe made the carbon, the calcium, and the oxygen necessary to construct something as interesting and complicated as you. Today, many billions of years later, the universe has evolved into a more hospitable place for life. Tracing the evolutionary processes that continue to shape the universe is one of the most important (and satisfying) parts of modern astronomy. The ultimate judge in science is always what nature itself reveals based on observations and experiments.