What if these theories are really true, and we were magically shrunk and put into someone's brain while he was thinking. We would see all the pumps, pistons, gears and levers working away, and we would be able to describe their workings completely, in mechanical terms, thereby completely describing the thought processes of the brain. But that description would nowhere contain any mention of thought! It would contain nothing but descriptions of pumps, pistons, levers!
Gottfried Wilhelm Leibniz (1679).
Not even a century ago -- in fact, not even a half-century ago -- few people could have imagined the present-day world, with computers running most government and business processes and the Internet reaching millions of homes. It would have been nearly impossible then to foresee artificial intelligence (AI), or that scientists would try to create a machine able to learn, adapt, reason, and correct or improve itself. Whether this will become a reality is still unknown. Artificial-life pioneer Chris Langton doubts that such an "intelligent entity" will ever be acknowledged. He believes that "when scientists are faced with the choice of either admitting that the computer process is alive, or moving the goalposts to exclude the computer from the exclusive club of living organisms, they will choose the latter." Is this true? Will humans never admit that a computer can actually function as real life? Or will they instead decide there is nothing special about life, and that it can therefore be designed, built and replicated? At least for the time being, there is no answer to this dilemma.
According to the American Association for Artificial Intelligence, AI is "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines." The evolution of this science became noticeable as early as 1821, when Charles Babbage stared at a table of logarithms and said, "I think that all these tables might be calculated by machinery." From then on, Babbage devoted his life to developing the first programmable computer.
Much later, in 1943, Babbage's idea finally took hold when Warren McCulloch (a psychiatrist, cybernetician, philosopher, and poet) and Walter Pitts (a research student in mathematics) published an innovative paper combining early twentieth-century ideas on computation, logic, and the nervous system -- a paper that promised to revolutionize psychology and philosophy. The next year, Harvard University completed the first American programmable computer, the Mark I.
It did not take long for British scientist Alan Turing to see the similarity between the computational process and human thinking. In his paper "Computing Machinery and Intelligence," he charted the direction for the remainder of the century -- developing computers for game playing, decision-making, natural-language understanding, translation, theorem proving and code breaking.
To help recognize if and when a computer had actually become intelligent, Turing suggested the "imitation game": an interrogator interviews a human being and a computer, communicating entirely by textual messages, without knowing which is which. Turing argued that if the interrogator could not distinguish the two by questioning, then it would be unreasonable not to call the computer intelligent.
Turing's game is now usually called "the Turing test for intelligence."
In the 1950s, Newell, Shaw and Simon created the Logic Theorist program (followed by their General Problem Solver), which used recursive search techniques -- defining a solution in terms of itself. IBM developed the first program that could play a full game of chess in 1957. The following year, Newell, Shaw and Simon noted, "There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until -- in a visible future -- the range of problems they can handle will be co-extensive with the range to which the human mind has been applied" (Simon, p. 3).
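"Defining a solution in terms of itself" can be made concrete with a short sketch (illustrative Python, not a reconstruction of Newell, Shaw and Simon's actual program; the toy problem below is invented for the example). A recursive search procedure treats the solution from any state as that state plus the solution from a neighboring state, bottoming out when the goal is reached:

```python
def solve(state, goal, moves, visited=None):
    """Recursive search: the solution from `state` is defined in terms
    of the solutions from the states one move away; the base case is
    reaching the goal. `moves` maps a state to its successor states."""
    if visited is None:
        visited = set()
    if state == goal:
        return [state]
    visited.add(state)
    for nxt in moves(state):
        if nxt not in visited:
            path = solve(nxt, goal, moves, visited)
            if path:
                return [state] + path
    return None  # no route from this state

# Toy problem: reach 10 from 1, where each move doubles or adds 1.
print(solve(1, 10, lambda n: [n * 2, n + 1] if n < 10 else []))
# → [1, 2, 4, 8, 9, 10]
```

The `visited` set prevents the recursion from revisiting states, the same bookkeeping any practical recursive search needs to terminate.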
In 1967, an MIT computer won the first tournament match against a human player. In 1988 the world chess champion Garry Kasparov said there was "no way" a grandmaster would be defeated by a computer in tournament play before 2000. Ten months later he lost the bet. Even so, many people changed their tune and said that winning a championship game did not really require "real" intelligence. For a number of people, the connection between human and machine was getting a little too close for comfort.
This was exactly why Turing had devised his test -- any other attempt to define intelligence seemed to run into problems. AI specialists did not even get excited about the win, because the computer, "which put custom chips inside a machine," was seen as a kind of idiot savant, able to play a good game of chess without any understanding of why it did what it did (Gershenfeld, 1999, p. 130).
This is a curious argument. It "retroactively adds a clause to the Turing test, demanding that not only must a machine be able to match the performance of humans at quintessentially intelligent tasks such as chess or conversation, but the way that it does so must be deemed to be satisfactory" (ibid).
Since then, the basic question has not been whether an advanced computer can be built, but rather what intelligence is. Fatmi and Young define intelligence as "that faculty of mind by which order is perceived in a situation previously considered disordered." Another definition holds that something intelligent should be able to improve its own processes (Martin, 2000, p. 46).
By the mid-1960s, AI was being pursued by researchers worldwide, yet computer memories remained very limited. Perception and knowledge representation in computers became the theme of many AI studies. For example, in the Blocks Micro World project at MIT, the program SHRDLU viewed a collection of pure geometric shapes and interpreted what it saw. It then manipulated the blocks and expressed its perceptions, activities, and motivations. SHRDLU could respond to commands typed in natural English, such as, "Will you please stack up both of the red blocks and either a green cube or a pyramid." The program would plan a sequence of actions, and its robot arm would arrange the blocks appropriately. SHRDLU could correctly answer questions about its world of blocks -- for example, "Can a pyramid be supported by a pyramid?" (SHRDLU attempts to stack two pyramids and fails) and "Is there anything which is bigger than every pyramid but is not as wide as the thing that supports it?" (to which SHRDLU answered "Yes, the blue block").
Although SHRDLU was initially seen as a major breakthrough, the program's developer, Winograd, soon announced this work was a dead end. The techniques pioneered in the program proved unsuitable for use in wider applications. Moreover, the appearance that SHRDLU gave of understanding the blocks and English statements concerning them was in fact false. SHRDLU had no idea what a red block was.
Since then, computers have been created that are more "intelligent" in narrow domains such as diagnosing medical conditions, trading stock and guiding missiles. Copeland says that these so-called "expert systems" have come much closer to the idea of AI. An expert system is a computer program dedicated to solving problems and giving advice within a specialized area of knowledge, and a good one can match the performance of a human specialist.
The basic components of an expert system are a "knowledge base" or KB and an "inference engine." The information in the KB is obtained by interviewing experts on a particular topic. The interviewer, or "knowledge engineer," organizes the gathered information into a collection of "production rules" typically of "if-then" structure. The inference engine enables the expert system to draw deductions from the rules in the KB. For example, if the KB contains production rules "if x then y" and "if y then z," the inference engine is able to deduce "if x then z." The expert system might then query its user "is x true in the situation that we are considering?" or, for example, "does the patient have a rash?" If the answer is affirmative, the system will proceed to infer z (ibid).
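The if-then chain just described can be sketched in a few lines (hypothetical Python; the rule and fact names are invented for illustration, not drawn from any real expert system). A forward-chaining inference engine keeps firing any production rule whose conditions are all established facts, until a full pass adds nothing new:

```python
# Production rules as (set of conditions, conclusion) pairs. These are
# invented examples: "if rash, then measles is possible"; "if measles
# is possible and fever, then refer to a doctor".
RULES = [
    ({"rash"}, "measles_possible"),
    ({"measles_possible", "fever"}, "refer_to_doctor"),
]

def infer(facts, rules):
    """Forward chaining: fire every rule whose conditions are all known
    facts, add its conclusion, and repeat until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The user answered "yes" to "does the patient have a rash?" and to a
# question about fever, so the engine chains both rules.
print(sorted(infer({"rash", "fever"}, RULES)))
# → ['fever', 'measles_possible', 'rash', 'refer_to_doctor']
```

The same loop, given "if x then y" and "if y then z", derives z once x is asserted -- the deduction described in the text.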
However, expert systems have no common sense and no understanding of what they are for, of the limits of their applicability, or of how their recommendations fit into a larger context. If a medical diagnosis program is told that a patient with a gunshot wound is bleeding to death, it attempts to diagnose a bacterial cause for the patient's symptoms. Expert systems can also make absurd errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age have been accidentally swapped by a clerk.
One of the most advanced such programs has been developed by Douglas Lenat (Martin, 2000, p. 439). In the 1980s, he set out to understand what it would take to give a computer common sense, "or a vast number of ordinary-sounding pieces of knowledge that, when used collectively, enable…