Artificial Intelligence

What if these theories are really true, and we were magically shrunk and put into someone's brain while he was thinking? We would see all the pumps, pistons, gears and levers working away, and we would be able to describe their workings completely, in mechanical terms, thereby completely describing the thought processes of the brain. But that description would nowhere contain any mention of thought! It would contain nothing but descriptions of pumps, pistons, levers!

Gottfried Wilhelm Leibniz (1679).

Not even a century ago -- in fact, not even a half-century ago -- few people could have imagined the present-day world, with computers running most government and business processes and the Internet reaching millions of homes. It would have been nearly impossible, then, to comprehend artificial intelligence (AI): the attempt to create a machine that can learn, adapt, reason, and correct or improve itself. Whether this will become a reality is still unknown. Artificial-life pioneer Christopher Langton suggests that such an "intelligent entity" will never be acknowledged as one. He believes that "when scientists are faced with the choice of either admitting that the computer process is alive, or moving the goalposts to exclude the computer from the exclusive club of living organisms, they will choose the latter." Is this true? Will humans never admit that a computer can actually be alive? Or will they instead decide there is nothing special about life, and that it can therefore be designed, built and replicated? At least for the time being, there is no answer to this dilemma.

According to the American Association for Artificial Intelligence, AI is "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines." The evolution of this science can be traced to 1821, when Charles Babbage stared at a table of logarithms and remarked, "I think that all these tables might be calculated by machinery." From then on, Babbage devoted his life to developing the first programmable computer.

Much later, in 1943, Babbage's idea finally took hold when Warren McCulloch (a psychiatrist, cybernetician, philosopher, and poet) and Walter Pitts (a research student in mathematics) published an innovative paper combining early twentieth-century ideas on computation, logic, and the nervous system. In fact, the report promised to revolutionize psychology and philosophy. The next year, Harvard University completed the first American programmable computer, the Mark I.

It did not take long for British scientist Alan Turing to see the similarity of the computational process to that of human thinking. In his paper "Computing Machinery and Intelligence," he laid out the direction for the remainder of the century: developing computers for game playing, decision-making, natural language understanding, translation, theorem proving and encryption code cracking.

To help recognize if and when a computer had actually become intelligent, Turing proposed the "imitation game": an interrogator interviews a human being and a computer, communicating entirely by textual messages, without knowing which is which. Turing argued that if the interrogator could not distinguish the two by questioning, then it would be unreasonable not to call the computer intelligent.

Turing's game is now usually called "the Turing test for intelligence."
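
The mechanics of the test are simple enough to sketch in code. The Python fragment below is a minimal, hypothetical illustration of the text-only protocol Turing described; both respondents are invented stand-ins (the machine's canned reply is a placeholder, not a real conversational program), and the sample question is borrowed from Turing's own paper.

```python
# A minimal sketch of Turing's imitation game: the interrogator exchanges
# only textual messages with two hidden respondents and must guess which
# one is the machine. Both respondents here are hypothetical stand-ins.
import random

def human_respondent(question):
    # In a real run, a person would type this answer.
    return input(f"[hidden human] {question}\n> ")

def machine_respondent(question):
    # Placeholder canned reply; a serious contender would need far more.
    return "That is an interesting question."

def imitation_game(questions):
    # Hide the identities behind the labels A and B, as in Turing's paper.
    labels = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        labels = {"A": machine_respondent, "B": human_respondent}
    for q in questions:
        for name, respond in labels.items():
            print(f"{name}: {respond(q)}")   # the text-only channel
    guess = input("Which respondent is the machine (A/B)?\n> ")
    return labels[guess.strip().upper()] is machine_respondent  # True: caught

# imitation_game(["Write me a sonnet on the subject of the Forth Bridge."])
```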

In the 1950s, Newell, Shaw and Simon created the Logic Theorist (followed by the General Problem Solver), programs that used recursive search techniques -- defining a solution in terms of smaller instances of itself. IBM developed the first program that could play a full game of chess in 1957. The following year, Newell, Shaw and Simon noted, "There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until -- in a visible future -- the range of problems they can handle will be co-extensive with the range to which the human mind has been applied" (Simon, p. 3).
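
Since the phrase "defining a solution in terms of itself" is doing a lot of work here, a small illustration may help. The Python sketch below shows recursive goal decomposition in the spirit of (but far simpler than) those early programs; the rule base is invented purely for illustration.

```python
# A minimal sketch of recursive search: a goal is "solved" either directly
# by a rule with no subgoals, or by recursively solving the subgoals a
# rule produces. The rules are invented, not from the original programs.
def solve(goal, rules, depth=0, limit=10):
    if depth > limit:                   # guard against infinite regress
        return None
    for head, subgoals in rules:
        if head == goal:
            if not subgoals:            # base case: goal holds outright
                return [head]
            plan = []
            for sub in subgoals:        # recursive case: solve each subgoal
                step = solve(sub, rules, depth + 1, limit)
                if step is None:
                    break
                plan.extend(step)
            else:
                return plan + [head]
    return None

# Hypothetical rule base: proving C reduces to proving A and B.
rules = [("A", []), ("B", []), ("C", ["A", "B"])]
print(solve("C", rules))  # ['A', 'B', 'C']
```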

In 1967, an MIT computer won the first tournament match against a human player. World chess champion Garry Kasparov said in 1988 that there was "no way" a grandmaster would be defeated by a computer in a tournament before 2000. Ten months later he lost the bet. Many people then changed their tune, however, saying that winning a championship game did not really require "real" intelligence. For many, the connection between human and machine was getting a little too close for comfort.

This was exactly...

...

The AI specialists did not even get excited about the win, because the computer, "which put custom chips inside a machine," was seen as a type of idiot savant, able to play a good game of chess without an understanding of why it did what it did (Gershenfeld, 1999, p. 130). "This is a curious argument. It retroactively adds a clause to the Turing test, demanding that not only must a machine be able to match the performance of humans at quintessentially intelligent tasks such as chess or conversation, but the way that it does so must be deemed to be satisfactory" (ibid).

Since then, the basic question has been not whether an advanced computer can be built, but rather what intelligence is. Fatmi and Young define intelligence as "that faculty of mind by which order is perceived in a situation previously considered disordered." Another definition states that something intelligent should be able to improve its own processes (Martin, 2000, p. 46).

By the mid-1960s, AI was being pursued by researchers worldwide, yet computer memories remained very limited. Perception and knowledge representation in computers became the theme of many AI studies. For example, in the Blocks Micro World project at MIT, the program SHRDLU perceived a scene of pure geometric shapes, manipulated blocks, and expressed its perceptions, activities, and motivations. SHRDLU could respond to commands typed in natural English, such as, "Will you please stack up both of the red blocks and either a green cube or a pyramid." The program would plan out a sequence of actions, and its robot arm would arrange the blocks appropriately. SHRDLU could also correctly answer questions about its world of blocks, for example, "Can a pyramid be supported by a pyramid?" (SHRDLU attempts to stack up two pyramids and fails) and "Is there anything which is bigger than every pyramid but is not as wide as the thing that supports it?" (to which SHRDLU answered "Yes, the blue block").

Although SHRDLU was initially seen as a major breakthrough, its developer, Terry Winograd, soon announced that the work was a dead end. The techniques pioneered in the program proved unsuitable for wider applications. Moreover, the appearance SHRDLU gave of understanding the blocks, and English statements concerning them, was in fact an illusion. SHRDLU had no idea what a red block was.
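
Winograd's point can be made concrete: the kind of "knowledge" involved reduces to symbol manipulation over a small data structure. The Python toy below (a hypothetical model, not Winograd's code) answers SHRDLU-style questions without any grasp of what a pyramid, a block, or the color red actually is.

```python
# A toy blocks-world model: objects are bare records, and "answers" fall
# out of comparisons over those records. Nothing here knows what a block,
# a pyramid, or redness is. All objects are invented for illustration.
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    shape: str   # "cube" or "pyramid"
    width: int   # abstract size units

world = [Obj("red block", "cube", 3),
         Obj("small pyramid", "pyramid", 2),
         Obj("blue block", "cube", 4)]

def can_support(base):
    # Encodes the rule SHRDLU appeared to "know": pyramids have no flat top.
    return base.shape != "pyramid"

def bigger_than_every_pyramid(world):
    pyramids = [o for o in world if o.shape == "pyramid"]
    return [o for o in world if o.shape != "pyramid"
            and all(o.width > p.width for p in pyramids)]

print(can_support(world[1]))                                # False
print([o.name for o in bigger_than_every_pyramid(world)])   # both cubes
```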

Since then, computers have been created that are more "intelligent" in narrow domains such as diagnosing medical conditions, selling stock and guiding missiles. Copeland says that these so-called "expert systems" have come much closer to the idea of AI. An expert system is a computer program dedicated to solving problems and giving advice within a specialized area of knowledge, and a good one can match the performance of a human specialist.

The basic components of an expert system are a "knowledge base" (KB) and an "inference engine." The information in the KB is obtained by interviewing experts on a particular topic. The interviewer, or "knowledge engineer," organizes the gathered information into a collection of "production rules," typically of "if-then" structure. The inference engine enables the expert system to draw deductions from the rules in the KB. For example, if the KB contains the production rules "if x then y" and "if y then z," the inference engine is able to deduce "if x then z." The expert system might then query its user, "Is x true in the situation that we are considering?" -- for example, "Does the patient have a rash?" If the answer is affirmative, the system will proceed to infer z (ibid).
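
A minimal version of this machinery is easy to sketch. The Python fragment below implements a forward-chaining inference engine over if-then production rules of exactly the "if x then y" form just described; the medical rules are invented examples, not drawn from any real diagnostic system.

```python
# A minimal forward-chaining inference engine: apply production rules to
# the known facts until no new fact can be derived. The rules here are
# hypothetical examples, not from a real expert system.
def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule
                changed = True
    return facts

# Knowledge base of "if x then y" / "if y then z" production rules.
kb = [
    ("rash", "possible_measles"),
    ("possible_measles", "recommend_blood_test"),
]

# The user answers "does the patient have a rash?" affirmatively,
# so the engine chains through to the final recommendation.
print(forward_chain(kb, {"rash"}))
# {'rash', 'possible_measles', 'recommend_blood_test'}
```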

However, expert systems have neither common sense nor an understanding of what they are for, of the limits of their applicability, or of how their recommendations fit into a larger context. If a medical diagnosis program is told that a patient with a gunshot wound is bleeding to death, it will still attempt to diagnose a bacterial cause for the patient's symptoms. Expert systems can also make absurd errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age have been accidentally swapped by a clerk.

One of the most advanced such programs is being developed by Douglas Lenat (Martin, 2000, p. 439). In the 1980s, he set out to understand what it would take to give a computer common sense, "or a vast number of ordinary-sounding pieces of knowledge that, when used collectively, enable us to understand our world." Most of these bits of information seem so nonessential that people do not even…


References

Copeland, B.J. (2000). What is AI? Retrieved May 21, 2004, from http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI07.html

Freeman, D. (1990). Common sense and the computer. Discover, 11, 64-71.

Gershenfeld, N. (1999). When things start to think. New York: Henry Holt.

Kurzweil, R. (1999). The age of spiritual machines. New York: Penguin Press.

