Expert Systems and Neural Networks
The Development and Limitations of Expert Systems and Neural Networks
The human experience demands a constant series of decisions to survive in a hostile environment. The question of "fight or flight" and similar decisions have been translated into computer-based models using the now-famous "if-then" programming construct, which has evolved into the promising field of artificial intelligence. In their groundbreaking work, Newell and Simon (1972) showed that much human problem solving could be expressed in terms of such "if-then" production rules, a discovery that helped launch the field of intelligent computer systems (Coovert & Doorsey 2003). Since that time, a number of expert and other intelligent systems have been used to model, capture, and support human decision making in an increasingly diverse range of disciplines. Traditional rule-based systems, however, are limited by several fundamental constraints: human experts are needed to articulate the propositional rules; the symbolic processing normally used prevents the direct application of mathematics; and such systems require a large number of rules yet remain unreceptive to novel data inputs. This paper examines the concepts and technologies needed to develop, implement, and integrate expert systems and neural networks. The limitations of expert systems and their alternatives are discussed, followed by an analysis of the relevant scholarly literature covering neural networks. A summary of the research is provided in the conclusion.
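The production-rule idea introduced above can be sketched in a few lines of code. The following is a minimal toy illustration of "if-then" rules applied to a hypothetical fight-or-flight scenario; the rule set, the fact names, and the working-memory representation are invented for this example and are not drawn from Newell and Simon.

```python
# A minimal sketch of "if-then" production rules, using a hypothetical
# fight-or-flight scenario. The facts and rules are illustrative
# assumptions, not taken from the source.

def fight_or_flight(facts):
    """Apply simple production rules to a set of observed facts."""
    # Each rule is a (condition, action) pair: IF condition THEN action.
    rules = [
        (lambda f: "threat" in f and "escape_route" in f, "flee"),
        (lambda f: "threat" in f and "cornered" in f, "fight"),
        (lambda f: "threat" not in f, "continue"),
    ]
    for condition, action in rules:
        if condition(facts):  # the first matching rule fires
            return action
    return "no rule fired"

print(fight_or_flight({"threat", "escape_route"}))  # flee
print(fight_or_flight({"threat", "cornered"}))      # fight
```

Even this toy version exhibits the limitation noted above: every rule must be articulated in advance by a human, and inputs that match no rule produce no useful answer.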
Review and Discussion
Background and Overview. Artificial Intelligence (AI) as a formal discipline is certainly not new, having been around for more than 50 years (Gozzi 1997). Nevertheless, AI remains a term that frequently "conjures images of HAL's refusal to open the pod bay doors or Deep Blue winning the world chess championship. But artificial intelligence (or AI) is not a phenomenon restricted to science fiction movies and chess tournaments; it has rapidly, if silently, become a fixture of daily life" (Gibson 2003:83). In fact, Kapoor (2003) emphasizes that there can be no dispute that machines with greater-than-human intelligence will be built in the next 50 years, and the creation of such AI-empowered creations will have far-reaching implications for all aspects of society, science, technology, and the environment.
According to Kapoor, "The likelihood of creating AI within the next 50 years, and when it happens, its deep impacts on science and society, are both assertions that will be accepted by most futurists" (788). Bostrom (2003) covers the phenomenal increases in the number-crunching capacity of supercomputers that have followed Moore's law, including IBM's biggest and best, Blue Gene, which is designed to operate at one quadrillion operations per second and is scheduled to become operational by the end of 2005. This author notes that he is in agreement with Kapoor concerning "the tragedy of the vast unfair inequalities that exist in today's world, and also in regard to the fact that there would be considerable risks involved in creating machine intelligence"; however, this author suggests that AI assistive technologies might also serve to reduce certain other kinds of risk.
For instance, Bostrom says:
An assessment of whether machine intelligence would produce a net increase or a net decrease in overall risk is beyond the scope of my original paper or this reply. (Even if it were to be found to increase overall risk, which is very far from obvious, we would still have to weigh that fact against its potential benefits. And if we determined that the risks outweighed the benefits, we would then have to question whether attempting to slow the development of machine intelligence would actually decrease its risks, a hypothesis that is also very far from obvious.) (902)
While the goals of individual practitioners using AI applications have varied and changed over time, a reasonable characterization of the general field of AI is that it is intended to make computers do things that, when done by people, are said to indicate intelligence (Steels 1995); this author characterized the primary goals of AI as both the construction of useful intelligent systems and the understanding of human intelligence. According to Gozzi (1997), "In the 1950s, a group of scientists decided to try to provide the computer with intelligence. Their goal seemed attainable due to a common metaphorical identification of the computer with a brain. From their efforts emerged the field of artificial intelligence, or AI" (219).
This author suggests that the basic, or root, metaphor of AI resembled a classical syllogism:
Major Premise: The computer is a brain.
Minor Premise: Thinking is computing.
Conclusion: If we provide the computer with sophisticated programs, it will develop a mind similar to human minds (220).
In recent years, this has, in fact, been the focus of AI programs. According to Komninou (2003), "The more we progress, the more possessed we become with technology, the more obsessed we become with the very idea of 'intelligence', the more we take the images of our desires to be the real thing" (793). According to Boodoo, Bouchard, Boykin et al. (1996):
Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: A given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena (77).
As a result, AI has emerged as an increasingly promising technology whose applications can help users from a variety of fields structure, guide, and improve information processing for decision-making purposes. For example, today, AI programs provide consultative advice to physicians concerning infectious diseases and their etiologies; such programs help physicists investigate unknown molecules and make predictions about their molecular structures with spectroscopic analysis; they also assist mathematicians in solving complex problems, process credit requests for American Express, hunt submarines for the U.S. Navy, help develop timely advertisements for retailers, and evaluate a client's ability to repay a loan (Jones, Martin, Mcwilliams et al. 1991).
According to Dillon (1993), artificial intelligence is "the branch of computer science devoted to the study of how computers can be used to simulate or duplicate functions of the human brain... [making] it appear as though a computer is thinking, reasoning, making decisions, storing or retrieving knowledge, solving problems, and learning" (74). There are, however, three fundamental differences between AI and conventional programming languages:
First, AI does not use algorithms, or step-by-step procedures, to solve problems; rather, it employs symbolic representations such as letters, words, or numbers to represent objects (in the form of statements and procedures), processes, and their relationships.
The second major area of difference between AI and other programming languages is the manner in which uncertainty is handled. Dillon uses the sentence, "Erin is taller than Esther" as an example of the uncertainty involved in a definition of "tall." According to the author, "Are you tall at five feet five inches? What about short? Are you short at four feet eleven inches or at five feet? Artificial Intelligence is able to deal with such imprecision through the use of confidence factors and probability" (emphasis added) (Dillon 75).
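Dillon's "tall" example can be illustrated with a simple confidence function. The sketch below is a toy built on assumptions: the height cutoffs (64 and 74 inches) are invented breakpoints, not values from the source, and the linear ramp stands in for whatever confidence calculus a real system would actually use.

```python
# A hedged sketch of how confidence factors can soften a crisp
# predicate like "tall". The cutoffs 64 and 74 inches are invented
# for illustration, not taken from Dillon.

def tall_confidence(height_in):
    """Return a confidence in [0, 1] that a person is 'tall'."""
    short_cutoff, tall_cutoff = 64.0, 74.0
    if height_in <= short_cutoff:
        return 0.0
    if height_in >= tall_cutoff:
        return 1.0
    # Linear ramp between the two cutoffs.
    return (height_in - short_cutoff) / (tall_cutoff - short_cutoff)

# "Erin is taller than Esther" becomes a comparison of confidences
# rather than a true/false test:
erin, esther = tall_confidence(65.0), tall_confidence(59.0)
print(erin > esther)  # True
```

The point is not the particular numbers but the shape of the answer: instead of a yes/no verdict, the program returns a graded degree of belief that downstream rules can weigh.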
The final difference between AI and other programming languages concerns the realm of decision-making. According to Dillon, "Conventional software uses precise data and step-by-step instructions for solving a problem, thereby limiting the computer to predetermined solutions. Whereas in AI, the computer is given information (sometimes imprecise) and the ability to make inferences. The computer and the software determine the solution" (76).
A good example of how these imprecise or "fuzzy" conditions play out in an actual setting, and one likely known to many people today, can be found in the popular computer game "The Sims" and its many permutations. The characters in these games are governed by a set of "fuzzy" metrics to which they respond (or not, depending on the user preferences). For example, when they become sufficiently hungry, Sims characters will seek out food; when they become sufficiently tired, they will sleep.
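The fuzzy-metric behavior described above can be sketched as a toy needs-driven agent. The decay rates, the 0.5 urgency threshold, and the two-need model below are invented for illustration and do not reflect the actual game's logic.

```python
# A toy sketch of Sims-style "fuzzy" need metrics: each need grows
# over time, and the character acts on whichever need is most urgent.
# All thresholds and rates are invented assumptions, not game values.

class SimCharacter:
    def __init__(self):
        # 0.0 = fully satisfied, 1.0 = maximally urgent
        self.needs = {"hunger": 0.2, "fatigue": 0.1}

    def tick(self, hunger_rate=0.15, fatigue_rate=0.1):
        """Advance one time step: needs grow, then the most urgent drives behavior."""
        self.needs["hunger"] = min(1.0, self.needs["hunger"] + hunger_rate)
        self.needs["fatigue"] = min(1.0, self.needs["fatigue"] + fatigue_rate)
        name, level = max(self.needs.items(), key=lambda kv: kv[1])
        if level < 0.5:  # nothing is urgent enough yet
            return "idle"
        return {"hunger": "eat", "fatigue": "sleep"}[name]

sim = SimCharacter()
actions = [sim.tick() for _ in range(4)]
print(actions)
```

Because the behavior depends on graded levels rather than a single hard-coded rule, the character idles until hunger crosses the urgency threshold and only then seeks food, mirroring the "sufficiently hungry" behavior described above.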
In fact, the metrics by which modern people measure intelligence are closer to human experience than might be commonly thought; according to Stevens (1996), "We are already used to dealing with digital, intelligent life in the form of digital representations of other humans" (414). This is echoed by Jenkins (2003), who suggests in her essay "Artificial Intelligence and the Real World" that the scope and significance of artificial intelligence (AI) make it an important concern today and in the future, perhaps more so than other emerging technologies, particularly "because AI is concerned with replicating and enhancing intelligence, and this concept, related as it is to consciousness, is at the heart of human identity" (779).
This connection with "human identity" is at the core of AI assistive technologies. In the past, computer scientists working on AI have largely ignored the social roots of human intelligence; however,…