In a descriptive syllabus for a graduate seminar in AI, Professor Donald gives insight into the form AI research is taking: internal brain functioning, switching, neurotransmission, and patterning are being dissected to give students a greater sense of the workings of the human mind, so that these same students may go forward and attempt to recreate, on a decidedly small scale (likely single or minimally multiple), functions of human thought that produce action. It is therefore not difficult to imagine that these same students will be part of the future development of lifelike interactive toys or tools. (Donald NP) The point being made is that research aimed at recreating a person in artificial form is grounded in real human science and characteristics. Science is attempting to map and dissect the whole of human physical and mental potential, both to understand it better and to create imitations of it, for the sake of science and potentially to better the human condition. Many researchers come at the problem as if they were designing better toasters, even though the task is far greater than that: they are designing elaborate toasters that can learn from humans and anticipate what type of toast, and how dark, the individual would like it, based not on explicit cues but on an artificial sense of empathy. This is of course a simplistic analogy, and in reality a mind-reading toaster is a far simpler concept than an AI system, as the AI being developed today carries the hope of science that it will be conscious, think, learn, emote, and even hold viable and lifelike conversations with its human companion.
There may also be a time when, as in the movies, research into such systems does not require the support of an institution and can be conducted within "rogue" environments, or at the very least environments with little or no legal supervision or control. This is not to say that a conspiracy of AI development currently exists; most if not all current AI research is open and debated. The exception is that within science, and even within the law, there are many protections for entities that wish to keep scientific knowledge secret: for the protection of their own rights, for the security of nations, and most often for the protection of any future right to profit from their discoveries.
A leading expert on corporate social responsibility reasons that companies are not populated by enough people with universal ideals of social responsibility. Many experts agree that culpability also lies with the individual working for any corporation to speak out when problems are noted and to force action. (Lindorff 880) Sims proposes that there needs to be a "universal ethical principle orientation," which he defines thus:
Right is defined in terms of self-chosen universal principles of good and bad. The individual follows self-selected ethical principles. If there is a conflict between a law and a self-selected ethical principle, the individual reasons and uses conscience and moral rules to guide actions. A person at this stage may be more concerned with social ethical issues and not only rely on the business organization for ethical direction. (Sims 103)
The problem then arises when the corporate and scientific climate in which these individuals work does not stand on ideals of openness and communication about concerns and problems. The system does not reward scientists for speaking out about known problems, and whistleblowers are rarely tolerated, as they represent a threat to secure information that builds profit as well as company public image. There are many cases of this kind of secrecy in the scientific world, secrecy that is clearly contrary to open communication and the sharing of science and technology, one of science's founding principles.
There is, to begin with, the kind of secrecy that everyone deplores but that is fostered by institutional cultures of self-interest, both public and private -- when scientific facts that the public has a right to know are intentionally hidden and knowingly withheld to preserve the economic or political standing of powerful organizations. Examples include drug companies that fail to disclose reports of adverse reactions to their products, (57) car manufacturers that hide technical defects in their vehicles, (58) employers and polluters who conceal data about illness caused by their activities, (59) and governmental agencies that paper over malfunctions in technologies that are deemed key to the success of their missions. (60) Impediments to criticism and communication have also arisen within universities, the traditional strongholds of scientific openness, as a result of private sponsorship that contractually demands secrecy. (61) (Jasanoff 21)
Jasanoff then goes on to discuss the fact that the legal system supports secrecy in science and business as a development of controlled rights to future profitability and limited liability. The scholar stresses that corporations often fund science and therefore control the outcome of information regarding it, as a result of commercial confidentiality, and that in the public sector the same result follows from what Jasanoff calls an overuse of the function of "national security." (Jasanoff 21) Within the moral fiber of a consumer-driven society there must be more exploration of the impact that companies have on the communities they serve, and safety must be a paramount concern. Yet it is clear that in the current system the rule of secrecy to protect image and profit drives the scientific community as soon as researchers and developers go on someone else's payroll. Individual culpability can therefore be only a limited answer to the problem. Corporate social responsibility should be the umbrella over all decisions made with regard to client safety, and with regard to AI overall, principles of right and wrong that stress open communication and legal precedent setting are essential to discovery and implementation. As the statements above support, the scientific community, built around the idea of open discovery and dialogue, has no need to develop a "rogue" conspiracy to produce AI technology, as it is primed for secrecy already. The issue of disclosure only really arises if safety is questioned (and people are willing and able to express concerns) and if the system's intention is the development or release of a consumer product.
The current state of AI is arguably well developed. Though most will note that the products of such "builds" are decidedly inhuman in character, the language of science legitimately places the goal of AI research at seeking lifelike capabilities in a robot.
In 1993, after years of research on behavior-based insect robots, Brooks and his team at MIT started to construct a robot shaped like a human. They named it COG, an abbreviation for "cognition," and also the tooth of a gear. COG was designed and built to emulate human thought processes and experience the world as a human. Brooks and his team further assumed that people would find it easier to interact with a robot and aid the robot in its learning process when the robot could respond in a somewhat human way. Consequently, the machine should have limbs, sensory organs, and a physical resemblance to humans. Unlike other artificial intelligence systems, like medical expert systems, COG was meant to test theories of human cognition and developmental psychology. (Ahmad NP) A picture of COG does not suggest that COG is in any way demonstrative of a human being, as it looks, as it should, like a robot; yet already this researcher is capable of making the leap of pronoun from "it" to "he." The robot is gendered and characteristically personified, as the goal of the MIT experiment was achieved through the very means discussed above: a greater understanding of the workings of the human mind, not secrecy. Though COG may not look human, those who have interacted with "him" are often struck by the fact that "he" acts a great deal like a human, learning in the same manner as a child.
Ahmad goes on to say that technology will eventually be paired with biotechnology, which will likely help create much more human-looking robots. He then gives examples of other AI humanoid robots: "Honda's Asimo, Sony's SDR-4X, and Kitano Symbiotic Systems' baby robot PINO are some of the other humanoid robots being developed around the world. To date, these are not commercially available," (NP) [PINO is currently available to purchase as a toy]. Some products are already available in the form of electronic pets, such as Sony's Aibo, capable of learning from their human owners and carrying out simple tasks for them, as well as recharging themselves when needed, a functionality which is clearly demonstrative of a traditional human-controlled AI function.…