Despite the best of care and talent, computation is subject to uncertainties, which experts call "errors" (Landau, 2008). Some of these errors are man-made, and some are produced by the computer itself. The four classes of errors are blunders or bad theory, random errors, approximation or algorithm errors, and round-off errors. Blunders are typographical mistakes, errors caused by running or using the wrong program, and similar slips. Random errors result from occurrences such as fluctuations in electronics or cosmic rays passing through the computer. Algorithm or approximation errors arise when infinite procedures are replaced by finite ones, for example when an infinite series is truncated after a finite number of terms or a variable input is replaced by a constant. Round-off errors are the inaccuracies that arise because computers store floating-point numbers with a finite number of digits (Landau). Peter Neumann at SRI International identified more than 400 incidents of such errors, hazards and other problems, which can injure users and lead to financial loss. These are not isolated or infrequent incidents; they happen because programmers are encouraged to produce large programs in haste, despite awareness of the strong probability of errors and hazards (Jacky).
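The last two error classes described above can be made concrete with a short illustration. The following is a minimal sketch (not from any of the cited sources): the first part shows a round-off error caused by finite-digit storage of floating-point numbers, and the second shows an approximation error caused by truncating an infinite series after finitely many terms. The function `exp_series` is a hypothetical helper written for this illustration.

```python
import math

# Round-off error: 0.1 and 0.2 have no exact binary representation,
# so the stored sum differs slightly from the mathematical value 0.3.
roundoff_gap = abs((0.1 + 0.2) - 0.3)
print(roundoff_gap > 0)  # True: finite-digit storage introduces error

# Approximation (algorithm) error: replacing the infinite Taylor
# series for e**x with a finite number of terms leaves a small
# truncation error.
def exp_series(x, terms):
    """Approximate e**x with a truncated Taylor series."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

truncation_error = abs(exp_series(1.0, 10) - math.e)
print(truncation_error)  # small but nonzero
```

With ten terms the truncation error is on the order of 10^-7: tiny, but never exactly zero, which is precisely the point Landau makes about substituting finite procedures for infinite ones.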
Computing hazards include race conditions and memory leaks (Jacky, 1989). A race condition, or hazard, is a system misbehavior that results when the output depends on the sequence or timing of uncontrollable events or outside stimuli. A memory leak is the gradual loss of available memory that occurs when a program or application fails to release memory it has finished using. The classic computing hazard involves the Therac-25, a radiation therapy machine produced by Atomic Energy of Canada Limited and put into use after the Therac-6 and Therac-20. In six incidents between 1985 and 1987, medical patients were given massive doses of radiation, approximately 100 times the intended doses. These deadly occurrences reveal the dangers that software control of safety-critical systems is capable of inflicting (Jacky).
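The race condition described above can be sketched in a few lines. This is a minimal, hypothetical counter example (it is not taken from the Therac-25 code): two or more threads perform a read-modify-write on shared data, and without a lock the final result would depend on thread timing, which is exactly the uncontrollable-sequence problem Jacky describes. The lock serializes the update and removes the race.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times, holding a lock so the
    read-modify-write cannot be interleaved with another thread's."""
    global counter
    for _ in range(n):
        with lock:          # without this lock, two threads can read
            counter += 1    # the same value and lose an update (a race)

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; unpredictable without it
```

Removing the `with lock:` line makes the outcome timing-dependent, a misbehavior invisible in testing yet catastrophic in a safety-critical device.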
The sad state of affairs is that the production and control of these wares remain largely unregulated (Jacky, 1989). Only aviation and nuclear power software is subject to government approval prior to purchase. The Food and Drug Administration, fortunately, has started regulating software used in medical devices. After the Therac incidents, there arose a clamor for certification of software professionals involved in safety projects. Concerned sectors want stricter government control. Other sectors, however, oppose this because of the likelihood of government interference and increased costs. Some believe that the solution is not regulation but effective management. Those in the industry take three approaches to the problem: the first is to regulate the people who provide the services; the second is to certify the organizations, companies or departments that produce software and other wares; and the third is to regulate the products themselves. When systems fail, the victims or their families may file lawsuits against the vendors and providers for damages (Jacky).
II. Artificial Intelligence, Robotics, Cybernetic Augmentations and Their Consequences
Artificial intelligence, or AI, refers to a computer with the learning or cognitive abilities, or consciousness, of a human being (Yudkowsky, 2008; Tech Target, 2005). It has been described as a "machine with a soul." The two basic approaches to AI are the top-down and the bottom-up, the latter being the more familiar. Most scientists, however, believe that the top-down version is better because it allows the computer to "learn" about its environment. The ultimate goal for AI is to pass the Turing Test, in which a human interrogator converses simultaneously via instant messaging with a human subject and with a computer that has attained, or been programmed with, human-level intelligence. If the interrogator cannot distinguish between them because both sound human, the computer passes the test, meaning it has attained a human level of cognition. A truly intelligent computer, however, may be unpredictable and even dangerous. If it attains the human level, it can design a more advanced model beyond the human ability to understand. As a dire consequence, such machines may take over the earth (Yudkowsky; Tech Target).
The philosopher John Searle presents the Chinese Room argument that computers do not really understand language (Koperski, 1991). Even a machine programmed to always respond appropriately is merely following man-made instructions. It does not possess the will to obey or disobey those instructions or to respond in any other way (Koperski).
One more dream-like capability of computers is the "upgrade" of human beings into future soldiers who would be invulnerable to torture, starvation and loss of sleep (Lin, 2012). These soldiers would not need to eat grass while in combat. They would be able to communicate by mental telepathy and perform incredible tasks like Spiderman or even Superman. But this dream will have to be subjected to realistic tests and to the standards of biomedical ethics that regulate research and experimentation on human subjects. Advocates of this "upgrade" may justify it as a "military necessity" in order to secure volunteers for testing. Nonetheless, there will be other specific issues to hurdle. These super-fighters may be commanded, or refuse to be commanded, to undertake risky combat. Such enhancements may be viewed as risky or unproven, or may pose long-term health risks, such as addiction. The questions of reversibility and safety are further concerns that can surface. There are also legal and policy implications: the impact of these enhancements on international humanitarian law, or the laws of war, is likely to be raised. Enhancements that modify soldiers biologically may also breach the Biological Weapons Convention if enhanced humans function as "biological agents." On the whole, enhancements are likely to be viewed as more frightening and threatening than as improving the human condition (Lin).
III. Societal Consequences of Computer Science and the Technologies
When the internet first appeared, it was an uncharted dimension where everyone could easily and freely come and go (McChesney, 2013). Nobody owned it. There were no commercials, no one was watching, and those who went in and out could do what they wanted. It was very promising, and life seemed good. But in the last five years, something began to change. On many fronts, large and very powerful corporations have been emerging and using the internet for their private or selfish purposes. These include AT&T, Verizon, Comcast, Google, Facebook, Apple and Amazon. They are extremely successful in securing information from internet users. Privacy has ceased to exist on the internet. What these giants do is sell the information they freely gather through the internet to advertisers. They do this with the cooperation of, and in collaboration with, the government, the national security state and the military. And this is disturbing (McChesney).
Everyone should be told, or reminded, that internet access is controlled by a cartel of these giants (McChesney, 2013). Americans pay far more for cellular phone service than people in any other country, and the same is true of wired broadband access. The issue here is not technology or economics but corrupt policy making and the power and motives behind these giant firms. Their vast power enables them to virtually privatize cyberspace; that power, in effect, allows them to own it. What is worse, internet users have no choice: these giants dominate and swarm the internet with their wares. In the beginning, the internet promised to be a great booster of economic growth. It could enhance growth in all countries, create new businesses and jobs, and boost the economy like no other force ever did. Yet from the late 1990s, something else developed: the internet became the biggest generator of monopolies. Whether the user enters Google, Apple, Facebook or Amazon, he is dealing with network economics. If he is sensitive, he gets the feeling that there now seems to be an invisible owner, or owners, of cyberspace, and he soon realizes that these virtual owners reap incredible profits from the internet. It is very much like colonization in earlier times (McChesney).
And just as in colonial times, these huge empires or colonizers -- named Google, Facebook, Apple and Amazon -- are like powerful continents, much as in the late nineteenth-century era of colonization (McChesney, 2013). They use their continents to gather profits, and they use these profits to spread their control and even launch attacks on other continents and grab those continents' profits. They all know that each is out to make as much profit as possible from colonizing. A country without a continent is not a participant in the competition; it has no power. They are also clever enough to use patents to bar newcomers from the realm. A more frightening reality today is the resulting loss of privacy. A Google-related company or a Facebook-related company, for instance,…