The objective of Part One of this study is to examine how the use of Unmanned Vehicle (UV) Systems in intelligence collection has expanded significantly. This work will discuss the major trends in UV utilization in intelligence collection, as well as some of the moral and ethical concerns raised by utilizing UVs. Part Two of this study will examine Open Source Intelligence (OSINT), which has been in use for many years, discuss some of the key issues in OSINT, and consider whether or not it is a valuable platform for the intelligence community.
Unmanned Vehicle Systems: Ethical, Legal and Moral Considerations
The work of Waddell (2007), entitled "A Theoretical, Legal and Ethical Impact of Robots on Warfare," reports that robotics is reducing human participation in conflict through the introduction of robots on the battlefield. This shift is so large in scale that it will transform the combat environment more than any other technology previously introduced into the battlefield arena, since "it will ultimately remove man from the battlefield." (p.1)
John Edward Jackson (2010), in a statement prepared for the United States House of Representatives Subcommittee on National Security and Foreign Affairs, reports that students "are acutely aware of the ethical and legal issues associated with the employment of robotic systems in combat. Of particular concern is the possibility that unmanned/robotic systems could be programmed to make lethal decisions in combat situations without active human participation in the kill chain." (p.5) 'Time Magazine' (2012) named the drone weapon of the year and reported that more than 7,000 drones were in the air being used by the military, with this number rising significantly.
Artificial intelligence is reported to be "…part of computer science that is the intelligence and cause of action of a machine, both in hardware and software form." (Hanson, 2012, p.1) Artificial intelligence, or 'AI', is reported to be "at the forefront of drone technology development." (Hanson, 2012, p.1) AI can act autonomously in terms of both its hardware and software, and it enables the use of "rapid data processing, pattern recognition, and environmental perception sensors to make decisions and carry out goals and tasks." (Hanson, 2012, p.1) In addition, Artificial Intelligence "seeks to emulate human intelligence, using these sensors to understand and process to solve and adapt to problems in real time." (Hanson, 2012, p.1)
Human beings make sense of visual and audio information through intelligence, which includes the capacity to respond flexibly to situations, to extract meaning from conflicting or ambiguous messages, to recognize the relative priority of various situational elements, and ultimately to draw distinctions. In each of these areas there are legal, moral and ethical questions regarding the capabilities of Artificial Intelligence. In fact, the primary issue is whether a computer can actually possess intelligence at all.
Professor Paul Edwards of the University of Michigan is reported to have stated that scientists are beginning to simulate some of the "functional aspects of biological neurons and their synaptic connections, neural networks could recognize patterns and solve certain kinds of problems without explicitly encoded knowledge or procedures…" (Hanson, 2012, p.2) This is reported to mean that Artificial Intelligence is "beginning to incorporate human biology to make it think." (Hanson, 2012, p.2) There are, however, skeptics who hold that Artificial Intelligence will never surpass human intelligence: the human brain is highly advanced, and while a machine may calculate data faster, it will never match the brain's complexity.
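The idea of recognizing a pattern "without explicitly encoded knowledge or procedures" can be illustrated with a single artificial neuron. The sketch below, a purely illustrative example not drawn from Hanson (2012) or Edwards, trains a perceptron on the logical OR pattern: the neuron is never given the rule, only labeled examples, yet it learns the correct behavior.

```python
# A single artificial neuron (perceptron) that learns a pattern from examples.
# The OR task and all parameter values here are illustrative assumptions.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust two weights and a bias from labeled examples, with no encoded rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # learning signal: how wrong was the neuron?
            w[0] += lr * err * x1       # nudge each weight toward the correct answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Fire (1) if the weighted sum of inputs exceeds the threshold, else 0."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The neuron is shown only input/output pairs for logical OR, never the rule itself.
or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_samples)
```

After training, `predict(w, b, 1, 0)` returns 1 and `predict(w, b, 0, 0)` returns 0: the pattern was inferred from data, which is the property Edwards describes.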
Hanson (2012) reports that computer systems, if they are to emulate the human thinking process, rely on programmed "expert systems," a type of Artificial Intelligence that serves as "an intelligent assistant to the AIs human user." An expert system is more than a computer program with the capacity to search for and retrieve information; rather, an expert system possesses "expertise, pools information and creates its own conclusion emulating human reason." (Hanson, 2012, p.3)
An expert system is reported to have three components that make it more technologically advanced than a "simple informational retrieval system." (Hanson, 2012, p.3) Those three components are: (1) a knowledge base, a collection of declarative knowledge (facts) and procedural knowledge (courses of action), which acts as the "expert system's memory bank"; (2) a user interface, the hardware through which the human user communicates with the system, forming a two-way communication channel; and (3) an inference engine, reported as the most advanced aspect of the expert system, a program with the capacity to know when and how knowledge should be applied and that directs the application of that knowledge. (Hanson, 2012, paraphrased)
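The three components above can be sketched in miniature. The following is a toy illustration, with all class names, facts, and rules invented for this example rather than taken from Hanson (2012): the knowledge base holds facts and rules, the inference engine decides when rules apply (simple forward chaining), and a user-interface function provides the two-way question-and-answer channel.

```python
# Toy sketch of the three expert-system components described above.
# All names, facts, and rules are hypothetical illustrations.

class KnowledgeBase:
    """Memory bank: declarative facts plus procedural rules (conditions -> conclusion)."""
    def __init__(self):
        self.facts = set()
        self.rules = []

    def add_fact(self, fact):
        self.facts.add(fact)

    def add_rule(self, conditions, conclusion):
        self.rules.append((frozenset(conditions), conclusion))

class InferenceEngine:
    """Knows when and how to apply knowledge: repeatedly fires any rule
    whose conditions are satisfied, until no new conclusions appear."""
    def run(self, kb):
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in kb.rules:
                if conditions <= kb.facts and conclusion not in kb.facts:
                    kb.facts.add(conclusion)
                    changed = True
        return kb.facts

def user_interface(kb, engine, query):
    """Two-way channel: the human asserts facts and asks whether a conclusion holds."""
    return query in engine.run(kb)

# Usage: a hypothetical drone-sensor scenario.
kb = KnowledgeBase()
kb.add_fact("object_moving")
kb.add_fact("object_emits_heat")
kb.add_rule({"object_moving", "object_emits_heat"}, "object_is_vehicle")
result = user_interface(kb, InferenceEngine(), "object_is_vehicle")  # True
```

The point of the sketch is the division of labor: the knowledge base only stores, the inference engine only decides when knowledge applies, which is what distinguishes an expert system from simple retrieval.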
The work of Arkin (2011), entitled "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture," reports that debate over the use of autonomous robotics on the battlefield, "as with any new technology is primarily concerned with Jus In Bello or the definition of what it is that constitutes the ethical use of these systems during conflict, given military necessity." (p.2) Two principles asserted by the Just War tradition are reported: (1) the principle of discrimination, distinguishing military objectives and combatants from non-combatants and the structures of civil society; and (2) the principle of proportionality of means, under which acts of war should not yield damage disproportionate to the ends that justify their use. (Arkin, 2011, paraphrased)
Harm to non-combatants is reported to be held justified only when that harm is neither direct nor intended. Combatants hold specific rights; for example, once they surrender and assume the status of non-combatants, they are no longer subject to attack. Jus in Bello additionally requires that "agents must be held responsible for their actions," including the duty to refuse orders known to be immoral, and addresses the "status of ignorance in warfare." (Arkin, 2011, p.2)
Arkin also reports that the 'Laws of War' (LOW), contained in sources such as the Geneva Conventions, together with the Rules of Engagement (ROE), set out the acceptable and unacceptable actions on the battlefield. (Arkin, 2011, p.3) Battlefield limitations specific to the use of lethal autonomous systems include: (1) acceptance of combatants' surrender and humane treatment of prisoners of war; (2) proportionality in the use of force in conflict; (3) protection of combatants and noncombatants from unnecessary suffering; (4) avoidance of unnecessary damage to people and property not involved in combat; (5) protection of individuals or vehicles marked with the Red Cross or Red Crescent emblems and those flying a white flag; (6) nonuse of torture for any reason; (7) nonuse of specific weapons; and (8) non-mutilation of corpses. (Arkin, 2011, paraphrased)
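To make concrete how such limitations could constrain a machine, the following is a minimal sketch of checking a proposed lethal action against a few of the constraints listed above. The rule set, field names, and the crude proportionality comparison are all illustrative assumptions; this is not Arkin's (2011) actual architecture.

```python
# Hypothetical constraint checker over a few of the battlefield limitations
# listed above. Field names and the proportionality test are invented for
# illustration; they are not Arkin's (2011) ethical-governor design.

PROTECTED_EMBLEMS = {"red_cross", "red_crescent", "white_flag"}

def action_permitted(target, force, expected_military_value):
    """Return (allowed, reason) for a proposed lethal action against `target`."""
    if target.get("surrendered"):
        # Limitation (1): surrendered combatants assume noncombatant status.
        return False, "combatant has surrendered (noncombatant status)"
    if not target.get("combatant"):
        # Principle of discrimination: noncombatants may not be targeted.
        return False, "noncombatant may not be targeted"
    if target.get("emblem") in PROTECTED_EMBLEMS:
        # Limitation (5): protected emblems and the white flag.
        return False, "target displays a protected emblem"
    if force > expected_military_value:
        # Limitation (2): a crude stand-in for proportionality of means.
        return False, "force disproportionate to military objective"
    return True, "permitted under encoded constraints"

# Usage: a surrendered combatant is protected regardless of other factors.
allowed, reason = action_permitted(
    {"combatant": True, "surrendered": True},
    force=1, expected_military_value=5)
```

Even this toy version shows why constraint ordering matters: surrender and discrimination checks run before any proportionality calculation, so protected status can never be traded off against military value.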
According to Arkin (2011), future autonomous robots may have the capacity to perform better than human beings for reasons that include: (1) the capacity of the unmanned vehicle to act in a conservative manner; (2) robotic sensors equipped for more accurate observation of the battlefield than the human being can achieve; (3) a design void of the emotions that cloud human judgment; (4) a design that avoids the psychological problem of "scenario fulfillment" experienced by human beings; (5) a design that integrates more information from various sources, in real time and more quickly than a human being could, before responding with lethal force; and (6) the ability to monitor ethical battlefield behavior both "independently and objectively." (Arkin, 2011, paraphrased)
Part Two - Open Source Intelligence (OSINT)
Open Source Intelligence (OSINT) is reported to be a type of intelligence "collection management" that includes the location, selection and acquisition of information from publicly available sources and the analysis of that information to produce "actionable intelligence." (Analysis Intelligence, 2012, p.1) The term 'open' is used in the intelligence community to refer to 'overt' information that is publicly available, as opposed to covert sources or material that is classified in nature.
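The location/selection/acquisition/analysis cycle just described can be sketched as a trivial pipeline. The sample "feed," keywords, and report format below are invented placeholders standing in for real public sources; this is only a sketch of the cycle's shape, not a real collection tool.

```python
# Minimal sketch of the OSINT cycle described above: acquire from public
# sources, select what meets the requirement, analyze into a product.
# The documents and keywords are invented placeholders.

def collect(sources):
    """Acquisition: gather raw items from publicly available sources."""
    return [doc for source in sources for doc in source]

def select(documents, keywords):
    """Selection: keep only items relevant to the intelligence requirement."""
    return [d for d in documents if any(k in d.lower() for k in keywords)]

def analyze(documents):
    """Analysis: reduce the selected material to an actionable summary."""
    return {"matches": len(documents), "items": documents}

# Usage: two hypothetical public feeds, one intelligence requirement.
public_feed = [
    ["Port reopens after storm", "Shipping delays expected this week"],
    ["New rail line announced"],
]
report = analyze(select(collect(public_feed), keywords=["shipping", "port"]))
```

The point mirrored here is that OSINT's difficulty lies less in acquisition, since the sources are open, than in the selection and analysis steps that turn volume into an actionable product.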
It is reported that the use of Open Source Intelligence (OSINT) has expanded greatly in recent years, and for the traditional intelligence community OSINT is predicted to remain a component of all-source intelligence capacity, alongside classified sources. The majority of government agencies use OSINT solely for decision and policy matters. Pallaris (2008) reports that OSINT is "…information gathered from publicly available sources for the purpose of meeting specific intelligence requirements. These sources can be free or subscription-based, on- or offline. OSINT is not limited to the internet, although it is here that an increasing volume…