The argument for modular brain architecture holds that the brain is endowed with certain finite characteristics from birth. Scientists who advocate the modularity concept believe that the human information-processing system consists of modules - relatively isolated subsystems - that can function independently of each other.
These characteristics can be thought of as structural constraints, in that the brain's nature is predetermined to a greater extent than it is the product of interaction with the external forces that shape it during development. Modularity is not attributed to all brain functions, although it is accepted that the most basic differences in processing and interpreting data are unique to humankind, as illustrated by the cognitive differences between humans and other mammals. However, opinions on the subject differ widely, as each researcher's position is predicated on the interpretation of different studies.
The debate over the nature of visual perception begins with the empiricists of the 17th and 18th centuries - John Locke, Bishop George Berkeley, and David Hume - and especially with Berkeley's An Essay Towards a New Theory of Vision (1709). The empiricists believed that all knowledge is gained through experience, and that experience can be broken down into easily recognizable units. Locke and others thought that perceptual experience originates in sensations. Berkeley was the principal student of the nature of vision, owing to the specific theory of vision he developed to explain the perception of depth. His theory "is still very much present in contemporary psychology" (Rock, 1975). Berkeley and others claimed that ideas are copies of sensations received from the external world (and held in memory), which led to the notion of the mind as a tabula rasa, or clean slate. However, recent studies of the perceptual abilities of infants have led scientists to question these ideas.
The debate over the development of vision in infants revolves around cortical and subcortical vision. Whereas cortical vision is response-predicated and evokes concepts of learned behavior, subcortical vision is innate, like breathing, and is hardwired into the human brain. While the existence of cortical vision shows that some amount of learning is necessary, it also provides insight into how modular neurological activity supports that learning and constrains its nature. In his classic article, Gordon Bronson (1974) proposed that the primary visual pathway gains control over sensorimotor behavior in infants at around 2 months after birth. Prior to this, the infant relies on subcortical visual processing to process images. The way infants process images has been found to resemble that of animals with certain kinds of cortical damage. Other studies show that components of visually evoked potentials related to subcortical structures are present from birth (Atkinson, 1984; Vaughan & Kurtzberg, 1989, for reviews), and that some components related to the striate cortex are present from around the time of birth. This evidence supports the idea that cortical functioning develops over the first few months and that further changes occur in the 2nd month of life, although there is some evidence of cortical activity at birth. More recently, the idea that vision shifts from subcortical to cortical processing in early infancy has come under criticism, as growing evidence points to sophisticated cortical development in early infancy (Bushnell, Sai, & Mullen, 1989; Slater, Morison, & Somers, 1988; Slater, 1988).
Because infants process images in two distinct ways and go on to develop the ability to recognize what they see, it is understandable that some interpret this as a process of learning, and therefore as dependent on external stimuli. However, post-natal development follows uniform, circumstance-independent patterns that diverge significantly only when human infants are compared with those of other primates. The nature of perceptual disabilities lends further support to the argument that vision is modular rather than learned. For instance, when epilepsy patients have the two hemispheres of the brain surgically disconnected, they learn to coordinate cerebral function through vision. Even this process, however, is limited in scope: a subject's hands will act only according to the dictates of the opposite hemispheres when the subject is blindfolded - one hand identifies a pencil while the other knows how to use it.
Some of the more popular network architectures show few structural constraints. Several networks assume total interconnectivity between all nodes (e.g. Hopfield, 1982). Completely connected architectures are the most versatile; such architectures allow almost any possible input-output relation to be learned (Funahashi, 1989; Hornik et al., 1989; Stinchcombe and White, 1989). Those who presume completely connected brain architectures are environmental determinists. However, available evidence indicates that the human system is not as connected as these proposed systems: some manual learning tasks seem to be much easier than others. At the microscopic level, the minicolumns found in the grey matter of the neocortex can be considered module-like structures (e.g. Mountcastle, 1978). These minicolumns are regions of the neocortex with dense intracolumnar connections (e.g. Creutzfeldt, 1977).
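The total interconnectivity assumed by such models can be illustrated with a minimal Hopfield-style network, in which every node is weighted to every other node and a stored pattern is recalled from a corrupted input. The pattern values and network size below are invented for illustration; this is a sketch of the general idea, not a reconstruction of any cited model.

```python
import numpy as np

# One pattern of +1/-1 activities to be stored in the network.
pattern = np.array([1, -1, 1, -1, 1, 1, -1, -1])
n = pattern.size

# Hebbian learning: the weight between nodes i and j is the product of
# their activities in the stored pattern; every node connects to every
# other node (total interconnectivity), with no self-connections.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of the pattern (two units flipped).
state = pattern.copy()
state[[1, 4]] *= -1

# Asynchronous updates: each node takes the sign of its weighted input.
for _ in range(5):
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))  # the stored pattern is recalled
```

Because the connectivity is unconstrained, the same weight matrix could store many different patterns - the versatility the completely connected architectures above are credited with.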
With reference to visual perception, a large body of neuropsychological evidence shows that isolated abilities, such as the ability to recognize faces (e.g. Damasio et al., 1982), may be lost without affecting other cognitive abilities at all (e.g. Gazzaniga, 1989; Luria, 1973; Shallice, 1988). Much of the work done to identify the cognitive faculties involved in visual perception has concerned pattern recognition.
Perception and memory have been studied theoretically from the worldviews of mechanism and contextualism. Mechanists tend to isolate mental processes and give primacy to memory in explaining meaningful experience. Contextualists, however, believe that "perceiving is the basic cognitive activity out of which all others must emerge" (Neisser, 1976, p. 9). Moreover, visual perception is inextricably linked to memory. Mechanistic assumptions have historically dominated the study of visual perception. However, present experimental and theoretical evidence suggests that contextualistic assumptions such as Neisser's are more accurate. Jenkins (1974b), one of the seminal contextualists, called directly for a contextualistic worldview in the field of memory, a view "still alive and well at both the empirical and theoretical levels" in human memory research (Horton & Mills, 1984, p. 362; Horton, 1991). The modular argument is a contextual argument for sensory perception in that it places memory and learning in the context of neural architecture. This is a significantly different view from that of the tabula rasa empiricists; in the past it was derided as an apology for genetic-determinist views of human neurological function.
Pattern recognition has attracted the attention of many neuroscience researchers. Two main approaches have been used: statistical pattern recognition (the decision-theoretic approach) and syntactic pattern recognition (Krishnaiah and Kanal, 1982). In statistical pattern recognition, subjects are tested on their ability to assign patterns to one of a set of pattern classes; sometimes subjects must also define the characteristics of an optimal set of classes. In syntactic pattern recognition, a subject must attempt to find structural descriptions of patterns.
This latter approach was inspired by linguists, who engage in comparative analyses of the way languages are structured.
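The decision-theoretic (statistical) approach can be sketched computationally as assigning each pattern to the nearest of a set of pattern classes. The class labels, feature values, and nearest-mean rule below are invented for illustration; they stand in for whatever features and classes a given experiment defines.

```python
import numpy as np

# Two pattern classes, each summarized by the mean of its training
# samples in a 2-dimensional feature space (values are illustrative).
rng = np.random.default_rng(1)
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(20, 2))
means = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

def classify(x):
    """Assign pattern x to the class whose mean is nearest (the decision rule)."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

print(classify(np.array([0.2, -0.1])))  # falls near class A's mean -> "A"
print(classify(np.array([2.8, 3.1])))   # falls near class B's mean -> "B"
```

A syntactic recognizer would instead parse a pattern into structural components (strokes, junctions, subpatterns) according to a grammar, which is why that approach traces back to linguistics.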
Pattern recognition draws both on the structural faculties and on the learning capabilities of the brain. However, the brain's learning faculty is structurally constrained in the recognition of patterns. Research has shown that pattern-recognition aptitude can improve with training, but also that young children demonstrate markedly different faculties for recognizing patterns despite similar backgrounds. Not surprisingly, pattern recognition tests have been integrated into most IQ tests, which are in turn used to single out gifted children. Most of the available research supports the idea that visual perception is modular.
Studies of reaction time are of interest to those who wish to measure the cognitive function of visual perception. Whereas in simple reaction time experiments there is only one stimulus and one response, in choice reaction time experiments the subject must give the response that corresponds to the stimulus. The study of reaction time first became popular when 19th-century neurological scientists sought to explore the nature of the brain. Reaction time has been a key focus in neurological testing and has many applications, ranging from sports to everyday activities. Studies have shown that many factors impair reaction time. For example, a person who is too relaxed or too tense cannot react as quickly as one who is neither. Age is correlated with reaction time; people peak in their late 20s. Substances such as alcohol impair reaction time significantly.
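The difference between the two experimental designs can be made concrete with a small sketch of their stimulus-response structures. The stimulus names and response labels are invented for illustration; actual experiments would present timed stimuli and record latencies.

```python
import random

random.seed(0)

# Simple reaction time: one stimulus, one response. The subject gives
# the same response whenever the stimulus appears.
def simple_trial():
    stimulus = "light"   # the single stimulus
    response = "press"   # the single response, regardless of stimulus
    return stimulus, response

# Choice reaction time: the correct response depends on which stimulus
# is presented, so the subject must first identify the stimulus.
mapping = {"red": "left", "green": "right"}

def choice_trial():
    stimulus = random.choice(list(mapping))
    response = mapping[stimulus]  # response corresponding to the stimulus
    return stimulus, response
```

The extra identification step in the choice design is why choice reaction times run longer than simple ones, and why the two designs probe different stages of visual-cognitive processing.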
Reaction time varies with factors such as age, sex, and level of alertness. According to several studies, reaction time shortens gradually from birth until one's late 20s, then reverses and lengthens slowly through the 50s and 60s, after which it lengthens at a faster pace (Welford, 1977; Jevas and Yan, 2001; Luchies et al., 2002; Rose et al., 2002). For more complicated tasks, the disparity between young or middle-aged people and older people is even more pronounced. This has been attributed both…