Sociocognitive Dual Coding and Processing Models Research Paper

  • Length: 10 pages
  • Sources: 1
  • Subject: Teaching
  • Type: Research Paper
  • Paper: #80770481

Excerpt from Research Paper:


Dual Coding Theory (DCT) was originally developed for memory research. Its basic notion is that images and words influence memory differently. DCT has since been applied to reading and used to improve reading programs. The assertion is that learning to read a new word is more efficient if more than one part of the brain is activated, by pairing verbal and nonverbal codes. Verbal code is language in any form; nonverbal codes are tangible objects, pictures, feelings, and events. If one code is forgotten, the second can serve as a backup during word retrieval. By pairing written words, pronunciations, pictures, and experience, we engage all levels of processing in DCT, which fosters learning. The following paper describes the basic elements of DCT.

According to Dual Coding Theory (DCT), information is represented in the brain via both verbal and nonverbal (imaginal) codes (Paivio, 1971). These two codes organize sensory information into knowledge that can be stored, retrieved later for use, or acted upon.

Basic Features of DCT

Paivio (1971) based the precepts of DCT on empirical findings. There are two basic categories of knowledge: visual and verbally mediated knowledge representations. Mental images are analogue codes (imagens; Sadoski & Paivio, 2004), representations that preserve the major perceptual features of the physical stimuli we have observed. Verbal codes (logogens) are arbitrarily chosen representations that stand for something they do not perceptually resemble. The two forms of mental code also have subsets of mental representations that differ according to the sensory experiences from which they originated, and these subsets have sensorimotor qualities as well. Visual representations for verbal codes (e.g., letters) and verbal representations for images (the word "dog" for the visual image of a dog) also exist, as do other associations for nonverbal images (e.g., an image of a pizza will also have haptic, gustatory, and olfactory associations). However, we experience modality-specific interference, as we have difficulty doing two different things in the same modality at the same time (e.g., listening to two conversations at once; Sadoski & Paivio, 2004). All of these memory units are evolving and flexible. Both can be activated either directly (direct sensory input) or indirectly via associations (e.g., the word "tiger" could evoke words or images of cat, lion, zoo, etc.).

There are sequential constraints for logogens (e.g., the letters d-o-g can be sequenced as dog or god but not dgo). Speech and print are sequential and linear in nature, and larger units can be broken down into smaller units (sentences to words; words to letters) and vice versa (Sadoski & Paivio, 2004). The hierarchical organization of imagens is continuous and integrated; imagens do not break down in the same linear manner as logogens, but instead form clusters of smaller units that can be combined into larger units. Verbal and nonverbal units, then, are organized and represented differently.
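The sequential constraint on logogens can be illustrated with a small sketch. The mini-lexicon below is a made-up assumption for illustration, not from the source: of all possible orderings of a set of letters, only those matching a stored verbal unit count as logogens.

```python
# Toy illustration of the sequential constraint on logogens:
# only some orderings of a letter set correspond to stored verbal units.
from itertools import permutations

# Hypothetical mini-lexicon of stored word-level logogens (illustrative only)
LEXICON = {"dog", "god"}

def valid_orderings(letters):
    """Return the orderings of `letters` that match a stored logogen."""
    return sorted({"".join(p) for p in permutations(letters)} & LEXICON)

print(valid_orderings("dog"))  # ['dog', 'god'] -- but never 'dgo'
```

Non-words such as "dgo" are filtered out because no logogen exists for that sequence, mirroring the sequential constraint described above.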


DCT recognizes three levels of processing (Sadoski & Paivio, 2004):

1. Representational processing depends on sensory stimulation and individual processing differences. It is simply the initial activation of logogens and/or imagens (recognition of stimuli, but not necessarily comprehension). With regard to reading, the sensory stimulation factor would be the legibility of the print, and the individual differences factor would be the reading ability of the person.

2. Associative processing often involves comprehension and is the spreading activation in semantic memory that the stimulus produces. This type of spreading activation occurs within a code. At the verbal level, this involves the activation of logogens, at least at the morpheme level, from a previously activated logogen.

3. Referential processing is spreading activation between the codes and is associated with meaningful comprehension. Thus, when reading, activated logogens also activate imagens; some might activate no imagens (typically abstract words/phrases), some a few, and some many. Mental imagery assists with making sense of the world and comprehending stimuli, especially during reading. Activated logogens can spread associations to certain imagens, which in turn can refer back to the verbal system. The spreading activation within and between the two codes defines language and expands its meaning.
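The three levels above can be sketched as a toy spreading-activation model. The association tables and function below are illustrative assumptions for exposition, not Sadoski and Paivio's actual network:

```python
# Minimal sketch of DCT's three processing levels as spreading activation.
# The association tables here are invented for illustration.

# Associative links WITHIN the verbal code (logogen -> related logogens)
VERBAL = {"tiger": ["cat", "lion", "stripes"]}

# Referential links BETWEEN codes (logogen -> imagens); abstract words
# such as "justice" activate few or no imagens.
REFERENTIAL = {"tiger": ["<image: tiger>", "<image: zoo>"], "justice": []}

def process(word):
    # 1. Representational: initial activation of the logogen itself.
    activated = {word}
    # 2. Associative: spreading activation within the verbal code.
    activated.update(VERBAL.get(word, []))
    # 3. Referential: spreading activation across to the nonverbal code.
    imagens = REFERENTIAL.get(word, [])
    return activated, imagens

words, images = process("tiger")
print(sorted(words), images)
```

A concrete word like "tiger" fans out to both verbal associates and imagens, while an abstract word like "justice" activates the verbal code but few or no imagens, matching the pattern described for referential processing.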


DCT explains reading at the levels of decoding (Sadoski and Paivio prefer "recoding," as it refers to converting printed stimuli to spoken form), comprehension, and response. Sadoski and Paivio (2004) illustrate how DCT relates to reading with the sentence "The batter singled to center field." The reading process begins with the eyes fixating on the words in the sentence. The words The batter are activated at the representational level along with auditory (and motor) logogens, and spreading activation works rapidly, with the article The signaling the use of batter as a noun. The next fixation falls on the word singled, which is recognized as a verb due to the -ed suffix. The next fixation moves toward to center, which carries various associations but in context is understood as center (field). The final fixation goes to in the first, understood in context as the first (inning). The whole process takes about two seconds. Several other important observations follow:

(1) Extensive grammatical parsing is not conscious and may not be complex.

(2) Text is represented in two codes and in at least two modalities, auditory-motor (e.g., inner speech) and visuo-spatial (e.g., imagery). Both can be elaborated in different ways.

(3) Comprehension is the result of equilibrium in the network. Verbal and nonverbal associations correspond to and constrain one another to avoid random searches. Sadoski and Paivio's mental model refers to the total verbal-nonverbal aggregate.

(4) Meaning is derived from a network of activated verbal and nonverbal representations. A number of inferences are not covered by the sentence, such as whether the hit was a fly ball or a ground ball, or whether it was sunny or cloudy. These are probabilistic, and the individual may infer conditions not specified based on experience, understanding, and so on. Likewise, the response of the reader implies a mental model and will depend on individual differences, experience, personality, etc. (e.g., one reader might imagine the event; another might feel the sensation of hitting the ball).
If sentences are more abstract in nature, the individual will rely more on verbal associations.

However, the model has several critics. DCT requires both verbal and nonverbal codes, and some of the criticisms of DCT concern its notion of mental imagery. For example, blind children perform equivalently to sighted children on DCT processing tasks, suggesting that the theory may be missing something. This led Pylyshyn (1973) to argue that images are not stored; instead, information is stored as mental propositions. Moreover, Sternberg (1969) demonstrated that visual search in memory was serial, which runs counter to DCT's predictions.


DCT argues that visual and verbal information is processed differently. The model is simple and predicts that pairing visual and verbal codes, keeping early reading lessons concrete, and using repetition will enhance classroom learning. Multimedia products should make useful teaching aids and can improve the development of phonics, vocabulary, and grammar.


Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart, and Winston.

Pylyshyn, Z.W. (1973). What the mind's eye tells the mind's brain: A critique of mental imagery. Psychological Bulletin, 80, 1-24.

Sadoski, M., & Paivio, A. (2004). A dual coding theoretical model of reading. In R.B. Ruddell & N.J. Unrau (Eds.), Theoretical models and processes of reading (5th ed., pp. 1329-1362). Newark, DE: International Reading Association.

Sternberg, S. (1969). The discovery of processing stages: Extensions of Donders' method. Acta Psychologica, 30, 276-315.


