Readability Anyone Using Microsoft Word Can Determine Term Paper

  • Length: 8 pages
  • Subject: Teaching
  • Type: Term Paper
  • Paper: #89271326

Excerpt from Term Paper :


Anyone using Microsoft Word can determine the Flesch-Kincaid readability score for their own work by doing little more than running spell-check from the top navigation bar on their computer screen. It is conceivable that the ubiquitous presence of such an easily used readability scale could have an effect similar to that of spell-check and hand-held calculators. Those who use spell-check often lament the fact that they lazily allow the 'machine' to spell for them and have forgotten some of the words they once knew. Likewise, those who have come to depend on calculators often lament the fact that they can no longer do the simple arithmetic functions in their heads. If use of the Flesch-Kincaid program in Microsoft becomes equally ubiquitous, will people forget how to write anything except the simplest sentences? Worse still, will they be unable to express the complex thoughts that are represented by low readability scores?
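The formula behind the score Word reports can be sketched by hand. Below is a minimal illustration using the standard published Flesch-Kincaid grade-level coefficients (0.39 × words per sentence + 11.8 × syllables per word - 15.59); the syllable counter is a naive vowel-group heuristic, only an approximation of what a commercial implementation such as Word's actually does:

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level with the standard coefficients."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Very short, simple sentences can yield a grade level below zero.
grade = flesch_kincaid_grade("The cat sat on the mat. It was warm.")
```

Note that the formula can return values below zero or above twelve; Word simply clamps its display to a school-grade range.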

These concerns have been raised by critics of using readability indexes; in fact, except for the academic research dealing with the education of the intellectually challenged, testing for readability has fallen out of favor. A look at what readability is, how it came about, and its likely effects on learning will reveal whether such a statement is accurate or not.

What is readability?

Defining readability is difficult without using language that describes what it does. According to Templeton et al. (1982), the concept applies to that within the use of language that makes it easy or difficult to read. They note, too, however, that years of research have not solved the problems of whether and how such variables as vocabulary, sentence length and syntactic complexity should be assessed. Other researchers also point to the 'ineffable' content of written work -- the unique choices of scansion, phrasing and so on that a writer makes that render a work more or less readable; these are things that, so far, programs have not been designed to read. "In a 'state of the art' review of readability research, Harris and Jacobson suggested that substantive advances in the determination of readability could only be achieved by moving beyond the traditional variables. This may be taken to mean that language per se is not the only concern in establishing text difficulty ...." (Templeton et al. 1982).

However, simplifying matters has appealed greatly to one constituency, educators (Fry, 1968; McLaughlin, 1969; Spache, 1953, cited by Templeton et al. 1982, 382-387). That is not surprising, considering the genesis of the concept. Fry (2002) noted that by readability, most reading professionals mean applying one or another readability formula to the work in question. This, Fry thinks, is short-sighted at best, because "true readability does have a more general meaning found in popular dictionaries, such as 'easy or interesting to read -- capable of being read'" (The Random House Dictionary of the English Language, 1983, cited by Fry 2002). In classrooms and publishing houses, readability is often conceived of as a number derived by applying a formula concerning word and sentence length, and little else, to a piece of written work (Fry 2002).

History of readability

The current preference for exact, concise scientific and scholarly writing began more than a century before Ben Franklin's time. Goldbort (2001) notes:

By the early 1600s, Francis Bacon, the so-called 'father of modern science,' already had successfully promoted an economical and objective use of the English language. Bacon argued that scientific and scholarly writings were duty-bound to draw attention to their content, not their words. Readable and thereby useful scientific and academic writing should avoid flowery, convoluted sentences and instead strive for simple, unambiguous, and mathematically plain language.

In colonial times, reading instruction usually involved little more than learning the alphabet before jumping directly into study of the Bible. In 1836, William Holmes McGuffey developed the first set of reading instruction books -- readers -- meant to move beginning readers up through levels of difficulty. As the number ascended (1, 2, 3, 4, etc.), so did the difficulty of the books. The number did not correspond to a grade level, although these are referred to as 'leveled' readers (Fry 2001).

McGuffey's Readers were enormously popular, selling more than 130 million copies between the 1840s and the early part of the twentieth century. The influence of the McGuffey's Readers would be difficult to impugn. The total population of the United States in 1850 was 23 million; that figure had grown to 76 million by 1900, with a "very high percentage of the school population (using) McGuffey's leveled readers" (Fry 2002).

By the end of the twentieth century, educators had become disenchanted with the graded, or leveled, readers because they contained very restricted vocabularies and content. Literature-based reading material replaced these volumes, offering more interesting stories but lacking leveling.

While it was difficult to find proponents of readability as a measure of a reading textbook's usefulness, leveling -- which takes other factors into account beyond simple word and sentence length -- was popularized by Marie Clay, whose Reading Recovery system provided reading tutoring for children who were likely to fail.

Are there still proponents of readability indices?

Clearly, some educators are no longer proponents of readability indexes. However, some commercial companies have worked with traditional readability formulae based on sentence length and vocabulary difficulty and have added sophistication to the assessments by using computers. Because computers can assess much larger samples, or even the entire contents of books, these systems are able to produce finer gradations than whole-grade levels, even at the primary levels (Fry 2001).

These modern versions of readability indices produce their own reading achievement tests that correlate with their readability units; however, other tests in general use can also be correlated with them. These resources include Lexile unit assessment by Metametrics (Zakaluk & Samuels, 1995, cited by Fry 2001), ATOS grade levels by Advantage Learning Systems (2000, cited by Fry 2001), and Degrees of Reading Power units by Touchstone Applied Science Associates (1999, cited by Fry 2001).

Other readability formulae can be applied by hand. Below is a listing, along with the equation for using each type:

"a. Dale and Chall

Comprehension = .1579 (percent words not on Dale-Chall list of 3000 common words) + .0496 (words/sentences) + 3.6365.

"b. Gunning

Readability index = .4 (mean sentence length + % words over 2 syllables).

"c. Fry

Grade level = intersection of values for sentence length and word length measured in syllables on the Fry Readability Graph; factors are weighted differently for earlier vs. later grades" (Anderson et al. 1984, 175-190).
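The two purely arithmetic formulas above can be sketched directly in Python. In this sketch, the percentages and counts are assumed to have been tallied already: the Dale-Chall list of 3,000 common words is not reproduced here, so the percentage of unfamiliar words is taken as an input, and the Fry method is omitted because it requires reading values off a graph rather than evaluating an equation:

```python
def dale_chall_score(pct_unfamiliar, words, sentences):
    """Dale-Chall comprehension score.

    pct_unfamiliar: percentage of words NOT on the Dale-Chall list of
    3,000 common words (supplied by the caller; the list itself is not
    reproduced here).
    """
    return 0.1579 * pct_unfamiliar + 0.0496 * (words / sentences) + 3.6365

def gunning_index(mean_sentence_length, pct_long_words):
    """Gunning readability index.

    pct_long_words: percentage of words over two syllables.
    """
    return 0.4 * (mean_sentence_length + pct_long_words)

# Example: 10% unfamiliar words, 200 words in 10 sentences.
dc = dale_chall_score(10, 200, 10)
# Example: average sentence of 20 words, 15% long words.
g = gunning_index(20, 15)
```

Note how little each formula actually looks at: one vocabulary measure and one length measure, which is precisely the narrowness Fry criticizes above.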

Additional formats are these:

Grammatik (Computer-based)

Right Writer (Computer-based)

Gunning-Mueller Fox Index [TM] (Grazian 1996, 19+).

Rudolf Flesch Reading Ease Formula [TM]. (Grazian 1996, 19+).

The Fog Index (Grazian 1996, 19+).

CorrectGrammar (Goldbort 2001).

Editor (Goldbort 2001).

Spache (Goldbort 2001).

Writers Workbench (Laband 1992).

Readability Plus (Laband 1992).

LIX Index (Laband 1992).

Laband notes that the correlation among all the programs he investigated in establishing the readability levels of test passages was extremely high, as high as .85 (1992).

The Fog Index is an example of one that may be used by hand. To do so, count a group of words up to the 100th word. If the 100th word falls in mid-sentence, go to the end of the sentence and note the number of words over 100. Then count the number of sentences within that specimen of 100 or so words and divide the number of words by the number of sentences to arrive at the average number of words per sentence (Grazian 1996, 19+). It is not, however, quite that easy. The number of words with three or more syllables in the sample must also be counted, excluding "proper nouns, combinations of short words such as 'bookkeeper' or 'manpower,' or the number of verbs made into three syllables by adding '-ed' or '-es.' Next, add the average number of words per sentence to the number of words with three or more syllables in the 100-word sample. Finally, multiply the sum by .4" (Grazian 1996, 19+).
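The hand procedure translates almost directly into code. The sketch below is simplified: it scores an entire passage rather than a 100-word sample, and it omits the exclusions Grazian describes for proper nouns, short-word compounds, and '-ed'/'-es' inflections, relying instead on a naive vowel-group syllable count:

```python
import re

def syllables(word):
    """Crude syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    """Simplified Gunning Fog Index over a whole passage.

    Skips the 100-word sampling step and the proper-noun/compound/
    inflection exclusions described in the hand procedure.
    """
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_len = len(words) / len(sents)
    hard = sum(1 for w in words if syllables(w) >= 3)
    pct_hard = 100 * hard / len(words)
    return 0.4 * (avg_sentence_len + pct_hard)

# Short, monosyllabic sentences produce a very low index.
score = fog_index("The cat sat. The dog ran.")
```

Even this rough version makes the formula's behavior visible: the index rises with longer sentences and with a higher share of polysyllabic words, and with nothing else.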

Grazian pointed out that the average American reads on a 9th-grade level; college graduates struggle with anything above the 16th-grade level. He also points out that many people prefer to read a grade level or two below their best level -- in other words, to remain in their reading comfort zone. To concerns about becoming a stilted writer by assessing one's own work this way, Grazian contends that applying the formula, and revising to remove the 'fog,' will make writers more adept at incorporating its principles into ordinary writing. He also suggested restricting the bulk of one's words to five letters or fewer, which would put one's writing on a par with Lincoln's Gettysburg Address and some of Shakespeare. The obvious inherent problem is that those two authors had concepts to offer; any number of five-letter words signifying nothing may be easy to read, but would certainly not be great writing.

It is also obvious that the purpose of readability tests cannot be to create…

Cite This Term Paper:

"Readability Anyone Using Microsoft Word Can Determine" (2005, June 03) Retrieved January 17, 2017, from