No one has found a way to really help writers create readable prose. Robert Gunning developed a method for calculating the 'Fog Index', and Rudolf Flesch worked out more than one formula for measuring the simplicity of writing. By one of Flesch's formulas (the one without personal pronouns), Ronald S. Lemos, in the February 1985 issue of Communications of the ACM (CACM), was able to prove that CACM required two fewer years of school to read than Datamation. Statistics can prove anything; I have no idea which high-school sophomore could read CACM cover to cover and understand most of it. Flesch's book 'The Art of Plain Talk' was given to me at a Yourdon Systems Analysis course. The instructor handed a copy to each of us, saying something like 'read this and you'll be a manager in no time' (supposedly, management is handed to the least efficient person who can also write well). The book is full of examples, mostly journalistic, showing how good writers evoke human interest. Of course, those writers had human events, thoughts, and feelings as their focal points, not software. I doubt whether any of the graduates of that week ever used Flesch as a reference for grading their own documentation. How would Bernard Shaw have documented software? Or Mingus played it? This paper addresses these burning issues.
Some reading researchers and technical communicators assume the efficacy of readability formulas. Reading researchers use such formulas to equalize the reading difficulty of texts used in experiments. Results of an informal Internet survey indicate that some professional writers and editors use readability formulas that are integrated into word-processing software. This article proposes that readability formulas fail to predict text difficulty. The results of an experiment demonstrate that "text difficulty" is a perception of the reader and therefore cannot be objectively calculated by counting syllables, word length, sentence length, and other text characteristics.
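The counting that such formulas perform can be made concrete. Below is a minimal sketch of the Flesch Reading Ease calculation; the formula's coefficients are the standard published ones, but the word tokenizer and the vowel-group syllable counter are naive assumptions added for illustration, not any tool's actual implementation.

```python
import re

def count_syllables(word):
    """Naive heuristic: count vowel groups; real tools use dictionaries or richer rules."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # Rough rule of thumb: a trailing silent 'e' usually does not add a syllable.
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A short, monosyllabic sentence such as "The cat sat on the mat." scores above 100 (very easy), while long, polysyllabic sentences drive the score down; the point of the experiment summarized above is that this arithmetic may still not track what readers actually find difficult.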
Discusses the application of readability principles and formulas. It is based on the survey of the literature presented in succeeding chapters and represents an interpretation of those data.
I know from some years running a reading clinic in the United States that you can make more progress if you start the students out on relatively easy reading material, so that the students can read the materials with some comprehension and success. But how do you tell these African instructors how to select "relatively easy" reading materials in technical English? The answer: use a readability formula.
There is a considerable amount of information published on the Web that is intended to be read by someone. There is evidence that much of this information may be too hard for typical readers to read and understand. Baker, Wilson and Kars (1997) reported that the readability scores of most articles in the 'Health Reference Center' ranged from 10th to 14th grade levels. Another study (Graber, Roller and Kaeble, 1999) included text-based information from commercial, academic and government sites; it found that the reading material averaged the 10th grade level. In a more recent study, a group of researchers (D'Alessandro, et al., 2001) conducted readability analyses of pediatric patient education materials on the Web and concluded that the information was not written at an appropriate reading level for typical users.
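Grade-level figures like those cited above are typically produced by formulas such as the Flesch-Kincaid Grade Level. The sketch below uses the standard published coefficients; the syllable heuristic is a rough assumption for illustration, and the tools used in the cited studies may count words and syllables differently.

```python
import re

def count_syllables(word):
    """Rough vowel-group heuristic; real analyzers use dictionaries or better rules."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1  # a trailing silent 'e' usually adds no syllable
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

By this metric, "The cat sat on the mat." scores below first grade, while dense health or legal prose easily reaches the 10th-to-14th-grade range reported in the studies above.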
A literature review reveals many technical weaknesses of readability formulas (when compared to direct usability testing with typical readers): they were developed for children's school books, not adult technical documentation; they ignore between-reader differences and the effects of content, layout, and retrieval aids on text usefulness; they emphasize countable features at the expense of more subtle contributors to text comprehension.
While readability formulas were intended as a quick benchmark for indexing readability, they are inherently unreliable: they depend on criterion (calibration) passages too short to reflect cohesiveness, too varied to support between-formula comparisons, and too text-oriented to account for the effects of lists, enumerated sequences, and tables on text comprehension. But readability formulas did spark decades of research on what comprehension really involves.
Have you ever felt as though technical communicators don’t get the proper respect for the work they do? This isn’t just a problem of identifying with the comedian Rodney Dangerfield. And I’m not just asking whether you go home at the end of the day with your feelings hurt, although job satisfaction is certainly an important factor. But even more important, you need to have the respect of your boss and co-workers to be effective in your job and to advance your career.
After World War II, the United States government and citizenry were concerned with truth, propaganda, democracy, and national security as they entered the Cold War era. This was a time when technocrats, engineers, and scientists could lead our free-world government through the perils of our tense relationships with Russia, Red China, and Korea. In the early 1940s, Rudolf Flesch began developing what he termed a "scientific rhetoric" to help writers of functional documents more effectively communicate technical information to a general public. He came up with a readability formula to help writers evaluate whether their writing was effective, and that formula has profoundly shaped notions of "clear writing" for the last 60 years. This article explores Flesch's development of his readability formula, placing his work in a historical context, and discusses how the readability formula fit into a larger project to make effective writing more of a science than an art.
A survey of 39 communicators in high-tech industries reveals low use of computerized readability formulas. Both technical and business communicators find current measures ill suited for the process or product of technical writing.