How do comments on student writing from peers compare to those from subject-matter experts? This study examined the types of comments that reviewers produce as well as their perceived helpfulness. Comments on classmates’ papers were collected from two undergraduate and one graduate-level psychology course. The undergraduate papers in one of the courses were also commented on by an independent psychology instructor experienced in providing feedback to students on similar writing tasks. The comments produced by students at both levels were shorter than the instructor’s. The instructor’s comments were predominantly directive and rarely summative. The undergraduate peers’ comments were more mixed in type; directive and praise comments were the most frequent. Consistently, undergraduate peers found directive and praise comments helpful. The helpfulness of the directive comments was also endorsed by a writing expert.
Collaboratively written by thousands of people, Wikipedia produces entries that are consistent with criteria agreed upon by Wikipedians and are of high quality. This article focuses on Wikipedia’s Featured Articles and shows that not every contribution is of equal quality. Two groups of articles are analysed, focusing on the distribution of edits and the main editors’ contributions. The research shows how these aspects of the revision patterns change depending on the category to which the articles belong.
Organizations are in urgent need of professional review processes for their intranets and public websites. Out-of-date content grows year by year, and many horror stories are waiting to happen. It’s time for management to get serious and professionally manage their websites.
Provides an effective method for checking the content accuracy, completeness, and logical order of a document. Notes that this technique is not a substitute for more careful review when time and the document's importance allow.
What is the influence of demographic variables such as gender and educational level on the reader feedback collected under the plus-minus method? To answer this question, an analysis was made of the problems detected in four public information brochures. The average amount of feedback per participant did not vary among the four brochures, but the severity of the problems did. Male participants mentioned more problems than female participants, but the problems detected by female participants were on average more severe. Highly educated participants detected more problems than participants with a lower level of education. No differences in problem types mentioned were found between male and female participants, and only one difference was found between the two educational levels: Highly educated participants focused more strongly on the structuring of information. In general, brochure characteristics had more effect on the types of feedback collected than the two demographic participant characteristics.
Like most technical writers, I find that getting my feature team to review my help topics for technical accuracy is like keeping an Iditarod team from making a dash for the nearest McDonald’s or garbage dump in the middle of a blinding blizzard. Technical contributors want to participate in documentation reviews, but they rarely have enough bandwidth to do so effectively. Consequently, I spend a lot of time trying to determine the most effective way to squeeze feedback out of my teammates. This can be a painstaking process, especially for technical writers unlucky enough to work with teams that are halfway around the world or spread across the country. Some contributors produce only if I corner them in their office with a paper copy. Others are overly motivated, but I love them all the same. Most technical reviewers, at least at Microsoft, require a combination of incentives (food, beer, ...), attention getters (a stern note from their manager), and tech review tools that fit their working style and team culture.