A directory of resources in the field of technical communication.



Usability testing is a technique used to evaluate a prototype of a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users will use the system.



Accentuate the Negative: Obtaining Effective Reviews Through Focused Questions   (peer-reviewed)   (members only)

How you ask a question strongly determines the type of answer that you will obtain. For effective documentation reviews, whether they are conducted internally or externally as part of usability testing, it's important to use precise questions that will provide concrete information on which to base revisions. This paper proposes an approach to obtaining useful feedback that emphasizes negative, 'what did we do wrong?' questions. This approach focuses limited resources on areas that need improvement rather than areas that already work well and that don't require immediate improvement.

Hart, Geoffrey J.S. Technical Communication Online (1997). Articles>Usability>Methods>Testing


Accuracy vs. Insights in Quantitative Usability

Better to accept a wider margin of error in usability metrics than to spend the entire budget learning too few things with extreme precision.

Nielsen, Jakob. Alertbox (2011). Articles>Usability>Testing>Methods


Analyzing and Reporting Usability Data

The Just-In-Time (JIT) method of data analysis has the virtues of immediacy, rapid turn-around, and team involvement; however, it has several disadvantages. First, this type of analysis is problem-focused rather than goal-focused: long lists of problems are generated, but with no clear relation to specific usability goals. Second, developers may not be able to fix things immediately, so the context of a problem may be lost by the time it can be addressed. Third, JIT analysis requires that the entire development team observe the testing sessions, since problems may arise that are the responsibility of different developers.

Wilson, Chauncey E. Usability Interface (1997). Articles>Usability>Testing>Reports


Analyzing the Interaction Between Facilitator and Participants in Two Variants of the Think-Aloud Method   (PDF)   (members only)

This paper focuses on the interaction between test participants and test facilitator in two variants of the think-aloud method. In a first, exploratory study, we analyzed think-aloud transcripts from two usability tests: a concurrent think-aloud test and a constructive interaction test. The results of our analysis show that while the participants in both studies never explicitly addressed the facilitator, the think-aloud participants showed more signs of awareness of the facilitator than the participants in the constructive interaction test. This finding may have practical implications for the validity of the two methods.

van den Haak, Maaike J. and Menno D.T. de Jong. IEEE PCS (2005). Articles>Usability>Testing>Methods


Asking Questions About Internet Behavior

What are we to do if we really need, during usability testing, to get some sort of handle on Internet experience? Perhaps for comparison across usability test sessions or for measuring progress in some way?

Jarrett, Caroline. UXmatters (2011). Articles>Usability>Testing>Methods


Authentic Behavior in User Testing

Despite being an artificial situation, user testing generates realistic findings because people engage strongly with the tasks and suspend their disbelief.

Nielsen, Jakob. Alertbox (2005). Articles>Usability>Testing


The Benefits of Viewing User Tests

The benefits of user testing have long been established. It is still important, however, to try to maximise those benefits. One way to do this is by viewing the user test yourself.

Frontend Infocentre (2009). Articles>Usability>Testing>Methods


Better Reports: How to Communicate the Results of Usability Testing  (link broken)   (PDF)

You've spent several days setting up a usability test, recruiting the participants and running it. Then you've pored over the data. What next? If you are doing usability testing as part of user-centred design within a business setting, then there are many ways that you can communicate the results. This paper looks at reports and then considers presentation and observation as alternatives to reports.

Jarrett, Caroline. STC Proceedings (2004). Articles>Usability>Testing>Reports


Beyond the Five-User Assumption: Benefits of Increased Sample Sizes in Usability Testing  (link broken)   (PDF)

It is widely assumed that 5 participants suffice for usability testing. In this study, 60 users were tested and random sets of 5 or more were sampled from the whole, to demonstrate the risks of using only 5 participants and the benefits of using more. Some of the randomly selected sets of 5 participants found 99% of the problems; other sets found only 55%. With 10 users, the lowest percentage of problems revealed by any one set was increased to 80%, and with 20 users, to 95%.

Faulkner, Laura. Behavior Research Methods, Instruments and Computers (2003). Articles>Usability>Testing
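Faulkner's resampling approach can be sketched in a few lines. The sketch below simulates it on invented data (the problem pool, detection probabilities, and per-user results are assumptions, not her dataset): draw random subsets of users and measure what fraction of all known problems each subset uncovers.

```python
import random

random.seed(42)

N_USERS = 60
N_PROBLEMS = 40  # hypothetical problem pool

# Each problem gets a detection probability; many problems are hard to hit.
probs = [random.uniform(0.05, 0.6) for _ in range(N_PROBLEMS)]

# Simulate which problems each of the 60 users uncovers in a session.
users = [{p for p in range(N_PROBLEMS) if random.random() < probs[p]}
         for _ in range(N_USERS)]
all_found = set().union(*users)

def coverage(sample):
    """Fraction of all observed problems found by this subset of users."""
    return len(set().union(*sample)) / len(all_found)

# Resample subsets of 5, 10, and 20 users, as in the study.
for size in (5, 10, 20):
    runs = [coverage(random.sample(users, size)) for _ in range(1000)]
    print(f"n={size:2d}: worst={min(runs):.0%}  mean={sum(runs) / len(runs):.0%}")
```

With simulated data like this, the worst-case coverage of a 5-user subset is typically far below that of a 20-user subset, which is the pattern the study reports.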


Beyond the Universal User: How to Design for the Universe of Users   (PDF)

Current 'user-centered' design methods place great value on design for the user. In this paper, I examine how investigation methods for user-centered design, like usability testing and field methods, are often used to design only for the universal user and not the universe of users. I critically explore the universalizing of the user that occurs during these investigation methods. I then address the problems with designing for a universal user and finally present ways to design for the universe of users.

Bowie, Jennifer L. STC Proceedings (2003). Articles>Usability>Testing


Beyond Usability Testing

Usability testing is a powerful tool for identifying problems and issues that users may have with a website or software application. But for all its benefits, traditional testing does not necessarily give a complete picture of how effective a site or application is in terms of meeting business goals.

Farrell, Tom. Frontend Infocentre (2001). Articles>Usability>Testing>Methods


Bias in Usability Testing  (link broken)

What does 'bias' mean in usability testing? Is it bad? good? in between? What are sources of bias in usability testing? Which one(s) should we worry about most? How do we know our methods are any good? Especially when we all do things differently?

Snyder, Carolyn. STC Proceedings (2007). Articles>Usability>Testing


Breaking Things as a Form of Creativity

IT Author’s latest podcast, Testing testing 123, dives into testing. Rather than just commenting on testing from a technical writer’s point of view, Alistair Christie and his co-host Graham Campbell interviewed an actual tester. It’s a good interview with lots of informational nuggets. For example, “regression testing” is testing those software features that were tested previously. Every new feature has the potential to affect other features, so even if you’ve already tested something, you have to test it again.

Johnson, Tom H. I'd Rather Be Writing (2010). Articles>Usability>Testing


Cleaning Up for the Housekeeper, or, Why it Makes Sense to do Both Expert Review and Usability Testing

Contrasts the unique aspects of expert reviews and usability testing. The usability goals they address are different. Know when to use which one, and when to use both.

Straub, Kathleen. Usability Professionals Association (2006). Articles>Usability>Methods>Testing


Clustering for Usability Participant Selection   (peer-reviewed)

User satisfaction and usefulness are measured using usability studies that involve real customers. Given the nature of software development and delivery, usability studies can become a significant expense in the overall budget, and a major part of that expense is participant costs. Under these conditions, it is desirable to reduce the number of participants without sacrificing the quality of the experiment. If a company could use a smaller participant pool and get the same results as the entire pool, this would yield significant savings. Given a participant pool of size N, is there a subset of N that would yield the same results as the entire population? This research addresses that question using a data-mining clustering tool called Applications Quest.

Gilbert, Juan E., Andrea Williams, and Cheryl D. Seals. Journal of Usability Studies (2007). Articles>Usability>Testing>Methods
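The general idea of clustering-based participant selection can be sketched generically: cluster candidate profiles and recruit one representative per cluster. The sketch below is not Applications Quest; it is a minimal k-means over invented participant attributes (age, weekly hours online, self-rated expertise), all of which are assumptions for illustration.

```python
import random
from math import dist

random.seed(1)

# Hypothetical participant profiles: (age, weekly_hours_online, expertise_1to5).
participants = [(random.randint(18, 65),
                 random.randint(0, 40),
                 random.randint(1, 5)) for _ in range(30)]

def kmeans(points, k, iters=50):
    """Plain k-means: assign points to nearest centre, then recompute centres."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist(p, centers[c]))
            clusters[i].append(p)
        # Recompute each centre as its cluster mean; keep old centre if empty.
        centers = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(participants, k=5)

# Recruit the real participant closest to each cluster centre.
selected = [min(c, key=lambda p: dist(p, centers[i]))
            for i, c in enumerate(clusters) if c]
print(selected)
```

In practice the attributes, distance metric, and number of clusters would come from the study's screening criteria rather than random data.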


The Combined Walkthrough: Measuring Behavioral, Affective, and Cognitive Information in Usability Testing   (peer-reviewed)

This paper describes an experiment in studying users’ behavior, emotions, and cognitive processes in single usability testing sessions using an experimental method called the combined walkthrough. The users’ behavior was studied using task times and completion rates, and emotions were studied using bipolar scales for experienced valence and arousal. Cognition was studied after each task by revisiting detected usability problems together with the users and applying an interactive method based on cognitive walkthrough to each usability problem. An interactive media application was tested with 16 participants using these methods. The results of the experiment showed that the developed methods were efficient in identifying usability problems and measuring the different aspects of interaction, which enabled the researchers to obtain a more multifaceted view of the users’ interaction with the system and the nature of the problems encountered.

Partala, Timo and Riitta Kangaskorte. Journal of Usability Studies (2009). Articles>Usability>Testing


Common Industry Format for Usability Test Reports   (PDF)

The purpose of this technical specification is to facilitate incorporation of usability as part of the procurement decision-making process for interactive software products. Examples of such decisions include purchasing, upgrading and automating. It provides a common format for human factors engineers and usability professionals in supplier companies to report the methods and results of usability tests to customer organizations.

NIST (2001). Resources>Usability>Testing>Reports


A Comparative Analysis of Heuristic and Usability Evaluation Methods  (link broken)   (PDF)

Usability testing and heuristic evaluation are two methods for detecting usability problems, or bugs, in software user interfaces. Usability testing identifies bugs that impair user performance. It provides a realistic context for the product evaluation, whereas heuristic evaluation does not. Further, it provides an estimate of bug severity, while heuristic evaluation does not. To its detriment, usability testing is more expensive and time intensive than heuristic evaluation. It also tends to overlook bugs that may not affect user performance but may negatively impact the user’s perception of product quality. Recently, we have incorporated the best features of these two methods into one procedure which seems to work better than either method alone.

Simeral, Elizabeth J. and Russell J. Branaghan. STC Proceedings (1997). Articles>Usability>Testing


Comparing Assessment Techniques  (link broken)   (PDF)

In just the last few years, we have begun to see research studies comparing usability testing to other techniques for assessing usability. In general, usability testing has found more of the most serious usability problems and fewer of the least serious problems than other methods. Heuristic evaluation--having people evaluate the interface either from their own expertise or from a set of guidelines--has achieved mixed results. Although usability testing seems expensive compared to other methods, it may be less expensive when considered on the basis of 'cost per problem that needs to be fixed.'

Redish, Janice C. 'Ginny'. STC Proceedings (1993). Articles>Usability>Testing>Assessment
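Redish's "cost per problem that needs to be fixed" point is simple arithmetic, and a toy comparison makes it concrete (all figures below are invented, not from the paper): a method with a higher total cost can still be cheaper per actionable problem found.

```python
# Invented figures for illustration only: total cost of each assessment
# and the number of problems it surfaced that actually need fixing.
methods = {
    "usability testing":    {"cost": 12000, "problems_fixed": 40},
    "heuristic evaluation": {"cost": 5000,  "problems_fixed": 12},
}

for name, m in methods.items():
    per_problem = m["cost"] / m["problems_fixed"]
    print(f"{name}: ${per_problem:.0f} per problem that needs fixing")
```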


A Comparison of Eye Tracking Tools in Usability Testing  (link broken)   (PDF)

Eye tracking tools have recently attracted attention from usability professionals. Eye tracking offers usability researchers a new way to identify very fine-grained behaviors that indicate usability problems. This paper is a comparison of different types of eye tracking tools and their potential usefulness in usability testing. Specifically, the paper examines the cost of the systems, system types, sampling rate, and some system limitations. The paper aims to provide a basic introduction to technical communicators who are considering adding an eye-tracking system to their toolkit.

DeSantis, Rich, Quan Zhou and Judith A. Ramey. STC Proceedings (2005). Articles>Usability>Testing>Eye Tracking


Complexities Of Usability Testing   (PDF)

Usability testing has proven itself in improving product usability, but planning tests, running them, and interpreting the results are not always straightforward. Interpreting the results of usability testing, deciding on changes to improve usability, and drawing general inferences from specific tests are extremely difficult to do with accuracy. After working through the practicalities and politics of usability testing itself, you must then draw conclusions and support them. People who have done a lot of testing will find these problems familiar.

Ridgway, Lenore S. STC Proceedings (1994). Presentations>Usability>Methods>Testing


Comprehensive Review Of Usability And User Experience Testing Tools

Usability and user experience testing is vital to creating a successful website, even more so if it's an e-commerce website, a complex app or another website for which there's a definite ROI. And running your own user tests to find out how users are interacting with your website and where problems might arise is entirely possible.

Chapman, Cameron. Smashing (2011). Articles>User Experience>Usability>Testing


Conducting International Usability

User testing is a valuable tool, but how does one conduct user tests internationally? This essay draws from my experience leading an international user testing project, and I hope you can learn from my mistakes and successes.

Merholz, Peter. Adaptive Path (2003). Articles>Usability>Testing>International


