How you ask a question strongly determines the kind of answer you will get. For effective documentation reviews, whether conducted internally or externally as part of usability testing, it's important to ask precise questions that yield concrete information on which to base revisions. This paper proposes an approach to obtaining useful feedback that emphasizes negative, 'what did we do wrong?' questions. This approach focuses limited resources on areas that need improvement rather than on areas that already work well.
Cognitive friction creates a digital divide between the software development community and software users. The divide, in turn, correlates directly with the usability of the application: how well users can learn and use the application or product to perform their tasks and accomplish their goals. Today's technical communicators can help bridge this divide and reduce cognitive friction by applying industry-acclaimed usability techniques to the documentation they produce, thereby accelerating user acceptance of the product. Less cognitive friction means better user adoption, which results in fewer calls to tech support, higher customer satisfaction, and, in the long run, stronger brand loyalty.
The purpose of this paper is to provide some background on my position on usability measurement. I'll present the measures I typically collect and the differences between performance and preference data. Having this as a starting place may help us have a productive discussion.
Who are the intended users and what are their tasks? (Why will they use the system? What is their experience and expertise?) What are the technical and environmental constraints? (What types of hardware will be used in what organisational, technical and physical environments?)
This paper focuses on the interaction between test participants and test facilitator in two variants of the think-aloud method. In an initial, exploratory study, we analyzed think-aloud transcripts from two usability tests: a concurrent think-aloud test and a constructive interaction test. The results of our analysis show that while the participants in both studies never explicitly addressed the facilitator, the think-aloud participants showed more signs of awareness of the facilitator than the participants in the constructive interaction test. This finding may have practical implications for the validity of the two methods.
Anthropologist Elizabeth Briody earned her PhD studying communities of Mexican-American farm workers and Catholic nuns. For the past 11 years, though, she's been studying a different community -- the men and women of General Motors. As GM's 'industrial anthropologist,' Briody explores the intricacies of life at the company. It's not all that different from her previous work. 'Anthropologists help elicit the cultural patterns of an organization,' she says. 'What rules do people have about appropriate and inappropriate behavior? How do they learn those rules and pass them on to others?' Briody is a pioneer in a growing and influential field -- corporate anthropology. What began as an experiment in a handful of companies such as GM has become an explosion. In recent years, some of the biggest names in business have recruited highly trained anthropologists to understand their workers and customers better, and to help design products that better reflect emerging cultural trends. These companies are convinced that the tools of ethnographic research -- minute observation, subtle interviewing, systematic documentation -- can answer questions about organizations and markets that traditional research tools can't.
One common concern raised by managers and engineers alike is this: how easy to use is enough? This question, and the absence of an easy answer, is often the first defense people offer against investing in usability and ease of use. The smart usability engineer or designer has at least one response: the usability benchmark. By capturing the current ease of use of the product or website, you create a reference point against which future measurements can be compared. It doesn't answer the question of how usable is enough, but if the benchmark is done properly, it does enable you to set goals and expectations around ease of use for the future.
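To make the idea of a benchmark concrete, here is a minimal sketch of how baseline figures might be captured from a test session. The data values, metric names, and the choice of task-success rate plus median task time are all illustrative assumptions, not part of the original article:

```python
import statistics

# Hypothetical baseline data from eight participants attempting one task:
# per-participant task times in seconds, and whether each succeeded.
task_times = [95, 120, 88, 140, 102, 110, 97, 131]
successes = [True, True, False, True, True, False, True, True]

# The benchmark is simply these summary figures, recorded for later comparison.
benchmark = {
    "success_rate": sum(successes) / len(successes),
    "median_time_s": statistics.median(task_times),
}
print(benchmark)
```

A future test of a redesigned product can then be compared against these stored figures, for example by setting a goal such as "raise the success rate above the benchmark while holding median task time steady."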
What are we to do if, during usability testing, we really need to get some sort of handle on Internet experience, perhaps for comparison across usability test sessions or for measuring progress in some way?
Only a few of the survey sites we analyzed in 2000 are still around. We can safely assume that the surviving sites are not a random sample of the original group, but rather that significant differences exist between the sites that made it and those that died. Survival might be due partly to luck, but it is mainly a result of good management and an understanding of Internet fundamentals. Thus, the surviving sites are likely to be disproportionately clued-in about what it takes to run an online business.
Just what do we mean by usability? Before we can set out to achieve it, we need to understand what it is we are trying to achieve. It's not enough to declare that from here on, our software will be more user-friendly or that we will now be customer-focused.
In Eyetrack III, we observed 46 people for one hour as their eyes followed mock news websites and real multimedia content. In this article we'll provide an overview of what we observed.
Focus groups are popular amongst marketing professionals for good reason. They are relatively quick to organise and the feedback is instantaneous. A wide range of views can be assembled from people from a wide range of backgrounds. When focus groups go well, the data can be extremely useful in identifying profitable design routes. Plus any technique that gets companies closer to their customers can't be all bad.
Usability testing is a powerful tool for identifying problems and issues that users may have with a website or software application. But for all its benefits, traditional testing does not necessarily give a complete picture of how effective a site or application is at meeting business goals.
Brainstorming is an individual or group process for generating alternative ideas or solutions for a specific topic. Good brainstorming focuses on the quantity and creativity of ideas: the quality of ideas is much less important than the sheer quantity. After ideas are generated, they are often grouped into categories and prioritized for subsequent research or application.
The way you communicate the personas and present your deliverables is key to ensuring consistency of vision. Without that consistency, you'll spend far too much time arguing with your colleagues about who your users are rather than how to meet their needs.
Well-designed customer surveys can yield valuable information for your business. Unfortunately, though, a poorly worded survey can set you marching off in exactly the wrong direction. Below are some tips on designing surveys to get reliable, useful data.
In this interactive session, attendees will practice their skills in interviewing users, creating task scenarios from the users’ perspective, and turning the task scenarios into designs for information products.
This article reviews 6 simple but powerful research techniques you can use to improve the information architecture of your product or web site. None of these activities requires a computer. You simply need a bunch of cards, a participant and a desk.
This is a method for discovering the latent structure in an unsorted list of statements or ideas. The investigator writes each statement on a small index card and asks six or more informants to sort these cards into groups or clusters, working on their own. The individual sorts are then combined and, if necessary, analysed statistically.
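One common way the individual sorts are combined is a co-occurrence matrix: for each pair of cards, count the fraction of informants who placed the pair in the same group. The sketch below illustrates this under assumed data; the card names and the three example sorts are hypothetical, and real analyses often feed such a matrix into hierarchical clustering:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sorts: each informant independently grouped the same five cards.
sorts = [
    [{"login", "password"}, {"search", "browse"}, {"checkout"}],
    [{"login", "password", "checkout"}, {"search", "browse"}],
    [{"login", "password"}, {"search"}, {"browse", "checkout"}],
]

# Count how often each pair of cards lands in the same group.
pair_counts = defaultdict(int)
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Normalise to the fraction of informants who grouped each pair together.
similarity = {pair: n / len(sorts) for pair, n in pair_counts.items()}
for pair, s in sorted(similarity.items(), key=lambda kv: -kv[1]):
    print(pair, round(s, 2))
```

Pairs with a similarity near 1.0 (here, "login" and "password") reveal the latent structure: cards that informants consistently see as belonging together.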