Success in a global marketplace increasingly demands that companies engage customers from diverse backgrounds in both discussions and usability studies. However, funding for user-research travel is shrinking, and local users who satisfy diversity requirements are often in short supply. UX professionals have therefore turned to remote usability testing methods to gather adequate user feedback.
The proliferation of usability labs is a sign of success for the field of user-centered design. Whether it’s a low-rent lab comprising a couple of adjacent conference rooms, a video camera, and a television, or a fully decked-out space with remote-control cameras, two-way mirrors, an observation room, and bowls of M&Ms — more and more companies are investing in such set-ups. Conducting user tests in labs is probably the most common means of getting user input on projects. That’s a shame, because standard user testing practice is remarkably out of sync with reality.
New screen-sharing and remote usability tools make it easier to conduct moderated remote usability testing, and dealing with video and audio recordings keeps getting simpler as well. But observing people remotely presents a unique set of obstacles, so this is a guide to what we’ve learned from conducting 149 remote studies with 1,213 participants over the last seven years. We can’t get that time back, but hopefully some of what we’ve picked up will be helpful.
Journaled sessions bridge usability inquiry, where you ask people about their experiences with a product, and usability testing, where you observe people experiencing the product's user interface. Journaled sessions are often used as a remote inquiry method for software user interface evaluation. A disk containing a prototype of the software product is distributed to a number of test subjects, along with additional code to capture (or journalize) the subjects' actions when using the prototype. Users perform several tasks with the prototype, much as in formal usability tests, and their actions are captured by the journalizing software. Upon completing the series of tasks, the users return the disks to you for evaluation. Because the journaling portion of the evaluation is largely automated, this approach to remote, hands-off inquiry is certainly more 'usable' than self-reported logging, where users are asked to write down their observations and comments and send them back to you.
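The journaling idea above can be sketched in a few lines. This is a minimal illustration, not the implementation of any tool mentioned here: the `Journal` class, its method names, and the example actions are all assumptions made for the example. Each user action is appended to a timestamped log that the participant later returns to the evaluator.

```python
# Minimal sketch of journaling software for a remote session.
# Hypothetical names throughout; no real tool's API is implied.
import json
import time


class Journal:
    """Append-only log of user actions during a prototype session."""

    def __init__(self):
        self.entries = []

    def record(self, action, detail=""):
        # Capture what the user did and when, for later offline review.
        self.entries.append({
            "t": round(time.time(), 3),
            "action": action,
            "detail": detail,
        })

    def save(self, path):
        # The participant returns this file to the evaluator.
        with open(path, "w") as f:
            json.dump(self.entries, f, indent=2)


journal = Journal()
journal.record("open_dialog", "Preferences")
journal.record("click", "OK button")
journal.save("session_journal.json")
print(len(journal.entries))  # number of journaled actions
```

In practice the recording hooks would be wired into the prototype's event loop, but the key property — automated capture, with no self-reporting burden on the user — is the same.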
There’s nothing specifically wrong with in-person research. But there is that whole Internet thing that’s been happening. It does have some unique properties we can take advantage of to do things that weren’t possible with old-school research.
User research doesn’t have to be expensive and time-consuming. With online applications, you can test your designs, wireframes, and prototypes over the phone and your computer with ease and aplomb. Nate Bolt shows the way.
Participants ask questions live (via a phone connection, Internet audio, or a typed chat session). Some usability testing products (such as ErgoLight) enable you to test remotely when you cannot make an online connection, but they are not covered in this survey. These products are classified as Remote Control, Support Desk/Customer Service, Telecommuting, System Administration, and Video Chat tools. Many of the products have a Recorder and/or Playback facility, which is probably a natural extension of remote viewing.
Web sites have become a key communication medium, and usability is an essential factor in good web site design. Usability testing is an inexpensive way to gather valuable feedback from representative users, which can help web designers and content creators make their site more usable and relevant to its audiences. This paper examines the benefits of remote online testing over more traditional face-to-face methods. Remote online testing provides access to a larger pool of potential testers, cuts out travel time, and can significantly lower the cost of usability testing. Although the benefit of face-to-face contact is lost, research shows this method is just as effective in identifying usability issues as traditional testing. The UNECE Statistical Division recently conducted tests of its current web site (www.unece.org/stats) as a basis for redesigning the site’s information architecture and establishing a benchmark for future usability studies. Tests were conducted remotely using online conferencing software, allowing testers to be truly representative of our geographically dispersed users and significantly reducing costs.
The Graphical User Interface (GUI) of software usually consists of a huge number of icons. Though the intention is to improve the usability of the software, not all interface designers are able to test and evaluate the comprehensibility of their icons. Increasing exposure to unevaluated icons causes cognitive fatigue in users and slows intuitive learning. Users from diverse geographic locations, cultures, and religions are very likely to interpret and understand these icons differently. As software products are designed to address universal needs, testing and evaluating the GUI across the globe, or at least wherever the product is likely to be used, becomes important. Creating dedicated usability labs in various locations for usability testing is not a viable proposition. A software tool named 'UniFace' for remote usability testing of icons has been designed, capitalizing on the far-reaching capability of the Internet. UniFace extends the usability lab onto the desktop of every user.
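The kind of analysis such a tool enables can be sketched simply. UniFace's internals are not described above, so this is one hypothetical way remote icon-comprehension responses might be scored: the icon names, locales, responses, and the exact-match scoring rule are all assumptions made for the example.

```python
# Illustrative scoring of remote icon-comprehension responses.
# All data and the matching rule are hypothetical.
from collections import defaultdict

# (icon, participant locale, free-text interpretation) tuples,
# as might be collected from remote participants over the Internet.
responses = [
    ("floppy_disk", "en-US", "save"),
    ("floppy_disk", "hi-IN", "storage box"),
    ("magnifier", "en-US", "search"),
    ("magnifier", "hi-IN", "search"),
]

# Intended meaning of each icon, as defined by the designer.
intended = {"floppy_disk": "save", "magnifier": "search"}

# Comprehension rate per icon: fraction of participants whose
# interpretation matches the intended meaning.
hits = defaultdict(int)
totals = defaultdict(int)
for icon, locale, answer in responses:
    totals[icon] += 1
    if answer.strip().lower() == intended[icon]:
        hits[icon] += 1

rates = {icon: hits[icon] / totals[icon] for icon in totals}
print(rates)  # {'floppy_disk': 0.5, 'magnifier': 1.0}
```

Breaking the same tallies down by locale would surface exactly the cross-cultural interpretation differences the paragraph above describes.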
Recently, there has been a surge in the number of tools that are available for conducting unmoderated, remote usability testing—and this surge is changing the usability industry. Whether we want to or not, it forces us to take a closer look at the benefits and drawbacks of unmoderated testing and decide whether we should incorporate it into our usability toolbox.