When you measure hits on internet or intranet sites, you are measuring the overall volume of usage: how many times parts of your site have been opened. However, hits don't distinguish between the opening of an entire page and the opening of a single illustration. There are many additional ways of measuring usage, and measuring the usability of a site is just as important if you want to improve usage numbers. But the first place any communicator should start when measuring the effectiveness of electronic communications is to identify the original objectives for putting something on-line. That means conducting some baseline audience research upfront to make sure your electronic solutions will be as effective as possible, and then measuring afterward to see whether the intended objectives are being met.
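To make the hits-versus-page-views distinction concrete, here is a minimal sketch in Python, with invented log lines and an assumed (hypothetical) rule for which file types count as pages:

    import re

    # Sample log lines in Apache common log format, invented for illustration.
    sample_log = [
        '10.0.0.1 - - [01/May/2024:09:00:05 +0000] "GET /guide/intro.html HTTP/1.1" 200 5120',
        '10.0.0.1 - - [01/May/2024:09:00:06 +0000] "GET /images/figure1.png HTTP/1.1" 200 20480',
        '10.0.0.1 - - [01/May/2024:09:00:06 +0000] "GET /styles/site.css HTTP/1.1" 200 2048',
    ]

    PAGE_EXTENSIONS = (".html", ".htm", "/")   # our assumed definition of a "page"

    hits = 0
    page_views = 0
    for line in sample_log:
        match = re.search(r'"(?:GET|POST) (\S+)', line)
        if not match:
            continue
        hits += 1                              # every request counts as a hit
        path = match.group(1).split("?")[0]
        if path.endswith(PAGE_EXTENSIONS):
            page_views += 1                    # only whole pages count as views

    print(f"{hits} hits, but only {page_views} page views")

Three requests for a single page still read as three hits, which is why hit counts overstate how much of a site is actually being read.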
When all of the available data appear in one chart, associations, patterns, and conclusions can be drawn simply by comparing the relationships as they are presented. This is something I learned from Edward Tufte.
This paper describes how user assistance can streamline deliverables and improve product design by analyzing usage patterns from server-based content. We can then base decisions about how to improve deliverables on a thorough understanding of how customers use help content to find information and solve problems. This approach enables user assistance to add more value to both our companies and our customers by creating a three-way dialog between user assistance, the customer, and the product team. It also broadens the definition of assistance to include helping to design products that people can use without the need for instructions.
Visuals that provide insights come from 1) a deep understanding of the goals and objectives, and 2) thinking beyond what standard trend lines or stacked bar graphs can provide: something out of the ordinary that grabs attention yet still communicates insights, so that the visual already contains recommendations and action items rather than just data.
In most cases a technical writer cannot conduct user tests. But if you have access to a web server's access log, you can derive quite interesting facts, such as how often and for how long a specific page was viewed and how visitors navigated the site.
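As a rough sketch of what such log mining can look like, the snippet below derives view counts, navigation order, and approximate time-on-page from a simplified, pre-parsed log; the (visitor, timestamp, page) layout is an assumption made for illustration, not a real server format:

    from collections import Counter, defaultdict
    from datetime import datetime

    # Pre-parsed records: (visitor id, timestamp, requested page), invented data.
    records = [
        ("10.0.0.1", "2024-05-01 09:00:05", "/help/install.html"),
        ("10.0.0.1", "2024-05-01 09:02:40", "/help/configure.html"),
        ("10.0.0.2", "2024-05-01 09:01:10", "/help/install.html"),
    ]

    views = Counter(page for _, _, page in records)    # how often each page was viewed

    paths = defaultdict(list)                          # navigation order per visitor
    for visitor, ts, page in sorted(records):
        paths[visitor].append((datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), page))

    for visitor, steps in paths.items():
        route = " -> ".join(page for _, page in steps)
        # Time on a page is roughly the gap until the next request;
        # it is unknown for the last page of a visit.
        dwell = [(b[0] - a[0]).seconds for a, b in zip(steps, steps[1:])]
        print(visitor, route, dwell)

    print(views.most_common())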
This article describes an applied investigation into a concept of information visualization in which data are not rendered as graphs, charts, or diagrams on the screen but as a sensual experience beyond the screen, in physical space. It introduces predecessors such as calm technologies and ambient displays, along with a number of poetic and applied examples from related backgrounds, to establish the context and relevance for communication design and graphic design, and presents a current research undertaking in which the social activity of visiting a website is visualized in multiple sensorial modalities in real time, as a kinetic and sensual display.
What is the industry standard for bounce rate? The simple and short answer is that there is no industry standard. I know you don’t want to hear that, but it’s true. There are some ranges I will share shortly, but we can’t call them industry standards. Too many factors influence bounce rate, so you really can’t compare the bounce rate of one site (or page) to another’s.
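For reference, bounce rate is conventionally the share of sessions that viewed only a single page, which this minimal sketch computes from invented session data:

    # Each session maps to the list of pages viewed; the data is invented for illustration.
    sessions = {
        "s1": ["/home"],
        "s2": ["/home", "/pricing"],
        "s3": ["/blog/post-42"],
        "s4": ["/home", "/docs", "/contact"],
    }

    bounces = sum(1 for pages in sessions.values() if len(pages) == 1)
    bounce_rate = bounces / len(sessions)
    print(f"Bounce rate: {bounce_rate:.0%}")   # 2 of 4 sessions bounce -> 50%

Whether 50% is good or bad depends entirely on what those pages are for, which is exactly why a single industry benchmark is so hard to pin down.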
Incorporating the voice of the user into user experience design by using personas is no longer the latest and greatest new practice. Everyone is doing it these days, and with good reason: personas help focus the design team's attention and efforts on the needs and challenges of realistic users, which in turn helps the team develop a more usable finished design. Web analytics can provide a helpful starting point for generating data-backed personas; this article presents an informal 5-step process for building a 'persona of the people.' In practice, outcomes indicate that designing with any persona is better than designing with no personas, even if the personas used are entirely fictitious. Better yet, however, are personas based on real user data. Reports and case studies that support this approach typically offer examples that incorporate data from customer service call centers, user surveys, and interviews into personas. That's nice work if you can get it, but not all design projects have all (or even any!) of these rich and varied user data sources available. However, more and more sites now collect web analytics data, whether through vendor solutions or free options such as Google Analytics. Web analytics provides a rich source of user data, unique among the forms of user data used to evaluate websites in that it captures users in their native habitat of use. Despite some drawbacks inherent to the technology and its data collection methods, the information web analytics provides can be very useful for informing design.
We have two distinct sets of users: internal product consultants and end users. Prior to using RoboHelp Server, we had little way of identifying who was looking at our documentation, when they were looking at it, or how often. That has now changed.
Because each website appeals to its audience differently, prudent user experience designers take a measured approach when communicating, especially when they do so on behalf of their clients. No matter what the vision is and no matter how it's executed, a design can always communicate more effectively.
Traffic statistics have a huge impact on a Website's success, and Apache provides one of the most powerful and flexible logging features available today. Blane explains the nitty-gritty of configuring Apache Web logs in this handy how-to.
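By way of orientation, Apache's access logging is driven by the LogFormat and CustomLog directives; a minimal example of the widely used 'combined' format looks like the following (the log file path is illustrative, not prescribed):

    # Define the "combined" format: client host, identity, user, time, request line,
    # status, response size, referer, and user agent.
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

    # Write access entries in that format to a file (path is an example only).
    CustomLog /var/log/apache2/access.log combined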
Everything served to a visitor -- from the first page through marketing, sales, and product fulfillment -- generates data about the customer. Web marketers can tap into this 'free' source of profile data for just the cost of converting existing data into a format that can be used by a data-analysis program.
The statistical term 'correlation' has found its way into popular business language. Often, though, no measurement of correlation has actually taken place. That's too bad. Because there's probably a correlation between measuring correlation and increasing revenue.
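To make the point concrete, measuring correlation can be as simple as computing a Pearson coefficient between two series, say weekly page views and revenue; the figures below are invented, and the statistics.correlation function assumes Python 3.10 or later:

    from statistics import correlation  # available in Python 3.10+

    # Invented sample data: weekly page views and weekly revenue.
    page_views = [1200, 1500, 900, 1800, 2100, 1700]
    revenue    = [ 310,  420, 250,  500,  610,  480]

    r = correlation(page_views, revenue)
    print(f"Pearson r = {r:.2f}")   # values near +1 suggest a strong positive relationship

A number like this is not proof of causation, but it is at least an actual measurement rather than a figure of speech.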
Using a linear diagram to plot data from website traffic logs can lead you to overlook important conclusions. Sometimes advanced visualizations are worth the effort.
Despite all the talk about data-informed design, there is not much agreement on what data really means for a product or service’s user experience. That might be because teams don’t yet have a shared language for talking about data, or because access to data is uneven or siloed, or perhaps because team members have different goals for the use of data.
The key to creating great service experiences lies with uncovering data and using it in meaningful contexts that have real benefits to users. Recent advances in wearable tech, location-based data and sensors are driving greater interest by consumers in personalized data experiences. Google Glass and the Nike FuelBand are pushing boundaries on what users can expect inside the services of tomorrow. For designers, however, data presents a very interesting challenge: How can we better understand the value of data and leverage it to make digital experiences more meaningful?
Web stats are a tool, and you need to know how to use that tool. Otherwise, you aren't accomplishing anything. At the very simplest level, your web stats should help you act on this overused business truism: 'Do more of what works. Do less of what doesn't.' But if you really want to derive value, you need to delve deeper. You need to understand what the numbers are telling you.
This paper argues that metrics can be generated from search transactional web logs that can help evaluate search engine effectiveness. Search logs from the BBC website were analysed and metrics extracted. Two search metrics — the time lapse between searches and the number of searches in a session — were developed to see whether they could measure search success or satisfaction. In all, 4 million search statements by 900,000 users were evaluated. The BBC search engine possessed a number of functional attributes which sought to improve retrieval and these were subjected to the two metrics to help determine how successful they were in practice. There was some evidence to support the proposition that the search outcome metrics did indeed indicate the effectiveness of engine functionality. The authors argue that this result is significant in that the identification of search outcome metrics will pave the way for assessing the effectiveness of site specific search engines and a greater understanding of the effectiveness of search engine functionality.
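A rough sketch of how those two outcome metrics might be extracted from a sessionised search log is shown below; the (session id, timestamp, query) layout is an assumption for illustration and is not the BBC's actual log format:

    from collections import defaultdict
    from datetime import datetime

    # Invented search log records: (session id, timestamp, query).
    search_log = [
        ("sess-1", "2006-03-01 10:00:00", "weather london"),
        ("sess-1", "2006-03-01 10:00:45", "weather london forecast"),
        ("sess-2", "2006-03-01 10:05:00", "news headlines"),
    ]

    by_session = defaultdict(list)
    for sid, ts, _query in search_log:
        by_session[sid].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))

    for sid, times in by_session.items():
        times.sort()
        searches = len(times)                                       # searches per session
        gaps = [(b - a).seconds for a, b in zip(times, times[1:])]  # seconds between searches
        print(sid, searches, gaps)

Many rapid follow-up searches in a session can be read as a sign that earlier results did not satisfy the user, which is the intuition behind both metrics.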