Categories: Uncategorized

The Search Before the Search: Keyword Foraging

Summary: When users don’t know what keywords they need, they must do extra work to determine what their desired item or concept is called.


We live in a search-driven digital world, which is great as long as you know the keywords you need. But what happens when you don’t?

When users want to find an item or a piece of information and they don’t know what it’s called, they face a difficult problem. Sometimes in our research, we see users engage in keyword foraging.

In keyword foraging, a user conducts a preliminary search (usually in a web search engine like Google) to determine the right keywords for her information need.

Read Full Article

Categories: Content

How to Test Content with Users

Summary: When evaluating content, pay extra attention to whom you recruit. Closely tailor tasks to your participants and get comfortable with silence.


Writing good digital content requires a deep understanding of who your users are, how they think, and what they know. Testing your product’s content with users can help you determine whether it is working for them.

You can evaluate your content using a variety of methods (including eyetracking and cloze tests), but our favorite way is through usability testing. A content-focused usability test can work much like any other such test, but there are some nuances to consider when the primary goal is evaluating digital copy.
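
Cloze tests are only mentioned in passing here, so if the term is unfamiliar, the sketch below shows the basic mechanics as they are commonly practiced: every nth word of a passage is blanked out and participants try to fill in the gaps, with their fill-in rate used as a rough comprehension signal. The every-6th-word interval, the scoring rule, and the function names are illustrative assumptions, not something prescribed by the article.

```python
# A rough sketch of the cloze-test idea, assuming the common variant in which
# every nth word is blanked out and participants fill in the gaps. The interval,
# scoring rule, and function names are illustrative, not taken from the article.


def make_cloze(text: str, n: int = 6) -> tuple[str, list[str]]:
    """Blank out every nth word; return the gapped text and the answer key."""
    words = text.split()
    gapped, answers = [], []
    for i, word in enumerate(words, start=1):
        if i % n == 0:
            answers.append(word)
            gapped.append("_____")
        else:
            gapped.append(word)
    return " ".join(gapped), answers


def score_cloze(responses: list[str], answers: list[str]) -> float:
    """Fraction of blanks filled with the original word (case-insensitive)."""
    def norm(w: str) -> str:
        return w.strip().strip(".,;:!?").lower()
    matches = sum(norm(r) == norm(a) for r, a in zip(responses, answers))
    return matches / len(answers) if answers else 0.0


if __name__ == "__main__":
    passage = ("Writing good digital content requires a deep understanding "
               "of who your users are, how they think, and what they know.")
    gapped_text, key = make_cloze(passage)
    print(gapped_text)   # passage with every 6th word replaced by a blank
    print(key)           # the words participants are expected to supply
```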

Test Structure & Facilitation

As the researcher or facilitator, you should be extremely familiar with the content you will test and with the domain it belongs to. This is particularly important for people working for agencies, since they may be new to the content area.

Read Full Article

Categories: Case Studies

Quantifying UX Improvements: A Case Study

Summary: A research-driven overhaul of a metal and woodworking machinery B2B site’s information architecture resulted in an 85% improvement in findability.


Quantitative UX metrics allow us to track the quality of experiences over time and see how they improve. They help UX professionals gauge the quality and impact of their work, and communicate that impact to others.

The process of UX benchmarking involves choosing one or more metrics that represent important aspects of the experience and then tracking those metrics to see how design interventions impact them.
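
To make the arithmetic behind a headline figure like “an 85% improvement in findability” concrete, here is a minimal sketch that compares a single metric across two benchmarking rounds. The metric (task success in a findability study) and both numbers are hypothetical placeholders, not data from the case study.

```python
# Minimal sketch of tracking one UX metric across a redesign. The metric
# (task success in a findability study) and both numbers are hypothetical
# placeholders, not the data from the case study described in the article.


def relative_improvement(baseline: float, benchmark: float) -> float:
    """Relative change of a metric between two benchmarking rounds."""
    return (benchmark - baseline) / baseline


if __name__ == "__main__":
    baseline_success = 0.40   # e.g., 40% of participants found the item pre-redesign
    redesign_success = 0.74   # e.g., 74% found it after the IA redesign

    change = relative_improvement(baseline_success, redesign_success)
    print(f"Relative improvement: {change:.0%}")   # Relative improvement: 85%
```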

The following case study illustrates how one team used a UX metric to evaluate the impact of its work and to demonstrate that impact to its client.

Read Full Article

Categories: Uncategorized

The Lawn Mower Eyetracking Pattern for Scanning Comparison Tables

Summary: Users are likely to methodically scan comparison tables row by row, sweeping from left to right and back again like a lawn mower.


On pages with distinct cells of content, people often scan those cells in a lawn mower pattern: they begin in the top left cell, move to the right until the end of the row, then drop down to the last cell of the next row and move back to the left until the beginning of that row, and so on. In our eyetracking research, we observed this pattern on many types of pages and tables (especially zigzag layouts), but most frequently on comparison tables. This article focuses on how the lawn mower pattern applies to comparison tables.
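
For readers who prefer to see the scan order spelled out, the short sketch below lists table cells in the lawn mower (serpentine) order described above. It only enumerates cell positions, not actual gaze data, and the row/column labels assume a typical comparison table with features as rows and products as columns.

```python
# Illustration of the lawn mower (serpentine) scan order described above:
# left to right across one row, then right to left across the next, and so on.
# This only enumerates cell positions; it is not a model of real gaze data.


def lawn_mower_order(rows: int, cols: int) -> list[tuple[int, int]]:
    """Return (row, column) coordinates in lawn mower scanning order."""
    order = []
    for r in range(rows):
        scan_direction = range(cols) if r % 2 == 0 else reversed(range(cols))
        order.extend((r, c) for c in scan_direction)
    return order


if __name__ == "__main__":
    # Assume a comparison table with 3 feature rows and 4 product columns.
    for row, col in lawn_mower_order(3, 4):
        print(f"feature row {row}, product column {col}")
    # Row 0 is scanned left to right, row 1 right to left, row 2 left to right.
```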

Users are likely to engage in this pattern whenever they are actively comparing several features of two or more adjacent products or services in a comparison table. (An exception would be if a user is interested in comparing only a single feature of the products, such as price. In that case, the user would be likely to focus on a single row and wouldn’t engage in this pattern. Also, this pattern may be slightly different if the user is interested in comparing only two nonadjacent products in a comparison table that contains more than two products.)

The pattern is often preceded by an appraisal: the user quickly processes the table’s layout before reading more closely.

Read Full Article