How to Make the Most of Quantitative Research in UX Design


Florence Nightingale’s Rose Diagram showing two charts, comparing the mortality rates of soldiers before and after a sanitation drive.
Image Credit: Wellcome Library, London, CC BY 4.0

If you’re not used to figures and statistics, the world of data might seem like one cold, and disturbingly deep, ocean. But before you start worrying about drowning in numbers, think about how you can use them to navigate to better places with your designs, and to find the best vantage points from which to see your users.

Over the past twelve months or so, numbers and graphs have become a big part of our lives — how many active Covid-19 cases are there in your area? Is the curve flattening? What is the efficacy of the vaccine? What percentage of the population has to get vaccinated before we get back to “normal”?

Epidemiologists and other researchers in the medical industry have long used data collected from the population to identify the spread of diseases and their likely causes and risk factors, and to test alternative courses of treatment to advance healthcare. The image at the top represents one of the earliest uses of statistics and data visualization. Dubbed the Rose Diagram, it was drawn by Florence Nightingale to advocate for better sanitary conditions in hospitals to help reduce the mortality of soldiers.

UX designers, too, can make use of data to study user behavior, identify patterns, diagnose problems and test alternative solutions to improve their end users’ experience.

As we spend more of our time with digital applications, we can track almost every aspect of our (and our users’) lives — ethically, and with permission, that is. If you’ve enabled the settings, or installed certain applications on your mobile phone, you already know how much data you, as one person, can generate: how many steps you took, how long you looked at the screen and which apps you spent the majority of your day on, to name a few.

Data is everywhere. If you are just beginning to work with data, you might feel overwhelmed, especially if you open Google Analytics for the very first time!

In-app analytics is just one source of data: you have a host of other quantitative user research methods at your disposal to help you in your design process.

Blurry screenshot of a dashboard with a line chart and a pie chart. The words “returning visitor” are highlighted next to the pie chart.

What is Quantitative Research and How Can It Help You Design Better?

Quantitative research is a methodology which researchers use to explore and test theories about people’s attitudes and behaviors based on numerical and statistical evidence.

With UX research methods such as interviews, you might talk with 5–10 participants for several minutes each, and then spend more time analyzing the data. With quantitative research, you can start with around 30 participants and scale to sample sizes of hundreds, or even thousands of participants. This large sample size helps reduce the chance of bias. Since quantitative research is usually conducted online, it’s faster and cheaper to run, to replicate and to gather and analyze data from. Surveys, tree tests, first-click tests and A/B tests are some of the quantitative research techniques that can get you specific, numerical insights to inform your design process. You can:

  • Analyze user behavior to identify opportunities for improvement.
    Example: If analytics reveal that a majority of users navigate to a different screen immediately after signing in, you can use that as a starting point to design a better landing page so that users can complete their tasks more efficiently.
  • Validate (or invalidate) assumptions to have greater confidence in your design decisions:
    Example: Let’s say you think people struggle with completing a task on your app because they are unable to navigate to the relevant feature. You can perform a tree test to validate your assumption and decide whether a new navigation structure will be easier for users.
  • Support findings from qualitative research to convince business stakeholders to invest in a design decision:
    Example: Imagine you conducted a usability study and found that people struggle to decide which pricing plan works best for them. You can support your finding, and pitch for testing different pricing models and layouts with the help of data. You could look at the number of emails sent to customer support through the help icon on the pricing page, or the number of people who land on the pricing page and then don’t take action, to support your pitch.
  • Test new features and designs:
    Example: You can analyze whether people are using the new feature, or if/how the new feature has led to a change in how people use other features.
  • Measure the monetary impact of design decisions:
    Example: You can find out whether the new interface leads to an increase in conversions, or whether the new flow helps users complete their tasks faster. This can help you demonstrate the ROI of design activities and help convince business stakeholders to invest earlier in design. Early-design testing, in turn, can save the organization’s resources, time and money by helping you to identify and fix issues before it’s too late.
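The arithmetic behind a conversion comparison like this is simple. A minimal sketch (all figures below are hypothetical, chosen only for illustration):

```python
# Hypothetical before/after figures for two versions of an interface.
old_visitors, old_conversions = 4_800, 312
new_visitors, new_conversions = 5_100, 398

# Conversion rate = conversions / visitors for each version.
old_rate = old_conversions / old_visitors
new_rate = new_conversions / new_visitors

# Relative lift: how much better (or worse) the new version performs.
relative_lift = (new_rate - old_rate) / old_rate

print(f"Old conversion rate: {old_rate:.1%}")
print(f"New conversion rate: {new_rate:.1%}")
print(f"Relative lift:       {relative_lift:+.1%}")
```

A raw lift like this is only a starting point; whether the difference is real or noise is a separate question of statistical significance, discussed further below.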

Quantitative + Qualitative Research = Winning Combination

Numbers cannot replace humans. As appealing as quantitative research sounds, it is important not to lose sight of the people who generate that data. Data can help you understand what people are doing, but it cannot explain why they think or behave a certain way. Qualitative research methods such as contextual inquiries, user interviews and diary studies help you get a more holistic picture.

“… (data is) just a tool we use to represent reality. They’re always used as a placeholder for something else, but they are never the real thing.”

— Giorgia Lupi, Information Designer

Depending on your objectives, you may need to start with qualitative research methods and back them up with quantitative methods, or vice versa.

Some sample use cases:

  1. You can use in-app analytics to identify user patterns and then dig deeper to identify why those patterns occur. For example, if a high number of users visit the pricing page, but do not complete a transaction, you can run a usability study or interview a few users to identify why they are not purchasing a plan. Perhaps a majority of the users are university students on a limited budget, or perhaps your pricing page is not optimized for mobile use.
  2. If you plan to conduct a large-scale survey, you must always conduct a few user interviews to identify what questions and answer options to include in your questionnaire. Suppose you work on a product that helps students manage their budgets, and you want to identify which of your competitors are more popular with your users. You can conduct an interview to identify which products to include in that list of competitors. Through the interview, perhaps you find that college students’ approaches to their finances differ widely depending on which country they are from, what financial background they have and whether they have a scholarship. You may even discover competitors that you hadn’t realized you had before. Armed with this insight, you can build your survey questions to factor in these variables and update your list of competitors.
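The first use case above, spotting where users drop out of a flow, amounts to a simple funnel calculation over event data. A minimal sketch, assuming a hypothetical event log with made-up event names:

```python
# Hypothetical per-user event log -- event names are invented for illustration.
events = [
    {"user": "u1", "event": "visit_pricing"},
    {"user": "u1", "event": "start_checkout"},
    {"user": "u2", "event": "visit_pricing"},
    {"user": "u3", "event": "visit_pricing"},
    {"user": "u3", "event": "start_checkout"},
    {"user": "u3", "event": "purchase"},
    {"user": "u4", "event": "visit_pricing"},
]

funnel = ["visit_pricing", "start_checkout", "purchase"]

# Which users triggered each event at least once.
users_per_step = [
    {e["user"] for e in events if e["event"] == step} for step in funnel
]

# A user counts for a step only if they also reached every earlier step.
reached = set(users_per_step[0])
for step, users in zip(funnel, users_per_step):
    reached &= users
    print(f"{step}: {len(reached)} users")
```

Here, a large gap between the pricing and checkout steps is exactly the kind of pattern you would then investigate with interviews or a usability study.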

What to Keep in Mind While Designing With Data

  1. Run experiments: Quantitative research methods are experiments. And like all scientific experiments, start with something to test — an idea, an assumption or a design decision. Define your success criteria: how will you know whether your test (idea, assumption or design decision) has been successful? Based on those criteria, define your metrics and choose the research method.
    For example, if you want to test a new landing page design, your success criteria could be an increase in conversions. And you can run an A/B test to find out if the new layout leads to higher conversions. It is easy to get overwhelmed and/or lost in data (as we’ll see a little further on). When you run experiments with clear objectives, you will find it easier to navigate the sea of numbers.
  2. Choose the right audience to test with: Generally speaking, the larger the sample to test with, the better. With some methods, you can use sample size calculators (such as Optimizely’s) to help you identify the number of people to test with.
    However, the quality of the sample also matters. If your product is primarily going to be used by accounting staff in a hotel company, you’ll want to test with a representative sample. While a large random sample of people may include accountants, it will also likely include accountants from other industries as well as people from other professions, which will, in turn, skew your data set.
  3. Correlation does not imply causation: Let’s say you find conversions on your website are higher on desktop devices than on mobile devices. This need not mean that your mobile site is poorly designed. It actually may well mean that people first visit your site on their mobile phones when they have nothing else to do (for example, while traveling on a bus or waiting at the bank). And then, when they are more comfortable, they make the purchase on a desktop. There can be any number of variables that influence people’s behaviors. Make sure you factor in the user’s journey when you look into the data.
  4. Data can be notoriously misleading: Following on from the previous point, be very careful when working with data, and always test for statistical significance. Statistical significance is a way to identify how reliable the results of your experiments are. That is, it helps you answer these questions: “What is the likelihood that the results of our experiments are just random figures?” and “If we were to conduct the test again, how likely is it that we would still get the same results?”
    In addition to these statistical tests, combine multiple research methods to get a more holistic picture before you make design decisions.
  5. Beware of your biases: Not only can raw data be misleading, but it’s also notoriously easy to manipulate. If you have a strong belief or an assumption, beware of confirmation bias — the human tendency to look for evidence to support our beliefs. Using different research methods to test the same hypothesis (known as triangulation) can help you navigate this bias to a certain extent. However, the most important step is to be aware and acknowledge your biases. Conduct your experiments with an open mind and learn from the results.
  6. Context is key: Standalone data can be beautiful to look at. But how do the metrics stack up against your competitors? What about your company’s historical performance? Are there any external factors that have influenced your dataset? Have you made assumptions, or tested with only one user persona? Make sure you understand the context of your research data before you draw conclusions.
  7. Rinse and repeat: After your experiments, take the necessary steps to implement your design decisions (if your tests are successful) or take a step back to come up with a different test (if the test was unsuccessful). Measure your results and keep running experiments. Your users’ context will continue to evolve, and — with the help of data — you can keep an eye on what works, what doesn’t and deliver better experiences to your end users.
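A common way to check the reliability of an A/B test result, as described in point 4, is a two-proportion z-test on the conversion rates of the two variants. A minimal, standard-library-only sketch (all figures hypothetical):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical A/B test figures -- illustration only, not real data.
z, p = two_proportion_z_test(conv_a=312, n_a=4_800, conv_b=398, n_b=5_100)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Difference is statistically significant at the 5% level.")
```

The p-value answers the question from point 4: how likely is it that a difference this large would appear by chance if the two variants actually performed the same? A small p-value (conventionally below 0.05) suggests the result is unlikely to be random noise.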

The Takeaway

You can use quantitative research methods such as surveys, tree testing, first-click testing and A/B testing to understand your users, validate (or invalidate) assumptions, test ideas and experiment with designs. While quicker and more scalable than qualitative research methods such as contextual inquiries and diary studies, quantitative methods have their limitations. It’s easy to get overwhelmed and lost in data. To make the most of quantitative research methods, use them in combination with qualitative ones. Run experiments, test your results for reliability and be aware of the constraints of data, your own biases and the users’ journey. And, most importantly, iterate. Implement successful experiments, measure the results and run more experiments to continually improve your users’ experience.

References and Where to Learn More

William Hudson’s course on Data-Driven Design: Quantitative Research for UX (free for members of the IxDF) dives into different quantitative UX research methods with real examples and practical tips on how you can design with data.

Giorgia Lupi shares her obsession with data and illustrates how you can use data to tell stories in this TED Talk. [Video length ~11 minutes]

Jonny Longden shares tips on how not to confuse correlation with causation in this article.

Tim Harford explores the layers of Florence Nightingale’s Rose Diagram to reveal the power of data visualization to tell stories and persuade stakeholders to make decisions in his podcast, Cautionary Tales. [Podcast length ~41 minutes]

Prof Alan Dix gives a primer on Information Visualization in his course at the IxDF.

About the Interaction Design Foundation (IxDF)

The Interaction Design Foundation (IxDF) is the biggest online design school globally. Founded in 2002, the IxDF has over 103,000 graduates (and counting) and over 450 Local Groups across 95 countries. We believe in the power of design to make our world a better place and are on a mission to make top-quality design education accessible to as many people as possible.

The IxDF offers online design courses that cover the entire spectrum of UX design, from foundational to advanced level. Our certificates are trusted by industry leaders as well as universities.

We invite design experts and leaders from around the world to share their insights through live as well as on-demand Master Classes and offer 1:1 mentor sessions through Bootcamps.

As a UX Planet reader, you get 25% off your first year of membership with the IxDF. Find out more here.

How to Make the Most of Quantitative Research in UX Design was originally published in UX Planet on Medium, where people are continuing the conversation by highlighting and responding to this story.