Quantitative UX Research – How Can It Complement Our Customer Insights?
Our UX research team has started to uncover the possibilities that quantitative UX research offers.
Most people associate the term UX research with qualitative methods, for example, interviews with a small number of participants. These interviews are used to discover things such as customer problems, usability issues with a product, and customer journeys. Often, we concentrate on observable behavior by watching and interviewing customers while they actually use the product or prototype.
Observable behavior is the most important aspect, because what users say and what they do can be two quite different things. This focus on behavior is also why UX researchers are the ones asking the “why” questions: Why do users behave in a certain way? Is it because they do not understand how it works? Is it because they do not want to do it? Is it because they expect something different? Is it because it does not solve a problem they are facing? To answer these questions, we keep the abilities, motivations, and experiences of the individual in mind. For example: a person motivated to search for a specific product will behave differently on-site than a person motivated to search for inspiration. However, UX research offers more than just qualitative methods; big opportunities lie in combining this qualitative approach with quantitative methods and thinking. But what does such “quantitative UX research” entail, and how can it help us understand more of our customers’ behavior?
Diving deeper into quantitative UX research
To start answering this question, I would like to give an example from a project in our User Research Team where we combined qualitative and quantitative methodology. We wanted to better understand how users experience the Editorial section of our Fashion Store and how we can improve it to make it more inspiring and engaging. For this, we first invited users to our in-house Research Lab, watched them use the Editorial section and its subparts, and asked questions about their expectations, problems on-site, and overall understanding of the content and copy. We expected different reactions to our site depending on whether a person considers themselves highly fashion competent, i.e., knowledgeable in the area. Therefore, the recruiting of participants focused on diversity regarding this trait.
Based on the results from our qualitative interviews, we had new research questions and hypotheses that we wanted to explore and test further, e.g.: users with high fashion competence use different sources for fashion inspiration than users with low fashion competence. For this, we used a quantitative survey.
In the survey, we measured fashion competence again and looked at our users’ understanding of the Editorial section as well as their awareness of inspirational content on Zalando. This helped us to not only validate some of our findings from our interviews, but also gain additional insights into how much and what influence fashion competence really has on the perception and consumption of inspirational content.
In the future, we could also rely more on on-site data in such a study. In a survey, behavioral data is by definition less precise, because it is based on participants’ memory. However, pairing survey data on emotions, motivations, intents, and personality traits that might influence behavior with actual on-site behavioral data could generate completely new knowledge, for example for developing personalization features or tailoring them to shopping intent. We are working on making this possible as we speak.
What does this tell us?
The example of our editorial research shows that quantitative UX research uses the same mindset as qualitative UX research: focusing on behavior (how did they use what) while keeping individual traits in mind (e.g., fashion competence, emotions, motivations, etc.). Quantitative UX research doesn’t merely look at “how much”; it also offers answers to the question of “why”. This is mainly done by combining data on behavior (either in the past or in the moment) with insights about the person. However, in quantitative UX research we do this with standardized measures and in bigger samples. Here, we are no longer looking at three or five participants, but at 50 participants or more. This, in the end, warrants analyses that use descriptive as well as inferential statistics, enabling us to reliably test hypotheses that were generated in our qualitative research.
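To make that last point concrete, here is a minimal sketch of the kind of inferential analysis a larger sample enables. All numbers are invented for illustration: imagine survey participants reporting how many distinct fashion-inspiration sources they use, split by self-rated fashion competence, and a simple Welch's t statistic comparing the two groups.

```python
from statistics import mean, variance

# Hypothetical survey data (entirely invented): number of distinct
# inspiration sources each participant reports using, grouped by
# self-rated fashion competence.
high_competence = [5, 6, 4, 7, 5, 6, 5, 4, 6, 7]
low_competence = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal group variances)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(high_competence, low_competence)
# A large |t| indicates the group difference is unlikely to be
# sampling noise; in practice you would also compute a p-value.
print(f"Welch's t = {t:.2f}")
```

In a real study one would use a statistics package (e.g., SciPy's `ttest_ind` with `equal_var=False`) to also get degrees of freedom and a p-value, but the logic is the same: a standardized measure, a sample large enough to estimate group variances, and a test of a hypothesis that came out of qualitative work.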
Other research methods typical for quantitative UX research, apart from surveys, are those associated with remote testing (e.g., card sorting and unmoderated task-based tests). Here, small-scale A/B tests (meaning: comparisons between different variants) are also possible, for example to compare customer impressions of different product versions before they are finalized. While such tests are not as reliable as a classic A/B test when it comes to statistical power (i.e., how likely it is to detect a significant effect), sample size, and measurement of KPIs like conversion rate, they can be a low-key way to compare different mock-ups or prototypes early on. Furthermore, existing features that are not yet live in every country can be shown to customers without actually launching them there.
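The statistical-power gap can be illustrated with a rough back-of-the-envelope calculation (pure Python; the conversion rates and thresholds are invented for illustration): the per-group sample size a classic A/B test needs to detect a small conversion-rate lift, using the standard normal approximation for two proportions.

```python
# Hypothetical scenario: detect a lift from a 5% to a 6% conversion
# rate with 80% power at a two-sided alpha of 0.05.
Z_ALPHA = 1.96  # critical z for two-sided alpha = 0.05
Z_BETA = 0.84   # z for power = 0.80

def sample_size_per_group(p1, p2, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Approximate per-group n to detect a difference between two
    proportions (normal approximation)."""
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * pooled_var) / (p1 - p2) ** 2

# Result is in the thousands of users per variant -- far beyond the
# few dozen participants of a typical remote test.
print(round(sample_size_per_group(0.05, 0.06)))
```

This is why a remote test with 50 participants is better suited to directional comparisons of impressions and preferences than to measuring KPIs like conversion rate.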
Remote testing studies also offer the possibility to ask customers about their experience, granting additional insights into why one version performed better than another. We recently did this by conducting such comparisons with French customers: we researched their perception of different sizing help features (e.g., size chart, size recommendation, etc.) that were not yet live in France. This way, we were able to establish a ranking of which feature was considered most and least helpful, as well as the reasons for this opinion.
Final thoughts
Insights from quantitative UX research complement the picture of the customer that we gain through qualitative UX research, sometimes by adding numbers to phenomena we have seen before, and sometimes by enabling comparisons between customers or products. It can also build new bridges to market research or A/B testing by expressing our results in a language closer to theirs. However, there are also risks involved: getting lost in numbers instead of really listening to what the customer says while using the product.
We're hiring! Do you like working in an ever-evolving organization such as Zalando? Consider joining our teams as a Frontend Engineer!