This piece is intended to open-source the ideas introduced in my presentation for IIeX Europe 2020 titled “Iterative Research for Relevant Innovation”, which won the Best New Speaker Award. This is an invitation for the market research industry to collaborate with UX researchers and designers. This piece brings together concepts from iterative product development and design thinking to suggest that researchers can afford to create smaller research modules — delivered in relevant waves — that feed more directly into decision making cycles.
How do we know people?
Would your best friend want the regular, hot coffee or the iced coffee? If we were thinking in research terms, this is a classic product consideration question.
But let’s say you’re in Tokyo for the summer Olympics and it’s 35 degrees. You’ve been walking for 3 hours. If they drink coffee, they probably want the iced coffee. In this context, the question is more basic: how well do you know your friend and their needs in this moment? Though it may be unscientific, if we spend enough time with people, we get to know them, whether intuitively or through conscious effort. That leads to empathy.
How a product junkie arrived at market research
I’ve spent a lot of time with people in the social innovation, startup, and tech industry. I tend to hang out with engineers, designers, and founders — people who build products, start companies, and in some cases, sell them. What makes them successful is that they know people who have problems, and want to continuously prototype solutions to serve them.
Successful products are iterated through successive listening and feedback loops. These loops could be user interviews, hallway testing, A/B testing, or other tools to find needs and validate hypotheses.
I entered market research recently because it brings rigorous methods to how we listen to people, giving higher confidence that what we learn reflects a real situation or norm.
Over its history, research has built an arsenal of tools. For example, if one wanted to find out why someone like me would buy a backpack like this (shout out to the GOT Backpack from a happy customer!), the research question might be framed as:
What are the drivers of the purchase decision?
Answer options might include the design, a local independent label, or that it’s recycled materials (eco-friendly).
If I was doing a quick hallway test, I might show the bag and just ask:
Would your son/daughter/niece/nephew like this for Christmas?
There are 3 ways to find out if they would:
- Ask them, and spoil the surprise
- Buy it, and risk rejection
- Run surveys on sustainability, brand preferences and maybe brand associations with sustainability, and take an educated guess
Where UX research and market research might meet
If I were just testing an idea, I might stop if there were too many “no’s”, because the baseline metric for a business is whether someone will vote for you with their wallet. Iterative startup approaches often sacrifice (or are unaware of) representative samples in a quest for speed to inform the next decision. Maximum learning for minimum time and resource risk.
Market research provides best practices in methodology with the goal of ensuring data quality. It might provide a bigger picture and more representative data on general trends in consumer preferences. But the traditional trade-off has been speed, and therefore relevance to the product cycle. Is the right data getting to the right people, when they need it?
I’d like to think that both these practices want to serve people. So all the questions we ask really just want to get us closer to the bottom line of service:
- Does this [product/service] solve your problem?
- Does it make your life better?
Mobile & internet penetration have tipped into representative sampling
In the early days of internet and mobile (smartphone) adoption, representative sampling was a legitimate concern. But the tipping point for representative sampling has arrived with high rates of mobile and high-speed internet penetration in most developed markets and in large emerging markets in the Asia Pacific, such as China and Indonesia.
The way people speak to people has fundamentally changed in the past 10 years with the little screen. It uses the language of emojis, stickers, memes, movement, and interactions. It is not just a distribution channel for pen-and-paper surveys, nor just a channel for video and qualitative interviews. The small screen can be a whole language interface. How can market research change the way it asks questions to fully take advantage of this language?
Let’s approach mobile and digital interfaces in general from a field that has been very successful with the rise of the mobile and app economy: UX design and user research. The 5 stages of design thinking are: empathise, define, ideate, prototype, and test.
The core of this process is learning and iteration. Empathise with people. Come up with solutions, test, and improve. It is a listening cycle. Design thinking is a methodology that can apply to any field. In fact, these concepts should not sound that foreign to researchers because the spirit of research is to listen and understand people.
Traditional research has been bogged down by barriers to execution: access to representative samples, reliability, cost, and data delays that make results redundant by the time they arrive. Internet infrastructure and mobile remove most of these barriers. The key to success now lies in researchers understanding the capabilities (and limitations) of technologies to deliver surveys reliably at scale. Research has the opportunity to listen faster, and to proactively find consumers’ new needs. By adapting, research can become relevant throughout decision making cycles, for business, products, and marketing.
Three initial concepts I will introduce are:
- Survey UX as Brand CX
- Shed weight. Move faster.
- Closing the loop with users.
Survey UX as Brand CX
Research has generally been framed from the perspective of, “What do we want to learn?” and then “How do we design a suitable question to measure it?” The measure of success is mostly whether someone understands the question and can give a response that will add up to insights. This means that surveys are designed more with a researcher in mind than a respondent. For the respondent, understanding is the metric of success, and enjoyment isn’t really a factor.
This utilitarian frame doesn’t encourage us to consider the survey experience in more human terms, such as enjoyment. So instead, we need to incentivise users, usually through payment. We think we’re paying people for their opinions, but what we’re really doing is paying for their time, which they should be paid for. But we have invested too little in the respondent experience to deserve their further investment in us.
One way I propose to build trust is to invest our time in improving the respondent’s UX, beginning at least with the user interface (UI). For example, it could be a slightly more aesthetic temperature-style rating, which captures the same 1–5 points for a researcher while having more flavour for a user.
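To make that idea concrete, here is a minimal sketch in Python (function names and slider ranges are my own assumptions, not from any survey platform) showing how a playful continuous “temperature” slider could still record the same 1–5 points for the researcher:

```python
# Hypothetical sketch: a respondent drags a temperature slider reading
# from 0.0 (coldest) to 1.0 (hottest); the researcher still receives a
# familiar 1-5 rating on the back end.

def slider_to_likert(position: float) -> int:
    """Convert a slider position in [0.0, 1.0] to a 1-5 rating."""
    if not 0.0 <= position <= 1.0:
        raise ValueError("slider position must be between 0 and 1")
    # Split the slider into five equal bands; clamp the top edge to 5.
    return min(int(position * 5) + 1, 5)

print(slider_to_likert(0.0))   # coldest band maps to 1
print(slider_to_likert(0.85))  # near the top maps to 5
```

The point of the sketch is only that a friendlier interaction and the researcher’s existing 1–5 scale are not in conflict; the mapping layer absorbs the difference.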
Investing in UI takes time. It also comes with a risk. For example, a star rating can have a positive bias. An interaction that requires someone to hold rather than just tap a number needs to be thought through and introduced to reduce usage errors.
The UI isn’t just aesthetic (though mobile-responsive surveys still need to be made universal). We can design in the wrong direction. However, that’s not an excuse for doing so little in an age where there are so many design and rapid prototyping tools, such as Figma (used for these presentation mockups). Think about ways to make kinder UIs in your surveys. Invest time in design to treat respondents as equal users of these surveys. So as a next step, researchers can approach UX researchers and designers to explore how research can borrow from best practices in design for digital interfaces to improve respondent experience, thereby potentially improving the quality of data as well.
Shed weight. Move faster.
Another way to improve data quality is to shorten surveys. In traditional research, this was not feasible because fieldwork took so much time and was therefore costly. Because research was costly, exercises such as brand tracking have been cut to annual or semi-annual exercises. Research might try to cram in as many questions as possible during these one or two chances to collect data. The overall experience is horrible for everyone, from researchers, to respondents, to the leadership reading the reports.
Internet penetration gives researchers access to representative samples of respondents. It is perfectly feasible now to reach 1,000 respondents in the US or UK within 24 hours, following nationally representative census distributions. Given this reach, researchers can afford to slice up research into digestible amounts (say fewer than 20 questions, to preserve data quality). These can be sent in waves that are relevant to reporting for respective teams, tracking, or exploring.
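As a sketch of what that slicing might look like in practice, the following Python snippet (hypothetical, not tied to any particular survey tool) splits a long question bank into waves of at most 20 questions each:

```python
# Hypothetical sketch: divide a long question bank into survey "waves"
# of at most 20 questions, to be fielded on successive samples.

def split_into_waves(questions, max_per_wave=20):
    """Return a list of waves, each holding at most max_per_wave questions."""
    return [questions[i:i + max_per_wave]
            for i in range(0, len(questions), max_per_wave)]

bank = [f"Q{n}" for n in range(1, 48)]  # a 47-question bank, for illustration
waves = split_into_waves(bank)
print(len(waves))      # 3 waves
print(len(waves[-1]))  # the last wave carries the remaining 7 questions
```

Each wave stays short enough to protect data quality, and waves can be scheduled against the reporting calendar of the team that needs them.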
Focusing on the exploration aspect, consider how short 1–5 question surveys can be used in the following ways:
- Ask now. Follow-up on demand later.
- Ask first. Follow the trail if worthwhile.
Asking now and following up on demand means collecting the data first, then tracking changes. An example is how Dalia Research asked about Hong Kong residents’ perceptions of democracy in May 2019, and then again in November 2019. The two questions asked were identical, so the findings can be compared.
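A minimal sketch of that wave-over-wave comparison, using made-up figures rather than the actual Hong Kong data, could look like this in Python:

```python
# Hypothetical sketch: compare an identical question across two waves,
# reporting the percentage-point change per answer option.
# The shares below are invented purely for illustration.

def wave_change(wave_a: dict, wave_b: dict) -> dict:
    """Percentage-point change per answer option between two waves."""
    return {option: round(wave_b[option] - wave_a[option], 1)
            for option in wave_a}

may = {"satisfied": 42.0, "dissatisfied": 38.0, "unsure": 20.0}
november = {"satisfied": 35.5, "dissatisfied": 47.0, "unsure": 17.5}
print(wave_change(may, november))
```

Because the question wording is held constant, the deltas are directly interpretable; any wording change between waves would break the comparison.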
In design thinking, need finding is an essential step for discovering opportunities. One of the opportunities with digital — internet-connected mobile and desktop — is to move faster, in shorter cycles, to iterate. This type of time-series tracking can be used to follow emerging topics that brands may want to use as tipping points, such as alternative protein for F&B or willingness to spend on VR technologies.
Asking first and following up with further questions only if the insights suggest an opportunity is design thinking for research. Applying research best practices, new question types or topics can be tested on representative samples to see if there is actually consumer appetite for trends. With research that can be done in a matter of days, product ideas can be prioritised earlier in the development cycle for large brands. For example, Dalia Research did an exploratory survey asking about alternative gift giving in Germany, the US, and the UK. Anyone can do follow-up research based on the released data on the types of products that might be of interest to their business.
Borrowing from a concept introduced by Professor Liliana Caimacan from Tata Consumer Products and Professor Daniel Rukare from Hult International Business School, the idea is that researchers should allocate a percentage (say 20%) to adjacent markets, and 10% to innovative bets in order to keep an ear to the ground. Market research’s ability to not only track, but discover and iterate is what will help brands stay relevant to consumers today.
Closing the loop with users.
Currently, the flow of value is one-directional. A respondent provides data, which goes to the insights team in a company. Respondents, the public, and potentially even adjacent departments do not benefit from the value that the data has created.
In the most ambitious sense, closing the loop would mean returning value to the respondents of the surveys themselves — such as sharing a finding from an unrelated survey at the end of the current one, or providing the option to release results to them. YouGov is a great example of sharing the results of live polls, enabling the public to learn from findings.
Stephanie Le Geyt and Sarah Leviseur from Attest spoke about the 80/20 rule of giving away control to make research indispensable. This knowledge sharing feeds into the iterative feedback loop because adjacent teams, such as product, marketing, or business development, may provide inspiration for upcoming trends to track. This further embeds research into the decision making cycle, allowing the data collected to feel relevant across an organisation.
A way to get started on the mindset of knowledge sharing is to offer short polls (1 or 2 questions) on topics that are relevant to events. These could be community events, marketing events, or business events. For example, at Dalia Research, we would run Lightning Poll questions for events hosted at the office. If the event was related to sustainable coffee, then we did one on sustainable coffee consumption in Germany. If it was women in tech, we did one on what women wanted in the workplace.
To borrow from a concept in the tech industry, open-sourcing (knowledge sharing) enables an industry to grow together. Given that online (mobile first, not mobile-only) sampling is an inevitable channel, then researchers can incorporate best practices in digital product development.
I invite researchers to think of surveys as products, and see how collaborating with people who work on products can enrich research. Consider how to create an iterative research design loop. Explore and test new interfaces. Reduce exploratory risk by slimming down surveys and sending them in waves that are relevant to the topic at hand.
Create the internal feedback loop that can make your insights more relevant to decision making, more frequently. Invest in better designs for users. Focused surveys collect and deliver relevant data to teams when they need it. This frees up resources for exploratory surveys and for iterating on research topics for upcoming trends. Create resilient listening loops to stay ahead of the curve and track tipping points.
How can UX design work together with survey design to become more human on mobile? How can we invest in better experiences that improve response quality and retention — creating value for everyone involved?
Thank you! You can also view the slides here.