You’ll be a better researcher after reading this
If you’re a researcher or designer looking for help doing better research, keep on reading: we’ve worked out a bunch of tips and tricks for you.
First, a brief background story on how this set of tips and tricks came about: during the most recent UXinsight festival – the festival for UX researchers – we hosted a virtual booth in which we ‘hung up’ some punch bags for researchers. A place where they could vent. A place where they could leave their questions and frustrations about doing (online) UX research. And they went off. Obviously, we didn’t just sit and watch – they punched, we answered.
To make sure all the knowledge remains accessible, we’ve translated the input and our answers into an article. It’s quite long, because we’ve got loads of input. To keep it navigable, we divided the tips and tricks into five categories: in-lab research, remote research, preparation, analysis, and delivering insights.
If you have a question or frustration that’s not covered in this article, feel free to hit us up.
For now: enjoy the read.
- “It’s hard to keep the team motivated and concentrated for the full research day.” That’s right – we all know that our attention span fades during the day. When we plan interviews, we make sure they last 60 minutes at most, and we always schedule 15-minute breaks in between. Another way to keep everybody engaged and motivated is to have them participate actively: have them take notes and involve them in discussions.
- “Coming up with creative follow-up questions isn’t always easy.” Use the creativity of your team. At the end of the interview, it’s perfectly fine to walk away from the interview room, leave the participant alone for a minute, and check in with your coworkers in the observation room. Zoom out with them and ask what questions they still have after watching the interview. You don’t have to do it all by yourself.
- “Participants accidentally introduce themselves with their first and last name.” You don’t want this for privacy reasons. So you can (partially) prevent it by allowing participants to introduce themselves briefly in the waiting room, or before switching on the recording. Another way to avoid it is asking “what do you do in daily life” rather than “tell me about yourself”.
- “Participants who are late or don’t show up at all.” Ask participants to join the interview five to ten minutes before it starts. Besides that, always plan one spare participant at the end of the day as a back-up.
- “How do you not overwhelm the participant with the number of observers?” You don’t want the participant to feel too much pressure while trying to figure out an app. Therefore, we link the video of the interview (in Whereby or Lookback) to a different livestream. This way, the participant only sees the interviewer and you can invite as many observers as you want.
- “Tech issues with a participant during a remote interview.” Run a test session / technical check with the participants to tackle possible issues beforehand. We always do that two days prior to the interview. If it turns out that the tech doesn’t work, we still have time to recruit a new participant.
- “Exploratory research is a little harder.” But it’s still doable. Have participants use their phone and all of a sudden, they’re mobile: they can roam their entire house. In other words, doing exploratory research remotely has some advantages too.
- “Participants bring extra people (e.g. their colleagues) to join the usability test.” There are several ways to deal with this, depending on your context. If the task / scenario you want participants to complete is something they wouldn’t do alone in real life, this might actually be a nice bonus: interaction between two people and co-browsing can lead to interesting insights. If this isn’t the case, you can send them a message beforehand and explain why the one-on-one format is important.
- “Prototypes have to be ‘foolproof’ and work on participants’ devices. While in-lab, prototypes can have flaws because the interviewer is there to tweak things.” Very true. That’s why we prefer smaller prototypes over ‘more elaborate’ ones, and a higher frequency of doing research. Remote research sessions are generally easier to arrange than face-to-face ones (easier for participants to join, no rooms or lab needed, etc.). So, test more often with smaller prototypes.
- “Having to interrupt a participant who’s on fire sharing insights, due to time constraints and covering all your questions.” Be upfront: you have a schedule and specific questions. If it’s really interesting, you can schedule a follow-up call with participants, or ask them to write down any additional feedback you don’t have time for. Generally, once you’ve got a participant talking, they will happily keep on sharing. Either way: schedule enough time per session, and ideally some short breaks between sessions, so you can overrun the schedule a bit.
- “Not being able to pick up all subtleties of a participant’s body language.” It is absolutely true that some things get lost through a laptop screen; remote research and face-to-face research are not fully interchangeable. But remote research is better than no research. So if face-to-face is not possible but remote research is, we would always choose to do it remotely. In addition, remote research gives you extra insights that you don’t gain during face-to-face research in a lab: you get a glimpse of the participant’s home and their real context.
- “Check-ins between interviews take too much time.” Keep the moments between interviews short by focusing on three things: the main findings, a prototype check, and a research questions check (to shift focus if needed).
- “Involving the team during the day.” A kick-off at the start and a debrief at the end of the day are the two main moments to align with the client and the team. During the interviews, Miro is a great tool to have observers jot down their own insights. This keeps them engaged.
- “Participants receive private notifications even if you asked them to switch them off beforehand.” Very recognizable, and we haven’t found the right way to solve it; most participants turn them off, but not all. What seems to help a bit is warning them that every incoming message will be recorded and seen by a group of people.
- “How many times do you read over a script before measuring it as ‘perfect’? And to whom do you show the script first for validation?” In every UX test, big or small, you want to see the behaviour and reactions of your participants. When you make a ‘tight’ and ‘complete’ script, you might not allow yourself any space to let the participant choose their own path. To us, the script is mostly used as ‘speaker notes’ and a way to check and communicate with our stakeholders how the tests are conducted.
- “Who should we invite to watch the livestream?” Basically anyone who’s interested in user insights about the topic you are researching. We always leave this up to the client, putting them in control of whom to invite and whom not.
- “Getting stakeholders to sit together and listen to each other for real.” Just as with participants, it is important to ask the right questions of your stakeholders. The most important thing is that everybody gets on the same page about the goal of the test: what do we need to learn to be able to solve our challenges?
- “Getting funding to do research to fix a problem instead of having to formulate a specific solution or product first.” Start with a small study to show the value of research, and how much money and time it can save you in development and design. If you start really small, you don’t need approval and it will cost you little time.
- “Enabling UX research on a project early enough, not when it’s almost built.” UX research can be done at any stage of the development cycle; the earlier the better. The trick is to know what you can and can’t ask and learn with low-fidelity prototypes. Even with a simple concept flow, made with pen and paper, you can learn a lot about people’s understanding of a product. Here’s a nice article if you want to read more on prototyping and testing early.
- “Research pitched as a ‘quick’ project with clarity about the main research questions, while in reality everything still leads to a lot of discussion.” One test rarely holds all the answers. Often, you need one test to truly understand the issue, some time to discuss and create possible solutions, and another test to try these solutions. Then when you test again you will possibly find other issues. Because of this you should always try to make testing and design an iterative process.
- “Getting last-minute extra research question requests from the client.” Recognizable. And in itself a nice signal, because if more questions arise, people apparently see research as something important and relevant – so you can loosen up a bit. However, the best thing you can do is try to understand the extra research question: in which context / situation does this occur? What is the underlying user need? If it’s within the scope of your research, you can tell your client that you’ll include it if there’s time left or if you happen to touch on it. If it’s out of scope, you have a nice opportunity to talk about a new research moment.
- “Convince colleagues, especially Product Owners, that putting effort into UX research is worthwhile and efficient – again and again, early and often.” That’s very familiar. Have them watch along with interviews, actively involve them during the research, and present your insights afterwards.
- “Deciding what is more important: is it the most mentioned ‘issue’ or that one specific comment that only pops up once but sounds super promising?” With qualitative testing you should never look at the numbers. Of course the message becomes clearer when more than half of the participants have the same issues, but if this is a ‘non-disruptive’ issue that is difficult to fix, it would be better to focus on other issues. We only report the most promising insights; try to determine the impact of every issue / possibility and the effort it would take to fix / apply it.
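For those who like to make the impact-versus-effort trade-off explicit, here is a minimal, purely illustrative sketch of scoring findings that way. The findings, scores, and the `impact / effort` heuristic are all our own invented example, not a tool or formula from any research method:

```python
# Illustrative sketch: ranking qualitative findings by impact vs. effort.
# All findings and scores below are made up for demonstration.
from dataclasses import dataclass


@dataclass
class Finding:
    summary: str
    impact: int  # 1 (low) to 5 (high): how much fixing it would help users
    effort: int  # 1 (low) to 5 (high): how hard it would be to fix

    @property
    def priority(self) -> float:
        # Simple heuristic: favour high impact and low effort.
        return self.impact / self.effort


findings = [
    Finding("Checkout button is overlooked", impact=5, effort=1),
    Finding("Onboarding copy is unclear", impact=3, effort=2),
    Finding("Settings page needs a redesign", impact=2, effort=5),
]

for f in sorted(findings, key=lambda f: f.priority, reverse=True):
    print(f"{f.priority:.1f}  {f.summary}")
```

The point is not the numbers themselves – with qualitative data they are rough judgments – but that writing impact and effort down side by side makes the prioritization discussion concrete.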
- “Wanting to work fast and effectively without losing interesting insights.” After the interviews on the test day, make sure you do a debrief with the client to conclude on the main findings / insights. This really makes the analysis much easier.
- “People (non researchers) not paying attention to all of the interviews during observation and then picking only one or two things that align with their point of view.” We always ask teams to watch all the sessions live and try to keep them involved by checking in with them in between the test sessions. But of course, you can’t force people to watch all the sessions. However, you can try to appoint a ‘buddy’ for each team you report to: a buddy is a stakeholder who watches all the sessions and whom you have a debrief with at the end of the day to align your findings and conclusions. When you share your insights, you know these buddies will have your back.
- “Creating insights from research over multiple studies or even over multiple projects is so time consuming.” If you put your findings / observations in one single system, and categorize them neatly, it is quite easy to bundle new collections of observations into new insights that transcend separate studies. Handy tools to support this are EnjoyHQ, Dovetail, Sticktail, etc.
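To illustrate the idea of one categorized system (independent of any particular tool such as EnjoyHQ or Dovetail), here is a small sketch; the observations, tags, and study names are invented for the example:

```python
# Minimal sketch: bundling categorized observations from multiple studies
# into cross-study patterns. All data below is invented for illustration.
from collections import defaultdict

observations = [
    {"study": "checkout-test-q1", "tag": "navigation", "note": "Missed the back link"},
    {"study": "onboarding-q2", "tag": "navigation", "note": "Looked for a menu that wasn't there"},
    {"study": "onboarding-q2", "tag": "copy", "note": "Unclear what 'sync' means"},
]

# Group observations by tag, regardless of which study they came from.
by_tag = defaultdict(list)
for obs in observations:
    by_tag[obs["tag"]].append(obs)

# Any tag that appears in more than one study is a candidate for an
# insight that transcends a single project.
for tag, group in by_tag.items():
    studies = {o["study"] for o in group}
    if len(studies) > 1:
        print(f"Cross-study pattern '{tag}': seen in {len(studies)} studies")
```

The same grouping is what a dedicated research repository does for you at scale – the consistent tagging is the part that takes discipline.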
- “Making (and going through) transcripts takes a lot of time.” While we do the interviews we take notes using our observations template. We perfected it throughout the years and you can download it for free. It should save you lots of time.
- “Handle expectations of immediate results, while digging and finding the important patterns can (and should) take time.” That’s right. It takes time to extensively analyse the results and find deeper patterns; this is not something that can be rushed. The reason stakeholders want early results is that they are interested. Use this to your advantage: invite them to work together on the analysis and ask them about their own interpretations.
- “Combining advice from experts with user experience data and convincing colleagues that this should get more attention than just ‘using quotes’.” In your advice, make sure to distinguish observations from expert / root-cause analysis (grounded in theory like cognitive psychology). Observations should not lead to discussion (they are objective). If your analysis is properly supported by knowledge, it should not lead to discussion either. On the contrary, advice / next steps are up for discussion – other stakeholders might bring in more insights or better ideas.
- “Trying to present the impact during the user journey (wished by the client) with little qualitative data.” That’s a great opportunity to fuel interest in further research. Plot the few insights you gathered within the broader context of a journey (for instance on a journey map). Visualize your blind spots. Frame your insights as hypotheses (instead of conclusions) that are in need of further validation.
- “How to support the conclusion with the right context without making it a boring and lengthy story?” Context is key to make a story last. We try to keep it vivid by using quotes, screenshots and video clips of the interview.
- “Difficult balance of putting time and effort into transferring insights, while the next study needs preparation.” To us, a project ends when our stakeholders are informed and tell us that they know what to work on. Without a proper transfer, the insights lose their value, so don’t try to rush this. If you do research for the same group, ask for more time if needed and explain that the insights are valuable and need time to transfer.
- “At a remote presentation it is more difficult to turn it into a discussion / session. It soon becomes one-way communication.” Create a collaboration space where people can take notes together during a remote session. A Miro board or Mural board, for instance.
- “Measuring and advocating the most important insights to the team without being too ‘pushy’.” Be neutral. Show the facts. It is way more powerful if observers draw their own conclusions, instead of you pushing them.
- “How to turn insights into actionable items on the roadmap?” Great question. This is not an easy one to answer and it also highly depends on the type of insights you’ve gained. What might help is to understand that it is (often) not only your task to come up with a solution. Instead, try to be specific about the problem that needs to be solved and determine clear requirements for the solution. For example: you don’t need to specify what a button looks like, but it does need to be visible and look clickable. Additionally, you can help others by looking at the bigger picture instead of the details, and inspire them by offering ideas about potential design solutions.
- “How to add value when not embedded in a product team and the research you do is more ad hoc?” Make sure the product team is fully aware of the value you add. You can achieve this by having team members list their assumptions about research results and participants before you start. Then have them revisit those assumptions after the study. This will make the team more aware, and hopefully they’ll involve you more the next time.
We hope this was helpful. And again: if you have any questions or frustrations that we didn’t cover in this article, feel free to contact us anytime. We’d love to have a chat and help you make your research hassle-free.