A Journey into the World of UX Research

As technology continues to evolve, so does the user experience. With so many different types of products and services available, it’s important to ensure that your users have the best possible experience with your product. This is where UX Research comes in.

UX Research is a process of understanding user behavior and preferences through data analysis, user interviews and surveys, usability testing, and other methods. To help shed more light on this complex topic, we recently hosted a panel discussion with two UX experts, Sean and Dave.

In this blog post, we explore the key takeaways from our discussion with Sean and Dave regarding the significance of UX research and how it can be leveraged to enhance a product or service.

Sean:

I have been a senior product researcher at WorkHuman for nearly six years. When I started, our research team had only two researchers, but we have since grown to seven. Our design team has also grown at a similar rate.

I have a background in psychology, which led me to specialize in user experience research. During my time at WorkHuman, I have had the opportunity to work on various platforms including recognition platforms, performance management tools, data analytics tools, and even an e-commerce store.

Dave:

I’m the UX Director at The Friday Agency. I started with a Visual Communications primary degree, then moved into UI design in the earlier days of the internet. I did that for a while, but over the last 10-12 years, I’ve focused more on UX. 

I wouldn’t call myself a researcher, though. In my mind, researchers are academics with PhDs. I see myself more as a UX generalist: I lead UX projects and work with UX and UI designers, content creators, and developers.

Because I work at an agency, we work with many different clients. I’m working on 5-6 different projects right now, and about 70% of the work I do is web-based. We’re designing websites for Peter Mark, a hair salon chain; DublinTown, an events and business listings site; and CES, a language school; and we’re auditing Fire, a payment app. So it’s a great mix!

What methods do you use to identify user needs and preferences?

Dave:

We do everything from user interviews and usability testing to card sorting, tree testing, and surveys. My agency also does digital marketing and analytics, so we analyse analytics and use the insights for both qualitative and quantitative research. We also do A/B testing, heat mapping, and more. 

Each project has its own process depending on its needs. Usability testing and surveys are probably our most common methods for quickly gaining early insights into a project, before we determine the next steps.

Sean:

I would always recommend moderated sessions when trying to identify user needs and preferences. The advantage of moderated sessions is that you can ask more complicated qualitative questions, and participants can expand on their feedback in ways they wouldn’t in unmoderated methods such as surveys. I feel that surveys shouldn’t be your only method, and that running moderated sessions afterwards allows you to dig deeper into the insights you’ve gained.

For clients, I usually start broad with a survey, then narrow to interviews: the survey captures lots of data, and the interviews explore those findings in more depth. A funnel approach from broad to detailed is effective.

How important is it to understand Google Analytics?

Dave:

Personally, I don’t know Google Analytics at all. It’s an absolute beast; if you’ve ever tried to use it, you know what I mean. I can use it at a basic level, like finding out what percentage of users are on mobile versus desktop. But if you want to really dig into the data, there’s a lot to learn. Luckily, I work with people who know Analytics extremely well. Something that might take me an hour to figure out, they can do in two minutes. So I usually just ask them.

How important is it to know Analytics, though? If you don’t have people around who can do it, I think it’s important. You might want to read up on it and see how much work it would take to get comfortable with the data you need. Every time I want a new data point, it’s a different question and a different process in Analytics to find the answer. It’s a tricky tool to use.
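For readers who do want to pull that kind of basic figure themselves, here is a minimal sketch of the mobile-versus-desktop breakdown Dave mentions, using the GA4 Data API’s Python client. The property ID is a placeholder, and it assumes the google-analytics-data package is installed and credentials are already configured:

```python
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

# Placeholder property ID; credentials are read from the environment
# (GOOGLE_APPLICATION_CREDENTIALS).
client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property="properties/123456789",
    dimensions=[Dimension(name="deviceCategory")],  # mobile / desktop / tablet
    metrics=[Metric(name="activeUsers")],
    date_ranges=[DateRange(start_date="30daysAgo", end_date="today")],
)
response = client.run_report(request)

# Turn raw counts into the share of active users per device category.
total = sum(int(row.metric_values[0].value) for row in response.rows)
for row in response.rows:
    users = int(row.metric_values[0].value)
    print(f"{row.dimension_values[0].value}: {users / total:.1%} of active users")
```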

Sean:

To be honest, I have the same experience with Google Analytics. I work with people who have much more expertise and skill with that aspect, so I can’t really talk in depth about setting up Google Analytics. I think that’s the complicated part; once it’s set up, accessing the data is easier. 

We use Heap Analytics now, which I find much simpler to use since it’s already set up for me. Accessing the data is a lot easier than with Google Analytics.

How do you measure the success of UX research?

Sean:

So I guess you could probably look at that question in two ways: If you’re working with a company on product design, many factors will influence the decisions made and the direction taken. You could have business decisions, client decisions—sometimes it’s hard to identify how to measure UX research success based on whether recommendations were implemented in the product.

The way we track this at the moment uses the HEART framework created by Google. HEART stands for Happiness, Engagement, Adoption, Retention, and Task success. We choose metrics for each and track them over time. If we have a design change or release based on research, we can see the trends. You can do this yourself; it doesn’t require huge effort. We also follow up with decision makers after research projects to understand why a recommendation was ignored or a different direction taken. This helps us identify where decisions are made in the company and understand the company better.
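As an illustration of what choosing a metric for each HEART dimension and tracking it over time can look like, here is a minimal sketch; the example metrics and numbers are hypothetical, not Workhuman’s actual setup:

```python
# Illustrative metric choices per HEART dimension (each team picks its own
# goals, signals, and metrics).
heart_metrics = {
    "Happiness":    "average satisfaction-survey rating",
    "Engagement":   "sessions per active user per week",
    "Adoption":     "new users of a feature in the last 30 days",
    "Retention":    "% of users still active 90 days after signup",
    "Task success": "task completion rate in usability tests",
}

def trend(baseline: float, current: float) -> float:
    """Percentage change of a metric relative to its pre-release baseline."""
    return (current - baseline) / baseline * 100

# Hypothetical example: task completion went from 72% before a redesign to 81% after.
print(f"Task success: {trend(0.72, 0.81):+.1f}%")  # -> +12.5%
```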

Sometimes you shouldn’t judge UX research success just based on implementation. If you facilitate user research and get stakeholders discussing it, that could be a success. Anything that promotes discussion of user research and gets people doing it themselves is a success.

Dave:

A lot of our work focuses on performance based on conversions, whether a client is selling something or wants people to sign up. So conversions are usually our primary measure of success. We also look at metrics like dwell time, bounce rate and so on, but ultimately, it’s really about conversion for most of our projects.

Another measure is usability testing after a product goes live. We can compare metrics from those usability tests against metrics from a previous version of the website or app. And of course, having a happy client is also a success for us. But usually, clients are happy when we drive strong conversions.

Sean:

Yeah, Dave actually reminded me – there’s a survey you can run after user testing called the System Usability Scale. We also use that if we have certain product areas that we’re iterating on over time. If we have a current flow, we’ll test it, get a score using that scale, and benchmark that score. Then we run the same survey after each iteration, in the usability test session right after the test finishes. Users complete the survey, and you get a benchmark score.

There’s a website where you can just enter the scores to calculate them. It’s a useful tool to know, especially if you’re iterating on a design over time and testing multiple versions in a short period. We’d use that score as a benchmark for ourselves to aim for. A SUS score above 68 is considered above average, and anything below 68 is below average, so you’d have to revisit the design.
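For anyone who would rather compute the score directly than use a calculator site, here is a minimal sketch of the standard SUS scoring rule (the example answers are made up):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from one participant's ten
    answers (each 1-5, in questionnaire order)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for item, answer in enumerate(responses, start=1):
        # Odd-numbered items are positively worded: score - 1.
        # Even-numbered items are negatively worded: 5 - score.
        total += (answer - 1) if item % 2 == 1 else (5 - answer)
    return total * 2.5  # scale the 0-40 raw total to 0-100

# Hypothetical participant; 68 is the commonly cited average benchmark.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```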

The HEART framework is probably more of a long-term benchmark; it helps show whether UX design is improving the user experience.

Dave:

I think the work we do is probably quite different. Some of our clients come to us for a project, we’ll work on it, and then it’s done and they’re gone. They might come back six months later for some updates, but they’re still not really thinking about ongoing work or iterative design that’s constantly evolving. 

I’d say for probably 70% of our clients, we will design something, build something, and then we won’t hear from them for a year. Then they’ll come back with updates they want done. At that point we say, “Look, let’s do an audit, let’s do more usability testing, let’s look at the metrics and try to improve.” Sometimes they want to do it, sometimes they don’t, but the culture is changing and clients’ design maturity is improving.

How do you ensure that the data collected during a UX research study is valid and reliable?

Dave:

I think for us, it’s about cross-referencing and validating qualitative with quantitative data. We triangulate all our data and check it against each other. That’s why we start broad and narrow things down. Solid analytics will tell you what’s not working and what is, but it won’t tell you why.

If we do surveys or interviews, the questions will be based on analytics findings. Each stage of the process validates the previous one by cross-referencing. This helps ensure the data is reliable.

Sean:

Yeah, that’s pretty much the exact same process I follow. Like Dave said, just validating and cross-referencing. The only other advice I’d offer for qualitative data analysis and collaboration is that your insights are probably stronger if two people discuss them, especially for qualitative data since thematic analysis can be a bit vague. Quantitative data is hopefully mostly facts. So, a lot of collaboration during analysis helps mitigate bias that might come from just one person.

Dave:

I agree. Working in UX, it’s hard to do it alone. You need that collaboration. For me, I’m validating my thoughts and opinions with others all day, every day. Working as a freelancer or alone could lead to confirmation bias creeping in, even with lots of experience. Collaboration keeps that in check.

What tools would you recommend for gathering, analyzing, and visualizing UX data?

Sean:

For gathering UX data, in the sense of UX research, I guess you’d want tools that have transcription functions to help you out with user interviews and usability testing. Otherwise you can end up with so much “um” and “uh” feedback that you have to revisit and re-watch your recordings over and over again, which is sometimes necessary, but it can be quite difficult to hear yourself speak all the time. I think tools like Microsoft Teams and Zoom work perfectly.

When it comes to analyzing your data, things like physical Post-its can work. Miro is a fantastic tool for mapping out the data you have – you can download the data from your survey, copy and paste it over into your Miro board, and you have your qualitative and your quantitative findings. From there, you can easily move the Post-its around, do your thematic analysis, identify your groupings and trends, and form your insights.

I can’t speak highly enough of Miro as an analysis tool, but also as a visualization tool. When it comes to visualizing the data, I also do it quite often manually using Figma.

For card sorting and tree testing, a tool called Optimal Workshop does fantastic visualization of your data. And then for user testing, Maze produces a really nice research report for you, automatically visualizing the data very well.

Dave:

I’ve used so many survey tools and I’m still constantly trying new ones because I enjoy trying new software. I’ve never found a suite of tools that I’m completely happy with. 

For surveys, I’ve used everything from Google Forms to SurveyMonkey and Typeform. Recently, I switched to Maze because it has a nice UI for users and more flexibility. Some tools don’t let you group multiple opinion scale questions at once, forcing you to do them one at a time.

For analyzing data, Maze does a good job at visualizing survey results. I’ve also used Optimal Workshop, which is great but pricey. It’s good for coding interview data by turning qualitative data into quantitative data, and you can colour code comments to find patterns and themes.

For visualisation, Figma is the best. I’ve used Miro a bit but Figma is better for me – you’re in it all day anyway! 

For usability testing, we use Lookback, which is good for remote mobile usability testing. It lets you add comments that sync with the video timeline while recording or watching back. My team can observe remotely, take notes, and highlight issues – and we can get a nice report out of it. We also do a lot of manual usability testing, using spreadsheets to map themes and note user issues. We colour code and repeat them under each user’s name. Unfortunately, no single tool is good for everything you need.

Sean:

I enjoyed using EnjoyHQ, an insight repository tool, to store notes but found the experience better for me as the creator than for readers. The data wasn’t as useful or usable for consumers. Dovetail seems very easy to use, though.

What challenges have you faced when conducting UX research in different contexts?

Sean:

Well, I work in B2B software, so often we’ll have simple issues with stakeholders. For example, they may not be comfortable with you speaking to their employees. You might have to change your methodologies due to tight timelines and schedules. UX research, as you can imagine, can be difficult to fit into the scrum process.

Probably the biggest challenges are time restrictions and recruitment. Getting people to commit 20 minutes of their time for research can be difficult without a strong incentive. Technology issues are also common now that we’re all working remotely. You might join a research session only to find your participant is still updating their Microsoft Teams to join the call, and before you know it, 10 minutes of your session is gone.

I’d always recommend booking 10 extra minutes for any research session you’ve planned for 20-30 minutes. You never know what could come up, like their meeting running over or they’re just finishing dinner.

Stakeholder management can also be challenging, as many people have opinions on design. Working with clients to navigate questions like “What will you ask my employees?” or “What feedback do you have on this?” can be difficult.

In summary, the main issues I face are time restrictions, technology issues, and stakeholder management.

Dave:

Yeah, I agree. One of the things I always tell juniors when they start working on research studies where we have to recruit people is that organising the sessions is not as easy as they think. They assume that all you have to do is schedule five or six people for a usability test or whatever. However, it’s actually one of the hardest things to do because you’re trying to coordinate different people and their chaotic schedules. Sometimes people drop out or don’t get back to you, so you have to be really organised. I think having a clear process and a structured way to log all your contacts is crucial. 

A while back, we had clients who wouldn’t listen to the data. I remember one client in particular who had this overpriced product that everyone in the usability tests and interviews said nobody would buy. But the client came back and said, ‘Well, I don’t agree with them.’ And it’s like, ‘You can’t just disagree with the data!’ Anyway, within six months, the business went under. Sometimes clients just don’t get the data or respect it, or maybe they think they know better. It’s always a challenge to convince them, and sometimes you just can’t.

Sean:

I have a very specific challenge that someone might encounter at some point in their UX research career, especially when doing B2B research. 

Recruitment can be difficult, and even during research sessions with these users, a lot of the time the tools you’re testing with them weren’t chosen by the users themselves. They were chosen for them, so you lose that kind of emotional connection that a user might have with a tool they chose themselves. 

For example, if you choose Spotify or Apple Music, you’re automatically making a decision and have done your research or found something you enjoy about the app. There’s an emotional connection there. You kind of lose that a little bit sometimes in B2B, which can make UX research quite difficult. It’s just harder to get feedback from people who don’t really care.

Dave:

Sometimes the hardest thing about research is knowing what questions to ask. It’s only in hindsight that you find out you asked the wrong question and gathered data you didn’t need. I think you really need to think about how to approach research and what types of questions you’re going to ask, how to structure them, and make sure you need the data each question will give you. People always throw in questions like “where do they live?” but then you have to ask yourself, “why do we need to know where they live?” 

What’s the process for recruiting participants for usability testing?

Dave:

For the most part, we tend to recruit participants ourselves. We used to use agencies but found they often gave us people who did not match the demographics we needed, and they are also very expensive. Instead, we use our own networks and the 8-9 people in our office. If each of us refers 10 people, that’s a decent pool to draw from. Surveys are different—for those, we go to our clients’ customers. 

Overall, we qualify people ourselves for user testing or interviews. We have a registration form with a few questions to ensure they match the demographic we need, where we offer options for times and dates and collect their contact info for the study. We invite more people than needed, expecting some will drop out. We also make sure participants understand what’s involved so they’re not nervous, since nervousness leads to bad results. We email them the details, remind them the day before, and text them on the day of the study.

This process has evolved over time and works well, though some still don’t show up or take the call somewhere too noisy, even with instructions to find a quiet space and ensure good internet. For emails, keep details concise, assuming people won’t read long messages. Apply UX principles to emails too, to ensure people absorb the key information.

Sean:

Yeah, my process is very similar to that. We largely avoid recruitment panels due to mixed results. Sometimes we get participants who are genuinely interested, but others are just doing it for easy money. 

Instead, we go directly to our clients and email their employees to ask if any would be happy to participate in our research studies. We also have a popup in our platform where people can opt into our research, giving us a list of emails. We then email people from that list whenever we have a study to conduct. 

As with Dave, we email them a scheduling link through Microsoft Bookings so they can book time with us. Of course, there’s no guarantee that everyone will show up, but this process has worked well for us.

Is there an optimal number you should recruit to maximize the benefits of the session?

Dave:

Yeah, we would always do five testers per session for user testing or interviews. I mean, if it’s an app with a lot of features, we’ll split it up into different groups. You know, one group of five might test one half of the app and another group tests the other half. 

We would probably line up maybe six testers with the view that one of them might drop out. Then we’d have a few more testers on the sidelines who have signed up but haven’t been confirmed yet. That way, if someone drops out, we’ll have replacements ready to jump in.

Sean:

Yeah, we usually aim for around 5 to 7 interviews, with 5 as the minimum. We probably wouldn’t do more than 7 due to time constraints and the fact that we’d likely stop learning anything new after 6 or 7 interviews.

We might do more interviews if we’re doing discovery-based research, though. In that case, we may need to talk to people from different regions or companies to understand how they use the platform, as company and country culture can impact usage. 

Again, it depends on how much time we have and whether we’re still learning new information. Basically, whether we do more interviews depends on if we’ve gained enough insights to end the study. But when it comes to user testing, 5 to 7 interviews is really the sweet spot.

When testing iterations, would you assess with the same participants or recruit new testers?

Dave:

I always prefer getting fresh perspectives from new participants. Using someone who has already reviewed a previous iteration means they’ve already seen it and will likely just say “this version is better” without a critical analysis. Each new set of eyes provides an opportunity for substantive feedback to improve the work. Fresh perspectives from each review cycle are most valuable.

Sean:

Yeah, I agree with that. People will always feel kind towards those who have obviously put more work in, so you might not get an honest opinion if you keep testing with the same people. Overall, people are kind and will likely feel obligated to give more positive feedback if it’s clear you’ve put in more effort.

What challenges have you faced when selling the Research to clients?

Dave:

Fortunately, I don’t do much sales, which is why I have a business partner. He handles sales. I do a little bit of it, but many clients come to us and it’s just a box-checking exercise for them. They’re like, “Oh, UX, we heard we need that. Let’s do some of that.” 

Sometimes we get clients saying they want UX, but then we find out what they really want is UI. It’s a mixed bag. 

Design maturity is improving, I think. Over the last two or three years it’s really leapt ahead, but we’re fortunate now that we can turn away work if we find the client just wants to check a box and won’t listen to data. It’s a waste of time, so we spot them now. Selling is challenging, but the best way is to show metrics. Case studies with clear metrics, like conversion rates of X% before and 10% more after. That really speaks to them – money and numbers, that’s it.

Sean:

Yeah, nowadays there isn’t a huge need to sell the idea of user research to clients. A lot of our clients love the opportunity to get feedback from their employees and understand how they feel. They feel like there’s a better chance of meeting their needs and requests if we directly ask their users.

Convincing individual users to participate can be difficult, but getting client buy-in is usually not an issue. Sometimes clients want to review the questions, which takes a lot of time, but agreeing to do the study is typically not a problem.

Dave:

Our clients are often small-to-medium sized businesses, and some are owner-managers who greatly value their employees’ input. While the research does cost money, they realize the value. They think they know the issues, but say “there’s no harm in confirming it.”

As we do a lot of digital marketing work, many of our clients see UX as a service to improve conversion on their website or app. It’s an easier sell since we already work with them. Now, more clients approach us with a good understanding of UX, making our job easier.

What would you do if your client didn’t care about what their users said?

Dave:

I’ve tried in the past to convince people with that kind of opinion, but now I know it’s not worth the effort. It’s too difficult to change their minds, and you end up wasting a lot of time and stressing yourself out. It’s better to save yourself the trouble. Your own time is limited, so if it’s not worth it, it’s probably best to move on.

What methods are most effective for gathering quantitative data in a user experience study?

Dave:

I’ve used Optimal Workshop for coding interviews on a few projects and found it really helpful. You simply input your recordings or data, and you can code each part and identify patterns to turn qualitative data into quantitative data to some degree. The best methods remain interviews and usability testing.

Sean:

Yeah, surveys and user testing are the two main methods we use to gather feedback. Microsoft Forms and in-person user testing are the best ways to collect quantitative data if you don’t have access to analytics tools.

What methods are most effective for gathering qualitative data in a user experience study?

Sean:

Any moderated user research session is an opportunity for a lot of feedback. Even some more quantitative methods like usability testing or card sorting, which many people do with online tools, can provide a lot of feedback if you speak with participants one-on-one.

One-on-one conversations are the best way to get qualitative feedback. You gain a lot of insight from discussing with participants in person.

Dave:

I really miss in-person card sorting sessions. They’re great—kind of like a user interview, focus group, and card sorting all in one.

Sean:

Yeah, the last few card sorting tests we’ve run have been unmoderated, and we can definitely see less value there. There’s just confusion—it’s quite chaotic. At least in a moderated session, you have a chance to explain something if the participant gets completely lost. If it’s unmoderated, then the whole session really loses value if they don’t understand what they’re being asked to do.

Dave:

To add, I highly recommend using Optimal Workshop. Their tree testing and data visualisation is excellent, clients love it! The patterns and visuals are so engaging, and tree testing in particular benefits greatly from their tool.

Sean:

I’m not sure about the pricing, but Userlytics also offers card sorting capabilities. While not as advanced for data visualization, it can still be a useful tool for remote card sorting.

In what ways can quantitative data enhance the insights gained from qualitative data in a user experience study?

Sean:

Yeah, you can gain valuable insights from qualitative data if you conduct thorough research sessions and gather quality insights. Most qualitative research focuses on problem discovery, not solution discovery. So even if you gather quality insights throughout, they are all really hypotheses until you build something and test it.

Once you start gathering quantitative data like time on task, clicks, and drop-off rates, that can help validate and add value to the qualitative insights you’ve gained. Qualitative and quantitative research always complement each other. Quantitative data can seem more powerful at first, but eventually you’ll have to figure out the why.

Dave:

Yeah, I agree, you have to use both quantitative and qualitative methods. One of my mentees was focusing only on quantitative analysis at one point, so I had to steer them toward also doing some qualitative work to cross-check and validate their findings. 

You really need to use a combination of both approaches in a thoughtful way. Simply collecting data isn’t enough – you need a process that brings quantitative and qualitative insights together in a meaningful way.

What biases and preconceptions should you be aware of when conducting your user research (before and after)?

Dave:

I think if you’re in an interview, you really need to dig into why someone has an opinion. Find out if there’s bias or other outside factors influencing them. There’s bias everywhere—subconscious biases people aren’t even aware of. You have to try separating yourself from your preconceptions when thinking about a project or solution. Try as hard as you can not to rely on biases or lean into your own. It’s hard, but you get better at it over time. 

I think bias can be developed by asking leading questions, or the wrong questions. When conducting usability testing, I try to follow a script but often go off on tangents. This can lead to asking leading questions, so I have to be careful. I’m not great at avoiding bias, though I’ve improved. It’s difficult, so I have to check myself.

To address confirmation bias, it helps to write your biases down at the start of a project. Then I can refer to them often to check my thinking. Writing them down puts them front of mind so I can evaluate everything against the biases I’ve identified.

How do you handle biased stakeholders (small clients’ feedback being less valuable than big clients’ feedback – B2B)?

Sean:

Since we have about three types of clients—buyers, end users, and influencers like managers—we rotate among them. Constantly emailing the same clients with requests is bad for client retention. Our customer teams know not to do that. 

It’s hard to say exactly, but this isn’t really a problem. Of course, if a client makes up a large portion of your user base, you can’t ignore them. But try to limit client-specific requests. You also don’t want to build a tool specifically for one client, since they might not renew and then you’ll have wasted resources. So if you want to avoid this, warn stakeholders that focusing only on one client means the work could be wasted if that client leaves. We need to apply our insights and feedback more broadly.

What can we do to ensure participants feel comfortable fully expressing themselves in their own words during usability testing sessions?

Sean:

Yeah, there’s something called the Hawthorne effect, which means people naturally change their behavior when they know they’re being observed. That’s unavoidable, no matter what you do. Laboratory settings will always differ from real-world scenarios. So, it’s very important to make participants feel as comfortable as possible. 

What I like to do is start with some casual conversation, not jumping straight into the user testing session. Chat about anything, like how their day is going (obviously avoiding negativity). If you’re in different cities or countries, discuss that. Ease into the actual questions. Don’t start with the objective right away. Ask simple questions they can easily answer, like “tell me about yourself” or “what’s your current role?” or “what’s your favorite app or TV show?” This gets them talking and comfortable, though they may think the casual conversation is still part of the session. 

Explain the session beforehand, even if you’ve emailed details. Say something like, “Step by step, I’ll share my screen and walk you through a prototype.” That way there are no surprises. During the session, make sure they don’t get too frustrated if they get lost in the prototype. Remind them that you’re testing the prototype, not them. People may be aware of user testing but not fully understand what that means. If they do get frustrated, try to remedy the issue, but don’t dwell on it or make them feel silly – it’s a balance. The most important thing is preparing them by starting a casual conversation, not jumping straight into the testing. Discuss light, easy topics to help them relax.

Dave:

I agree with all of that. One thing I try to avoid is using the term “test” or “usability test” because no one knows what a usability test is except us, and the word “test” scares people. They think, “Oh, they’re testing me,” so I just say, “Look, it’s a research session. We just want to get your opinion on an app or website.” If we avoid the word “test,” it’s better.

I also always tell a white lie and say, “You won’t hurt my feelings if you don’t like this because I’m not working on the project” or “I didn’t design it.” That makes them more comfortable too because if they think you designed something and they’ll hurt your feelings, they’ll just lie.

Sean:

Another way to get honest feedback is to test low-fidelity prototypes. If people think you haven’t put in a huge amount of effort, they’ll be more honest with their feedback. If you show high-fidelity prototypes, they might feel like you’ve spent so long and put your heart into it, so they’ll ease off. Consider starting with low-fidelity wireframes to get some honest feedback.

What’s the final piece of advice you’d like to give the audience?

Sean:

When conducting research for your project, be honest with yourself. You know that whole bias idea? If you discover findings that contradict your assumptions, accept them and change direction. That’s a great approach.

Dave:

Yeah, I think it’s good to keep in mind that when doing a lot of research, you might explore three, four, or five different methods. Don’t forget about the results, though. People often do a survey, analyse the results, and then move on to the next thing. Always go back over the results throughout the entire project.

I’ve seen people start designing after the research but forget their research findings. Instead, they design based on their impression of the research. So always reference your data throughout the entire project, right up until the end.