10.1 Usability Tests
Developing products is all about making decisions. And as a product manager, you'll help your team make many decisions about how your software or application should be used. But how will you know if your product works the way it's supposed to? You and your team know the product well, and the interactions seem obvious to you, but will end users actually know how to use it?
To increase your confidence in your designs—and to find and fix flaws in what you've built—you can test how people engage with your products. That's where usability tests come in. Usability tests can show you where users succeed, stumble, or get stuck as they try to accomplish tasks within your products.
Like user interviews, usability tests require a structured, strategic approach to get the right answers to your questions. To develop effective usability tests, you'll need to script the tests, recruit participants, run the tests, and analyze the results. In this checkpoint, you'll learn about all of these elements. These tools and methods will ensure that your tests yield useful information.
By the end of this checkpoint, you should be able to do the following:
- Explain the value and goals of usability tests
- Create and conduct basic usability tests
- Effectively analyze usability test results

What is a usability test?
A usability test is a UX research method designed to reveal how people use a product or feature in a specific context. It involves observing users as they go through the process of using the product and examining how they engage with it. Usability tests help you uncover problems that become apparent only when you observe how a user interacts with your product.
For example, imagine you are a product manager working on Facebook Messenger. You've just launched a feature that allows people to pay other people through the messenger app. Your testing went well, but you're not seeing the adoption rates you expected. You decide to conduct a usability test to learn more. You reach out to a few Messenger users who have never tried the feature and observe them go through the process of sending money to someone else.
You quickly notice that several people get stuck at the same spot: when they're asked to add their bank account information. They tell you that they're afraid Facebook will immediately take money from their account. You ask why, and you learn that the text instructions are unclear. What could you do to address this problem?
One possible solution is to change the text to reassure people about what to expect. You could even rewrite the instructions in two or three different ways and ask the users for feedback. Do they think one version is clearer than the others? Does the new text make them feel more comfortable about filling in their account information?
As you can see, watching users use the payment feature revealed a core problem: the misleading text. Of course, you could have interviewed them about their experience. But if you ask people for feedback only after they're done with a task, they may not be able to tell you what tripped them up. In fact, they likely won't even be aware that it was the instructional text that caused their issue. When you watch users interact with a product in real time, you can ask questions on the spot whenever they get stuck and uncover the underlying reasons.
Similarly, if you were looking only at your product analytics, you might see where people abandon the process (depending on the granularity of the data your product collects), but you would still need additional testing to figure out why they were dropping off at the bank information stage.
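To make that concrete, here is a minimal sketch of the kind of funnel analysis that would surface the drop-off point. It assumes a hypothetical event log with one row per user per completed step; the step names and schema are illustrative, not taken from any real product.

```python
import pandas as pd

# Hypothetical event log: one row per user per step they completed.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "step": [
        "open_payment", "enter_amount", "add_bank_info",
        "open_payment", "enter_amount",
        "open_payment", "enter_amount", "add_bank_info", "confirm",
    ],
})

# The payment flow, in order (illustrative step names).
funnel = ["open_payment", "enter_amount", "add_bank_info", "confirm"]

# Count the distinct users who reached each step.
for step in funnel:
    users = events.loc[events["step"] == step, "user_id"].nunique()
    print(f"{step:15} {users} users")

# The counts show *where* users drop off in the flow;
# only a usability test can tell you *why*.
```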
Usability testing is also a great way to build empathy with your users. You can witness their struggles and successes, discuss their questions and suggestions, and collect stories to share with the product team about what is and isn't working. There's nothing like seeing a user interact with your product to help you understand their motivation and satisfaction.
When should you use a usability test?
A usability test can help you validate the user's experience to ensure that the software works the way you intend it to. You can conduct a usability test with features you're planning, features you've just launched, or even features you launched a long time ago.
A usability test is especially useful when you're seeing results with your product that do not meet your expectations. If your feature adoption is not matching your goals or if you're seeing users drop off at some point in the process, usability testing may be helpful and illuminating.
You can also test your product's design before your developers have written any code. For instance, you can do a walk-through usability test on wireframes or interactive prototypes before a feature is complete. You can even simulate the app by printing your user interface designs on paper and walking people through the workflow to see if it makes sense. As your user moves through the paper prototype, you can act as the application logic, switching out pages as the user chooses actions.
This kind of up-front testing can pay significant dividends. It can help your team get the design right from the start, which is far preferable to building and shipping a flawed product. By investing time in identifying problems during the design phase, you can save a lot of time and resources you would otherwise spend later.
Finally, you can also run usability tests on your competitors' products to see how they compare with yours. If you're curious about what is or isn't working with a comparable product, a usability test on that product can provide valuable insights.
There are four main phases of a usability test: designing the test, recruiting testers, running the test, and analyzing your results. Next, dive into each of these steps in detail.
Designing usability tests
When you design a usability test, your objective is to make sure you test the right parts of your product in ways that simulate real-world usage as much as possible. You'll start the design process by answering a few key questions.
What's the goal of your test?
This is the key question you need to answer before creating a test. Are you looking at how effective the instructional information is? How clear the field and button labels are? Are you validating the order sequence for a workflow or perhaps checking how easily a user recovers from an error? Your primary goal will inform many other parts of the test, so answering this question is paramount.
It's best to phrase your investigation as a question, like this one: "Why are our users not completing the account setup for paying money over Facebook Messenger?" Make it as specific as possible and limit testing scenarios to a single goal at a time. Although you may want to test multiple tasks, it's best to use separate tests for each goal or question to simplify your results. Identify the main task to test, generate your results, and then move to other tests.
Who are your testers?
Your product or feature probably has a target audience, so you need to find the right people to test it. For example, if you want to test the administrative features in Google Analytics, you need to find people who administer Google Analytics or similar analytics tools. Likewise, you need to know who would be less effective as a test subject and why. For example, if you want to test the mobile messaging for LinkedIn from a recruiter's perspective, asking a regular LinkedIn user—in other words, a non-recruiter—to test the feature won't yield useful results. The closer your testers are to your audience, the better your test results will be.
What tasks should you test?
Remember that your participants' time and attention are limited. Make sure to identify the specific features, flows, or pages that need to be tested. Is there a specific scenario that you need to test because analytics are showing a high abandonment rate involving it? Picking the right task, or series of tasks, is important. If you try to test everything, you may end up with very little useful information.
Building a test plan
Once you know what you want to test and who will test it, take some time to script, prepare, and test your test. Much like with interview guides, this process ensures consistency across your tests and helps keep you prepared and on track.
Script your test
To ensure consistent, reliable results, it's important that each tester receives the same instructions. Write a sequential test script that your testers can read while they work. If a step in the process is supposed to produce a specific result or outcome, you might want to include that information in the script. You might even include questions for the user as part of the script. For instance, you could ask something like, "After completing step three, what was displayed on the screen?" or "After completing step five, what did you think your options were for actions to take next?"
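For illustration, a fragment of a script for the Messenger payment example might look like the following. The steps and wording here are hypothetical:

- Step 1: From an open conversation, start a payment to the other person.
- Step 2: Enter an amount of $10. Before you continue, what do you expect to happen when you tap Pay?
- Step 3: Add the test bank account we've provided. After completing this step, what was displayed on the screen?
- Step 4: Confirm the payment. What did you think your options were for actions to take next?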
Prep your test
If there's anything you need to ensure that the test goes smoothly, be sure to prepare it in advance. This could include creating logins, crafting dummy data, or providing anything else that will let you and your testers focus exclusively on executing the test. For example, consider the Facebook Messenger payment feature. To test that feature properly, you might need your developers to create a dummy bank account or credit card number that will work with your software so your testers don't have to use their real account information to do the test.
Test your test
Finally, make sure to conduct a dry run of your test to ensure that everything is set up correctly. You don't want to hit any snags when you run the test with real participants. Are there any loose ends that need to be addressed? Does the test run too long? Will it help you find the answers you need? If possible, plan to record everything, including the room where the test takes place (via audio or video) and the device you're running the test on (with a screen-capture tool). Test the equipment to make sure everything works, and ask your participants for permission to record them.
Recruiting testers
Once you know what your test plan is and who should participate, you need to go out and find some users. Although you've already learned about finding users for interviews, recruiting for usability tests is a little different.
First, you need to make sure that you can run the test live with participants. That usually means bringing these users into your office or to another site, which can reduce the number of people you can test with. If your participants must be in a specific location for the test, make sure that's clear in your instructions. There are ways to conduct remote testing, such as with conferencing software like Google Hangouts or Zoom. But if you use screen-sharing or video applications, your participants must be able to use that software, too. They'll also need to find a quiet environment where they can complete the test. Share those requirements with your testers to ensure they're eligible.
Second, you may need to consider compensating people for their time. If possible, budget for participant incentives. Time is money, after all, and you're asking people to spend theirs on your product. If you're unsure, offer $20 per hour, but make sure the amount is proportional to what your participants typically earn. If you need CEOs to test your product, you'll have to come up with a much larger budget to make it worth their while. You could also consider alternative payments, like free product credits.
Another common practice is to rely on your own coworkers for testing, especially in the early design phase. This comes with a caveat, however: your coworkers are experts in your product, so their experience will likely differ from that of your typical user. You could recruit testers from parts of the company that have nothing to do with your product; while they may have less product expertise, they will still likely differ from your intended audience. This type of testing can provide useful information, of course, but it shouldn't be the only testing you rely on.
Running a test
Many of the interview best practices you've learned about also apply to usability tests. For instance, when a participant arrives for a test, greet them and review the basics of how the test session will proceed. Explain what will happen when the testing is complete; you may want to have a short feedback session or ask the participant to fill out a feedback form before they leave. Here are some other things you should consider, too.
Get the participant's agreement
Make sure the participant understands what's going to happen and why they're at your office or the test location. Get their explicit consent to being recorded. If necessary, ask them to sign a non-disclosure agreement (NDA) to ensure that they'll keep the contents of the test confidential.
Start the recording
Don't forget to record the test session. Make sure you capture the participant's facial expressions as they're using your product, as well as the product activity itself. Use a screen recorder to follow their process flow and engagement with the product.
Testing, not being tested
It's important to let participants know that despite the use of the word testing, they are not being tested. If they feel under scrutiny, they may not tell you when they are confused for fear of being judged as incompetent. You can say something like, "So I know the invitation said testing, but I just wanted to make sure that you understood that you are not being tested here—I am. You are the user of the product, and that makes you the expert. My goal is to learn from you and what you experience on our website. If you find something to be unclear, please let me know. It would help me detect the things that we designed in a confusing way. There are no wrong answers here."
Speak your thoughts
While you're taking the participant through the test, remind them to speak their thoughts out loud. Feel free to ask them questions. What are they doing now? What are they noticing? What just happened? How are they feeling? What seems right or wrong? Invite them to turn their internal monologue into an external dialogue with you, and feel free to gently guide the conversation into something you can use and understand.
Active listening and relevant questions
You need to pay close attention to your participants during testing. If you notice that they're confused, ask them to explain their thoughts. Be careful about asking judgmental questions like, "Why are you confused?" You might be misinterpreting their experience, and you don't want to lead the witness. It's better to ask them to describe their reaction to the product or the issue they're encountering. At this stage, you are only gathering data. Later on, after all of your testers have finished, you will have a chance to interpret it.
Conduct at least five tests
In general, you should aim to find at least five people to execute your test. That will offer you a variety of opinions and observations to draw conclusions from. However, if you can, you should continue to recruit users to test your product until you stop learning new insights. (In most cases, you will hit this point after 8 or 10 users.) If that's not possible, don't worry—you'll still cover a lot of ground if you get feedback from only five people.
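The five-user guideline traces back to Nielsen and Landauer's problem-discovery model, which estimates the share of usability problems found by n testers as 1 - (1 - L)^n, where L is the probability that a single tester surfaces a given problem (Nielsen put L at roughly 31% for typical projects). A quick sketch of the resulting curve:

```python
# Nielsen & Landauer's problem-discovery model: the share of usability
# problems found by n testers, given a per-tester detection rate L.
L = 0.31  # Nielsen's rough estimate for typical projects

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2} testers: ~{found:.0%} of problems found")

# With L = 0.31, five testers surface roughly 85% of the problems, which is
# why the returns diminish quickly beyond 8 or 10 users.
```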
Thank your participants
When the test is over, be sure to thank the participants for their time. If possible, you should give them any planned compensation at the end of the session or make it clear when and how they'll receive the payment or perks they are expecting.
Summarize and share the session
Take notes to capture what you learned during the test. Describe what happened and the results. Did you learn something unexpected? Did the outcomes make sense, or did they surprise you? Jot down your thoughts immediately while they're fresh in your mind. Share your notes with team members and other stakeholders so they can offer their feedback and insights.

Analyzing your results
Once you have conducted your test, you need to analyze the results and share them with your team. This is how you transform meaningful insights about your product and its usability into action items (a to-do list) that will improve your product.
Combine your observations
Review your notes from the test and look for themes, patterns, or recurring issues. What was common or consistent across your testers' experiences? What was unique? Was there anything that was especially surprising? Synthesize all the sessions together, and create a summary of the results.
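If you coded each observation with a short issue label while taking notes, even a tiny script can help you spot the recurring problems. Here is a minimal sketch with hypothetical session notes and labels:

```python
from collections import Counter

# Hypothetical coded notes: one issue label per observation, per session.
sessions = {
    "P1": ["unclear_bank_text", "button_hard_to_find"],
    "P2": ["unclear_bank_text"],
    "P3": ["unclear_bank_text", "slow_confirmation"],
    "P4": ["button_hard_to_find"],
    "P5": ["unclear_bank_text"],
}

# Tally how many sessions each issue appeared in.
counts = Counter(issue for notes in sessions.values() for issue in set(notes))
for issue, n in counts.most_common():
    print(f"{issue:20} seen in {n} of {len(sessions)} sessions")
```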
Present your results
When presenting your test results to stakeholders or team members, you should focus on a few core ideas:
- Restate the question or problem you were trying to understand or solve.
- Explain how you conducted your test, who your testers were, and what they were asked to do. Assume that the people you are presenting to are not familiar with how you organized the test.
- Summarize what you learned and your observations, and include information about both common and unique experiences.
- Offer recommendations for changes or improvements to the product along with next steps to take. Make it clear how these are grounded in the information gleaned from testing.
Best practices
Usability tests are always worthwhile. But they can be even more effective if you keep a few best practices in mind.
Test early
The earlier you conduct usability tests, the easier it will be to fix your product. If you can test your paper designs or a prototype, you can identify problems even before your developers get to work. Similarly, it's better to test early versions of your product, even if they're not complete. You can fill in the gaps by asking your testers questions about what they would want or need, such as "What else would you expect to see here?" or "What features are missing?"
Test often
In addition to performing official usability tests, find time to talk to your users on a regular basis. It's easy to test a feature once while you're preparing to launch it. But products—and priorities—change over time, so you should retest tasks or features periodically. A good practice is to conduct a usability test of your main flow every two or three months to uncover problems. It's also important to test on various browsers, operating systems, and smartphone configurations, as they may render screens, forms, or formatting differently. You may also need to test more frequently, depending on how often your product changes or how close you are to product-market fit.
Be careful about feelings and facts
Human brains generate emotional reactions first. After that, they come up with reasons to explain them. For that reason, you need to be careful when asking why someone likes or dislikes your product. You should always trust their emotions, but you should also dig into the reason behind their gut reactions. You want to understand why they're feeling what they're feeling. To get to the root of the issue, try asking them to imagine their ideal product: "If you could wave a magic wand and create this application, how would you want it to work? What would it do?"
Alternatives to usability tests
As valuable as they are, usability tests can be difficult and time-consuming to set up. Fortunately, you have a few options to get the information you need if you can't perform a test.
Observe people working
If you can't set up a usability test, you might be able to organize a less structured observation session. In a session like this, you can watch someone use your product in their day-to-day setting. It may require you to go to their location, but if you can manage it, you'll learn a lot by observing how your product is being used in situ.
Guerilla testing
If you can't find the right people to test your product, you can ask a coworker for a quick, informal reaction to your feature. These guerilla tests can be done very easily, and you can usually trust the emotional reaction that people have. Although one-minute guerilla tests won't yield substantial or nuanced feedback, you can still find valuable answers to specific questions. Try to identify a single question that your user can answer in a short period of time, like when you're passing them in the hallway.
Remote testing services
Several companies now offer options for paying other people to test your product. Sites like UserTesting will provide you with testers from their substantial tester pools. You can set screening questions or demographic criteria to ensure you get the right testers, and you'll receive videos in which testers go through your list of questions and speak their thoughts out loud as they perform tasks on your product. While these services can be expensive, they save you the time and energy you'd spend recruiting and running the tests. Get a demo of these services first to confirm that they provide the test functionality, level of quality, and type of results analysis that you need. If you need specific users to test your product or if participants need certain domain expertise, these services may be a poor fit.
Heuristic evaluation
A heuristic evaluation was mentioned in a previous checkpoint but is worth considering in this context as well. It is a test in which you use a set of rules or principles to evaluate a product. If you ask enough people to evaluate the product using these guidelines—even if they're already familiar with the product—you can usually come up with several useful improvements. Try using Jakob Nielsen's rules as a starting point; he's an expert in usability and user experience design, and his observations and ideas are worth exploring.
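For reference, Nielsen's ten usability heuristics are the following:
- Visibility of system status
- Match between the system and the real world
- User control and freedom
- Consistency and standards
- Error prevention
- Recognition rather than recall
- Flexibility and efficiency of use
- Aesthetic and minimalist design
- Help users recognize, diagnose, and recover from errors
- Help and documentation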
Practice ✍️
Pick a key flow in an app that you're very familiar with (such as viewing the photo feed in Instagram or conducting a search on Google). If you have an idea for a flow that has some issues—even better. You'll be practicing revealing issues, so finding products that are less-than-perfect to test will work best. Run a usability test on the flow you've chosen with a friend or colleague, following the steps described below:
- Determine the specific goal that the user should accomplish.
- Write a script so that you know what you need them to test and how to guide them through the steps (if necessary).
Summarize your results, and submit the summary below. Your submission should include the following:
- What flow did you choose to test? Why?
- What goals did you want the user to accomplish?
- What script did you use?
- What problems did the user encounter?
- What improvements or changes to the product would you recommend based on this test?
Bonus points: If possible, conduct the test with additional people, instead of just one. Synthesize their results, and observe how additional users help you discover new issues or strengthen your understanding of existing ones.
Remember: Most of the learning in this program happens when you do assignments, and this type of work gives you great material you can use to answer job interview questions.