Rules, Techniques, and Tools for Usability Testing

By Rock Health’s Richard Ludlow

In preparation for our upcoming usability testing fair, Jackson Wilkinson (WeSprout) and Ryan Panchadsaram (Pipette) gave a brief talk on how to effectively plan for and execute usability tests. In my notes below, I summarize 8 key rules to keep in mind, and embed Ryan’s presentation with some helpful techniques and tools to take advantage of.


1. The goal of user testing is exposing the flaws in your product.
Try to avoid making your testers feel like they are being tested. The user is always right; it is your product that is being tested. Remember that you are totally biased because you have helped build the product. Resist the urge to ‘sell’ them on the product; just sit back, listen, and learn.

2. 5-8 people are probably enough.
There are diminishing returns from each additional test. You'll learn roughly 90% of the lessons from 5-8 people that you would from testing 75. Test a small group, iterate and improve on the issues they surfaced, and then test with new people to learn more.
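The diminishing-returns intuition can be made concrete with the widely cited problem-discovery model from Nielsen and Landauer, which estimates the share of usability problems found as 1 − (1 − L)^n, where n is the number of testers and L is the probability that a single tester uncovers a given problem (their often-quoted average is L ≈ 0.31; treat that value as an assumption, not a property of your product). A quick sketch:

```python
def problems_found(n_testers, discovery_rate=0.31):
    """Estimated fraction of usability problems uncovered by n testers.

    Uses the Nielsen/Landauer model: 1 - (1 - L)^n, where L is the
    chance one tester hits a given problem (0.31 is their average;
    your product's rate may differ).
    """
    return 1 - (1 - discovery_rate) ** n_testers

# With the assumed L = 0.31, five testers find ~84% of problems
# and eight find ~95% -- hence "5-8 people are probably enough".
for n in (1, 5, 8, 15):
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems")
```

The curve flattens quickly, which is why iterating between small rounds beats one large round: after you fix what the first group found, a fresh group starts a new curve on the remaining problems.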

3. Carefully construct realistic tasks for users to get the most useful feedback.
Broad tasks help you learn whether your overall product is headed in the right direction. Example: “Let’s plan your next vacation. You have $5,000. Go.”

Specific tasks test specific features of your site. Example: “Plan a trip to New York this weekend. Find the best deal that maximizes your time in New York but gets you back by 11am. Oh, and get a window seat.”

Good tasks should be realistic: in one example, a team ran a test asking people to find a bookcase they’d like to buy through the site, so testers naturally typed in “bookcase” and browsed through the options. However, the team learned that these tests weren’t useful, since nobody actually starts by typing the word “bookcase” in real life. By changing the task to “buy something to store all your books,” they got much more realistic and useful results.

4. Work from a prepared script.
It’s harder than it seems to run a good test. A prepared script leads to smoother sessions and keeps tests generally consistent across all users.

5. Get the participant to talk as much as possible.
Encourage them to “think out loud” so you can follow their thought process as they go through the site.

6. Note your takeaways right after each test.
Don’t rely on your notes or memory alone; debrief as a team between each test, while the session is still fresh.

7. Adjust your test if you see a given task isn’t useful.
Consistency is preferable, but it’s more important that you are getting useful results. If a bug in your product (or some other issue) is rendering a task useless, then feel free to eliminate or adjust the task.

8. Take note of what you got right as well as what you got wrong.


Ryan’s presentation below provides techniques to get user feedback at any stage of product development, and some useful tools to help you run effective tests: