Usability (or user) testing is one of the most frequently used methods in the UX toolbox. While many valid variations exist, avoiding these 5 mistakes will ensure that the information gathered can confidently inform design decisions.
- Testing with the wrong users
Some say that testing with any users is better than not testing at all. Yet sometimes companies take this adage too far by using employees, friends, or professional associates as stand-ins for real users. Others follow the safest route by testing with a small set of the best or most enthusiastic customers. When research participants have a close relationship to the stakeholders or company, the feedback is likely to be biased. The solution is to always test with a representative range of existing customers or users, recruited to match research-based customer profiles or personas.
What if the product doesn’t have customers yet, or the goal is to research the competition? Online recruiting services such as respondent.io or userinterviews.com help pre-screen and schedule test participants for relatively low fees, while professional market research recruiters specialize in finding users that fit a wide variety of very specific demographic, professional, and behavioral criteria.
- Using untrained researchers
Professional user researchers spend years perfecting their interviewing skills, conducting hundreds upon hundreds of research sessions. Research facilitation is logistically and cognitively challenging, and the performance of even a well-meaning novice can deteriorate quickly. At the same time, remaining neutral and unbiased can be difficult for new moderators, especially if they have a personal stake in the success of the design. The good news is that just one experienced moderator can mentor a whole team in applying best practices throughout the research process.
What about those team members who will not be conducting the research? How can they stay engaged if they are not asking the questions? At minimum, they should be observing sessions. Better yet, they can help gather data by taking notes, recording metrics, or thinking ahead about fixing usability problems as they emerge. In contrast, expecting the team to passively consume a final report is much less effective in turning research into action.
- Confusing qualitative and quantitative research
It doesn’t take more than a few people tripping over a rug to figure out that there is a problem, but finding out whether the majority prefers that rug to be red or blue will take a much larger sample. It may be tempting to report that 60% of users preferred a certain design in a usability test, but with a handful of participants that result may be as meaningful as flipping a coin. A small sample works well for discovering pain points or identifying usability problems, but it can’t answer every question about users.
While it’s possible to supplement qualitative research findings with some well-chosen metrics, assuming those results are statistically significant without further analysis is a recipe for trouble. When collecting usability metrics, it helps to use validated, benchmarked instruments such as the System Usability Scale (SUS), or to calculate confidence intervals for accurate interpretation. Ultimately, the best strategy is to triangulate qualitative research with surveys and analytics, and to continue gathering data on an ongoing basis.
- Waiting too long to get feedback
The design team is best positioned to act on user feedback early on in the process, when the UI is a work in progress. And yet, stakeholders are often concerned about showing an early design or prototype to customers, fearing that it will make them look bad, or offend an important client. Generally, marketing and business stakeholders tend to overestimate the attention users pay to final branding or visual design. Psychologically, showing an early version of the design changes the dynamic from “We want you to approve what we’ve spent a lot of time on,” to “We want your feedback on our ideas before we implement them,” leading to more useful and constructive feedback.
In the last decade, the field of user experience has steadily gravitated toward processes (Design Thinking, Lean UX) that start at low fidelity (level of detail), and iterate through rounds of testing long before a visual designer opens up Sketch or Photoshop. Usability testing of finished designs is the last step in the research process, not the first.
- Not testing at all
Hearing user feedback about a design for the first time can trigger fears and anxieties for many stakeholders. Will the users hate it? Will we find so many issues that we can’t launch on time? Will this ruin customers’ opinion of the company or product? Will the customers share proprietary information with a competitor?
Addressing logistical and legal concerns is easy. Best practices include obtaining consent for audio and video recording, and signing any applicable non-disclosure agreements. Up-front communication about the format of the research, providing fair compensation, and using a moderator who can handle sensitive issues with grace will ensure that participants walk away from the session with a positive impression. There is certainly an element of unpredictability when running research with individual humans, and problems can happen, but an attitude of transparency and respect for participants usually inspires generosity and honesty in return.
The fears of research efforts having a net negative effect are usually put to rest as soon as stakeholders observe their first sessions, and are amazed at the huge amounts of valuable information even a few participants can provide. It is indeed true that everybody loves to be listened to, and customers are no exception. A company that is open to feedback at every stage of the design process inspires a level of customer loyalty that is difficult to gain through marketing efforts alone.
When time or budget is a concern, a number of rapid testing variants exist to ensure that the team is not wasting time waiting for results. In 2019, there really is no valid excuse to design without feedback.