Beyond the five-user assumption.

I often talk about the power that gurus hold over the minds of professionals working in usability. 

By doing so, I hope to restore our willingness and eagerness to think for ourselves, instead of believing everything these gurus tell us without engaging our own critical minds. 

Here’s another example of this cerebral anaesthesia: the 5-tester myth… 

In 1993, Jakob Nielsen claimed in a paper that 5 testers are enough to identify 80% of a site's ergonomics problems. 

An alluring claim for people who work in usability, because it allows them to put only a very limited number of people in front of a screen. Furthermore, it speeds things up considerably. 
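For context, the figure rests on a simple probabilistic model of problem discovery, usually credited to Nielsen and Landauer (1993): if each tester independently finds a given problem with probability L, then n testers uncover about 1 − (1 − L)^n of the problems. The sketch below uses the often-quoted average of L ≈ 0.31; the actual discovery rate varies a lot from project to project, which is exactly where the myth starts to break down.

```python
# Sketch of the discovery model usually credited to Nielsen & Landauer (1993).
# Each tester is assumed to find any given problem independently with
# probability L; the 0.31 default is the often-quoted average, not a constant.

def share_found(n_testers: int, discovery_rate: float = 0.31) -> float:
    """Expected share of existing problems uncovered by n independent testers."""
    return 1 - (1 - discovery_rate) ** n_testers

if __name__ == "__main__":
    for n in (1, 3, 5, 10, 15, 20):
        print(f"{n:2d} testers -> {share_found(n):.0%} of problems (expected)")
```

With L = 0.31 this gives roughly 84% at 5 testers, which is where the "5 users find about 80%" figure comes from; with a lower discovery rate, 5 testers find far less.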

A large number of scientists have conducted studies measuring the real impact the number of testers has on what a usability test reveals, and they have raised objections to this theory. Companies, too, have discovered the limits of this myth. 

Here’s an example. In a study conducted by Spool & Schroeder in 2001 (fiveusers.pdf), the first five users revealed only 35% of a website's ergonomics problems. In this same study, the 13th and 15th testers identified major issues on the site.

Another test used 18 testers, and more than five new obstacles were found once the number of testers exceeded the magical number 5 (Perfetti & Landesman, 2002). 

Laura Faulkner, a researcher at the University of Texas at Austin, conducted a study with 60 testers (faulkner_brmic_vol35.pdf). The 60 testers were randomly assigned to groups of 5, 10, … 

[Chart: 5 users assumption]

The results are quite revealing: 

  1. The 12 groups of 5 testers found between 55% and 85% of the problems. 
  2. With groups of 10 testers, the minimum percentage of identified problems rises to 80%. 
  3. With groups of 20 testers, the minimum percentage of identified problems rises to 95%. 

Using 15 testers gives the optimum balance between cost and reliability: you will discover between 90% and 97% of the problems. After more than 150 projects, my field experience confirms these scientific results. 
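To make the grouping exercise concrete, here is a rough Monte Carlo sketch in the spirit of Faulkner's resampling: draw random groups of testers from a fixed pool and record the worst and average share of problems each group size uncovers. The pool size, the number of problems, and the flat 31% discovery probability are illustrative assumptions, not her data.

```python
# Rough Monte Carlo sketch of Faulkner-style resampling. All numbers below
# (60 testers, 40 problems, flat 31% discovery probability) are illustrative
# assumptions, not data from the actual study.
import random

N_TESTERS, N_PROBLEMS, P_DISCOVERY, N_SAMPLES = 60, 40, 0.31, 1000

# Which problems each tester in the pool happens to uncover.
found_by_tester = [
    {p for p in range(N_PROBLEMS) if random.random() < P_DISCOVERY}
    for _ in range(N_TESTERS)
]

for group_size in (5, 10, 15, 20):
    coverages = []
    for _ in range(N_SAMPLES):
        group = random.sample(found_by_tester, group_size)
        covered = set().union(*group)  # problems found by at least one tester
        coverages.append(len(covered) / N_PROBLEMS)
    print(f"groups of {group_size:2d}: worst {min(coverages):.0%}, "
          f"average {sum(coverages) / len(coverages):.0%}")
```

Even in this simplified model, the average coverage plateaus quickly while the worst case keeps improving with group size, which matches the pattern in the results above: bigger groups mainly buy you a better guarantee, not a much higher average.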

That leaves the question of which techniques to use to gather objective data and avoid subjectivity. We’ll come back to that later… 

Have a good week. Marc 

One Comment

  • Nielsen’s recommendation is very practical in nature, and a very easy one to both explain and sell to executives. As such, it works. It makes usability testing possible.

    You have a problem when you have to explain the 5 users to a marketing person. Marketing people live with statistics and large samples, so they can have a hard time buying 5 users.

    It is kind of obvious that testing more users will almost definitely bring out more problems. Nielsen himself states at the end of his post that in certain cases you need more than 5 users. I think that Faulkner’s study confirms Nielsen and Landauer’s conclusion – her study shows that a 5-user sample is likely to bring out ~ 85% of the problems. The chart that you use in this post is misleading – the variance in results is shown in dark gray, which is hard to spot on the dark background.

    Given the state of software and the web, I readily subscribe to the 5-user rule of thumb. I am a practically-minded person. We’ve done hundreds of usability tests for various clients in different domains. When we have one user group, we usually test with 6 people. When we have two groups, we test 8 or 10 users, and when we have more groups, we aim at 3-4 users per group. We’ve never had a case where we didn’t find any usability problems. Our average report lists 50-60 problems, and our clients address 50 to 80% of these problems. From a business point of view, testing more users does not make sense.
