The taxonomy of testing and research

Consider this a prologue to talking about the four phases of testing.

Before I go any further I’m going to share with you my taxonomy of testing. This is because these words tend to be used interchangeably even though they have subtly different meanings.

If you’re new to the industry or just h*ckin’ bamboozled then this bit is defo for you.

I’ve used testing methods as the parent term because in pretty much every case you’ll be using a testing methodology.

It’s also because the goal of any testing method is to understand something in more detail.

If you stick your finger in an electrical socket and get an electric shock you learn that you didn’t turn off the electricity (and perhaps that you’re a bit of an idiot).

Quant and Qual

You’ll hear people refer to testing methods as quant or qual quite often (lots of words starting with ‘qu’ there).

I abbreviate them because I can’t spell either without spell checking (they’re my Achilles heel).

I’m covering these two first so you’ll understand what I mean when I refer to different testing types as Quant or Qual.

Quantitative testing or research

I abbreviate this to quant.

Quant testing is a numbers game. It largely relates to data or analytics because it relies on large volumes of information.

For example, analytics is a quant method.

A relatively well known “thought leader” in our industry has said that quant data can’t tell you if there’s a problem. What they said is bullshit.

Quant data can and does give insight that there is a problem. I will show you how and why in various posts I have in the backlog.

What it rarely does is tell you exactly why there is a problem.
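To make that concrete, here’s a sketch of quant data flagging a problem. The page names and exit rates below are entirely made up; the point is that the numbers tell you *where* something looks wrong, not *why*:

```python
# Sketch: spotting a problem in quant data (invented pages and numbers).
# A page with an unusually high exit rate tells you something is wrong
# there, but not what is wrong with it.

page_exits = {
    "home": 0.22,
    "products": 0.18,
    "checkout": 0.67,  # far above the other pages
    "contact": 0.25,
}

average = sum(page_exits.values()) / len(page_exits)
suspects = [page for page, rate in page_exits.items() if rate > average * 1.5]

print(suspects)  # quant says "problem here" — qual tells you why
```

Running this flags the checkout page. To find out why people are bailing there, you’d reach for a qual method.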

Qualitative testing or research

I always abbreviate this to qual.

Qual testing or qual research is where you’re collecting detailed insights about something.

A simple way to differentiate between quant and qual is to think of quant as numbers and qual as words.

The two are mixed in some testing methods; for example, tree testing relies on a mixture of both to give you a statistically valid outcome.


Research

Research is what we do to understand a problem.

It does not require us to have something to put in front of people.

When I start working on something, my initial goal is to get an answer to the following three things:

  • Is there a problem?
  • What is the problem?
  • Who is affected by the problem?

I can’t do anything without answers to the above.

The second bullet point has become a bit of a mantra for me.

What is the problem you are trying to solve?

Let’s be honest, in pretty much every area of our lives we are doing something to solve a problem.

Why should we try to do our jobs without understanding the problem first?

When it comes to navigation design, there are usually two types of research.

  • Bench or desk research – which is where you, the navigation designer, read a shit ton of stuff to understand something.
  • Qual research – where you are talking to people who are experiencing the problem you need to solve.


Benchmarking

Benchmarking gives you an understanding of performance.

It’s a comparative tool. You’re comparing how something performs now either to how it performed in the past, or to how it will perform in the future.

If you’re comparing present to past you’re likely to be using benchmarking to help you identify if you’ve got a problem.

If you’re comparing present to future, your benchmark is a key performance indicator (KPI) or a success metric. Essentially you don’t want what you’ve done to be worse than what you’ve got now (unless you’re the current British government, that is).

Benchmarking usually results in quant insights. There are some exceptions, but being able to quantify performance as a percentage is an easy way for non-design stakeholders to get their heads around things.
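As a sketch of what that percentage framing looks like (the task success rates below are invented for illustration), a benchmark comparison can be as simple as:

```python
# Sketch: benchmarking as a before/after comparison (invented numbers).
# Expressing the change as a percentage makes it easy for non-design
# stakeholders to grasp.

baseline_success = 0.62  # task success rate measured before the redesign
current_success = 0.71   # the same task, measured now

change = (current_success - baseline_success) / baseline_success * 100
print(f"Task success changed by {change:+.1f}% against the benchmark")
```

Here the benchmark is the past measurement; the same arithmetic works when the benchmark is a target you’ve set for the future.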


Testing

Testing is what you do to test if something does or doesn’t work.

It is not, nor should it ever be, validation testing. Please read The cult of validation to understand why.

Nor, I should add, is it ever user testing. We are not testing the people who use the thing. We are testing a thing.

A mandatory requirement of testing is having something physical or digital to put in front of people.

Testing is done at any stage of the design process. The earlier we can get something in front of people, and the more consistently we do it, the better.

We can test what we have now, and the goal is to understand if something is wrong and what is wrong with it.

We can also test what we’re creating to understand if it does or doesn’t work.

Testing is largely qual.

The exceptions to this are when you’re using a platform to do card sorting, tree testing or first click testing.


Insights

Research and testing generate insights. We then use them to help us inform our design process.

We also use insights after something has launched. This shows us how well it is performing.

The reason I find them so valuable as a navigation designer is that it’s incredibly hard to nail navigation first time.

This is because every person who uses your product or service will interact with your navigation.

If you have a large number of people using it, you’re never going to test with enough of them beforehand to understand just how well it will perform.

Insights are invaluable because they give us a view of how the product or service is used.

They’re also largely unbiased, because you’re watching how people interact with something.

Onsite search data and analytics platforms give us insights.

Insights are quant data.

It’s important to note that insights will give us an indication of performance of something. To understand if something is or isn’t performing well, you have to define what good looks like.
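To make that last point concrete (the metric name and threshold here are invented for illustration), defining what good looks like can be as simple as setting a target and comparing the insight against it:

```python
# Sketch: an insight only becomes "good" or "bad" once you've defined
# what good looks like (metric and threshold are invented examples).

target_zero_result_rate = 0.05    # "good" = under 5% of onsite searches return nothing
observed_zero_result_rate = 0.11  # what the analytics platform reports

performing_well = observed_zero_result_rate <= target_zero_result_rate
print("performing well" if performing_well else "needs investigation")
```

Without the target line, the observed 11% is just a number; with it, it’s an insight that says something needs investigating.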

Don’t worry. I will cover this subject.
