The four phases of navigation design testing

I probably jumped the gun by talking about card sorting last week. So let’s take things back to basics with the four phases of navigation design testing.

If you haven’t already, please read the taxonomy of testing and research. It’s helpful to understand the language I use in this post.

This is the first in a series of posts about the four phases of navigation design testing. It will give you some insight into how I use them in practice.

If you want behind-the-scenes info on exactly how I use them, sign up for my newsletter at the bottom of this post. Each Wednesday I’ll be taking these articles and talking about how they apply to my day-to-day work.

The four phases of testing

There are generally four phases of testing that you’ll use in navigation design.

In theory it should be a lifecycle, but if you work in a product or service design environment you probably won’t use all of these.

Understanding the problem

Don’t solutionise before you’ve defined the problem.

Any design approach needs to be built around a problem.

When you know the problem and who it affects, it makes it a LOT easier to narrow down a solution.


Benchmarking

When you’ve understood what the problem is and who it affects, benchmark.

I think of this as sticking a stake in the ground so I know what the impact of any changes is.

For example, what percentage of people are clicking on the primary navigation, using search, or leaving the site to navigate via Google?

Anything I do needs to result in an improvement in how people find information.

Please note: when you are doing comparative benchmarking you must use identical, defined parameters. For example, if you used a calendar month’s worth of data, you must compare it against a full calendar month.
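As a rough sketch of what that stake in the ground might look like, here’s how you could turn a month of analytics data into percentages per navigation method. The session list and method names are hypothetical — adapt them to however your analytics export labels things.

```python
from collections import Counter

def benchmark_navigation(sessions):
    """Summarise how people found content, as percentages.

    `sessions` is a list of strings naming the method each visitor
    used, e.g. "primary_nav", "search" or "external_search".
    (Hypothetical data shape -- adapt to your analytics export.)
    """
    counts = Counter(sessions)
    total = sum(counts.values())
    return {method: round(100 * n / total, 1) for method, n in counts.items()}

# One full calendar month of made-up session data.
march = ["primary_nav"] * 620 + ["search"] * 240 + ["external_search"] * 140
print(benchmark_navigation(march))
# {'primary_nav': 62.0, 'search': 24.0, 'external_search': 14.0}
```

Whatever you measure, record the window alongside the numbers — the comparison only holds if the next run covers an identical period.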


Testing

When there’s something that I can put in front of people, I start testing.

This implies that I’ve got something new or improved for people to test.

Whether that’s a tree to test or a taxonomy embedded in a navigation, I want people to test it so I can understand how well it works.

Again, please remember, there is no place for validation testing in the world of Web Whisperer.

Testing helps me to understand the outcome of incremental changes that I make, and whether something is good enough to launch.


Insight

Once something is launched, I use insight to tell me how well it’s performing.

The insight can tell me that good enough is close to perfect.

Or, it can tell me it didn’t hit the mark.

The insight is also used to measure against the benchmark.

Insight is also a warning flag that tells you you have a problem, which is when you loop back to understanding the problem.
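That loop can be sketched as a simple comparison: measure the same metrics, over an identical window, and flag anything that dropped meaningfully below the benchmark. The metric names and the tolerance threshold here are assumptions, not a prescription.

```python
def compare_to_benchmark(benchmark, current, tolerance=2.0):
    """Flag metrics that fell more than `tolerance` points below benchmark.

    Both arguments map metric name -> percentage, each measured over an
    identical window (e.g. a full calendar month). `tolerance` is an
    arbitrary example threshold.
    """
    flags = {}
    for metric, baseline in benchmark.items():
        drop = baseline - current.get(metric, 0.0)
        if drop > tolerance:
            flags[metric] = round(drop, 1)
    return flags

benchmark = {"primary_nav": 62.0, "search": 24.0}
post_launch = {"primary_nav": 55.5, "search": 25.0}
print(compare_to_benchmark(benchmark, post_launch))
# {'primary_nav': 6.5}
```

An empty result suggests good enough was close to perfect; a flagged metric is the warning that sends you back to understanding the problem.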
