
Friday, December 3, 2021

Test Your Site With Real Users

A few years ago, there was this French book publisher. They specialize in technical books and published an author who wrote a book about CSS3, HTML5, and jQuery. The final version, however, had a glaring typo on the cover where “HTML5” was displayed as “HTLM5.” Read that twice. Yes. “HTLM5.” (Note that one version was also missing the capitalized “Q” in jQuery.)

Image of the book containing the typo. It has three cartoonish figures on it dressed as superheroes, then a product description of the book to the right of the cover.

I don’t know how many people are involved in publishing and printing a book. I bet quite a few. Yet, it looked like none of the people involved saw the typo. It made it to the printer, after all.

And this kind of thing happens all the time on projects. One of my favorite French expressions is avoir la tête dans le guidon. A literal translation is “having your head in the handlebar.” (The closest English equivalent is having your nose to the grindstone.) It comes from cycling. When cyclists are trying to win a race, at some point they end up with their nose so close to the handlebar that nothing else around them matters. They are hyper-focused on the road ahead. They can’t see anything else around them anymore.

Photo of a cyclist in a black helmet and red jacket on a black and blue racing bike riding through a busy intersection with a blurry backdrop indicating a fast speed.
Credit: Max Bender via Unsplash

And this is exactly what happens to us quite often on projects. At some point, we and our teams are so focused on shipping the site (or printing the book) that we get blindfolded and fail to see the little (or big) details anymore. This is how you ship a book about “HTLM5,” or a website with navigation issues, dead ends in user flows, or features no one needs.

Gaining an external view with user testing

If you want to avoid these sorts of things, you need an external view of your site, product, or service. And the best way to gain that view is to test it with people who are not on the team. We call this usability or user testing. I have to confess that I’m biased here, since part of my job is to perform user testing on websites. Ideally, you want to test with your target audience — the people who actually use your website, product, or service. But if (and this is a big if) you can’t find any users, at least run a first round of tests with people who did not work directly on the project.

You also want to test with people with different impairments to make sure the end result is as accessible as possible.

When should I start testing my project?

In a perfect world, you test as soon and as often as possible. Testing prototypes built in design tools before starting development is cheaper. If the concept doesn’t work, at least you did not invest three months of development into an ineffective feature.

You can also test HTML/CSS/JavaScript prototypes with fake data built for the tests — or test once the feature or website is developed. This does mean, though, that any changes are more complex and expensive.
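To give a rough idea of what “fake data built for the tests” can look like, here is a minimal, hypothetical sketch (the plant-shop data, the `#cart` and `#total` elements, and the euro prices are all invented for illustration). The prototype never talks to a real backend; it just renders hard-coded data so participants have something realistic to click through:

```js
// Hypothetical example: hard-coded fake data for a throwaway checkout prototype.
// Nothing here talks to a real backend — the goal is only to give test
// participants something realistic to interact with.
const fakeCart = [
  { name: "Monstera deliciosa", price: 24.9, qty: 1 },
  { name: "Terracotta pot, 20 cm", price: 8.5, qty: 2 },
];

function renderCart(items) {
  const total = items.reduce((sum, item) => sum + item.price * item.qty, 0);

  // Assumes the prototype page contains <ul id="cart"> and <p id="total">.
  const list = document.querySelector("#cart");
  list.innerHTML = items
    .map((item) => `<li>${item.qty} × ${item.name} — €${(item.price * item.qty).toFixed(2)}</li>`)
    .join("");
  document.querySelector("#total").textContent = `Total: €${total.toFixed(2)}`;
}

renderCart(fakeCart);
```

Throwaway code like this is fine precisely because it will be thrown away: the point is to learn whether the flow makes sense before investing in the real implementation.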

Define what you want to test

The first step is to define what specific tasks or activities you want to test. Usually, you want a set of different actions with a user goal at the end. For example:

  • an account creation process
  • a whole checkout process
  • a search process from the homepage to the final blog post, etc.

List the tasks and activities the user needs to accomplish in the form of questions. We call this creating a test script. You can find an example test script from 18F.

Be careful not to bias users. This is the tricky part. For example, if you want to test an account creation flow and the button says “Sign up,” then avoid asking your test users to “sign up” because the first thing they will do is search for a button with the same verb on the screen. You could ask them to “create an account” instead and gain better insights.

Screenshots of Axure and Word side by side.
Example of a prototype built in Axure and a test script

Then prepare the prototype you want to test. As mentioned before, it can range from mockups with a click-through prototype to a fully developed prototype with real data. It’s totally up to you and how far along you are in the project. Just make sure it works (technically, I mean).

Recruit participants

You know who your users are on most of your projects. The question is: how can you reach out to them? There are plenty of ways. You might go through support or salespeople who have lists of possible participants. If it’s a broad target audience, you could recruit testers right where they are. Working on an e-commerce website that sells plants? Why not try actual physical shops, online communities for gardeners, community gardens, Facebook groups, etc.?

You can use social media to recruit participants, as long as you recruit the right people who are prospective users of the site. This is why UX professionals use screeners. A screener is a set of questions you ask while recruiting (and again when starting the test) to make sure you are working with someone who is in the target audience.
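To make that concrete, here is a small hypothetical sketch of a screener’s qualifying logic, using the plant-shop example from above (the questions and criteria are invented; real screeners are usually run as forms or survey tools, not code):

```js
// Hypothetical screener for the plant-shop example: the questions you would
// ask prospective participants, and the logic deciding who qualifies.
const screenerQuestions = [
  { id: "buysPlantsOnline", text: "Have you bought plants online in the last 12 months?" },
  { id: "worksInUx", text: "Do you work in design, UX, or market research?" },
];

function qualifies(answers) {
  // Keep actual (or likely) users of the product, and screen out people
  // too close to the field whose feedback could be biased.
  return answers.buysPlantsOnline === true && answers.worksInUx === false;
}

console.log(qualifies({ buysPlantsOnline: true, worksInUx: false }));  // true  — invite to the test
console.log(qualifies({ buysPlantsOnline: false, worksInUx: false })); // false — thank and screen out
```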

Note that participants are usually compensated for their time. It can be gift cards, maybe one of your products, some really nice chocolate — something that encourages people to spend time with you in a way that thanks them.

If you struggle with recruiting and have a budget, you can use professional user research recruitment websites like userinterviews.com or testingtime.com.

Schedule, set up, prepare

Once you successfully recruit participants for testing, schedule a meeting with them, including the testing date, time, and place. The test can be remote or face to face. I won’t detail the logistics here, but at some point, you will need help to set up an actual room or a virtual space for the testing. If it’s a physical room, make sure it’s calm and accessible for your users. If it’s remote, make sure the tools are accessible and that people can install them on their computers if needed.

Schedule some emails in advance to remind participants the day before the test, just in case.

Last but not least: do a dry run of your test using people from your team. This helps avoid typos in the scripts and prototypes. You want to make sure the prototype works properly, that there are no technical issues, etc. You want to avoid anything that could bias the test.

Facilitate the test

You need two people from your team to conduct a usability test. One person facilitates. The other takes care of the logistics and notes.

Welcome the participant. You can find a lot of templates for usability testing over at usability.gov, including consent forms, email template examples, and much more.

Start the recording, but only if they give you permission to do so, of course. Explain that you are testing the site, not them, and that there are no right or wrong answers. Encourage them to think out loud, and to tell you exactly what they do, see, and think.

Put them at ease by starting with a few soft questions to get them to talk. Then follow your script.

The most important thing: don’t help users accomplish the tasks. I know, this is hard. We don’t like to see people struggle. But if you help them, you will bias the results. Of course, if they’ve been struggling for five minutes and you need them to accomplish the task to move on to the next one, you can unblock them. Just mark that particular task as “failed.”

Once testing is finished, thank the test user for their time and offer them the compensation (or tell them how to get compensated if it was a remote test).

Get the recording and upload it somewhere in the cloud so there’s a backup. Do the same with your notes. Trust me on this: there’s nothing worse than losing data because a computer crashed.

Analyze and document the results

After the test, I usually like to put together a quick “first draft” of the analysis for a given participant because the testing is still fresh in my mind.

Some people do this in shared documents or Excel sheets. My favorite method is to put the actual screens used for testing on a Miro board and attach digital sticky notes to them with each test’s main findings. I use different colors for different types of feedback: a user comment, a feature request, a usability issue, etc.

When multiple users give the same feedback or experience the same issue, I add a small dot on the note. This way, I have a visual summary of everything that happened during all the tests.
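If sticky notes aren’t your thing, the same tallying can live in a spreadsheet or a few lines of code. Here is a hypothetical sketch (the finding types, screens, and notes are invented) of counting how many distinct participants hit each issue, which is the code equivalent of adding dots to a note:

```js
// Hypothetical findings log: one entry per observation, tagged by participant,
// screen, and type of feedback (usability issue, comment, feature request…).
const findings = [
  { participant: "P1", screen: "checkout", type: "usability-issue", note: "Missed the promo code field" },
  { participant: "P2", screen: "checkout", type: "usability-issue", note: "Missed the promo code field" },
  { participant: "P2", screen: "search",   type: "comment",         note: "Liked the plant care filters" },
  { participant: "P3", screen: "checkout", type: "feature-request", note: "Wants PayPal as a payment option" },
];

// Count how many distinct participants reported each (screen, note) pair.
function tallyFindings(entries) {
  const counts = new Map();
  for (const { participant, screen, note } of entries) {
    const key = `${screen} — ${note}`;
    if (!counts.has(key)) counts.set(key, new Set());
    counts.get(key).add(participant);
  }
  return [...counts.entries()]
    .map(([finding, people]) => ({ finding, participants: people.size }))
    .sort((a, b) => b.participants - a.participants);
}

console.table(tallyFindings(findings));
```

Sorting by the number of affected participants gives you a quick, rough view of which issues are the most widespread, which helps with the prioritization discussed next.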

Screenshot of mockup screens in Miro with notes attached to various areas of the screens. There are 13 total screens, each with different layouts and content.

And then? Learn, iterate, improve.

We don’t test for the fun of testing. We test to improve things. So, the next step is to learn from those tests. What worked well? What can be improved? How might we improve it? Of course, you might not have the time and budget to improve everything at once. My advice is to prioritize and iterate. Fix the biggest issues first. “Big” is a relative term, of course, and depends on your project and KPIs. It could mean “most users have this issue.” Or it could mean, “if this doesn’t get fixed, we will lose users and revenue.” This is when it becomes, again, a team sport.

In conclusion

I hope I’ve convinced you to test your site soon and often. This was just a small introduction to the world of testing with real users. I’ve simplified a lot here to give you an idea of what goes into user testing. Proper professional usability testing is more complex, especially on large projects. While I always favor hiring someone dedicated to user research and testing, I also understand that this might be complicated for smaller projects.

If you want to go further, I recommend digging into dedicated usability testing resources, starting with the original article linked below.



source https://css-tricks.com/test-your-site-with-real-users/
