SCAN Usability Study

Role: Product Designer
Timeline: 10 weeks
Team Members: 4 Product Designers
Advisors: Ava Sazanami, Evan Sarantinos

This past Spring, I was a part of a Directed Research Group (DRG) within the University of Washington Department of Human Centered Design & Engineering (HCDE) and the Seattle Coronavirus Assessment Network (SCAN). SCAN offers a COVID-19 at-home test kit service delivered to residents of King and Pierce Counties in the Seattle area.

The purpose of this usability study was to understand barriers and pain points within the kit activation step of the SCAN study, and to generate suggestions for improving the kit activation rate among multilingual participants aged 31 to 51.
I helped conduct the user research: I analyzed previous demographic data on SCAN participants to identify patterns within certain communities, conducted usability tests with selected participants, coded the data to spot trends in areas of confusion, generated potential solutions and redesigns (with their pros and cons) based on those findings, and presented those findings and recommendations to the wider SCAN team.


🔎 The Kit Activation Step did not have a 100% completion rate

The Kit Activation Step ensures that there will not be any mix-ups within the lab with the test sample, and is a crucial part of the process. In this step, users have to take a picture of the barcode on their test tube, and fill out an online form with their personal information and barcode number.

User Problem

If a kit is not activated and a mix-up occurs in the lab, it becomes much harder to identify whose sample is whose, and it takes longer for the user to get their results.

Business Problem

SCAN wants the testing process to be as quick as possible for its users, including while samples are being analyzed. If SCAN cannot verify a user's results, the process is delayed, and there is a risk of reporting inaccurate results.

By comparing SCAN's REDCap form submissions against the kits ordered, we found that 20% of test-takers did not complete this step, a large drop-off for such an important part of the process. From this, we asked ourselves the question:

How can we better understand the kit activation process and improve the kit activation rate?
Figure 1: An image of the "Activate Kit" step of the process, which 20% of users failed to complete.


📝 Why are people not completing this step?

🔎 Exploratory Phase

Personal Testing

To understand the kit activation and overall procedure of the SCAN study, the entire research team participated in the SCAN study by receiving COVID-19 test kits and completing it as if we were a participant in the SCAN study. I took note of pain points I noticed during the kit activation process, such as a tendency to want to push it off to later in the process or skim the instructions, and noted these for questions within the study design and research with participants.

(Fun fact: although I was well aware of the problem we were trying to solve, I actually became one of our target users as I almost forgot to activate my own kit... good for our research, bad for my ego when I admitted it to the team.)

Demographic Overview

To target participant recruitment toward those who have historically had issues with the kit activation process, we analyzed a summary of SCAN's Kit Activation Compliance, breaking it down by demographic data. Through this process, we identified a higher incompletion rate among people of color and participants aged 50 and up, as seen in Figures 3 and 4.

Figure 2: SCAN's Kit Activation Dashboard, logging how many users successfully activate their kit overall as well as by gender, race, and age.
Figure 3: This graph shows the demographic overview of incomplete kit activation based on age, with a noticeable upward trend after age 50.
Figure 4: This graph shows the demographic overview of incomplete kit activation based on race, with a noticeable increase for Asian, Black/African, and Hawaiian/Pacific Islander participants.

Usability Study

We conducted this usability study to observe how users interact with our boxes and to learn more about why they may be skipping the Activate Kit step. We had users complete the entire testing process (opening the box, taking their COVID test, and so on) on video with us, and asked them questions post-test.


We coded each interview in an Excel sheet to identify insights and key themes in the data, organizing each participant's demographic data alongside their reasons for enrolling in the study, suggestions about the kit activation step, pain points, when they noticed the activation step, reasons for activation or non-activation, the first thing they noticed, and positive points.

This data was then organized into insights sorted by how often each insight showed up as shown in Figure 5. These insights were then categorized into key findings to begin generating recommendations.

Figure 5: On the left are insights from the organized data; on the right, the number of participants who identified with each insight, color coded by significance (yellow = low, orange = medium, pink = high). These insights were later grouped into key findings.
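The coding-and-tallying step described above can be sketched in a few lines. This is a minimal illustration, not SCAN's actual analysis: the participant IDs, insight codes, and significance thresholds below are all made up for the example.

```python
from collections import Counter

# Hypothetical coded interview data: each participant maps to the
# insight codes they expressed (codes are illustrative, not SCAN's).
coded_interviews = {
    "P1": ["skimmed_instructions", "activated_late", "wanted_fewer_steps"],
    "P2": ["skimmed_instructions", "noticed_swab_first"],
    "P3": ["activated_late", "noticed_swab_first", "skimmed_instructions"],
    "P4": ["wanted_fewer_steps"],
}

# Count how many participants identified with each insight.
tally = Counter(code for codes in coded_interviews.values() for code in codes)

# Bucket counts by significance, mirroring the color coding in the
# insight sheet (the thresholds here are illustrative).
def significance(count, total_participants):
    share = count / total_participants
    if share >= 0.75:
        return "high"
    if share >= 0.5:
        return "medium"
    return "low"

for code, count in tally.most_common():
    print(code, count, significance(count, len(coded_interviews)))
```

Sorting by `most_common()` surfaces the most widely shared pain points first, which is essentially what grouping insights into key findings by frequency accomplishes.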


🗣 The failure to complete the step was attributed to its wording and the kit itself


🧐 The wording of the Kit Activation Step was confusing, leading people to make it the last step instead of the first step.

We observed in our interviews that participants would read but misinterpret the purpose of the "Activate Kit" step. We believe this issue stems from the current wording of the step in the pamphlet, which can be seen in Figure 6.

Figure 6: The Kit Activation Step in the Test Kit Pamphlet.

The current wording makes it sound as though participants only need to activate the kit in order to receive their results, which makes the step a lower priority for the user.


We recommended changing the wording of this step in order to add emphasis that this step should be taken immediately, not just when the participant is looking for results.

This could be done by using immediate wording such as “Activate Now”, to create urgency and bring attention to the step.


Switching the order of steps one and two would ensure that participants do not forget to take the barcode photo, so even if they do choose to activate later, they will have the information they need.


🧪 The order of the materials in the box and the person's experience with COVID tests contributed to how they approached the test kit.

When opening the kit, participants tended to focus on the swab and tube in the kit first (as their goal as a user was to simply take the test), as opposed to activating their kit and reading the instructions first.

Two participants did not read the Quick Start Guide, and we believe this was more likely for those who already had experience with COVID tests (as participants themselves told us), especially self-swab tests: they would skim the Quick Start Guide in order to finish the test quickly.

An image of the inside of a kit, with the swab and tube on top and the Quick Start Guide underneath them.


A recommendation is to add a breakable sticker around the opening of the tube as another reminder to activate the kit. That way, the users would have yet another chance to activate their kit before opening the tube and taking the test.


Participants interacted with the materials in the kit based on the order each item was placed in the box, as two participants first noticed the materials in the kit before the activation step since the tube and swab are placed on top of the instructions.

We recommended changing the order of the materials placed in the kit so that it is in the order they are used in during the test. For example, having the instructions on top, then the tube and swab, and the rest of the items below those.


To increase the perceived importance of the Quick Start Guide, we recommended adding a brightly colored activation "card" placed on top of all of the materials in the box. The card would force test-takers to interact with the reminder, preventing them from getting tunnel vision and going straight for the swab and tube.

However, it should be noted that this suggestion may not be accessible for individuals with color blindness (1-8% of the population), and would need to be tested as it might not have the intended effect.


🏠 The majority of our test-takers faced a limitation and chose SCAN as a convenient option, so our process should be as streamlined and simple as possible.

All of our participants answered that they chose SCAN as it is a convenient and comfortable option for them to be able to take the test at home. Some of our participants faced different limitations that prevented them from getting an in-person test, as shown in the figure below:

Table showing limitations participants had that caused them to choose SCAN


Because of limitations like these, we recommended providing outreach to communities currently benefiting from the at-home tests, including bilingual and multicultural communities.

In addition, it is important to avoid overwhelming participants with texts and to reduce content overload. Being spammed with texts made users notably more stressed about the entire process, when it should be as easy and relaxed as possible.


💬 Translated kit options need to be more accessible for multilingual test-takers.

The participants in our study were primarily bilingual, exposing possible language barriers in the kit activation process. Despite being bilingual, all of our participants chose English as the language of their kits, and one participant was not even aware that a translated kit was an option.


This language barrier could be one cause of misinterpretation of the kit activation step; by advertising translated kits more effectively, we can offer a better user experience for bilingual individuals. Completing the kit in their primary spoken language limits the translation errors that may occur when using the English kits.

It is also important to ensure the kit activation step is properly translated, again to limit errors due to a language barrier. Because language barriers were not the primary focus of our study, we believe this demographic of multilingual participants should be studied further to better understand their specific needs and pain points in the SCAN at-home testing process.


👍 Despite the Kit Activation Step, the overall collection process was easy to understand(!)

Despite the confusion associated with the kit activation step, participants expressed that overall the steps to collect a nasal swab were easy to understand and complete. We also observed this in our interviews, where participants skimmed over the text and felt comfortable enough to complete the test without fully reading the pamphlet. Some participants even expressed that the process could be done in fewer steps, and that "if anyone is able to read, it is clear".


We believe there is still room to improve the process overall. By smoothing out issues with kit activation, we can reduce room for error for the participant. We also recommended reducing the amount of text in the pamphlet, to keep participants from skimming and to ensure the process is as simple as possible.


⌛️ Encountering four limitations in our study

1. Few Interview Slots

  • Due to scheduling constraints from the courier service used by SCAN, we could only offer participants interviews between 2pm and their 3pm or 4pm pickup: the kit was not guaranteed to arrive before 2pm, and it would be picked up at the designated 3pm or 4pm time.
  • We planned for each interview to be approximately 20 minutes, and having so few slots, all at the same time of day, made it difficult to schedule participants who showed interest in the study but could not be available between 2-4 pm.

2. Short Testing Window

  • Participants needed to have their kits ready for pick up very shortly after their interviews, which we believe caused some participants to feel pressure to complete their test kits as fast as possible. This pressure might have caused participants to miss important information that they would have otherwise seen or caused them to make mistakes that would not have occurred if they had more time.
  • Additionally, a few of the participants experienced technical difficulties trying to get on Zoom (possibly due to their wireless quality) which further reduced the amount of time they had to complete their kits and answer our questions.

3. Inconsistency in Kit Completion

  • Two participants did not complete their kits while on calls with us.
  • One participant had their courier arrive earlier than expected, so she completed her kit before her scheduled interview time.
  • The other participant who did not complete their kit had not understood that the kit was supposed to be completed while on the call.
  • Both participants who completed their kits without us present to watch were asked to recollect their experiences in addition to answering our predetermined questions (that were asked to all participants). Because we did not directly observe the participants completing their kits, we cannot determine with absolute certainty that they recollected their experiences accurately.

4. Possible Moderator Bias

Finally, we believe some moderator bias likely affected how participants acted while on calls with us. As with all usability studies, the Hawthorne Effect, in which people act differently when they know they are being observed, might have caused participants to act how they believed we wanted them to act.


📊 Measuring the success of this study

Since the study only lasted one quarter, we were not able to gather data from implementing our recommendations into the kit. Looking to the future, if I were still on the team, our success metrics might include:

1. The number of kits successfully registered on REDCap

When activating their kit, users have to fill out a form on REDCap, so we can compare the number of kits sent out against the number of activation responses submitted.

2. The number of people visiting the REDCap form

This would show whether people are visiting the form but assigning it low priority or finding it too much work, or whether they are simply not seeing the form and brushing it off at first glance.
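Taken together, the two metrics form a simple funnel: kits shipped, form visits, and completed activations. As a sketch of how they might be computed (all counts below are made up for illustration, not SCAN data):

```python
# Hypothetical funnel counts (illustrative only, not SCAN data).
kits_ordered = 1000     # kits shipped to participants
form_visits = 900       # visits to the activation form
forms_submitted = 800   # completed activation submissions

# Metric 1: share of kits successfully registered.
activation_rate = forms_submitted / kits_ordered  # 0.8

# Metric 2: where the drop-off happens.
never_visited = (kits_ordered - form_visits) / kits_ordered  # never saw the form
abandoned = (form_visits - forms_submitted) / form_visits    # visited, did not finish

print(f"activation rate: {activation_rate:.0%}")
print(f"never visited form: {never_visited:.0%}")
print(f"visited but abandoned: {abandoned:.0%}")
```

Splitting the drop-off this way distinguishes the two failure modes described above: a high `never_visited` share suggests the step is not being noticed, while a high `abandoned` share suggests the form itself feels like too much work.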


Imposter syndrome is tough but that means there's a lot to learn.

This was my first time participating in research, so I felt a bit of imposter syndrome joining this research group as a freshman in a group of sophomores and juniors already studying Human Centered Design & Engineering. At first, it was difficult to ask questions and be more open about my inexperience in user testing, since it felt like everyone already knew what they were doing. However, as I put in more effort to be an active participant in our discussions and be more open with my team members, I realized that even the sophomores or juniors aren't completely familiar with user research. It is normal and valid to be intimidated by a group of upperclassmen, but I knew it was important to remind myself that we are all here to learn and it's only detrimental to continue to be intimidated by them.

I am also proud of myself for how much I learned in this study this past quarter — I gained a lot of insight into how usability tests are run and the different methods we used to analyze data and come up with solutions. One of the reasons why I love design is because of the real-world impact it can have, and in this study, I felt that I had made meaningful contributions to my community! I hope that SCAN will have positive results once implementing our recommendations into their COVID-19 at-home test kits.