With Userlytics’ easy-to-use test builder, you can create custom UX tests for your own participants or for testers drawn from our global panel of over 2 million participants.
The Activities tab in your test builder lets you choose from a variety of task and question types for your usability test.
Each task comes with a generic template that you can adjust and edit as you please. Strategically choosing tasks that will help draw the insights you are looking for is key to conducting a successful usability study.
Below, we go over 12 different types of tasks and how they can help you get the UX insights you are looking for.
Scenario
The “Scenario” task sets the stage for your test by asking participants to imagine they are doing something specific.
For example, if you are testing an e-commerce website for pet products, the scenario task might be: “Imagine you are browsing for dog food made from natural ingredients.” A well-written scenario helps participants provide better feedback about your product.
In addition to setting the stage, the scenario task typically reminds participants to speak out loud throughout the test and share their thoughts. This provides valuable insights and helps you draw powerful conclusions about your product’s overall usability.
Task / Verbal Response
The “Task / Verbal Response” activity prompts your participant either to answer a question out loud or to perform an action that can be measured. Which one should you choose?
Verbal Response (recommended in most cases): Participants answer the question out loud while the instructions remain visible alongside the digital asset.
Task: Choose this activity type when it’s essential that participants start the task only after thoroughly reading the instructions. This approach is useful for accurately measuring the time taken to complete the task itself, excluding the time spent reading instructions. It ensures that the recorded timing reflects only the task completion duration.
When you select the ‘task’ activity type, the process for participants is as follows:
- Read the Question: Participants will first be shown the task question in full screen.
- Click Start: They must click the “Start” button to begin.
- Minimized Task Box: The task box or instruction box will be minimized after clicking “Start.”
- Full-Screen Prototype: Participants will then see the prototype/asset in full screen, ensuring they read the instructions first and have the full screen available for task completion.
- Expand and Continue: Once they have completed the task, participants expand the task box and click “Next” to move on to the next question.
Using this method ensures participants are well-prepared before starting the task and that the recorded time accurately reflects their task performance.
Single / Multiple Choice
The “Single / Multiple Choice” task gives you the option of:
- Creating questions with only one correct answer
- Asking checkbox-style questions where the participant can select more than one answer
Single-choice questions are useful for determining if an aspect of your platform or prototype is clear to the consumer. With only one correct answer and the rest serving as distractors, participants must fully understand the content to select the correct option. Each participant can provide only one response to a single-choice question.
Multiple-choice questions are effective for gaining insights into the overall sentiments of your audience. If your results show that two or three answer choices are more frequently selected, it can indicate which aspects of your platform are successful and which may need improvement. For multiple-choice questions, participants can choose from one up to the total number of specified options. For instance, if you create a multiple-choice question with five possible answers, participant responses might include one, two, three, four, or all five of those choices.
You will also notice that you have the option to use question logic, enabling a tester to either advance or be disqualified from a test based on your requirements.
Rating
“Rating” questions use a scale to measure opinions across a group of participants.
We use rating-scale questions to measure attitudes, beliefs, preferences, and self-reported behavior. This data helps us understand how users see our product or service.
A common rating task is asking participants to rate an aspect of your website, app, or prototype on a scale, with the lowest number being the worst and the highest number being the best.
Rating questions can have as many answer choices and display values (numbers, words, etc.) as the test creator wants. For example, with five answer choices, each participant will give one response between “1” and “5.”
Write-in Response
“Write-in Response” tasks let you ask the participant an open-ended question and get a written answer. Some participants may feel more comfortable answering honestly in writing than with verbal prompts.
Net Promoter Score (NPS)
The “Net Promoter Score (NPS)” task measures the percentage of customers who would recommend a company, product, or service to a friend or colleague. According to Netpromoter.com, you calculate your NPS using answers to a key question on a 0-10 scale: How likely are you to recommend [brand] to a friend or colleague?
Respondents are grouped as follows:
- Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, driving growth.
- Passives (score 7-8) are satisfied but unenthusiastic customers who are open to competitive offers.
- Detractors (score 0-6) are unhappy customers who can harm your brand and slow growth through negative word-of-mouth.
For this task, each participant’s answer is a number between 0 and 10. Subtracting the percentage of Detractors from the percentage of Promoters gives the Net Promoter Score (NPS). This score can range from -100 (if all customers are Detractors) to 100 (if all customers are Promoters).
If all customers give a score of 6 or lower, the NPS is -100. If all customers give a score of 9 or 10, the NPS is 100. The NPS helps you determine if you have correctly identified your customer’s needs in your prototype or platform.
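The arithmetic above can be sketched in a few lines of Python (a minimal illustration for clarity, not part of the Userlytics platform):

```python
def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("at least one response is required")
    promoters = sum(1 for s in scores if s >= 9)   # scores 9-10
    detractors = sum(1 for s in scores if s <= 6)  # scores 0-6
    # NPS = % promoters minus % detractors, so the result ranges from -100 to 100
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, and 2 detractors out of 10 responses
print(net_promoter_score([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))  # → 30.0
```

Note that passives (scores 7-8) still count toward the total number of responses, which is why they pull the score toward zero without appearing in the subtraction.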
System Usability Scale (SUS)
The “System Usability Scale (SUS)” task is a quick, reliable tool for measuring product usability. It includes a 10-item questionnaire with five response options ranging from “Strongly agree” to “Strongly disagree”. SUS can evaluate various products and services, including hardware, software, mobile devices, websites, and applications. Our platform automatically generates the SUS questionnaire, allowing you to specify the type of asset your participants are testing.
Benefits of using SUS include:
- It is easy to administer.
- It provides reliable results even with small sample sizes.
- It is valid and can differentiate between usable and unusable systems.
SUS questions have five answer options, with “1” being the most negative response and “5” being the most positive. Each participant will provide one response per question between “1” and “5.”
To calculate the cumulative SUS score:
- Convert the scale into numbers for each of the 10 questions:
- Strongly Disagree: 1 point
- Disagree: 2 points
- Neutral: 3 points
- Agree: 4 points
- Strongly Agree: 5 points
- Calculate:
- X = Sum of the points for all odd-numbered questions – 5
- Y = 25 – Sum of the points for all even-numbered questions
- SUS Score = (X + Y) x 2.5
The rationale is intuitive: the maximum score is 100, and each question contributes up to 10 points.
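The standard SUS scoring procedure can be sketched in Python (a minimal illustration; note that the sum of the odd-numbered items is reduced by 5 so that the best possible answers yield exactly 100):

```python
def sus_score(responses):
    """Compute a SUS score from 10 responses, each rated
    1 (Strongly Disagree) through 5 (Strongly Agree)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly 10 responses between 1 and 5")
    x = sum(responses[0::2]) - 5   # odd-numbered questions (1, 3, 5, 7, 9)
    y = 25 - sum(responses[1::2])  # even-numbered questions (2, 4, 6, 8, 10)
    return (x + y) * 2.5

# Best possible answers: 5 on every odd item, 1 on every even item
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

The odd-numbered SUS items are positively worded and the even-numbered items are negatively worded, which is why the two halves of the questionnaire are scored in opposite directions.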
The average SUS score is 68, meaning a score of 68 places you at the 50th percentile. As a general guideline, scores above roughly 80 are considered excellent, scores around 68 are average, and scores below 51 point to serious usability problems.
Card Sorting
Card sorting is a popular method for understanding the user’s mental model. Instead of organizing a website by your corporate structure, you base it on how users think by having them sort items into categories.
You can create three different types of card sorting tasks:
- Open Card Sort: Participants create and name their own categories.
- Closed Card Sort: You predetermine the categories for sorting.
- Hybrid Card Sort: You predetermine some categories and allow participants to create additional categories if they wish.
Your participants’ card sorting results will be available in the “metrics” section of your Userlytics dashboard. This section includes three parts: Cards, Categories, and Recommendations.
Card Metrics
Clicking “Cards” on the left-hand side will show you how participants sorted each of your task cards. You can use the left and right arrows to navigate through each card and see the categories your participants chose for them.
Categories Metrics
The middle section, “Categories,” provides information about the categories your participants created, the percentage of cards listed under each category, and more. There are two ways to view this section:
1. View as Cards: This view shows which cards were placed in each category by the majority of participants and which cards were placed in each category at least once. Clicking on any of these metrics will expand them and give you greater detail on the percentage of participants that placed specific Cards in each Category.
2. View as Table: In this viewing mode, a table will appear showing the number of test participants who placed each card within the given categories. Any boxes that appear in red signify that all participants placed that card within the same category. The colors within the table become lighter as fewer participants place those cards within a specific category. This table provides a visual representation of how your test participants interpreted your digital asset’s informational structure.
Recommendations Metrics
The third metrics section on the right-hand side, “Recommendations,” offers suggestions on categories and cards you should consider removing or revising.
Our algorithm generates warnings for specific cards and categories that may need to be removed or renamed to promote clarity and consistency within your brand asset. The flagged elements were sorted by participants, but their placements lack enough agreement in the data to support keeping them as they are.
These recommendations help you eliminate confusing or irrelevant aspects of your website and strengthen your website’s organization.
Tree Testing
A “Tree test” is another term for a reverse card sort. It evaluates a hierarchical category structure, or tree, by having users find the locations where specific tasks can be completed. Including a tree testing task in your usability test can help verify the validity of the information structure and the findability of different products or themes on a site, mobile app, or prototype.
To build your Tree Testing task:
- Customize the text your testers will see. Define the product or theme your participants will sort into a specific category.
- Define your categories or menu hierarchy. You can also define subcategories to get more specific.
- Participants will sort through these categories and choose where they believe the specified product fits best.
After naming your categories, click the toggle bar on the right-hand side to select the correct response to the task. This will allow you to view the success rate of your Tree Testing task once your participants complete the usability test.
Your participants’ tree testing results will be available in the “metrics” section of your Userlytics dashboard. This section includes three parts: Option Selected, First Click, and Other Info.
Option Selected
Clicking “Option Selected” on the left-hand side will display the percentage of participants who chose each answer option for your Tree Testing question.
First Click
The middle section, “First Click,” provides information about whether any participants initially clicked on one category but then switched to another before making a final decision.
The pie chart shows the percentage of participants who entered a path in the tree and then backed out of it. This information is helpful because it indicates whether participants hesitated before selecting a final category, suggesting that your categories may need fine-tuning to be clearer and more intuitive.
Other Info
The third section on the right-hand side, “Other Info,” provides data about the overall results of your Tree Testing task. This includes the percentage of participants who found the correct path without any backtracking, the number of participants who selected the correct option on the tree, and the average time participants spent on the task.
Depending on the results, you may see a red highlighted “WARNING” button. It appears when the success rate of your task is lower than the industry average; if so, you may need to consider revising your categories for added clarity.
Matrix Questions
A matrix question groups together relevant questions on a particular topic in a row-column format. This simplified format allows participants to view and answer survey questions at a glance. Matrix-style answer choices are often offered on a scale.
Here’s why you should consider using Matrix questions for your study:
- It saves space: Matrix questions display multiple questions and answers on one page, reducing lengthy surveys.
- Reduced response time: Consistent response options make it easier and quicker for participants to answer.
- Increased responses: Clear layout and simple answer options encourage more participants to complete the survey.
- Eliminates monotony: Groups related questions together, reducing the repetitive nature of surveys.
- Insight on sub-topics: Matrix questions provide detailed views on specific aspects of your product or service.
You can analyze the results of your matrix question within the “Metrics” of your completed study.
X-Second Test
The X-second test quickly captures people’s immediate reactions. You show them something for a brief period, usually 5 seconds, and then ask for their thoughts or feelings right away. By uploading images and screenshots to the Userlytics platform, you can use this method to understand what people notice first and their initial, often unconscious, responses.
Once you select this activity, a new window will appear prompting you to confirm your choice. Click “Add”.
Once this is done, click on the activity you just created and fill out the details. These include the question that will appear on the participant’s screen, the file you want them to see, and the duration in seconds.
First Click Testing
This activity type helps you understand user behavior by tracking where they click first on an image.
By analyzing these initial clicks, you can see if your design is intuitive and easy to navigate.
First Click Testing works smoothly on both mobile and desktop platforms, and is available in all Userlytics subscription plans.
It supports every testing approach, from unmoderated to moderated, making First Click Testing a versatile tool for all your UX research needs.
To set it up, you upload an image or screenshot of your website, app, or prototype. You’ll then receive a comprehensive heatmap and click map showing where users clicked on specific areas.
After setting up your First Click activity test, you can view the results on your study Metrics dashboard, which will display them as a heatmap.
Hovering over the heat-zones with your mouse will provide a detailed breakdown of the number of participants who clicked in that specific location.
Remember to Preview Tasks!
After building out your test tasks, be sure to preview each one to ensure they are easy to understand, well-written, and work well with your test asset.
You can choose to run a “Quick Preview,” which allows you to check the flow of tasks and questions through an interactive HTML interface without downloading the Userlytics Recorder or displaying your test asset.
Alternatively, you can run a “Recorder Preview,” which uses the Userlytics Recorder to simulate a real test without any actual recording or uploading.