Once your study is complete, it's time to dive into the data you've collected. Quantitative methodologies are meant to provide researchers with objective, measurable results. Many of the activities your participants complete will provide concrete data on what is and isn't working within your platform.
Some of the quantitative metrics Userlytics offers are the System Usability Scale (SUS), Net Promoter Score (NPS), Single Ease Question (SEQ), success/failure questions, rating questions, time on task, card sorting, tree testing, and more. Our dashboard brings all of these metrics together in one place, allowing you to gather insights to improve your brand's customer experience.
This guide will walk you through how to visualize each activity’s metrics and, where applicable, offer additional tips on analyzing them.
To get started, locate your latest study and navigate to the left-hand panel. From there, you'll find the metrics section, which contains all the data associated with the study you ran.
1. Scenario
The Scenario activity provides the average time it took participants to complete the activity. To get the results, simply click on the activity and the following box will appear showing the average time on task.
Scenario/Instructions
To view the average time per participant, click on the “All Participants” panel and choose a participant.
Once you’ve chosen a participant, you can visualize their average time on task.
2. Task / Verbal Response
This activity involves asking a tester to complete an action, so there is no metric associated with it per se.
However, when setting up the activity in the Study Builder, there is an option to add a question about whether the tester succeeded at the verbal task. If the tester is able to complete it, they will mark it as “Success”; otherwise, as “Failure”. In the metrics section of the activity, you can visualize the testers’ answers in a pie chart. You can click on each participant’s name to access their recorded answer to this question.
Additionally, the arrows next to the testers’ names let you move participants from one response bucket to the other in case a self-reported answer contradicts what the test recording shows.
For both Task and Verbal response, you will be able to visualize the “Time on Task”, including average, max and min time.
Finally, the SEQ section will display visuals reflecting how testers rated the task's difficulty on a 1-7 scale, with 7 being the hardest and 1 the easiest.
3. Single / Multiple Choice
With Single and Multiple-choice questions, you can easily visualize responses through the graphs below.
This chart represents responses to a single-choice question, meaning each participant selected only one option.
This chart visualizes responses to a multiple-choice question. Each color in the legend corresponds to a different response option, and the bar heights represent the number of respondents who selected each answer.
4. Rating
The Rating activity works similarly to the Single Choice activity, as participants select a single response. It’s a great way to gauge user sentiment on a website, interface, app, or prototype. The results are displayed as a color-coded pie chart that always sums to 100%, making trends easy to interpret at a glance.
5. Write-in Response
When participants provide a written response, the Metrics section will display their answers as the example below shows. To view responses from a specific participant, use the filter at the top of the Metrics section. If a participant did not answer, their name will appear under “NO ANSWER”.
6. Net Promoter Score (NPS)
The Net Promoter Score (NPS) gauges how likely testers are to recommend the product they are testing, whether it's a website, app, or another platform. The score reflects the overall sentiment towards the product based on their feedback.
Explanation:
- A positive NPS indicates a strong base of loyal customers who are likely to recommend the product.
- A negative NPS suggests more detractors than promoters, pointing to potential dissatisfaction.
- An NPS of 0 signals an even balance between Promoters and Detractors, with users on the fence about the product.
To boost the NPS, focus on turning Passives (7-8 scores) into Promoters (9-10) by improving the user experience, resolving pain points, and engaging with feedback.
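For reference, the score itself is simply the percentage of Promoters minus the percentage of Detractors. A minimal sketch of the calculation in Python:

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 ratings.

    Promoters score 9-10, Passives 7-8, Detractors 0-6.
    The result ranges from -100 to 100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 4 Promoters, 3 Passives, 3 Detractors -> NPS of 10
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 5, 3]))  # 10.0
```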
7. System Usability Scale (SUS)
The System Usability Scale (SUS) is a standardized tool used to evaluate the usability of a platform, app, or prototype. It is based on a 10-question survey that gathers user feedback on various aspects of usability, such as ease of use, efficiency, and overall satisfaction.
SUS scores are calculated with the standard SUS scoring formula: each of the 10 responses is given on a 1-5 scale, odd-numbered questions contribute the response minus 1, even-numbered questions contribute 5 minus the response, and the sum is multiplied by 2.5 to produce a usability rating between 0 and 100.
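If you ever want to verify a score by hand, here is a minimal sketch of that standard scoring for a single participant (the dashboard computes this for you):

```python
def sus_score(responses):
    """Standard SUS scoring for one participant.

    `responses` holds the 10 answers (1-5 scale) in questionnaire order.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)  # odd items: r-1, even items: 5-r
        for i, r in enumerate(responses, start=1)
    )
    return total * 2.5  # scales the 0-40 sum to the 0-100 range

# Example participant
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5
```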
Understanding the Graph
- The graph displays three key SUS values:
  - Min (pink bar, 45): the lowest recorded usability score.
  - Average (blue bar, 50.5): the average usability score across all test participants.
  - Max (gray bar, 57.5): the highest recorded usability score.
- The average score (50.5) is the most important metric, as it is used for comparison against other studies, industry benchmarks, and usability standards.
- A SUS score above 68 is generally considered good, while anything below that suggests potential usability issues.
- In this case, since all recorded values are below 68, the results suggest that the system may have significant usability challenges that should be addressed.
8. Card Sorting
Clicking “Cards” on the left-hand side will show you how participants sorted each of your task cards. You can use the left and right arrows to navigate through each card and see the categories your participants chose for them.
Categories Metrics
The middle section, “Categories,” provides information about the categories your participants created, the percentage of cards listed under each category, and more. There are two ways to view this section:
1. View as Cards: This view shows which cards were placed in each category by the majority of participants and which cards were placed in each category at least once. Clicking on any of these metrics will expand them and give you greater detail on the percentage of participants that placed specific cards in each category.
2. View as Table: In this viewing mode, a table will appear showing the number of test participants who placed each card within the given categories. Any boxes that appear in red signify that all participants placed that card within the same category. The colors within the table become lighter as fewer participants place those cards within a specific category. This table provides a visual representation of how your test participants interpreted your digital asset’s informational structure.
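For deeper analysis outside the dashboard, you can rebuild a placement matrix like this yourself. A minimal sketch, assuming your raw results can be reduced to (participant, card, category) rows; the actual export format may differ:

```python
from collections import Counter

# Hypothetical raw results, reduced to (participant, card, category) rows
placements = [
    ("p1", "Shipping", "Support"),
    ("p1", "Returns",  "Support"),
    ("p2", "Shipping", "Orders"),
    ("p2", "Returns",  "Support"),
    ("p3", "Shipping", "Support"),
]

# Count how many participants placed each card in each category
matrix = Counter((card, category) for _, card, category in placements)
for (card, category), n in sorted(matrix.items()):
    print(f"{card} -> {category}: {n} participant(s)")
```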
Recommendations Metrics
The third metrics section on the right-hand side, “Recommendations,” offers suggestions on categories and cards you should consider removing or revising.
Our algorithm generates warnings for specific cards and categories that may need to be removed or renamed to promote clarity and consistency within your brand asset. The elements flagged in the warnings section were used by participants, but there is not enough supporting data to justify keeping them as they are.
These recommendations help you eliminate confusing or irrelevant aspects of your website and strengthen your website’s organization.
9. Tree Testing
After your participants have completed your user experience test, you can go into your dashboard and view and interpret the results of their Tree Testing task. First, find the “Metrics” option on your Userlytics dashboard. Then, click on the Tree Testing task within your test to view detailed metrics from your participants’ results.
Here, you will be able to see detailed reports that give you suggestions on how you should arrange and organize your digital asset based on your participant results. Under your metrics, there will be three sections: Option Selected, First Click and Other Info.
1) Option Selected
Clicking “Option Selected” on the left-hand side will show you the percent of participants who chose each answer choice to your Tree Testing question.
2) First Click
The middle section, “First Click,” shows whether any of your participants clicked on one category option and then switched to another before making a final decision. The pie chart shows the percentage of participants who backed out of a path after entering that branch of the tree. This information is helpful because hesitation before a final selection can indicate that your categories need some fine-tuning to be clearer and more intuitive.
3) Other Info
The third section on the right-hand side, “Other Info,” gives you data about the overall results of your Tree Testing task. The information given includes the percentage of participants who located the correct path without any backtracking, the number of participants who selected the correct option on the tree, and the average time your participants spent on the task.
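To illustrate how these figures relate, here is a minimal sketch that separates direct success (correct with no backtracking) from overall success, using a simplified per-participant record rather than the actual export format:

```python
# Hypothetical per-participant outcomes: final answer correctness and
# whether the participant backtracked before deciding
results = [
    {"correct": True,  "backtracked": False},
    {"correct": True,  "backtracked": True},
    {"correct": False, "backtracked": True},
    {"correct": True,  "backtracked": False},
]

n = len(results)
direct = sum(r["correct"] and not r["backtracked"] for r in results)
overall = sum(r["correct"] for r in results)
print(f"Direct success (no backtracking): {100 * direct / n:.0f}%")   # 50%
print(f"Overall success:                  {100 * overall / n:.0f}%")  # 75%
```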
Depending on the results of your Tree Testing task, you may see a red “WARNING” button. Clicking on it will tell you whether the success rate of your task is lower than the industry average; if so, you may need to consider revising your categories for added clarity.
10. Matrix Questions
Results for a Matrix question present user feedback on various features of your website, app, or prototype, highlighting which features users find most useful based on the percentage of responses.
The color map visually represents participant responses, highlighting which column received the most selections for each question (row).
- Darker or highlighted areas indicate features that participants find most valuable, offering insights into areas of high engagement.
- Variations in response patterns highlight differences in user preferences, helping identify opportunities for tailored experiences or customization.
It's important to note that the CSV file you receive with this data simply records each participant's response to each question and does not include any aggregated or visualized data from the color map.
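If you want the aggregated counts behind the color map, you can compute them from that raw CSV yourself. A minimal sketch with pandas, assuming one row per participant and one column per matrix question; your column names will differ:

```python
import pandas as pd

# Hypothetical export: one row per participant, one column per matrix row
df = pd.read_csv("matrix_results.csv")

# Count how many participants chose each option for each question
for question in ["ease_of_use", "visual_design"]:  # assumed column names
    print(df[question].value_counts(), "\n")
```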
Matrix questions are a great way to prioritize key improvements and align features with customer expectations.
11. X-Second Test
In the X-Second Test, the metric itself isn’t the primary focus—it serves mainly to maintain consistency across sessions.
The most valuable insights will come from the next task (First Click Testing), where you’ll capture participants' first impressions and understand their initial reactions in more depth.
12. First Click Testing
The results will be displayed as a heatmap showing the areas where participants clicked most frequently.
Heat-zones on the heatmap will be color-coded to represent click density, with warmer colors (e.g., red, orange) indicating higher concentrations of clicks.
You can hover your mouse over any of the heat-zones to see a breakdown of the number of participants who clicked in that specific location. This breakdown will help you understand how users are interacting with different elements on the page.
You can also view the average first click time, which shows how quickly participants are engaging with elements on the page.
Use this data to identify potential areas for optimization or to confirm whether users are following the desired flow.
To download the study metrics, navigate to the Activities tab and click the download button on the right-hand side of the study. This will generate a .CSV file with the data.
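Once downloaded, the file opens in any spreadsheet tool, or you can explore it programmatically. A minimal sketch; the column layout depends on the activities in your study:

```python
import pandas as pd

df = pd.read_csv("study_metrics.csv")  # the exported file (name may vary)
print(df.columns.tolist())  # inspect the available fields first
print(df.head())            # preview the first few rows
```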