
Benchmarks of Quality in UX Design

A Beginner’s Guide.

So you’ve completed all your designs and handed them off to development, and you’re ready to kick back and start the binge-fest when that damn dreadful question suddenly hits you: “Was my design good enough?” Within moments, one question spirals into an existential crisis we’re all too familiar with: “Am I a good designer? Am I just pretending to know what I’m doing? All I’m doing is playing with shapes on a screen.” The crippling feeling of impostor syndrome overpowers your binge-fest and stops it before it even starts.

WELL!! Let me stop your overthinking right there. What if I told you there IS a way to measure the quality of your work, and an industry-tested method at that!

“Design isn’t finished until somebody is using it.” — Brenda Laurel

Here is a beginner’s guide to understanding Benchmarks of Quality in UX Design!

An illustration of a girl sitting on a sofa with a bowl of popcorn, but with a perplexed expression and thoughts of self-doubt surrounding her

First things first, let’s start with understanding what Benchmarking is and how it helps designers ascertain the quality of their work.

What is Benchmarking?

It is a process that uses various metrics to gauge a design’s relative performance against a meaningful standard. The “meaningful standard” here can pertain to an earlier version of the same product, an industry average, or even a competitor.

So How Does Benchmarking Help?

It allows designers and stakeholders to track a product’s improvements over time and assess whether the required progress has been achieved. Most importantly, it can demonstrate the design’s impact, whether in time saved or in fiscal costs.

Metrics for Benchmarking

There are many metrics for benchmarking. What you end up using for your product may be an amalgamation of various metrics, depending on what’s relevant to your product specifically. In general, there are some common categories you can refer to when working out where your product is succeeding and where it is failing.

The Benchmarking Journey

  • The measure of new feature acceptability.

When a new feature is introduced or an old one is revised, the first metric to measure would be how well the feature is being accepted. What are the sales and conversion rates since the introduction of the new feature? Have there been more visitors? Has the overall product engagement improved?

  • The measure of user involvement.

The next metric to be measured would be the level of user involvement while using the product. Is the user putting in a lot of effort to use it? Has the average time on task decreased?

An illustration of the different benchmarks encountered along the journey of a released product. The steps are as follows: Acceptance of Product, Level of User Involvement, User Happiness, Efficiency of Product, Retention Rate of Product.

  • The measure of user happiness.

Once you measure user involvement, you can move on to user happiness or measuring the user’s perceptions. Are they satisfied with the product and its ease of use? Does the product solve the problem in the long term?

  • The measure of product efficiency.

The answers to the question of user happiness lead us to the next metric, the efficiency of the product. What is the error count of the product? Is it as efficient as it was meant to be?

  • The measure of product retention.

Lastly, one would need to measure the retention rate of the product. Does it bring loyal and returning customers to the company? Has the renewal rate of the product increased?

All the points mentioned above offer a broad view of how you can look at your product critically and estimate its quality.

Now, let us look at some of these metrics in detail and study some of the methods used to measure UX quality.

1. Satisfaction Rating (CSAT)

Customer Satisfaction can be measured using a score called the Customer Satisfaction Score (or CSAT).

Customers are asked to choose their level of satisfaction, usually on a scale of 1 to 5, with 1 being the least satisfied and 5 the most satisfied. The number of 4 and 5 ratings is added up and divided by the total number of ratings received, and the result is multiplied by 100 to get the final Customer Satisfaction Score.
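To make that arithmetic concrete, here is a minimal Python sketch of the CSAT calculation (the ratings are made-up sample data, purely for illustration):

```python
def csat_score(ratings):
    """Percentage of satisfied responses (4s and 5s) among all ratings."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return satisfied / len(ratings) * 100

# Made-up batch of 1-5 survey responses
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(f"CSAT: {csat_score(ratings):.0f}%")  # 7 of 10 ratings are 4 or 5 -> 70%
```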

Many apps like Swiggy, Zomato, Uber, etc., calculate this score by asking the customer to rate their service on a scale of 1–5 stars.

Screenshot images from the Uber and Swiggy apps showing the rating system once the service is completed.
Swiggy asks users to rate the food as well as the service. This helps them gauge customer satisfaction with both food quality and customer service.

“Above all else, align with customers. Win when they win. Win only when they win.” — Jeff Bezos

2. Emotional Rating

Similar to the CSAT, customers are asked about their feelings while using a product. Was it a breezy process or a tiresome one? Was the customer happy or frustrated after completing the task? This is usually done by showing the user a range of emojis that express different emotions and asking the user to choose the relevant emoji.
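One simple way to turn those emoji picks into numbers is to map each emotion onto a scale and aggregate the responses. The mapping and sample data below are purely assumptions for illustration:

```python
from collections import Counter

# Assumed mapping from emoji choice to a numeric score;
# real apps may use different emotions or scales
EMOJI_SCORES = {"angry": 1, "sad": 2, "neutral": 3, "happy": 4, "delighted": 5}

responses = ["happy", "neutral", "delighted", "sad", "happy", "happy"]

print(Counter(responses))  # distribution of emotions chosen
average = sum(EMOJI_SCORES[r] for r in responses) / len(responses)
print(f"Average emotional rating: {average:.2f} / 5")  # 3.67
```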

Example images of how apps ask about your emotional stance on the service or product after you are done using it. The first image shows the question “How was your ride” followed by a slider where you can choose a sad face, a happy face, or anything in between. The second image shows the interface of a cab app with the question “How would you rate your driver” and has 3 emojis to depict bad, neutral, and good emotions.
Emotional Rating also helps build empathy with users and better understand how they felt while using the product or service.

3. Customer Effort Score (CES)

The customer effort score measures how easy/hard it was for the user to perform a particular task. It is usually asked in the form of questions like “How easy was it to buy this product/use this service on our website today?”
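CES is commonly scored as the average of the responses. Here is a minimal sketch assuming a 1–7 agreement scale, with invented answers:

```python
def ces_score(responses):
    """Average agreement rating; higher means customers found it easier."""
    return sum(responses) / len(responses)

# Invented answers on a 1-7 scale (1 = Strongly Disagree, 7 = Strongly Agree)
responses = [7, 6, 5, 7, 4, 6, 7, 3]
print(f"CES: {ces_score(responses):.1f} / 7")  # 5.6
```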

Example images of different customer effort questions. One of the images has the question “The company made it easy for me to handle my issues” and the answers range from ‘Strongly Disagree’ to ‘Strongly Agree’. The other image is an illustration of gym weights for the user to select how hard or ‘heavy’ the task was, ranging from 15kg to 1kg dumbbells.

“Ease of use may be invisible, but its absence sure isn’t.” — IBM

A metric closely related to this one is the Average Time on Task, the measure of how much time a user spends on a task. If the time spent on a task has decreased after redesigning a flow, one can conclude that the redesign has been effective. This is a vital metric, especially when revising an earlier product.
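As a quick illustration, comparing average completion times before and after a redesign boils down to this (all timings invented):

```python
def average_time(times):
    """Mean task-completion time in seconds."""
    return sum(times) / len(times)

# Invented completion times (in seconds) before and after a redesign
before = [92, 110, 85, 130, 104]
after = [61, 75, 58, 80, 66]

old, new = average_time(before), average_time(after)
print(f"Before: {old:.0f}s, After: {new:.0f}s, "
      f"Improvement: {(old - new) / old:.0%}")  # 104s -> 68s, about 35% faster
```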

The image explains how to measure the ‘Time on Task’ metric. The steps are as follows: 1. Establish an accepted average time per task 2. Analyze whether users complete tasks faster over time 3. Identify what users remember from previous experience
The process of measuring Time on Task.

4. Net Promoter Score (NPS)

While the Customer Satisfaction Score and the Customer Effort Score measure immediate satisfaction and usability, the Net Promoter Score measures long-term customer loyalty and satisfaction. Questions take the form of “How likely are you to use this service again?” or “How likely are you to recommend this product based on your experience?”

The image is a screenshot of a survey question from the app ‘Slack’. The question is an example of ‘NPS’ and reads- “How likely are you to recommend Slack to a friend?” and the options range from 0 (not likely) to 10 (most likely)
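The standard NPS calculation treats 9–10 answers as promoters and 0–6 answers as detractors, and reports the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up survey data:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Made-up answers to a 0-10 "How likely are you to recommend us?" question
scores = [10, 9, 8, 7, 10, 6, 9, 3, 10, 8]
print(f"NPS: {nps(scores):+.0f}")  # 5 promoters, 2 detractors of 10 -> +30
```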

There are some other metrics related to the Net Promoter Score that help determine the long-term success of the product and customer satisfaction.

  • Renewal rate: This is the rate at which a particular subscription or service is renewed. A high renewal rate indicates good value; the company is more likely to maintain customer interest and generate long-term revenue.
  • Conversion rate: The conversion rate is the percentage of users who take the desired action, for example, the percentage of website visitors who buy something on an e-commerce site. The higher the conversion rate, the more successful the product (a quick calculation is sketched after this list).
  • Error count: Errors in design are common and anticipated. However, minimizing those errors will noticeably improve the user experience and the success rate of a product. An iterative approach to design and continuous user testing help keep errors to a minimum.
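Both rate metrics above reduce to the same simple arithmetic (all figures invented for illustration):

```python
def percentage(part, whole):
    """A simple rate helper: part of whole, as a percentage."""
    return part / whole * 100

# Invented figures for one month of an e-commerce product
print(f"Conversion rate: {percentage(240, 8000):.1f}%")  # buyers / visitors -> 3.0%
print(f"Renewal rate: {percentage(410, 500):.1f}%")  # renewals / expiring subscriptions -> 82.0%
```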
The image is an overview of CSAT, CES, and NPS. CSAT is for tracking support quality, CES for making it easier to be a customer, and NPS for building a loyal customer base
An overview of the most important metrics

“People ignore design that ignores people.” — Frank Chimero

With this overused (but true) design quote, we come to the end of this article 😉 As an intern who has been in the professional UX domain for only two months, writing this article taught me a lot about the various UX metrics that help determine overall quality, and I hope you learned at least as much as I did!

Now, what are you still waiting for? Get that binge-fest started!!
