A Beginner's Guide.
So you’ve completed all your designs, handed them to development, and are ready to kick back and start the binge-fest when that dreadful question suddenly hits you: “Was my design good enough?” Within moments, one question spirals into the existential crisis we are all too familiar with: “Am I a good designer? Am I just pretending to know what I’m doing? All I’m doing is playing with shapes on a screen.” The crippling feeling of impostor syndrome overpowers your binge-fest and stops it before it even starts.
WELL!! Let me stop your overthinking right there. What if I told you there IS a way to measure the quality of your work, and an industry-tested one at that!
“Design isn’t finished until somebody is using it.” — Brenda Laurel
Here is a beginner’s guide to understanding Benchmarks of Quality in UX Design!
First things first, let’s start by understanding what benchmarking is and how it helps designers ascertain the quality of their work.
What is Benchmarking?
It is a process that uses various metrics to gauge a design’s relative performance against a meaningful standard. The “meaningful standard” here can pertain to an earlier version of the same product, an industry average, or even a competitor.
So How Does Benchmarking Help?
It allows designers and stakeholders to track a product’s improvement over time and assess whether the required progress has been made. Most importantly, it can demonstrate the design’s impact, whether in time saved or in money.
Metrics for Benchmarking
There are many metrics for benchmarking, and what you end up using may be an amalgamation of several, depending on what’s relevant to your product specifically. In general, there are some common categories you can refer to when assessing where your product is succeeding and where it is failing.
The Benchmarking Journey
- The measure of new feature acceptability.
When a new feature is introduced or an old one is revised, the first metric to measure would be how well the feature is being accepted. What are the sales and conversion rates since the introduction of the new feature? Have there been more visitors? Has the overall product engagement improved?
- The measure of user involvement.
The next metric to measure is the level of user involvement while using the product. Is the user putting in a lot of effort to use it? Has the average time on tasks decreased?
- The measure of user happiness.
Once you measure user involvement, you can move on to user happiness or measuring the user’s perceptions. Are they satisfied with the product and its ease of use? Does the product solve the problem in the long term?
- The measure of product efficiency.
The answers to the question of user happiness lead us to the next metric, the efficiency of the product. What is the error count of the product? Is it as efficient as it was meant to be?
- The measure of product retention.
Lastly, one would need to measure the retention rate of the product. Does it bring loyal and returning customers to the company? Has the renewal rate of the product increased?
The points above offer a broad view of how you can look at your product critically and estimate its quality.
Now, let us look at some of these metrics in detail and study some of the methods used to measure UX quality.
1. Satisfaction Rating (CSAT)
Customer Satisfaction can be measured using a score called the Customer Satisfaction Score (or CSAT).
Customers are asked to choose their level of satisfaction, usually on a scale of 1 to 5, where 1 is least satisfied and 5 is most satisfied. The 4 and 5 ratings are counted, that count is divided by the total number of ratings received, and the result is multiplied by 100 to get the final Customer Satisfaction Score.
Many apps like Swiggy, Zomato, Uber, etc., calculate this score by asking the customer to rate their service on a scale of 1–5 stars.
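To make the arithmetic concrete, here is a minimal Python sketch of that CSAT calculation; the ratings list is made up purely for illustration.

```python
def csat_score(ratings):
    """CSAT: share of 4- and 5-star ratings, expressed as a percentage."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return satisfied / len(ratings) * 100

# Hypothetical sample of ratings collected from a 1-5 star prompt
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(f"CSAT: {csat_score(ratings):.0f}%")  # 7 of 10 ratings are 4 or 5 -> 70%
```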
“Above all else, align with customers. Win when they win. Win only when they win.” — Jeff Bezos
2. Emotional Rating
Similar to CSAT, customers are asked how they felt while using the product. Was it a breezy process or a tiresome one? Was the customer happy or frustrated after completing the task? This is usually done by showing the user a range of emojis expressing different emotions and asking them to choose the one that best matches their experience.
3. Customer Effort Score (CES)
The Customer Effort Score measures how easy or hard it was for the user to perform a particular task. It is usually captured with questions like “How easy was it to buy this product/use this service on our website today?”
“Ease of use may be invisible, but its absence sure isn’t.” — IBM
A closely related metric is Average Time on Task: the measure of how much time a user spends on a task. If the time spent on a task has decreased after a flow is redesigned, one can conclude that the redesign has been effective. This is a vital metric, especially when revising an earlier product.
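As a rough illustration of how these two metrics might be computed, here is a small Python sketch; it assumes CES responses are collected on a 1-to-5 ease scale and task times are recorded in seconds, with all numbers invented for the example.

```python
from statistics import mean

# Hypothetical CES responses on a 1-5 "how easy was it?" scale
ces_responses = [4, 5, 3, 4, 5, 2, 4]
print(f"Average CES: {mean(ces_responses):.1f} out of 5")

# Hypothetical time on task (in seconds) before and after a redesign
times_before = [95, 110, 102, 88, 120]
times_after = [70, 82, 75, 68, 90]
improvement = (mean(times_before) - mean(times_after)) / mean(times_before) * 100
print(f"Average time on task improved by {improvement:.0f}%")
```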
4. Net Promoter Score
While the Customer Satisfaction Score and the Customer Effort Score measure immediate satisfaction and ease of use, the Net Promoter Score measures long-term customer loyalty and satisfaction. Questions take the form of “How likely are you to use this service again?” or “How likely are you to recommend this product based on your experience?”
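NPS is conventionally collected on a 0-to-10 scale, where responses of 9-10 count as promoters and 0-6 as detractors; the score is the percentage of promoters minus the percentage of detractors. Here is a minimal sketch of that calculation, using made-up responses.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Hypothetical answers to "How likely are you to recommend us?" on a 0-10 scale
scores = [10, 9, 8, 7, 9, 6, 10, 3, 9, 8]
print(f"NPS: {nps(scores):.0f}")  # (5 promoters - 2 detractors) / 10 -> 30
```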
There are some other metrics related to the Net Promoter Score that help determine the long-term success of the product and customer satisfaction.
- Renewal rate: This is the rate at which a particular subscription or service is renewed. A high renewal rate is indicative of good value; the company is more likely to maintain customer interest and generate long-term revenue.
- Conversion rate: The conversion rate is the percentage of users who take the desired action, for example, the percentage of website visitors who buy something on an e-commerce site. The higher the conversion rate, the more successful the product (a quick calculation sketch for renewal and conversion rates follows this list).
- Error Count: Errors in design are common and anticipated. However, minimizing those errors will noticeably improve the user experience and success rate of a product. An iterative approach to design and continuous user testing help keep errors to a minimum.
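To show how simple the renewal and conversion rates are to compute, here is a small sketch with hypothetical counts; the variable names and numbers are invented for illustration.

```python
# Hypothetical counts pulled from analytics and subscription records
subscriptions_due_for_renewal = 400
subscriptions_renewed = 340
site_visitors = 12_000
purchases = 360

renewal_rate = subscriptions_renewed / subscriptions_due_for_renewal * 100
conversion_rate = purchases / site_visitors * 100
print(f"Renewal rate: {renewal_rate:.0f}%")        # 85%
print(f"Conversion rate: {conversion_rate:.1f}%")  # 3.0%
```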
“People ignore design that ignores people.” — Frank Chimero
With this overused (but true) design quote, we come to the end of this article 😉 As an intern who has been in the professional UX domain for only two months, I learned a lot about the various UX metrics that help determine overall quality while writing this article, and I hope you learned at least as much as I did!
Now, what are you still waiting for? Get that binge-fest started!!