ask them if your product helped in accomplishing what they wanted to!
It’s only natural for businesses to be curious about user satisfaction, but what is the best way to ask for feedback? The answer depends on two things:
Why are we looking for feedback?
How do we plan to use this feedback?
If the goal is simply to have “a number to report,” one could choose from the many approaches out there like the Net Promoter Score (NPS), Customer Effort Score (CES) or the more traditional Customer Satisfaction Score (CSAT). Each one of these gives us a user satisfaction rating. But customer satisfaction feedback is much more than just reporting.
“Reporting Focused” vs “Improvement and Outcome Focused” Feedback System
User feedback shouldn’t be “reporting focused.” That is a business-centric view. Instead, feedback should be “improvement and outcome focused.” Using feedback to improve the user experience and the product is a step toward becoming more user-centric.
To capture such feedback, we need to think in four dimensions:
What do we want feedback about?
Who do we want this feedback from?
When in the customer journey do we want this feedback? and
How do we plan to ask for feedback?
The rest of this post discusses these dimensions in greater detail.
1. WHAT DO WE WANT FEEDBACK ABOUT?
Our feedback requests need to be specific. We must ask people specific questions like “did the product help you accomplish what you wanted to?” Feedback always needs to be contextual, i.e., it should be focused on the action the user just performed.
Data shows that switching from general satisfaction questions to specific, task-based questions improves survey response rates. We end up capturing more responses, and more actionable ones.
Here’s an example of how Amazon asks for specific feedback. In the example below [See images below: Step 1 - 3], the feedback prompt about satisfaction with the customer care representative is both contextual, and specific.
On top of being contextual and specific, Amazon’s feedback is also well integrated. Parameters like “Friendly,” “Attentive,” “Knowledgeable,” “Easy to understand,” and “Overall service” are customer focused and can be tied directly to the customer care representative’s individual Key Performance Indicators (KPIs). Such integration between the customer care team’s performance and customer satisfaction outcomes is vital for a more consistent and measurable user experience.
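As a rough sketch of how per-parameter ratings like these could roll up into representative-level KPIs, here is a minimal aggregation over hypothetical data (the rep IDs, parameters, and scores are all made up for illustration; Amazon’s actual pipeline is unknown to me):

```python
from collections import defaultdict

# Hypothetical ratings: (rep_id, parameter, score on a 1-5 scale),
# mirroring parameters like "Friendly" and "Knowledgeable".
ratings = [
    ("rep-1", "Friendly", 5), ("rep-1", "Friendly", 4),
    ("rep-1", "Knowledgeable", 3),
    ("rep-2", "Friendly", 2),
]

def kpi_rollup(records):
    """Average each parameter per representative: the per-rep KPI view."""
    sums = defaultdict(lambda: [0, 0])  # (rep, param) -> [total, count]
    for rep, param, score in records:
        entry = sums[(rep, param)]
        entry[0] += score
        entry[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

kpis = kpi_rollup(ratings)
```

Because each rating is already tied to a representative and a parameter, no extra survey questions are needed to feed the KPI dashboard.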
2. WHO DO WE WANT FEEDBACK FROM?
It’s a good practice to capture feedback from different types of users so that we have a broader understanding of how our product is performing across different user groups. We can segment users in terms of their relationship with the product and past purchase behavior. For example, these segments could be:
Loyal users (members of your loyalty program)
New users
Lapsed users/rejectors (users dropping off or not completing a flow)
Because users in different segments are at different stages of their relationship with the product, their expectations, behaviors, and thus feedback tend to differ. Whenever possible, the design of our feedback should be tailored to each of these user types. For this to happen, we need a deeper understanding of all the kinds of people who use our product.
Finally, we should segment the feedback that we receive. This will help us in developing a deeper understanding of how our product is performing across various types of users.
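As a minimal sketch of what segmenting feedback could look like, here is a small aggregation over hypothetical records (the segment names and scores are invented for illustration):

```python
from collections import defaultdict

# Hypothetical feedback records: (user_segment, satisfaction score 1-5)
feedback = [
    ("loyal", 5), ("loyal", 4),
    ("new", 3), ("new", 2),
    ("lapsed", 1), ("lapsed", 2),
]

def summarize_by_segment(records):
    """Group scores by segment; report the count and average per segment."""
    buckets = defaultdict(list)
    for segment, score in records:
        buckets[segment].append(score)
    return {seg: {"count": len(scores), "avg": sum(scores) / len(scores)}
            for seg, scores in buckets.items()}

summary = summarize_by_segment(feedback)
```

Even a simple rollup like this makes gaps visible: a high average from loyal users can hide a much lower average from new or lapsed users.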
3. WHEN DO WE WANT TO ASK FOR FEEDBACK?
Our feedback needs to be timed smartly. Feedback captured too early in the user flow is not relevant and might distract the user. Feedback captured too late could be biased, because by then it mostly reaches users who completed the task successfully.
To avoid this, we need to know the most common “time to task completion” for our product, i.e., the time it takes the majority of users to complete the task the product is being used for. Generally, introducing the question at different points during the flow is a good practice. This helps us understand user sentiment across the flow. The quality of the responses can also tell us when during the journey is the best time to ask for feedback.
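One way to estimate a typical “time to task completion” is from session logs. The sketch below uses invented timings and an assumed helper, should_prompt, to show the idea; the “prompt between 1x and 2x the typical time” window is an illustrative heuristic, not a researched threshold:

```python
import statistics

# Hypothetical start-to-completion timings per user, in seconds.
completion_times = [42, 55, 48, 60, 51, 47, 300, 53]

# Use the median rather than the mean so outliers (abandoned sessions,
# long pauses) don't skew the estimate of the "typical" completion time.
typical_time = statistics.median(completion_times)

def should_prompt(elapsed_seconds, typical):
    """Prompt for feedback only once the user has passed the typical
    completion time: not before (distracting) and not long after
    (the experience is no longer fresh)."""
    return typical <= elapsed_seconds <= 2 * typical
```

In practice the window would be tuned per flow, and the quality of responses collected at each point can validate or correct the choice.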
The following example from Amazon illustrates why the timing of feedback is critical. Amazon often asks for feedback about package delivery. While it is understandable why they do it (to get feedback the moment the shipment is delivered), from a user’s standpoint a request that arrives right after delivery could be misunderstood. Some people are more focused on the quality of the product that was delivered than on how and when it was delivered. From that standpoint, a feedback request immediately after the shipment is delivered is too early to be useful. See image [2] below. This is an example of company-centric feedback: Amazon might view itself as a fulfillment company, but its users might not.
Just as asking for feedback too early isn’t useful, asking for it too late doesn’t help either. Here’s an example of a company asking for feedback almost eight weeks after the service was delivered. After that long, the user might not recall the quality of the service. In some cases, the user might have forgotten about the transaction entirely and might not even recognize the message from the company. See image [3].
4. HOW DO WE WANT TO ASK FOR FEEDBACK?
We must make it effortless for people to share feedback. Asking fewer questions, focusing on the user’s most recent experience with the product, and letting them answer directly, without having to categorize or qualify their response, are just some of the ways to make sharing feedback easy.
Here’s an example of how Costco asks users to classify their own feedback [see image 4]. This can be avoided. The four options (“Order Confirmation”; “Ship Confirmation”; “Refund Notification”; and “Cancellation Confirmation”) probably represent four different work streams inside Costco. While it makes sense for the company to have user feedback categorized by work stream and department, it need not ask the user to do that categorization for it. Extra steps like these add unnecessary cognitive load and hinder free-flowing user feedback. Instead, the company could use natural language processing (NLP) to interpret the user’s response and categorize it automatically.
[A note about using NLP: it works for tagging feedback only when the model is accurate enough at interpreting it.]
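To make the idea concrete, here is a deliberately naive sketch of auto-categorization. A production system would use a trained NLP classifier (e.g. intent detection); this keyword version, with category names borrowed from the Costco example and invented keyword lists, only shows where such a classifier would slot in:

```python
# Hypothetical keyword lists per category; a real system would learn these.
CATEGORY_KEYWORDS = {
    "Order Confirmation": ["order", "ordered", "purchase"],
    "Ship Confirmation": ["ship", "shipping", "delivery", "delivered"],
    "Refund Notification": ["refund", "money back", "reimburse"],
    "Cancellation Confirmation": ["cancel", "cancelled", "canceled"],
}

def categorize(feedback_text):
    """Score each category by keyword hits and pick the best match;
    fall back to 'Uncategorized' when nothing matches."""
    text = feedback_text.lower()
    scores = {cat: sum(kw in text for kw in kws)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Uncategorized"
```

With this in place, the user types one free-form message and the routing to the right internal work stream happens behind the scenes.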
By reducing the number of steps before people can share feedback, we can reduce the incidence of feedback abandonment. Fewer steps also help in improving the quality of feedback because users are able to share their feelings in the moment and with minimal interruption.
5. BONUS TIP: CLOSE THE LOOP
Here’s a way to go above and beyond. If you want your feedback channels to also be nodes of customer delight and loyalty, you can give users an option to leave their contact details at the end of the feedback. Why? So that you can update them about the action that was taken based on their feedback.
This makes things more personal. It also adds transparency and accountability to our product design decisions. And from an organizational standpoint, it helps bring together the product and user feedback teams.
So, what can you do now?
Go and check how your organization captures user feedback. Think about ways to simplify it and make it more user-centric. Remember, it’s about them, not you. And finally, make it specific by asking people whether they were able to accomplish what they wanted to.
The more specific and user-centric the feedback is, the greater the chances that you will ship a better product in your next development cycle.
[Related to this: Connected products are wired to be outcome oriented. It's up to us how effectively we harness their capabilities and help users in achieving these outcomes.]
Note: Views expressed here are personal. This content is not endorsed by my current or past employers.
All products, logos, images, company names are trademarks™ or registered® trademarks of their respective owners. Using these here does not imply any affiliation with or endorsement by them. No ownership or affiliation is claimed.