Experiments in Customer Success Scoring
We’re always experimenting with ways to improve our customers’ experience with SenseiOS™, and we always start with the data. We know from using SenseiOS every day that arming ourselves with high-quality data lets us make smarter, more informed decisions.
Recently, we applied this thinking to our Customer Success team’s approach to Customer Success scoring. The first step was to evaluate existing ways of collecting data and scoring methods used to assess both team and platform impact. We read a lot of articles, consulted industry experts, and had countless internal discussions about the various scoring techniques being used in the SaaS market today. Here’s where we landed.
The three most commonly used scoring methods weren’t sufficient to meet our needs:
Net Promoter Score (NPS)
NPS is an industry standard meant to gauge loyalty. Using a 0-10 scale and a standardized question format, the NPS asks respondents how likely they are to recommend a product or service to a friend or colleague. Respondents who give high scores (9-10) are labeled Promoters and are thought to be more likely to create value for a brand, while those who give low scores (0-6) are labeled Detractors, who could damage a brand through negative word-of-mouth.
For us, this question is too narrow and doesn’t provide the type of insight that we think is most meaningful to our customers. In addition, it’s difficult to benchmark the NPS against similar SaaS products.
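For readers unfamiliar with how the score itself is computed, here's a minimal sketch using the standard NPS convention (Promoters score 9-10, Detractors 0-6, and NPS is the percentage of Promoters minus the percentage of Detractors):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses.

    Standard convention: Promoters score 9-10, Detractors 0-6;
    NPS = %Promoters - %Detractors, ranging from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors of 6 -> 0.0
```

Note how much information the single number hides: a sample of all 8s and a sample evenly split between 10s and 0s can produce the same score.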
Customer Satisfaction Score (CSAT)
CSAT is a simple, straightforward way of measuring overall satisfaction. It asks customers to rate how satisfied they are with an interaction or event.
We always want to know how satisfied our customers are, but we’re also keen to know how they feel. The CSAT reflects short-term sentiment and doesn’t give us the guidance we need to drive decisions.
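CSAT is usually reported as the percentage of respondents who chose the top ratings. A common convention (one of several in use) is to count 4s and 5s on a 5-point scale as "satisfied":

```python
def csat(ratings, scale_max=5):
    """CSAT as commonly reported: the percentage of respondents who
    chose one of the top two ratings (4 or 5 on a 5-point scale).
    Conventions vary; some teams count only the top rating.
    """
    if not ratings:
        raise ValueError("no responses")
    satisfied = sum(1 for r in ratings if r >= scale_max - 1)
    return 100.0 * satisfied / len(ratings)

print(csat([5, 4, 3, 2]))  # 2 of 4 satisfied -> 50.0
```

Because it's tied to a single interaction, the same customer can score a 5 after a great support call on Monday and a 2 after a confusing release on Friday, which is exactly the short-term-sentiment problem described above.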
Customer Effort Score (CES)
CES is based on the idea that customers will be happier if their Customer Success team reduces the amount of effort required in their interactions. The CES either asks customers whether they agree or disagree that their partner made it easy to handle an issue, or asks them to rate the amount of effort a request took, from very low to very high.
The phrasing of the CES question implies a discrete, isolated user experience. In contrast, we’d like to understand how our product impacts our customers’ whole workflow rather than a single interaction. In addition, the CES method helps identify areas where issues arise, but doesn’t give insight into their root cause.
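CES is typically reported as a simple average of the effort/agreement ratings, often on a 1-7 scale. A minimal sketch:

```python
def ces(ratings, scale_max=7):
    """CES as commonly reported: the mean of 1-7 agreement ratings
    with a statement like 'the company made it easy to handle my
    issue' (higher = easier). Scale width varies by survey."""
    if not ratings:
        raise ValueError("no responses")
    if any(r < 1 or r > scale_max for r in ratings):
        raise ValueError("rating outside scale")
    return sum(ratings) / len(ratings)

print(ces([7, 6, 5, 6]))  # mean of the four ratings -> 6.0
```

The average tells you *that* an interaction felt effortful, but not *why*, which is the root-cause gap noted above.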
Overall, our takeaway was that there are a lot of great techniques out there, but none of the existing methods met all of our needs, so we decided to create our own. We combined the elements we liked from each of the three techniques and reworded them to give us more insight while still leveraging the best approaches in the industry.
We knew that we had to craft something simple and intuitive to make it easy for our customers to complete, but that still delivered the insights that would help us make our customer experience even more exceptional.
What we crafted is the Customer Success Index (CSI)! With just three questions, the CSI is brief, but gets to the core of what our customers are feeling and how they view our platform.
The Sensei Labs Customer Success Index
I find SenseiOS easy to use and understand
I would recommend SenseiOS to a colleague or client
Using SenseiOS has improved the way I work
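The post doesn't specify how the three responses are aggregated, so the following is purely a hypothetical sketch: it assumes each statement is answered on a 1-5 Likert scale (an assumption, not the published method) and rescales the unweighted mean to a 0-100 index.

```python
def csi(ease, recommend, improved):
    """Hypothetical CSI aggregation (not the published method):
    assumes each of the three statements is rated on a 1-5 Likert
    scale, then rescales the unweighted mean to 0-100.

    ease      -- "I find SenseiOS easy to use and understand"
    recommend -- "I would recommend SenseiOS to a colleague or client"
    improved  -- "Using SenseiOS has improved the way I work"
    """
    for r in (ease, recommend, improved):
        if not 1 <= r <= 5:
            raise ValueError("rating outside 1-5 scale")
    # Sum ranges from 3 to 15; shift and scale so 3 -> 0 and 15 -> 100.
    return 100.0 * (ease + recommend + improved - 3) / 12

print(round(csi(5, 4, 5), 1))  # (14 - 3) / 12 * 100 -> 91.7
```

Keeping the three items separate before averaging is what preserves the diagnostic value: a high overall index with a low "improved the way I work" item points somewhere very different than a low "easy to use" item.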
By combining the best schools of thought on Customer Success scoring, we’ll be able to add to our already robust adoption analytics. A full view of not only how our customers use SenseiOS but also how they feel will enable us to make informed decisions and identify areas for improvement, both in product development and in how we onboard our customers.