Podcast

Net Promoter Score or Product Market Fit? (Ben Grynol, Chris Jones, Josh Mohrer & Scott Klein)

Episode introduction

Show Notes

For many companies, NPS is a true barometer of success. NPS, or Net Promoter Score, measures how passionate users are about your product. But for some companies, a better use of time and energy is PMF, or product-market fit. In this episode, four members of the Levels team (Head of Growth, Ben Grynol; Head of Member Experience, Chris Jones; Head of Global Operations, Josh Mohrer; and Head of Product, Scott Klein) discuss their outlook on these two metrics and the pros and cons of each.

Key Takeaways

04:36 – What is NPS?

Chris explained the meaning of NPS, or Net Promoter Score, and why it’s a metric that matters.

So NPS, net promoter score. At the highest level, you ask the question of how likely are you to recommend a product or a service to your friends or colleagues? Probably, all of us have seen this survey a thousand times from all the products we use all the time. It’s really common. And what you’re doing is you’re looking for people—you want more people that are promoters of your brand or your product than are willing to bash you online. So how it works is, on an 11-point scale, you take your 9s and 10s, which are your promoters, the people that are advocating for your product, or at least say they are, and you take that percentage minus the 0-6s, which are your detractors. Those are the people that are likely going to be saying, don’t get it. It’s too expensive. Go somewhere else. And you’re looking for your promoters as a percentage, minus your detractors, to come up with your net promoter score. So typically, the range is negative 100 to 100. I’ve actually worked at several companies where they had a negative NPS, and that’s always a fun challenge. But in the end, usually they say the average is about a 30, 35, regardless of industry. You start getting pretty good when you get above 40. Above 60 is really good. And you get above 80 and you are in the elite status.
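
For readers who want the arithmetic spelled out, here is a minimal sketch (in Python, not from the episode) of the calculation Chris describes; the 9-10 promoter and 0-6 detractor thresholds come straight from his explanation.

```python
# Minimal sketch (not from the episode) of the NPS arithmetic Chris describes:
# % promoters (scores of 9-10) minus % detractors (scores of 0-6).
def net_promoter_score(responses):
    """responses: iterable of integers on the 0-10 scale."""
    responses = list(responses)
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Example: 5 promoters, 3 passives (7-8), and 2 detractors out of 10 responses -> 30
print(net_promoter_score([10, 9, 9, 10, 9, 7, 8, 8, 4, 6]))  # 30.0
```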

07:39 – The importance of product-market fit

Scott defined PMF, or product-market fit, and explained why its inward-focused question helps quantify value in a way that the outward-facing NPS question doesn’t.

I’ll kick off with the product-market fit. The biggest defining characteristic of it is that it takes what was NPS in a very outward focus, which is to say, are you willing to talk to your friends about this service? The subtext is, do you love this thing so much that you’re going to go out of your way to recommend it to people? And product-market fit switches it to be much more inwardly focused questions. The question of product-market fit is how would you feel if we just took this product that you’re using away from you? There’s three answer modes. You’ve got, I would be extremely disappointed. I would be disappointed, or I wouldn’t care and I would be fine with it. And the math is to just look at the top answer and there’s no subtraction going on. It’s just what percentage of the users are answering, I would be extremely disappointed…It’s just an alternative narrative to how do we come up with a quantified value of understanding, are we on the right track to get to product-market fit? Are we finding people that are loving our product so much that they’re not liking it in a lukewarm way?
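
As a companion to the NPS sketch above, here is a minimal sketch (again Python, not from the episode) of the product-market-fit survey math Scott describes: the score is just the share of respondents choosing the top answer, with no subtraction.

```python
# Minimal sketch (not from the episode) of the PMF survey math Scott describes:
# the score is simply the percentage of respondents who pick the top answer,
# "extremely disappointed"; unlike NPS, nothing is subtracted.
from collections import Counter

TOP_ANSWER = "extremely disappointed"  # labels follow Scott's three answer modes

def pmf_score(responses):
    """responses: iterable of answer strings, e.g. 'extremely disappointed',
    'disappointed', or 'would not care'."""
    counts = Counter(responses)
    total = sum(counts.values())
    if total == 0:
        raise ValueError("need at least one response")
    return 100 * counts[TOP_ANSWER] / total

# Example: 6 of 15 respondents would be extremely disappointed -> 40.0
sample = ["extremely disappointed"] * 6 + ["disappointed"] * 5 + ["would not care"] * 4
print(pmf_score(sample))  # 40.0
```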

12:46 – Why a popular product scored a low NPS

Chris said that it’s not just how much people use a product that’s important, but how they feel about it.

The example I’ve had at one company was our most popular games, our most profitable games, had negative NPS, because they were trapped in this almost gaming/gambling addiction, and they couldn’t get out of it. They felt bad about it. So it was good for us to not purely just look at users’ engagement and how often they’re playing, but how they felt about it, and whether we were making the planet a better place or helping them improve their lives. So I index a lot more with JM on looking at the data we do have from a behavioral log file, but adding occasional surveys just to provide more color on it.

13:26 – The role of surveys

Scott said that a free-form response field at the end of a survey can be more work to analyze, but can serve to provide valuable user information.

There’s one more, I think, piece of information that’s important to line up here. And that is that usually when these surveys are offered up, they will ask an optional follow-up question that’s just a free form text input. And I think that that is a point that is often missed, but should not be, because you start to line up scores with specific verbiage that you can then tie back to why people are enjoying their experience or they’re not. And so I think that regardless of what we do around asking the question to get at a quantified angle, Chris, I think you’re right. There’s demographic data that comes into these answers. There’s a lot of metadata just surrounding it in general. I guess we could say in the more general case.

14:41 – Action is more important than metrics

Chris said that customers can tell you how they’re feeling all day long. But if you don’t do anything meaningful with the data you collect, it’s a waste of time for everyone involved.

When we think about what metrics should people report in, I care less around what metric it is. I think what’s more important is what you do next, what you ask next, what you link it to, what you do with the data is actually more important. I don’t care if you have a high CSAT or NPS or product-market fit. If it’s not driving action within your company, then why are you doing it? Why are you wasting members’ time, bothering them with the survey, interrupting their flow, making them answer a long-format survey? If you’re just doing it and using it as a vanity metric, then I would say, shame on you. People are taking time to answer questions, to fill out surveys. They spend energy to give you feedback around what they think about your product or service. And if you’re not taking action on that as a company, then don’t ask them at all.

27:06 – Startups should prioritize product-market fit

Chris said that when you’re just starting out, NPS should not be a major focus. Instead, it’s all about dialing in product-market fit.

I would say as for startups, product-market fit is much more important than NPS. NPS is really just to be used after you’ve established product-market fit. It’s more around continuous improvement to continue to grow, to scale. But where we are at, we should probably be spending more time on product-market fit and using NPS as more of a catchall, just to make sure we’re not missing anything. Startups should definitely be focused on product-market fit first, NPS second.

37:59 – Some products & services take time to stick

Josh said that when he worked at Uber, they noticed usage stuck when it happened organically, and more than once.

At Uber we found—this is the seeing how people use the service—we found that after their second ride, someone would really stick. First ride is like, I’m trying out this new thing or I push a button and a car comes, but when I do my second ride, that’s when I’m really locked in. And that strikes me as a bit of an output. It’s like, oh, we noticed that if someone likes the first ride enough and takes the second ride, that’s an indicator that we’re all good. That we’re going in the right direction. So then we started, or someone started, saying, okay, well then let’s discount the second ride also. Let’s make the first ride less expensive, and the second ride too, to get people to that second-ride metric faster, because then they’ll stick. But the reality is the metric only worked when it was organic, when people were deciding to take the second ride and it became an indicator.

47:33 – An NPS unicorn

Ben explained that at Levels, they want to do whatever they can to avoid hacks and truly delight users.

So the way it came about was it was outlined as one of the goals or the values of growth. And it maybe is looked at as more of a literal metric as opposed to a philosophy. So the philosophy is, what are we going to do to delight people? What are we going to do to do the right thing and to have that be our north star, as opposed to saying—what is the opposite of an NPS unicorn? It’s like, well, just hacking attention and doing all these undercutting things and growing at all costs. There are all these things that are the opposite of being an NPS unicorn. And so the idea is, what can we philosophically do as a company, as a brand, from a growth standpoint, from an op standpoint, from a product standpoint? What can we do that is delightful and that is steering people in the right direction?

53:01 – Focusing on PMF over NPS

Scott mused that for Levels, turning off NPS could make sense in the short term, as there is such a small window to get feedback from survey-fatigued customers.

When I think about other products like Peloton, I will really go out of my way to talk about Peloton, because I feel like my identity is as a Peloton rider. That is part of who I am as Scott. And so I think that maybe when we think about NPS and PMF, it might be worth it for us in the short term to turn off NPS, because we have such a limited window of attention. People are very survey fatigued. We’ve got few shots to actually get some actionable feedback from people. And I think using those on NPS might not be the best use of our time right now. And then as we layer on more of the social pieces as time goes on, bringing back NPS to specifically test for those might be a good bellwether for us or a good indicator for us on if we’re doing the social stuff, the identity stuff, the belonging stuff, if we’re doing it well.

Episode Transcript

Chris Jones (00:00):

When we think about what metrics should people report in, I care less around what metric it is. I think what’s more important is what you do next, what you ask next, what you link it to, what you do with the data is actually more important. I don’t care if you have a high CSAT or NPS or product market fit. If it’s not driving action within your company, then why are you doing it? Why are you wasting members’ time, bothering them with the survey, interrupting their flow, making them answer a long-format survey? If you’re just doing it and using it as a vanity metric, then I would say shame on you.

Ben Grynol (00:45):

I’m Ben Grynol, part of the early startup team here at Levels. We’re building tech that helps people to understand their metabolic health. And this is your front row seat to everything we do. This is a whole new level. In any company, you can track as many or as few metrics as you want. In many cases, companies will track things like revenue. That’s an obvious one. If they’re a consumer product, a social network where they’re focused mostly in the interim, when they’re getting going on things like daily active users, how engaged are people, you can focus on cohort retention and the list goes on.

Ben Grynol (01:34):

There’s also another metric where you think about how willing are people to refer the product to someone else? Well, the two metrics are PMF, that’s product market fit. How sticky is a product. And NPS, net promoter score. How willing are people to suggest your product to others? Well, when thinking about metrics, you have to wonder, are they mutually exclusive? Should you track one? Should you track both? Can they be used together? What’s the outlook on it? Well, these are things that we’ve talked about internally for some time now. So it was time that we sat down and recorded a podcast about it.

Ben Grynol (02:09):

A few of us, Chris Jones, Head of Member Experience, Josh Mohrer, Head of Global Operations, and Scott Klein, Head of Product, we all sat down and we had a roundtable discussion, really somewhat of a debate: what should our outlook be on these metrics? How should we track them? How should we think about using them and taking that information and doing things with it? It was a great conversation. Here’s where we kick things off. So this has been a long time coming. This is the NPS, PMF, and any other acronym you feel like bringing up, podcast where we’re going to talk about the idea of net promoter score and product market fit, and which to measure and why.

Ben Grynol (02:59):

There’s not necessarily a right answer, but I think everybody has a different outlook of what we should do and why. So we’ll leave it there. Scotty, what are you thinking? I know you’re pretty deep on this idea of product market fit. And Chris Jones is a little bit deeper on NPS.

Scott Klein (03:16):

I feel like we’re all about to bet on a single horse, and then watch them all race and win. One of them’s going to win. No, to give some context, this all got kicked off because we wanted to write a memo about building what’s called an NPS unicorn. And I’ve since found out that the framing of that NPS unicorn is in contrast to building a valuation unicorn, which I think is a great rallying point, to be focused on membership and retention, as opposed to just building something that’s good at raising money.

Scott Klein (03:48):

So I think the unicorn part, I actually like, but I think we’ve had … I’ve actually, at least personally, had enough time with NPS to find it to be a bit subpar and sometimes distracting in a way. And so I think the thrust of what we want to hash out today is what people’s experience has been with NPS. What can we learn from it? And when we talk about other measurement devices, product market fit’s the one that I suggested. And I think Chris has another one that I hadn’t even heard of. So I’m excited to hear it on the fly. What do these measurement devices mean for us? What can we take from them? In which situations are they applicable, and which do we go forward with?

Ben Grynol (04:22):

Let’s set the table and rewind all the way. What is NPS? What is product market fit and why do they matter? Because I think that’s something that can be missed sometimes. So Chris, you can dive in on NPS.

Chris Jones (04:36):

Sure. So NPS, net promoter score. At the highest level, you ask the question of how likely are you to recommend a product or a service to your friends or colleagues? Probably, all of us have seen this survey a thousand times from all the products we use all the time. It’s really common. And what you’re doing is you’re looking for people … You want more people that are promoters of your brand or your product than are willing to bash you online. So how it works is, on an 11-point scale, you take your nines and 10s, which are your promoters, the people that are advocating for your product, or at least say they are, and you take that percentage minus the zero to sixes, which are your detractors.

Chris Jones (05:17):

Those are the people that are likely going to be saying, don’t get it. It’s too expensive. Go somewhere else. And you’re looking for your promoters as a percentage, minus your detractors, to come up with your net promoter score. So typically, the range is negative 100 to 100. I’ve actually worked at several companies where they had a negative NPS, and that’s always a fun challenge. But in the end, usually they say the average is about a 30, 35, regardless of industry. You start getting pretty good when you get above 40. Above 60 is really good. And you get above 80 and you are in the elite status.

Ben Grynol (05:55):

And that’s that idea of NPS unicorn, where there are only a couple companies in the world that have hit this 90 and above status. And I think that’s where we started to think through what does that actually mean if we were to be an NPS unicorn, as we referred to it? The companies that fall into that bucket, I think are Tesla and Peloton as a benchmark. Apple is what, below 50, around 50?

Chris Jones (06:19):

I’d say about 60-

Chris Jones (06:20):

A lot of times it depends on who’s doing the survey, who they’re surveying and through what mechanism. So one company could be running 10 or 15 different NPS surveys and get 10 different answers, depending upon how they’re doing it, who they’re interviewing, where it’s posted, time of year. Like any survey, it’s super easy to manipulate, just upon like, “Oh, let me get a higher score. Well, let me survey the people that are more engaged with the product or service,” and naturally your scores are going to go up.

Ben Grynol (06:50):

Why don’t we go into this idea of PMF, or product market fit, what it is, what it means, why it’s important and decipher between all of these things. There’s a time and place for NPS. There’s a time and place for product market fit. And there’s a case where some products might actually have neither or both. And so those are all things that we can dive into once we get through it. But that’s why there might not be a right metric, but there are ones that might be more applicable at different stages of companies. So we’ll have to go into … Scott, you can dive into product market fit and then, JM, you saw ground-floor Uber to global Uber, where I’m sure there were things that mattered at different times in a rapidly scaling company that grew very, very quickly.

Scott Klein (07:39):

I’ll kick off with the product market fit. The biggest defining characteristic of it is that it takes what was NPS in a very outward focus, which is to say, are you willing to talk to your friends about this service? The subtext is, do you love this thing so much that you’re going to go out of your way to recommend it to people? And product market fit switches it to be much more inwardly focused questions. The question of product market fit is how would you feel if we just took this product that you’re using away from you?

Scott Klein (08:06):

There’s three answer modes. You’ve got, I would be extremely disappointed. I would be disappointed, or I wouldn’t care and I would be fine with it. And the math is to just look at the top answer and there’s no subtraction going on. It’s just what percentage of the users are answering, I would be extremely disappointed. This was popularized by, I think, Rahul Vohra, who created it as part of the founding of his company, Superhuman. It was featured on the First Round blog most recently. And so it’s just an alternative narrative to how do we come up with a quantified value of understanding, are we on the right track to get to product market fit?

Scott Klein (08:43):

Are we finding people that are loving our product so much that they’re not liking it in a lukewarm way? I think for most startups, they’re focused more on getting a rabid initial fanbase than they are a much larger lukewarm fanbase, which is likely to just fall out quickly over time.

Josh Mohrer (09:01):

So you noted that that’s a rather new metric. NPS has been around since the ’90s. And what they both have in common is that they’re essentially asking a question one step removed, because really we want to know, how’s it going? How are we doing? Do you like this? But instead of asking that, the first thing we ask is, how likely are you to refer this to a friend? And that’s useful because you can compare other industries and other companies. As you noted, Ben, there are certain stalwarts of industry that are 90s and above. And Apple’s in the 70s at one point, or maybe a little bit less now. So it’s good to compare one thing to another. And airlines and utilities and the like are always very, very low.

Josh Mohrer (09:46):

And then the newer one is like, well, I want to know if I’m on the right track. NPS is like, you’ve used the product. You have consumed it and you understand it. Are you going to share it? And it measures virality and how likely people are to refer to a friend, whereas this new metric is like, we’re still figuring this out. We’re not United Airlines, we’re rapidly iterating software. And we want to know how sad you’ll be if we take it away, which is clever. It is another riff on how are we doing? How’s it going? And I’ll speak for a bit more, if it’s okay, on why I’m here. This originated, I think, as a chat, either in Notion or Threads, about what the right thing to measure is.

Josh Mohrer (10:31):

And I’m on neither side. I’m actually in some ways on the side of measure everything, but that actually these two things might be suboptimal given the data that we have. So if you’re United Airlines and someone uses you once in a while, you might want to say, how was that experience? Or are you likely to tell friends about it in a good or bad way? But for technology where people are going to be interacting with it on a daily or hourly basis, I think we can do better than asking them a roundabout question like, hey, if you were a different person or in this hypothetical situation, if we give you this roundabout thing, can we shake some information out of you?

Josh Mohrer (11:11):

Really, instead we can see, are you logging food frequently? Are you opening the app? Are you engaging with the software? Have you stopped doing that? Those sorts of metrics in my mind are more interesting because they’re direct. They are like literally, do you like it? I don’t have to ask you. I know if you’re liking it because you’re using the app or not. So I’m in that third bucket.

Ben Grynol (11:33):

The micro-metrics, we’ll call them.

Chris Jones (11:36):

I really enjoy living in the hybrid world. So having leveraged CSAT, NPS, lots of metrics at half a dozen companies … And for context, I’ve probably done 20 million NPS surveys across 14 languages at a half dozen companies. So lots of war stories and wounds. But regardless of the metric, where I had the most success is when we merged it with the log data to actually get a lot more of, when someone says that they’re satisfied, or they’re happy, or they’re enjoying the product, what did they do? And let’s not do it by asking them 50 questions in an hour-long survey. Let’s actually bounce it up against their log file data to be like, do people who log food, are they having a better experience? Do people that have more friends, are they enjoying the game more?

Chris Jones (12:29):

So working for a lot of analytical companies where you have all this log data to say, let’s look at all the data of all of our members or all of our users and mine it, and just use something like NPS or CSAT or a thumbs up, thumbs down, just to add a little bit of color around how they’re feeling. Because the example I’ve had at one company was our most popular games, our most profitable games, had negative NPS, because they were trapped in this almost gaming/gambling addiction, and they couldn’t get out of it. They felt bad about it.

Chris Jones (13:03):

So it was good for us to not purely just look at users’ engagement and how often they’re playing, but how they felt about it, and whether we were making the planet a better place or helping them improve their lives. So I index a lot more with JM on looking at the data we do have from a behavioral log file, but adding occasional surveys just to provide more color on it.

Scott Klein (13:26):

There’s one more, I think, piece of information that’s important to line up here. And that is that usually when these surveys are offered up, they will ask an optional follow-up question that’s just a free form text input. And I think that that is a point that is often missed, but should not be, because you start to line up scores with specific verbiage that you can then tie back to why people are enjoying their experience or they’re not. And so I think that regardless of what we do around asking the question to get at a quantified angle, Chris, I think you’re right. There’s demographic data that comes into these answers. There’s a lot of metadata just surrounding it in general. I guess we could say in the more general case.

Scott Klein (14:03):

But then there’s also an opportunity to ask maybe one follow-up question that could be highly targeted or highly free form so that we can try to line up, if people are upset with the experience, is there a clear reason why? I think both of these numbers are really couched in the reality that users, oftentimes they’re good at expressing their emotional state, but they’re not good at articulating why with words, if that makes sense. So they can tell you from a scale of one to 10 how they’re feeling temperature wise. But if you ask them to articulate what would make that be better, they’re not product people. They’re generally just going to tell you faster horses, and that’s not what we’re here to build.

Chris Jones (14:40):

I would absolutely agree, Scott. When we think about what metrics should people report in, I care less around what metric it is. I think what’s more important is what you do next, what you ask next, what you link it to, what you do with the data is actually more important. I don’t care if you have a high CSAT or NPS or product market fit. If it’s not driving action within your company, then why are you doing it? Why are you wasting members’ time, bothering them with the survey, interrupting their flow, making them answer a long-format survey? If you’re just doing it and using it as a vanity metric, then I would say, shame on you.

Chris Jones (15:15):

People are taking time to answer questions, to fill out surveys … They spend energy to give you feedback around what they think about your product or service. And if you’re not taking action on that as a company, then don’t ask them at all.

Ben Grynol (15:29):

One of the hardest things about metrics too, especially in the space we’re in, is our product has a very wide scope as far as segments go. And so there could be the jobs to be done of Levels for a certain segment. For people who are focused on weight loss, say, the value prop of the product is entirely different than for somebody like a biohacker who wants accountability. And so their outlook on product-market fit as a metric, on NPS as a metric, and what they actually do in product all differ. When you start to get to some of the micro-metrics that JM was referring to, it’s very wide.

Ben Grynol (16:03):

And so we have to start to link all these things and say, is the product serving the right purpose for this segment? And then from a product standpoint, back to the work Scott’s doing day in, day out, it’s like, what do we actually need to do to change the product so that it doesn’t become the Homer Simpson car? It’s like, we just bolt on all these things and then it doesn’t really work for anyone.

Josh Mohrer (16:23):

I think what you’re saying is that the context of who the person is, if it’s a weight loss app, someone who has no interest in weight loss will not be devastated if it goes away, whereas someone who’s finding a lot of utility in the weight loss features might. So there’s-

Ben Grynol (16:40):

Totally.

Josh Mohrer (16:41):

… the context of who the person is. And particularly with something like Levels, which is appealing to a large group of people with all different needs, there’s some noise in there for sure, right?

Ben Grynol (16:52):

Yeah. There’s one thing to touch on, which I think came up in Notion or in Threads, wherever the conversation was, about the idea that some products can have a high product market fit and a low NPS. Some can have a high NPS and a low PMF. We’ll go to acronyms now. And some can have both. So this is personal experience, and again, anecdotal. Huge fan of Mini Coopers. Love them. So high product market fit for me personally. Would never recommend it to any of my friends. They’ve said, “I’m going to go buy one,” and I’m like, “No, don’t.” Meanwhile, I’ve owned a couple of them. And I will go get another one because I just love it so much. And devastated if it went away. So it’s like high product market fit, terrible NPS.

Ben Grynol (17:39):

And then other products, personally, for me, like WHOOP, have a high NPS. Very willing to recommend it to people, but lower product market fit, where if I’m not using it, because I don’t like wearing things on my wrist, I actually don’t miss it that much. And then the middle products are ones that have both. It’s like AirPods. Recommend them to everyone and super devastated if I don’t have them. So it’s like, which is the right metric to use? And you’re like, well, if Mini started paying attention to people not recommending their vehicle because they’re mechanically the worst. Like I hate to dunk on you Mini because I love you so dearly, but mechanically, they’re terrible.

Josh Mohrer (18:17):

I’m not going to recommend you to a friend.

Ben Grynol (18:19):

So they should figure out why. And then WHOOP, it’s like, well what is not making the product sticky from a product market fit standpoint? And you can extrapolate this to segments far and wide, but there is merit in both, I think in some respects.

Josh Mohrer (18:33):

Right. I would abstract that to say that they basically measure different things. If something has this halo around it, it is very popular to like that thing. You want to be seen with that thing. That might be different than I personally derive utility from it, is what you’re saying, right? And so you want to be both. I think having one and not the other is risky. I think if everyone is willing to refer you to a friend, but no one would be devastated if you went away, I think that suggests you have some short-term magic, but you have a problem because that won’t have staying power once you’re out of vogue, I guess.

Ben Grynol (19:14):

The divergence is where it gets scary.

Chris Jones (19:16):

So Ben, when you brought up the Mini Cooper example, it resonated with me because I had a Mini Cooper for 10 years and I loved it, but it was a love, hate relationship. My wife hated the car because it was too loud, made noises, it was in the shop all the time. But I loved it because it was unique and drove around like a go-kart. No other car operated like it. So when we moved to Montana, she was like, “You don’t need a Mini Cooper in Montana. Where are you going to drive it?” So I gave it to my nephew, a 16-year-old boy, thinking, now I’m going to be the cool uncle, giving him a Mini Cooper-

Josh Mohrer (19:47):

Uncle Chris.

Chris Jones (19:48):

That’s a stick. And it was my biggest recommending the car to him. It lasted six months, and my sister said, “Yeah, he’s going to get rid of it because it’s been in the shop twice and he can’t afford the repairs.” And I’m like, “Okay, yeah. I get it. No harm, no foul.” I love it, but you have to baby that thing and know what to do and listen to the little [inaudible 00:20:09] to know, is that really going to go in the shop for a $2,000 repair or not?

Ben Grynol (20:13):

We’ll have to share Mini Cooper stories. I’m sure we’ve got a few road trips to share.

Josh Mohrer (20:18):

Right. And I think you’re talking about a divergence because of a love, hate relationship and the utility versus how functional it is. But I think there’s so many other ways that that could happen. This is a repeat of what I just said earlier. But if something is cool seeming, or maybe even the reverse. There’s something that I really like using, like my MacBook M1 Max or whatever. I love it. It’s the best computer I’ve ever owned. I don’t know necessarily that I’m going to refer friends to it because it was expensive. And I don’t want to say you should go buy the top of the line MacBook. So that’s the flip. And the other way is if something is cool, it’s super cool, but you don’t really care. I think those are the risks as well.

Scott Klein (21:03):

I think to set some more context for why we’re thinking deeply about this right now or trying to settle on what’s going to work for us as a company going forward, Levels has been around for a couple years. Our NPS is actually doing quite well right now. I think we’re in the 50s or something like that. It’s high enough that I would consider us to be a really, really well-received product.

Ben Grynol (21:23):

A semi-corn.

Scott Klein (21:25):

A semi-corn. That’s going to be the next laptop sticker. It’s just like a weird-looking half of a unicorn, the semi-corn.

Ben Grynol (21:37):

You heard it here folks.

Scott Klein (21:40):

As a product, we’re off to a good start. But when you look at our business model, we’re in the process of pivoting to what we’re calling our membership model. So we’re going to give you access to a bunch of products and services at cost. We’re going to charge you a yearly membership fee. When you look at membership based businesses, retention is the single biggest factor that’s going to be the predictor of the health of your business. And so when we are looking at NPS and we’re looking at product market fit or CSAT or some other representation of a metric, the one that I really want us to drive toward, at least in the product organization is what is going to be a leading indicator of stickiness.

Scott Klein (22:19):

So with NPS, our setup right now, I love it. We’ve got a great NPS score, but we’re starting to get these little insights into people that are maybe high on the PMF score, but low on the NPS score. So some examples are we get people that might do NPS at a five or six, which is usually a detractor, but they really would be upset if we took Levels away. And when we ask them why, they would say, “Well, you know what, it’s a health product and I don’t want to recommend health things to my friends.” There’s just some structural reasons why this may be the case. It costs $400 for the first month of Levels right now.

Scott Klein (22:54):

That’s maybe a bit of a status thing that people might not want to wade into. In other cases, you might have somebody who’s treating symptoms for PCOS, or a much more intimate health problem that they’re trying to just deal with on their own. They’re going to be really low on the NPS side. And that doesn’t necessarily mean that we’re doing a bad thing. It may be even quite the opposite. So I think it’s important for us to pick a metric that’s going to be a leading indicator of stickiness. Because if we were to look at these people’s NPS scores, we would be rightly concerned.

Scott Klein (23:23):

We would call them up and ask, what can we do differently? And the answer would be nothing. I love the product and I want you to keep doing it. And so I want us to have these leading indicators that can give us reason to act and that we can ideally slice the data up in a way that we can take action to improve. And I’m just not sure NPS is that for the product like product market fit is for the product.

Chris Jones (23:45):

With the lens of product, I fully agree with you. And I think product market fit for the product team is where we should be spending more of our time. I think having multiple metrics is great, because I’ll use an example of, if I put my support hat on, if we only had product market fit and we’re driving even open-ended comments, we’re never going to hear comments about customer support, or that the reason I’m disappointed that Levels isn’t around is because of the level of customer support. It’s likely never going to come up in even the verbatims.

Chris Jones (24:18):

So NPS, to me, can actually cover a lot more areas than just product, around marketing, around what we’re doing holistically as a company. But I agree, if I was in your shoes as a head of product, product market fit is going to be more tailored towards your needs and likely more actionable, whereas NPS is going to be a much broader thing that can cover lots of areas.

Scott Klein (24:40):

It’s interesting. I don’t know that that framing came up when we had that discussion. But what I’m hearing from you is that when people get asked the NPS question, they’re taking a more holistic approach to the entire company, as opposed to specifically their interaction with the product.

Chris Jones (24:54):

Yes.

Scott Klein (24:55):

Interesting.

Ben Grynol (24:57):

Let’s go deep into a thought exercise, and there’s a couple things to riff on here. The first is the old adage of if Henry Ford asked people what they wanted, it would be a faster horse. We’ve heard that so many times, but let’s use the horse as an analogy. So you ask somebody, “Would you recommend horses?” There’s horses and cars that are just coming out. So the Model T is getting born as the first mass-produced vehicle. And you ask people, “Would you recommend a horse to your friend?” And they’re like, “Yeah, for sure.” “Would you be upset if the horse went away?” And you’re like, “Yeah.” And then the idea is, because if your horse went away, there’d be a car. And they’re like, “I don’t want a car. There’s no roads.”

Ben Grynol (25:36):

So it’s going to get a high product market fit score, because people are like, there’s nothing better than a horse. And it’s because sometimes people can’t see the future as it’s being built. And that’s fair enough. So let’s use this thought exercise of going back to something JM said at the beginning of this, which was what happens if we just pay attention to product metrics? What are people actually doing in the product? And maybe it’s a stage-based thing. Imagine we didn’t measure NPS, we didn’t measure product market fit, but we paid a lot of attention to what people were actually doing in the product, maybe more so than we do now.

Ben Grynol (26:12):

One analogy or something to dig deeper into is something Chris brought up in his ops deck today, which was how are people interacting with content in product? So we have this heuristic right now that content is very important in product. And it might be because of, and this is a way longer conversation, like format, length, type of content. Let’s not go there. But imagine if we didn’t pay attention to those micro-metrics from a product standpoint and we started doing all this product work to be like, everyone wants content in product.

Ben Grynol (26:42):

We’ve heard it qualitatively. We know that that’s what’s going to give us good product market fit. And then we go and we build against it. But the metrics, what people are actually doing, are showing us it doesn’t actually matter, right? So there’s probably a case for everything, but it’s more a matter of what do you guys think about stage? Is there something that’s more important at certain stages of a company and some things that matter less?

Chris Jones (27:06):

I would say as for startups, product market fit is much more important than NPS. NPS is really just to be used after you’ve established product market fit. It’s more around continuous improvement to continue to grow, to scale. But where we are at, we should probably be spending more time on product market fit and using NPS as more of a catchall, just to make sure we’re not missing anything. Startups should definitely be focused on product market fit first, NPS second.

Scott Klein (27:36):

Chris, what do you think the criteria are, or the stages are? Is it just a feel where we feel like, okay, now we’re ready for NPS? Let’s just say we threw the entire thing away and we just did a detox, and for the next three months we’re just going to do product market fit. At what point do we go, all right, we’re ready? We’ve graduated into this stage of company where we need NPS. What are the criteria for that?

Chris Jones (27:59):

Less of a criteria, more of an observation of when your product and your service starts to stabilize, where it’s like you’re making less and less dramatic changes to it and you feel like, we now know what our product and services are. Whereas we are still changing and experimenting with, what is it, should we have blood panels? Should we have dieticians? We’re throwing a lot of stuff against the wall to figure out what sticks. And every time we do an experiment, we get feedback. Our product is likely going to look nothing like it does today a year from now, versus someone like, okay, yes, we definitely have carved off our niche. We’re making minor tweaks to the product and services. So if you’re really having some big swings from your product roadmap, definitely not the time for NPS, much more around product market fit. So stabilization of the product roadmap would be my milestone.

Ben Grynol (28:49):

The micro-metrics are an interesting thing, because if you only pay attention to that, then you might not find what people actually want that’s going to make the product sticky. So let’s use Superhuman, where the main value prop, when Rahul did that exercise with the team, speed was the number one thing. And so then you go in to extrapolate. What does speed mean? Is speed the speed at which you can use shortcuts to draft emails? Is it the browser being able to process and triage things faster?

Ben Grynol (29:21):

There are all these things that speed could be, but they actually took all those into account. And they said, how do you create a very fast email experience? If they didn’t do that, if they didn’t look at speed being this factor, they might have been building against features. Let’s say they only paid attention to metrics and they’re like, well, a better way of scheduling meetings, calendaring integration, but it’s still slow and clunky. You don’t get to product market fit. So it is tough because you can also get so deep into analysis that you spend less time experimenting and building and more time analyzing, but there’s a fine line of you need enough insight to say, what is actually going on? How are things tracking? And then, what are we going to do about it?

Josh Mohrer (30:04):

They all do different things. I think it would be hard to figure out that speed is the number one thing that email users are looking for in that segment without explicitly asking them.

Ben Grynol (30:16):

Totally.

Josh Mohrer (30:16):

You’re not going to be able to infer that from usage metrics, but in the Superhuman onboarding that I did back in the day, one thing they had me do is put the dock icon on my phone and delete the other one or move the other one so that it would always be there easily. They did something similar on my Mac. And they set me up with a bunch of things that would get me to open the app a lot. And so monitoring simply how much I use it versus not. They know I’m going to be on email almost every day. What percentage of my email share do they have?

Josh Mohrer (30:52):

That’s a super useful metric. And I’m not saying that they should do that or something like that and not the other things, but I think there’s a place for all of them. As you just said, based on the company’s stage, there are different appropriate things. And I think that’s right. You’re just going to learn different things for each one. In some ways, if you crush it on NPS, that might imply a high rate of referrals. It might make acquisition easier. People are sharing on social and the like.

Josh Mohrer (31:18):

Product market fit is more about the retention. It’s about, is someone going to stick around? Are they into what we’re doing enough, literally, that they’d be upset if we went away? We’re not going away, so really the question is, are they going away? And I think for food logs, I think we could probably argue that it’s not up for debate that food logging is going to be part of the Levels experience forever or for a while. Seeing if people do food logs is also useful, because that’s a primary feature of the app. Are they opening the app? Are they doing food logs? Are they engaging? Are they looking at the scores? Are they immersing themselves in the value that we create or not? I think there’s some signal in there as well. And so just to be clear, I am not advocating for dumping one over the other. I think they’re all super useful for the right time and place.

Scott Klein (32:09):

I think my bias on some of this is just to ensure that as we grow, we don’t send people on wild goose chases for week-to-week variations in our NPS or our PMF score. I’ve seen many, many analysts spend two or three weeks of their working life trying to track down a three and a half point drop in NPS, going from 10 to six and a half or something like that. And you see them disheveled in the hallway and you’re like, “What is going on?” And you’re like, “Well, the CEO is really upset about the NPS drop from 10 to six and a half. And I’ve been spending the last three weeks of my life, nights and weekends included, trying to track down why.”

Scott Klein (32:49):

And this goose chase turns up no geese at all, ever. It never happens. And so I think regardless of what we end up using, if we want to use all of them, I’m not opposed to that if we can have the discipline and the value system set up to just not chase things. One of the examples that Rahul gave that I thought was great was they use PMF primarily, but any metric that you use is going to go through these fluctuations and those are natural and should be supported. And he gave the example of as you go through the technology adoption life cycle, you carve off the next cohort of users that you start to market to, that you’re trying to acquire. And they get into the app and they don’t quite get it yet. It hasn’t quite settled in.

Scott Klein (33:29):

And so their PMF scores are naturally going to come in lower than the people that have already been adopted into your system, that are really flourishing and doing well. And if you over-rotate and overreact to these sorts of things, and you’re not diligent about cutting the data up in a way that you understand which cohort of users is actually scoring, you’re going to think that you’re doing something wrong when in fact you’re just going through a normal company building process and getting into a new set of users that don’t quite get your thing quite yet, but they will.

Scott Klein (33:57):

And so they’re going to stick around, but their scores are lower, whereas otherwise a naive director or executive might tell you, “This is bad. Our PMF score’s going down. We have to go fix it. You need to go figure out why it’s happening,” when in actuality you just get this rolling hill type of situation where these new customers come in, they’re not super thrilled, you build some features, they get thrilled, their PMF scores come up, and it starts all over again as you get into the next cohort of users.

Chris Jones (34:23):

I think you described 20% of my career of getting beat down on NPS, around like, Chris, go find out why this dropped by three points or four points or five points, and what should we go do to fix it? One, I really enjoy-

Josh Mohrer (34:36):

Hey, Chris.

Chris Jones (34:37):

… trying to find that.

Josh Mohrer (34:39):

Have you seen NPS … Is it possible to move NPS in a material way once you’re at 10 or you’re at six? Is that possible in your experience?

Chris Jones (34:50):

Yeah. I’ve seen jumps within a product or a service or a game of 30 points over a quarter.

Josh Mohrer (35:00):

Wow.

Chris Jones (35:01):

So you can do a lot to move it. Now, you can move it in either direction. But I also got asked a lot at several different companies by the executives, how much more money are we going to get if we increase our company or our product NPS by one point? Just so they can quantify, what does one point mean? So I did this at two companies. And the data I found was really representative of the industry norm, where they talk about promoters spending on average 25% more than detractors. And I found at two companies that our data was right in line with that.

Chris Jones (35:35):

So I did the example like, let’s say we go from a plus 40 to a plus 60, which is a big jump and can take companies three, six, nine months of focus to move it that much. You’ve only moved 10% of your customer base to that 25%, which means you’ve increased your revenue by 2.5%. And all of that effort to raise your NPS by 20 points. At the same companies, the level of effort to change the buy button, make it bigger or make it a deeper shade of green, or to just do an A/B test on the homepage of the website, would’ve been an easier lift to get the same revenue. So sometimes-

Josh Mohrer (36:15):

It’s got to be a longer term view, right?

Chris Jones (36:17):

Yeah. You get executives that get fascinated by the, my number has to be higher because I want to beat the competition. I want to use it as a vanity metric. But sometimes the effort to really move it, you’re like, well, what are we solving for? Are we solving for retention? Are we solving for revenue? We have better ways to A/B test that and actually get results faster through experimentation. Trying to drive NPS as your way to drive company metrics is usually the wrong way to go after it, because it’s going to take you way more effort.
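
Chris’s back-of-the-envelope revenue math above can be made concrete with a short sketch (Python, not from the episode). The simplifying assumption, implied by his example, is that every NPS point gained comes from detractors becoming promoters (so a 20-point gain moves 10% of the customer base), and that promoters spend roughly 25% more than detractors.

```python
# Rough sketch (not from the episode) of the arithmetic Chris walks through.
# Moving one customer from detractor to promoter shifts NPS by 2 points
# (1 point off the detractor share, 1 point onto the promoter share),
# so a gain of N points converts N/2 percent of the customer base.
def revenue_lift_from_nps_gain(nps_points_gained, promoter_spend_premium=0.25):
    share_of_base_converted = (nps_points_gained / 2) / 100
    return share_of_base_converted * promoter_spend_premium

# Chris's example: going from +40 to +60 converts ~10% of customers,
# who then spend ~25% more, for roughly a 2.5% revenue increase.
print(f"{revenue_lift_from_nps_gain(20):.1%}")  # 2.5%
```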

Josh Mohrer (36:47):

It’s an output, basically, not an input.

Chris Jones (36:49):

Yes. Yes.

Josh Mohrer (36:50):

It’s the result of your work and of the quality of the business and the product and all these things.

Chris Jones (36:56):

Absolutely.

Josh Mohrer (36:57):

But you wouldn’t go and set out to raise it. You go out to be great and be better and make people happier, and then it goes up.

Ben Grynol (37:05):

Chris was the analyst that Scott saw in the hallway that was on a wild goose chase, but he didn’t have his binoculars.

Josh Mohrer (37:11):

I don’t think Chris has ever … He’s never been disheveled in his life.

Chris Jones (37:14):

No, I haven’t. [crosstalk 00:37:15].

Ben Grynol (37:16):

Chris just needed to know that all the geese had flown south and that was it. They weren’t even around. One of the things you touched on, JM, which is good to riff on, is the idea of input metrics versus output metrics. So PMF, NPS. Anything that is a metric that we’re looking at that is a result of multiple factors actually isn’t an input metric. So why don’t we frame, what is an output metric? What is an input metric? And why should people focus … Why is it good to focus on input metrics sometimes more than just output metrics?

Josh Mohrer (37:51):

I think I made those terms up. Is that a real thing? If it is, then it’s an accident, but let me give an example. At Uber, since you mentioned it. Did you know I used to work at Uber? So at Uber we found—this is the seeing how people use the service—we found that after their second ride, someone would really stick. First ride is like, I’m trying out this new thing or I push a button and a car comes, but when I do my second ride, that’s when I’m really locked in.

Josh Mohrer (38:20):

And that strikes me as a bit of an output. It’s like, oh, we noticed that if someone likes the first ride enough and takes the second ride, that’s an indicator that we’re all good. That we’re going in the right direction. So then we started, or someone started, saying, okay, well then let’s discount the second ride also. Let’s make the first ride less expensive, and the second ride too, to get people to that second-ride metric faster, because then they’ll stick. But the reality is the metric only worked when it was organic, when people were deciding to take the second ride and it became an indicator.

Josh Mohrer (38:55):

And so I guess I would relate that like, you can manipulate the stats, but then they might not mean the same thing. You know what I mean? So for example, with NPS, I don’t think this is the case anymore, but there was a time when we only sent a survey to people who finished the program, and not everyone did that. Some people would maybe stop during the second sensor, or at 27 days in, and our system only acknowledged it after a full month. This is pretty early days, I think, that this was happening. And that is not a correct way to do NPS, because it’s not hitting everybody. You’d want to touch everybody ideally. And when you do, the scores are likely to adjust. And so I just think the metrics tell you things, but it has to be done right, and it has to be viewed organically. You can’t really manipulate it, if that makes sense.

Ben Grynol (39:41):

So the input metrics are, people would only take … We’re going to just use this as a thought exercise. So whether or not it’s true. But the input metrics in this case, we’re going to say is driver density in a certain geographic area. And so people only get to this second ride if there’s enough density and drivers available, where it’s a great experience. And so hypothetically, people were willing to take a second ride. They go to hail a ride. They hit the button and call the driver and they’re like, “Man, nothing’s happening.” The input metric is, we need more drivers to get people to the second ride. We need more geographic density so it’s a great experience.

Ben Grynol (40:16):

The output metric is looking like it’s terrible, and so then you come up with these other ideas like, “Hey, let’s discount the second ride to incentivize people.” But that’s not what’s broken. The input metric is the upstream thing that’s broken, that’s not leading to the downstream effect. That is what people want to achieve.

Josh Mohrer (40:35):

That’s right. If it’s reliable, if the cost is right, if it just fits someone’s life, they have the need for it, they can afford it, they just want to do it. If they want to do it, then they’ll take the second ride. And that’s an indicator to us that they’re probably going to stick. Whereas most people took their … I think the majority of riders who took their first ride never came back, which is not necessarily surprising. Maybe they’re traveling or for whatever reason. It’s a once in a while thing. But if you start to meddle with this underlying thing, then it might not still mean the same thing anymore, if that makes sense.

Scott Klein (41:10):

I think for us, what I would like to see us get to is to start to view these metrics more as an early warning system than an actual weather forecast, if that makes sense. So if we see a rise or a drop in any of these metrics, it could be because we shipped a code update that made the app markedly slower and people are upset about that. Could mean we have a new flock of customers that we ran a promotion for that are now in the system that maybe don’t quite understand it as much as the other ones do. It’s just an alarm bell to get your ears to perk up to see, “Hey, is something amiss that we need to go chase down? Is there something we can dig into to learn from?”

Scott Klein (41:52):

And not necessarily that something’s going wrong, because I think we’ve all seen … Chris, just like your game example. You’ve got people helplessly addicted to a negative NPS game. There are examples on the other side of it. So I think just building the discipline to not over-rotate and overreact, to be able to attribute specific things to the rises and falls in NPS or PMF, but just to have it as an early trigger to go check things out.

Ben Grynol (42:19):

Chris, you reported that with the scoring update, when that was pushed, all of a sudden … What happened, Chris? I don’t remember. I remember things happened, but maybe you can frame what that whole thing was, because I didn’t set that up very well.

Chris Jones (42:36):

In the app we changed how we weighted the day score, so the inputs that fed it changed, and also the scale that we used. So people came in and all of a sudden their score changed, or maybe it was a lot lower than it used to be, and they didn’t understand why. Now, for context on that, a lot of the data we were getting was straight from support. So when I met with Sam and Scott and the team on this, I prefaced it like, “Hey, my biggest ammo that I’ve got right now is all support data, which is only going to be negative.” And I’m looking at how big a spike did we get because of this? Because people aren’t going to write in to support to say, “Oh, I love the new scoring algorithm. It’s great.”

Chris Jones (43:18):

And it’s going to take a lot of surveys for that to … It’s just really unlikely that would’ve shown up in an NPS survey unless we actually guided them down that path through a structured survey. So in that case, a lot of times those surveys are very bad at picking up feature-level changes. You have to look at more of the micro things, or you have to instruct your survey to specifically ask about that new feature, and is it helpful or is it not? But for that one, we actually didn’t have an NPS or a CSAT read because we were reacting to it within a matter of days from the release, because of the spike of support calls.

Chris Jones (43:52):

But I didn’t have the other side of it, of people that, let’s say, said, I actually think this is better. I like it more. I’ve only got the bad news. I don’t have the flip side, because we were trying to react to it pretty quickly. I don’t know if Scott felt that way during the conversation, because he probably felt a little beat up by it, but-

Ben Grynol (44:11):

There are also times where NPS … Let’s use this: companies are constantly shipping new features and new products, ancillary products which are still related. So hypothetically, let’s use Coke, the classic example. Coke shipped a product, I think in the late ’80s, when they went to New Coke, and then in the early 2000s there was Clear Coke or something weird like that. That product, I don’t think, is on the shelves anymore. But that would be a case where people might really like Coke, and it’s got a high NPS and a high product-market fit with certain people, but then there’s this … Well, let’s actually break this whole thing apart. JM loves a certain type of Coke that’s sugar-free or something, or zero, or-

Josh Mohrer (44:56):

I love Diet Coke. It’s a terrible addiction.

Ben Grynol (44:59):

You might be upset if that went away, but then a new Coke product comes out, like Clear Coke, and it doesn’t do super well. So Coke has to go down this path of figuring out what is broken, even though other things might indicate that they’re doing really well. So companies should probably keep this lens that everything they’re doing always needs to be broken down to these micro levels to ask: is this new thing adding the value we thought it would before we went down the path of building or launching it?

Chris Jones (45:28):

Luckily, I kicked the Coke and Diet Coke habit years ago. And I now only drink Athletic Greens. I would recommend it to everyone. It’s the best ever.

Josh Mohrer (45:38):

This podcast is sponsored by Athletic Greens. I got to be honest with you, Chris, I signed up for it yesterday.

Chris Jones (45:45):

Did you?

Josh Mohrer (45:45):

I used Athletic Greens way back in the day before it was called AG1 or whatever. But now they come in individual packets, which, for my life and the messes that I make, is going to be great. And I ordered it yesterday. I’m excited about it, and we want to thank them for not being a sponsor of this podcast.

Chris Jones (46:02):

When I had a bunch of guys visit the ranch last month, because I was drinking it, they all gave me the whole, “What are you drinking, some green healthy drink?” And they gave me a hard time about it. And I was like, “Try it.” And then three out of the four that tried it sat down like, “Yeah, this is great. I’m going to order it.” So I was lightly promoting it, more like, “Here, take the Pepsi challenge. You try it. If you like it, then great.” So for them it was more about a free sample plus an endorsement. And they’re like, “Oh yeah, I’m ordering it tomorrow.”

Josh Mohrer (46:30):

Very nice.

Ben Grynol (46:31):

There you have it. If we’re looking at what we should do moving forward, it sounds like there’s no reason we should abort anything.

Scott Klein (46:39):

Maybe let me bring it home, because we’ve been measuring NPS so far and we just started with PMF pretty recently. I took the position that we need to stop measuring NPS, because I’m not sure that it’s actionable for us right now. So maybe we can talk about that. But I think more broadly, I would like to know: if we have a strategy of building an NPS unicorn, do we ever think we’re going to get to something like 90? Structurally, my opinion is that we’re not, and I want to be clear that that’s not necessarily a bad thing, because I don’t want it to push us into weird behavior, jumping through hoops to make it, when the nature of our product is just that we’re not going to get to a 90. So maybe we can talk about those two things-

Josh Mohrer (47:22):

What does NPS unicorn mean though? Obviously, it’s a number. It’s being one in a million in that regard, but I feel like it’s more of a philosophy that I think Ben could probably expound upon a little.

Ben Grynol (47:33):

So the way it came about was it was outlined as one of the goals or values of growth. And maybe it’s looked at as more of a literal metric as opposed to a philosophy. The philosophy is: what are we going to do to delight people? What are we going to do to do the right thing and have that be our north star? As opposed to saying … What is the opposite of an NPS unicorn? It’s hacking attention, doing all these undercutting things, and growing at all costs. There are all these things that are the opposite of being an NPS unicorn.

Ben Grynol (48:12):

And so the idea is, what can we philosophically do as a company, as a brand, from a growth standpoint, from an ops standpoint, from a product standpoint? What can we do that is delightful and steers people in the right direction? As opposed to … Let’s use product. There are probably things we could do that would make the product highly addictive, but that wouldn’t be the right thing to do, even though it might give us really good stickiness or retention metrics. So it’s, what do we need to do to actually impact a billion people in the world? What do we need to do to educate the world about metabolic health?

Ben Grynol (48:46):

That’s the philosophy behind this idea of NPS unicorn. It’s related to NPS, and it is about doing the right thing, but it’s not necessarily the number itself. We want to be thoughtful as an aspirational brand, as opposed to a company where everyone goes, “They’re big, but I don’t really like them.” And I think that’s where it all comes from.

Scott Klein (49:06):

So maybe the unicorn part, I’m curious about, because that term is very overloaded in tech-startup land. Is there room for us to call it something else, like an NPS shining star, for example? I don’t know what goes after NPS.

Ben Grynol (49:21):

No, no, we are determined. NPS semi-corn is what we’re going with.

Scott Klein (49:25):

Oh, that’s right. That’s right. I forgot about semi-corn. Look, I agree with the thrust of your comments. I think getting a team focused on building a delightful experience across an entire company is very admirable. My worry, and the initial thing I set out to do with my comments around this whole discussion, was that I don’t know that we’re ever going to get to 90, just because we deal with a health-focused product. And I don’t know that that’s a bad thing. So if it’s not 90, then what is it? What is the bellwether number where we can say, hey, if we clear this, we know we’re doing amazing? What are we trying to answer there? Maybe I’m still not clear on that.

Josh Mohrer (50:04):

For me, this relates to the idea, which I may have made up, of input and output metrics. There isn’t anything specific about the high number that we want. We want a business that people feel so great about that we get that output. So for me, it is more philosophical. It’s about building in a way that is delightful and long-term focused, not tricking people into making incremental purchases, not just thinking about short-term money stuff, but being long-term focused, giving away as much value as we do, and making people love us by being worthy of their love.

Josh Mohrer (50:44):

And then down the line, and I think it takes years to get there, you hope that that comes out in the scores. That if people feel that strongly about it, that it bears out in whatever feedback they give us, whether it’s numerical or constant usage of the app. But I like the term because it points people simply in that direction. This is about delighting folks for the long term, because it’s such an important thing.

Chris Jones (51:12):

Scott, on your question around being in the healthcare space: the NPS benchmark for healthcare companies last year was plus 38. So you may say, okay, being in this space is going to have a natural drag. Obviously, we’re not in the airline or cable space, but that could be the line: if we fall below general healthcare, that’s a flag to me. I like having these as some type of barometer, but I don’t think we should be solving for the number and for the metric. To your point, if we do a great job, these numbers will naturally get higher, by us doing great work and solving important problems so people have better health outcomes.
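
For reference, a minimal sketch of the standard NPS arithmetic checked against the healthcare benchmark Chris cites; the responses below are illustrative, not Levels data:

```python
# Hypothetical sketch of the standard NPS calculation on raw 0-10 responses,
# compared against the +38 healthcare benchmark mentioned in the conversation.
def nps(responses):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / n)

HEALTHCARE_BENCHMARK = 38  # figure cited above

scores = [10, 9, 9, 8, 7, 10, 6, 9, 10, 5, 8, 9]  # illustrative survey responses
our_nps = nps(scores)
if our_nps < HEALTHCARE_BENCHMARK:
    print(f"NPS {our_nps} is below the healthcare benchmark ({HEALTHCARE_BENCHMARK}): treat it as a flag.")
else:
    print(f"NPS {our_nps} clears the healthcare benchmark of {HEALTHCARE_BENCHMARK}.")
```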

Chris Jones (51:54):

And if we do that and we do it well, and we do it at a great price point, then these scores will come. Right now our biggest reason for detraction is price. And we know we’re doing lots of things to actually work on that, but if we made the product free tomorrow, our score would probably jump 30 points.

Scott Klein (52:11):

We’d have a true NPS unicorn. Well, let’s talk about a much more tactical question: where do we go from here? The big question for me going forward, now that we have both NPS and PMF, is, do we keep NPS around? Is there any reason for us to keep it around right now, when we’re still searching for the ability to cross the chasm effectively? I would argue no. I will steelman the other side of the argument: during the discussion that we had in Notion about NPS and PMF, a comment was brought up, and I liked it a lot, that NPS might be more of a way for us to know if we’re nailing the community, the identity, the belonging piece of Levels.

Scott Klein (53:01):

When I think about other products like Peloton, I will really go out of my way to talk about Peloton, because I feel like my identity is as a Peloton rider. That is part of who I am as Scott. And so I think that maybe when we think about NPS and PMF, it might be worth it for us in the short term to turn off NPS, because we have such a limited window of attention. People are very survey fatigued. We’ve got few shots to actually get some actionable feedback from people. And I think using those on NPS might not be the best use of our time right now.

Scott Klein (53:33):

And then as we layer on more of the social pieces as time goes on, bringing back NPS to specifically test for those might be a good bellwether for us or a good indicator for us on if we’re doing the social stuff, the identity stuff, the belonging stuff, if we’re doing it well. Chris, do you have any thoughts on whether we should keep it around or not? You look at these numbers more than I do.

Chris Jones (53:57):

I have an action item that I think I talked to you about, Scott. For weeks one, two and three, where I had, say, put the survey in the email, I’m going to completely use that as a PMF survey plus a couple of extra product questions to go deeper, and remove NPS from weeks one, two and three. That’s an experiment to learn what the drag on NPS is for those three weeks. Taking it out of the end-of-month program is more difficult, because that was custom code built by engineering, and swapping it would take engineering work.

Chris Jones (54:27):

So it’s not just me swapping out a survey link. It might be easier for us to say, let’s reduce the number of samples for NPS, ramp up PMF, focus on PMF, and still do both as we work on the things that take more engineering to actually swap out. Because David and team actually built their own survey and just pass the data through the API. It’s not a link to a survey that I can swap out like a Typeform.

Ben Grynol (54:52):

And keep getting deeper on product metrics.

Chris Jones (54:54):

And they also pass a bunch of data through it. The ability to link gender, age, and other things to it for slicing is super easy there, which I don’t get in weeks one, two and three, because there I just threw in a survey link versus a deep integration.
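
A minimal sketch of the kind of segment slicing that deeper integration enables; the field names and values are hypothetical, not the actual API payload:

```python
# Hypothetical sketch: when PMF answers arrive with demographic fields
# attached, they can be grouped by segment instead of sitting in an
# anonymous survey-link export.
from collections import defaultdict

responses = [  # illustrative rows, not real member data
    {"pmf_answer": "very_disappointed", "age_band": "25-34", "gender": "F"},
    {"pmf_answer": "somewhat_disappointed", "age_band": "25-34", "gender": "M"},
    {"pmf_answer": "very_disappointed", "age_band": "45-54", "gender": "M"},
    {"pmf_answer": "not_disappointed", "age_band": "45-54", "gender": "F"},
]

def pmf_by_segment(rows, segment_key):
    """Percent answering 'very disappointed' within each segment."""
    totals, very = defaultdict(int), defaultdict(int)
    for row in rows:
        seg = row[segment_key]
        totals[seg] += 1
        if row["pmf_answer"] == "very_disappointed":
            very[seg] += 1
    return {seg: round(100 * very[seg] / totals[seg]) for seg in totals}

print(pmf_by_segment(responses, "age_band"))
print(pmf_by_segment(responses, "gender"))
```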

Ben Grynol (55:11):

Got it.

Chris Jones (55:11):

So I think there’s short term, and then we think about long term, like what’s the right approach?

Scott Klein (55:23):

Chris, do you have a closet you can go to? It’s very serious. Like a closet with clothes hanging up. Go in there.

Chris Jones (55:29):

Now I’m in my bedroom on a bed with carpet on the floor.

Scott Klein (55:33):

Way better, way better.

Chris Jones (55:34):

Wow.

Scott Klein (55:35):

You sound way better.