
The KLAD method: a quick way to analyze an audience based on personas


Hi friends! In this article, I’m going to share my own method for quickly segmenting audiences and estimating market volume. I would put this method on a par with Guerrilla Testing. The article is simplified as much as possible, but there will still be some numbers. Here we go.

First, a little theory. If your product has a very vast audience, then by the central limit theorem the sample mean should follow a normal distribution. There are currently 7.9 billion people on the planet, and that’s a big sample.

Let me explain. What is the normal distribution? People as a population are roughly evenly spread across all their traits. Body temperature, people’s heights, car mileage, and IQ scores all roughly follow the normal distribution. Take extroverts and introverts, and of course ambiverts. Ambiverts are in the majority, while strongly pronounced extroverts and introverts are far fewer. The same pattern shows up in water quality, the cost of smartphones, people’s health, and the desire to work.

If your data does not follow a normal distribution, you can often bring it closer to normal, for convenience, by taking logarithms.
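As a quick illustration (a Python sketch with simulated data, not part of the original method), taking logarithms of a right-skewed sample pulls its long tail in, so the mean and median end up almost identical:

```python
import math
import random
import statistics

random.seed(42)

# Simulate a right-skewed sample (think incomes or restaurant checks):
# a log-normal distribution has a long right tail, so the mean sits
# well above the median.
raw = [random.lognormvariate(0, 1) for _ in range(10_000)]

# Taking logarithms compresses the tail; the result is approximately
# normal, with mean and median almost identical.
logged = [math.log(x) for x in raw]

print(round(statistics.mean(raw), 2), round(statistics.median(raw), 2))
print(round(statistics.mean(logged), 2), round(statistics.median(logged), 2))
```

On the logged data, the usual normal-distribution tooling (z-scores, standard deviations) works again.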

The whole audience is a quantitative description. But designers are more used to working with qualitative descriptions, such as personas. We all know that to identify needs, designers conduct dozens or hundreds of interviews with a representative, homogeneous sample and create a general portrait of the user. And one of the popular problems with personas: if our service has more than 1,000,000 users, obviously not all million of them will match the interview-based portraits exactly. And a large company with diversified products is guaranteed to have a diverse audience. Sometimes JTBD helps in such cases, but we all understand that it operates at too high a level of abstraction. The conclusion: in addition to all of the above, you have to divide the personas into subgroups. Without complicated ML.

Step 1 — Just for understanding

So we have a target portrait of the user, and we want to quickly break it down into many subgroups. Based on more than 10 years of practice, I have empirically concluded that any user cohort can be divided in steps of 7 from the median. That is: 0←8←15←22←29←36←43←50→57→64→71→78→85→92→100. We get 14 cohorts (0% to 8%, 8% to 15%, and so on). Our key persona can be divided into 14 different people with varying degrees of the same characteristics.
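The boundary sequence above is easy to generate programmatically. A minimal Python sketch:

```python
# The 14 cohort boundaries from the article: steps of 7 percentage
# points around the median (50), with 0 and 100 capping the tails.
boundaries = [0] + [8 + 7 * i for i in range(13)] + [100]
print(boundaries)
# [0, 8, 15, 22, 29, 36, 43, 50, 57, 64, 71, 78, 85, 92, 100]

# Pair consecutive boundaries to get the cohorts themselves.
cohorts = list(zip(boundaries, boundaries[1:]))
print(len(cohorts))  # 14 cohorts: (0, 8), (8, 15), ..., (92, 100)
```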

For example, our persona is a patriot of his country. But everyone’s level of patriotism is expressed differently, from 0 to 1. From the graph below, we can assume that the majority sits at the peak of the distribution (the segment from 0.36 to 0.64, 68.2% in total) and they are simply patriotic. And only the segment from 0.85 to 1 holds the blind patriots, but there are few of them.

Convert the percentages to z-scores. Remember that the total area under the normal curve is 1 (or 100% if we are working with percentages). Our sections 0←8←15←22←29←36←43←50→57→64→71→78→85→92→100 can be represented as 0←0.08←0.15←0.22←0.29←0.36←0.43←0.5→0.57→0.64→0.71→0.78→0.85→0.92→1. It is usually calculated between 0 and 1 (100%).

No one is stopping you from specifying your own percentages; you don’t have to stick to my 7%. In general, this approach is based on the 68%–95%–99.7% rule of thumb from mathematical statistics. A range of 68% around the center (34% each to the left and right) is one standard deviation; 95% is two standard deviations. In my case, we simply slice the cumulative distribution into steps of 7 percentage points. Converting each boundary to a z-score is what lets you count 7% left and 7% right from the center.
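You can verify the 68%–95%–99.7% rule yourself with Python’s standard library (a quick sketch, not part of the original method):

```python
from statistics import NormalDist

std = NormalDist()  # standard normal: mean 0, standard deviation 1

# The 68-95-99.7 rule: probability mass within 1, 2 and 3
# standard deviations of the mean.
for k in (1, 2, 3):
    share = std.cdf(k) - std.cdf(-k)
    print(f"within ±{k} sd: {share:.1%}")
# within ±1 sd: 68.3%
# within ±2 sd: 95.4%
# within ±3 sd: 99.7%
```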

Step 2 — Still just for understanding

Next, thanks to a function in Excel, we can easily find the inverse of the standard normal distribution for a given cumulative value: =NORMSINV(0.92) returns a z-score of 1.40507156. For every boundary:

  • 0.08 = -1.4050716
  • 0.15 = -1.0364334
  • 0.22 = -0.7721932
  • 0.29 = -0.5533847
  • 0.36 = -0.3584588
  • 0.43 = -0.1763742
  • 0.5 = 0
  • 0.57 = 0.17637416
  • 0.64 = 0.35845879
  • 0.71 = 0.55338472
  • 0.78 = 0.77219321
  • 0.85 = 1.03643339
  • 0.92 = 1.40507156

Step 3 — Finally, the Case

Take chefs, for example. There are millions of cooks in the world. Some work in Michelin-starred restaurants or as the personal chef of a billionaire; some work in a diner and simply pour a lot of ketchup on moldy bread. And our dev team is creating an app for all the chefs in the world (at least that’s what the C-level thinks).

We don’t have the time or, let’s face it, the budget for in-depth, complex, lengthy research. And we’re not going to do it, because deep and thorough research costs far more than launching a startup MVP. But we still need to cluster our audience somehow.

We need to find out the number of chefs in the world, so we just google it — that is, do desk research. How many working cooks are there globally? Most likely the Internet already has information on the total audience, or it’s not hard to estimate it yourself. Referring to ready-made studies like this one, we get 506k chefs. PAM (Potential Available Market) = 506,000.

We take a ready-made graph of the normal distribution, already divided into 14 fractions with percentages. What conclusions can we draw from it?

So, we have 38.2% of the most ordinary cooks (highlighted in blue in the chart above). After the interviews, we conclude that these guys work in ordinary cafés and can cook. This is definitely the audience that came to us for interviews, and our persona is formed from them, because they are far more numerous. And now we realize that we’re reaching less than half of the entire potential audience, and all our valuable interview insights cover less than half of the PAM. Which means we can earn from less than half of the PAM — only 193,292 people.
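The arithmetic behind that number is straightforward (the PAM and the 38.2% share are taken from the article; this sketch just checks them):

```python
pam = 506_000          # total chefs found via desk research (PAM)
center_share = 0.382   # the "ordinary cooks" peak from the chart

reachable = round(pam * center_share)
print(reachable)       # 193292 chefs, i.e. less than half of the PAM

# At one dollar per chef per month, the realistic ceiling:
print(f"${reachable:,} per month instead of ${pam:,}")
```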

We pitched investors that 506,000 cooks would pay us a dollar each and we’d get half a million dollars per month; now we have less than $200,000 at best. Oops.

Let’s look at the graph further. Another 15% are slightly better and another 15% slightly worse than our core audience. Most likely a couple of these guys dropped by for interviews and we treated them as an unrepresentative sample, because their answers differed slightly from the other 38.2%. But not necessarily.

And then it gets more interesting: the 9.2% range on the right of the chart are genuinely talented guys who make dishes of noticeably better-than-average quality. Chefs in big restaurants, maybe even local stars. Conversely, there is the 9.2% of chefs who generally frustrate their diners and don’t necessarily cook better than your 15-year-old daughter (those in the 0.29–0.36 range).

Then it gets even more interesting: 4.4% of chefs are already very good. They make works of art, and eating at their establishments is expensive as an experience and most likely in money too. Further, 1.7% have an amazing level; they are the very people who single-handedly change the industry and create new knowledge. And the last cohorts, the 0.5% and 0.1%, are the top stars like Alain Ducasse, Eric Frechon, Paul Bocuse, Georges Blanc. Truly haute cuisine, opinion leaders: the same guys who work as personal chefs for the richest people and own their own restaurants with an average check of $300–$400. We could split the 0.5% and 0.1% into two separate cohorts, but that’s usually too much detail.

Unfortunately, the same thing works in the opposite direction: 4.4% + 1.7% + 0.5% + 0.1% = 6.7% of cooks are probably doing their jobs as poorly as possible. Moldy bread instead of moldy cheese, over-leavened pizza, collapsing pasta with mustard, soup without a bay leaf (because clients don’t eat it!). I’d assume this might be an army mess hall. They probably shouldn’t even be included in the target audience of our app. Or maybe they are the ones who need training courses in our app. Now it’s our responsibility to decide what to do with these guys.

But let’s go back to the best and do a little fact-checking. Our total audience is 506,000, so 1% = 5,060. Googling: as of November 2020, there are 2,651 chefs with Michelin stars in the world. Assume that just as many have not yet received stars but are potentially worthy, which brings us to roughly 5,060, or have chosen a career as a private chef, like Andre Rush. The question we need to ask ourselves: what is the value of our app for these 5,000+ chefs, and do we need them at all? Will they cover the cost of building features specifically for them?
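The fact-check arithmetic, assuming (as the 5,060 figure implies) that the top cohort here is taken as 1% of the 506,000 PAM:

```python
pam = 506_000
top_chefs = round(pam * 0.01)   # the top 1% of the audience
print(top_chefs)                # 5060

michelin_starred = 2_651        # worldwide, as of November 2020
potentially_worthy = top_chefs - michelin_starred
print(potentially_worthy)       # 2409 star-level chefs without stars (yet)
```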

So we were able to segment our persona into several groups separated by skill level. With this, it is already easier for us to form JTBD. And most importantly, we got a lot of new input for creating hypotheses, continuing research, and analyzing the market.

Warning

My method (I call it KLAD), like any quick-and-dirty method of hypothesis testing, does not give a 100% guarantee of the result. It can fail if your audience does not follow the normal distribution. That’s what happened to me recently: front-end developers did not fit the normal distribution. After the dramatic complication of front-end development over the last 5 years, there are fewer good specialists, while many people from third-world countries have become junior-level specialists. So the distribution has shifted to the left: a lot of promising ones, not many who are already good. You can usually notice this if you are an expert in the subject area you’re building the application for, or through interviews with such experts. Don’t be data-stuck, be data-driven.


The KLAD method: a quick way to analyze an audience based on personas was originally published in Prototypr on Medium.