Exotic Metrics: a Parable for Product People

Exotic Metrics is a disease that affects many product teams and companies.

Exotic Metrics make us feel smart, but if you really zoom out and are honest with yourself, they make us act pretty stupid. The disease is highly contagious and chronic.

Let me explain, with a story.

--

Say you are working on a product that helps startups.

Maybe it helps them with their financial operations, maybe it helps them set up & manage bank accounts more easily, maybe it helps them hire high-quality offshore engineering talent, or something else like that.

You now need to define a metric to measure your success / impact / progress towards your mission.

--

Say you come up with this metric:

Number of startups that are actively using our product.

Sounds reasonable? Sure, when it's just us two, it sounds eminently reasonable.

You now take this idea to a colleague, Bob.

This is where the trouble starts.

--

Bob, genuinely wanting to help, says:

This is a fine metric, but I am concerned it can be gamed.

You see, the definition of a "startup" can be very vague.

What if you / Sales just sign up a bunch of solo practitioners who aren't really startups?

You could inflate this number quite a lot, hit the OKR out of the park, and still miss the spirit of our mission.

--

Bob sounds smart. Bob sounds genuine. And you certainly don't want to come across as someone who'd game the system.

So you come up with a revised metric:

Number of venture-backed startups that are using our solution.

--

Sounds very reasonable. Only now you'll need to add some complexity to the computation of your metric: you'll have to look up external data sources for VC funding.

Oh, but there isn't one great data source for this, so you do some Google research and identify 3-4 data sources that together should be fairly comprehensive.

Only problem: they publish data in different formats, at different frequencies, and with varying accuracy. Sometimes the data for the same company is contradictory.

--

Don’t worry though. Your data science counterpart, Dana, loves these kinds of data problems, so she gets to work on aggregating a bunch of external data sources for VC funding.

Alas, some of these data sources require licensing. Yet another problem to solve.
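If you're wondering what "aggregating and normalizing" actually entails, here is a minimal sketch of the reconciliation logic Dana is signing up for. The source names, fields, and trust ordering are all invented for illustration; a real pipeline would be far messier.

```python
# Minimal sketch: reconcile VC funding records from several hypothetical sources.
# Source names, fields, and the trust ordering are invented for illustration.
from dataclasses import dataclass

@dataclass
class FundingRecord:
    company: str
    total_raised_usd: float
    source: str

# Higher index = more trusted when sources contradict each other.
SOURCE_TRUST = ["scraped_blog", "free_api", "licensed_db"]

def reconcile(records: list[FundingRecord]) -> dict[str, float]:
    """Keep one funding figure per company, preferring more-trusted sources."""
    best: dict[str, FundingRecord] = {}
    for rec in records:
        current = best.get(rec.company)
        if current is None or (
            SOURCE_TRUST.index(rec.source) > SOURCE_TRUST.index(current.source)
        ):
            best[rec.company] = rec
    return {name: rec.total_raised_usd for name, rec in best.items()}

# Two sources disagree about Acme; the licensed database wins.
records = [
    FundingRecord("Acme", 12_000_000, "free_api"),
    FundingRecord("Acme", 15_000_000, "licensed_db"),
    FundingRecord("Bolt", 3_500_000, "scraped_blog"),
]
print(reconcile(records))  # {'Acme': 15000000, 'Bolt': 3500000}
```

And that sketch conveniently assumes every source agrees on company names, currencies, and update dates. They won't.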

--

But not to worry. You’re a PM. PM stands for Problem-solving Machine, doesn’t it? No one actually knows, but it kinda sounds cool so we’ll go with it.

You look up the data licensing requirements, and then engage your Biz Dev counterpart to go figure out the best terms for being able to get access to this data.

--

You start to worry that this is starting to get a bit out of hand.

Scope is creeping.

But you reason: well, this is our North Star Metric. And you remember reading somewhere that a team without a great NSM is like a ship without a rudder.

And you really don’t want your CEO to point out flaws in your metric at next month’s QBR.

--

So you give the green light to Biz Dev, while Dana embarks on a 2-week project to normalize the data from these various sources. If it's worth doing, it's worth doing right, you reassure yourself.

You are scheduled to meet your manager and skip manager this Friday, so you can’t wait to give them an update on your beautiful metric.

--

Friday arrives. You present this metric as your team’s North Star Metric. You are confident that they will be impressed. Except, your skip manager, Beth, doesn’t seem quite convinced.

She says, very encouragingly:

This is a reasonable metric.

Although one issue I see is that we are not discriminating on the type of startups that we are helping.

You see, what we really want is for our customers – these startups – to succeed more than they would without our product.

So I think a real North Star Metric should reflect something about the quality / success of the startups we serve.

Your manager is quick to chime in:

I agree with Beth.

Perhaps our North Star Metric should be “startups that have been funded by Tier 1 VCs” since that can be a proxy for quality.

Or better still, how about just using total valuation? Since that is ultimately what signals how successful our customers are.

You now feel compelled to play along, so you say:

That makes sense.

Perhaps the right metric is “Total valuation of startups that use our product”.

You see nods of approval.

Towards the end of the conversation, your manager says:

I like this metric. For our next 1:1, I’d love to see what goals and targets you’d set for this metric over the next 3 months, 6 months, and 12 months.

You nod.

--

You walk away happy that you were able to get consensus, but nervous about how you’re actually going to measure this.

You debrief with your data science counterpart Dana and share this revised metric. Dana looks very excited about this additional complexity and reassures you that everything will be fine.

--

The big day is here. You do the warrior pose in the bathroom stall just before your QBR and then walk into the conference room, beaming with confidence.

At the right time, you share the proposed North Star Metric for this important initiative. You make it a point to clarify to your CEO that you wanted a metric that:

1) is not game-able

2) reflects impact on your customers

3) is a stable measure of success

Your CEO’s eyes light up.

CEO says:

I like this metric. Too often teams propose metrics that don’t really reflect true customer impact.

I just have one question: since startup valuations follow the power law, does it make sense to narrow the metric such that it reflects the impact of the top startups we serve?

Beth is quick to chime in:

That's a great point. Of course, we want all our customers to succeed wildly.

But I think a metric that focuses on the impact on our top 20 startups or top 10% of startups would further incentivize the team to build for our biggest customers.

Everyone nods in agreement.

You feel like you have to nod too.

So you do.
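For what it's worth, the metric everyone just nodded at is easy enough to state in code; it's everything around it that hurts. A minimal sketch, with made-up valuations:

```python
# Minimal sketch: "total valuation of the top 10% of startups we serve".
# The valuations below are made up for illustration.

def top_decile_valuation(valuations_usd: list[float]) -> float:
    """Sum the valuations of the top 10% of startups (at least one)."""
    ranked = sorted(valuations_usd, reverse=True)
    k = max(1, len(ranked) // 10)
    return sum(ranked[:k])

valuations = [5e6, 12e6, 40e6, 2.5e9, 80e6, 3e6, 7e6, 150e6, 9e6, 1e6]
print(top_decile_valuation(valuations))  # 2500000000.0
```

One hypothetical outlier dominates the result, which is exactly the power-law behavior the CEO pointed out.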

--

The group then discusses some targets for this metric. Your eng counterpart Eddie is quick to point out that the targets will seem small in the near term because your customers will take some time to grow their revenues & valuation. Everyone agrees.

Meeting ends, with all participants feeling that sense of smug satisfaction that is all too familiar in the corporate world: we are all smart, we are being smart, we are super-thoughtful about the metrics we set.

--

Of course, what follows after this should also be painfully obvious, because we’ve all seen some versions of this play out (even if we haven’t had the courage to accept it):

- This metric is very difficult to calculate

- So dozens of assumptions have to be made in its calculation

- Which in turn reduces the accuracy of the metric

- This metric is very difficult to move in the near-term

- So targets get set somewhat randomly

- Some quarters, you drastically outperform

- You rejoice. You’re doing it right!

- You make sure everyone knows that.

- Some quarters, you underperform

- So you ask for more resources and x-fn alignment at the next QBR

- You create a fancy dashboard to track the metric

- But now no one looks at that dashboard

- Because everyone knows there will be nothing new to see, nothing insightful to conclude, nothing you can actually act on

- You keep setting new quarterly targets for the metric, but deep down no one on the team actually has any confidence that the projects you’re prioritizing will help hit those targets

--

At some point, you say enough is enough.

You go back to the drawing board with your new eng counterpart.

And you jointly arrive at a better metric.

Something that reflects proximate customer adoption, not approximate customer impact.

Something that is a leading indicator, not a lagging indicator.

And here it is:

Number of startups actively using our product.

--

You present this new proposal to your manager and your skip manager.

They agree that the old metric is the wrong one.

The next 30 minutes are spent poking holes in your proposed metric.

At some point, your manager says:

My main concern is that this metric can be gamed. We need a metric that represents the impact of our product.

How about:

% of startups with a >20% cost savings due to our product.

You see your skip manager nod.

Your gut knows this is not the right metric.

But your head nods, almost involuntarily.

The chronic disease of Exotic Metrics strikes again.

And so the suffering repeats.

It is your destiny.

THE END.
