TL;DR: Why INVEST in a user story that’s not valuable? Good question. Here’s a pragmatic how-to guide to making the Value magic happen in your PBIs.
Step 1 — Avoid Mission Statement Metrics
One of my favorite quotes about corporate mission statements comes from Vincent Flanders, who reminds us that most can be summarized generically as “All babies must eat.” Unfortunately, I think many articles I’ve read about the value proposition in user stories read similarly.
Mike Cohn at Mountain Goat Software expresses similar frustrations in his blog post ‘The Problems with Estimating Business Value.’
I want to avoid this by offering you a more hands-on and actionable approach to value-scoring individual user stories. Something lightweight, something you can easily add to your process to aid in the sizing and prioritization of your individual user stories. Something easy enough that you don’t find yourself abandoning the process a month later.
To get there let’s first talk analytics, because after all … without data … you’re just another person with an opinion. I can think of no better definition of Analytics than this quote by Ben Yoskovitz from chapter 2 of the book ‘Lean Analytics.’
“Analytics is about tracking the metrics that are critical to your business. Usually, those metrics matter because they relate to your business model — where money comes from, how much things cost, how many customers you have, and the effectiveness of your customer acquisition strategies.”
Step 2 — Define Valuable
I remember it like it was yesterday. The year was 2006. It was when I was leading product at SchoolDude.com (now Dude Industries) that I asked if our IIS logs could be formatted to adhere to the Apache Log format.
Why? Two things drove this. There weren’t as many easy and affordable analytics options available then as there are now. That, and because I found myself continually having to push back on well-meaning but not-so-data-driven requests … such as, and I’m not making this up, a request to extend advanced UX treatments to IE5 on the Mac.
Having whet my whistle exploring the earliest version of Google Analytics earlier that year, I took said IIS logs, pre-processed them with some hacky Perl, and then plugged them into the open source log analyzer AWStats.
The effort was well worth it, as the visualizations the tool offered helped identify what was valuable, and what wasn’t. Sure, the pie and bar charts of these lagging indicators were crudely rendered; however, they were a huge help not only in understanding how users were visiting our SaaS product, but in seeing which product areas were the most active … and when. #Value!
How do we define valuable? Start with analytics that can help us plan and prioritize PBIs on either working features on which we want to expand, or not-so-adopted features from which we need to pivot.
Step 3 — Identify the Metrics That Matter
With insights into how your customers use your product and why, we then focus on prioritizing those feature requests that align with what is critical for the business.
Unfortunately, you may already see some long-abandoned attempts to do just this. Usually this takes the form of JIRA tickets swimming in an alphabet soup of blank KPI fields bearing cryptic titles such as ARR, CAC, CCR, CPL, CPA, CTR, DAU, MAU, LTV, SNF, and NPS.
I think Analytics guru Avinash Kaushik sums up the product owner user story-level value proposition predicament I’m trying to paint here when he writes:
“We have access to more data than God wants anyone to have. Thus it is not surprising that we feel overwhelmed, and rather than being data driven we just get paralyzed. Life does not have to be that scary…”
Which metrics matter? Let’s start with a loose aggregation of any measures that help you plan and prioritize PBIs in a world of unlimited opportunities tempered by limited resources.
Step 4 — Loosely Aggregate Three
So Which Metrics Do I Loosely Aggregate? Three. Pick 3.
Remember, this blog post is about applying the INVEST principle to your user stories, in this case answering the question “Is it valuable?” And we’re asking this question with the idea that a product backlog item is “a promise to a conversation.”
So in the context of a conversation about sizing up a single feature of value, I’m going to suggest a variation on a page out of Gary Keller’s playbook: identify the one most important thing, then identify the next two most important things, all related to your company’s business model and together painting a contextual picture.
I believe that makes three (3). Feel free to model more as needed.
For me, I like to loosely aggregate some of the aforementioned executive metrics into three generalized areas:
- Creates New Business — this can range from new customers consuming existing products to new segments created via experimentation and validated learning. If you need an example of some measures to help frame this, head over to your company’s CRM and check out measures such as CAC, MG, CR, CPL, CTR, and so on.
- Expands The Current Business — here you may want to aggregate measures of adoption versus abandonment, and perhaps combine this data with up-sells against churn. Some example metrics that you could add to this mix include ARR, CRR, and CLV.
- Reduces Internal Operations Costs — if a penny saved is a penny earned, then let’s go ahead and consider technical debt retirement, reductions in cloud provider costs, and/or the awesomeness of automation.
Don’t like these? Here’s another three that worked better when I was delivering newspapers:
- Increases User Engagement — this is less about page hits and more about activity. For example, the ultimate measure could be the time and depth of participation by return visitors, informed by feedback from goals, funnels, and/or attribution (think PVBR, SNF, BE).
- Increases Customer Conversions — this could be ‘CR’ metrics reflecting the rate of trial users upgrading to paid, or about moving ‘NPS’ detractors to passive and passives to promoters.
- Increases Pre-emptive Issue Resolution — how many support calls have we avoided by adding deep learning to proactively detect anomalies? How many support calls did we close or avert by self-help or defensive web design?
So which metrics do we loosely aggregate? Any three general areas of measure that help move forward the sizing and prioritization of a promise to a conversation … especially those general areas that cover our organization’s business model.
Step 5 — Making the Magic Happen
So pragmatically, how do I put all this theory into action?
As the end game here is to more quickly and easily valuate and stack rank your product backlog items (a.k.a. user stories, JIRAs, PBIs, tickets, cards, what-have-you), I’d like to offer these ground rules so you don’t turn these ‘loosely guesstimated aggregates’ into time-consuming, over-ritualized science projects that ultimately lead to process abandonment:
- First and foremost, remember that this is a SWAG, not an exercise in acute data science, so please do not obsess over hyper-accuracy. If you do you introduce the risk of process abandonment by your team.
- Second, decide what works best for you in terms of value scoring. For example, you can score these with your team during estimations. Or you can score them via a “three amigos” gathering prior to refinement. Or you can go it alone as the stories come into existence. The end game is to assist, not define prioritization.
- Third, if you eventually find other measures that matter more, swap them in and out, but try and keep the count of value factors to 3, at least for starters. Less is more.
- Fourth, these measures can be a mix and/or loose aggregate of qualitative and quantitative measures. And again, they are not intended to be hyper-accurate, just a guide … or you risk abandoning their use.
Now let’s talk about the actual scoring.
- We start out with each individual metric defaulting to the neutral value of 3. We move it closer to 1 if the feature is a detractor. We move the individual metric value to 5 if it increases the measure.
- Stories whose three measures sum below a threshold, say sum(Engagement + Conversion + Resolution) <= 5, are probably candidates for removal from the backlog.
- Stories with a total value score between 6 and 9 indicate a potential need for further discussion with the team, other product owners, stakeholders, and/or customers to ensure we’re not missing something.
- Stories scoring 10 and up, well I shouldn’t have to say this, but size ’em already !-)
Some caveats.
- First and foremost, remember that this is a SWAG, not an exercise in acute data science, so please do not obsess over hyper-accuracy. Ergo my overuse of the phrase ‘loosely aggregate’ in this article.
- If you find yourself abandoning the process, make the value measures simpler, and re-apply them in a swift, happy, ‘guesstimation’ type of approach.
Nice to haves.
- Periodically, these scores should be revisited during the MVP process and then again after production delivery to see just how good we are at spotting winners.
- In a dream world, figure out which measures from your analytics tools can be piped back into the user story to further separate the stellar from the stinkers.
Oh hey, and just like story points, the more you do this as a team, the better the team will get at doing this.
So pragmatically, how do I make this happen? At the measure level: 1 = bah! 3 = meh. 5 = awesome. At the story level: 5 or less, dump it; 6 to 9, discuss it; 10 or more, size it.
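If it helps to see those rules in one place, here’s a minimal sketch of the triage logic in Python. The measure names (engagement, conversion, resolution) are just placeholders for whichever three loosely aggregated areas fit your business model, and the whole thing is a SWAG helper, not a data science tool:

```python
def triage(engagement=3, conversion=3, resolution=3):
    """Score each measure from 1 (detractor) to 5 (booster),
    defaulting to the neutral 3, then bucket the story by the sum."""
    for m in (engagement, conversion, resolution):
        if not 1 <= m <= 5:
            raise ValueError("each measure scores between 1 and 5")
    total = engagement + conversion + resolution
    if total <= 5:
        return total, "dump it"       # candidate for backlog removal
    if total <= 9:
        return total, "discuss it"    # promise to a conversation, literally
    return total, "size it"           # winner: send it to estimation

print(triage())           # neutral across the board -> (9, 'discuss it')
print(triage(1, 1, 2))    # detractor -> (4, 'dump it')
print(triage(5, 4, 3))    # winner -> (12, 'size it')
```

A spreadsheet column or a custom JIRA field works just as well; the point is that the arithmetic stays this simple.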
Are We There Yet?
Now I can imagine some of you with a data science and/or analytics background are about to light me up with a highly technical comment, and the MBAs with a stern email. I’m up for that, as I’m up for all such learning.
Just please keep in mind, what I’m talking about here is a SWAG at the story level that helps us answer the questions “Why are we doing this? Can it wait, or do we have to do it now?”
And like I said, just like story points, the more you do this as a team, the better the team will get at doing this.
Heck, as your team matures in this area, they may suggest dumping this practice as such value propositions become an ingrained and integral part of any and every user story. And I’m cool with that.
YMMV
And now for the useful references portion of this post, which includes some articles and books I’ve used to help me deal with the product owner PBI value proposition predicament.
- Best Web Metrics / KPIs for a Small, Medium or Large Sized Business by Avinash Kaushik
- Lean Canvas (based on Osterwalder’s Business Model Canvas for Lean) by Ash Maurya
- How to measure your success: The key marketplace metrics by Juho Makkonen
- The Problems with Estimating Business Value — Mike Cohn, Mountain Goat Software
- The ONE Thing: The Surprisingly Simple Truth Behind Extraordinary Results — Gary Keller
- How to Measure Anything: Finding the Value of Intangibles in Business — Douglas W. Hubbard
- Lean Analytics: Use Data to Build a Better Startup Faster — Alistair Croll & Benjamin Yoskovitz
- Why it pays to INVEST in your user stories, part 1 of 6: I is for Independent
- Why it pays to INVEST in your user stories part 2 of 6: Negotiable