It's important to develop a team-wide understanding of how to rank a feature against your prioritization system of choice so that you can trust your team to prioritize consistently. The easiest way to do that is to include rubrics in the descriptions of your drivers and scores so that everyone can access and align behind them.
In this article:
- How to add rubrics
- Rubric examples for drivers
- Rubric examples for prioritization scores
How to add rubrics
- Click on the name of a driver or prioritization score to open its Details panel. You can do this from anywhere the driver or score appears.
- You'll see a description field at the top of the Details panel, just below the name. In this field, insert your rubric explaining how the driver or score should be used.
If you use a common prioritization framework or need some inspiration, feel free to copy any of the rubrics below, paste them into your workspace, and use or modify them as needed.
Note: Most things in Productboard have description fields. Not all of them need rubrics, but drivers and scores benefit greatly from them, as do any fields where input is needed (like objectives, health updates, and custom fields).
Rubric examples for drivers
How many people will this feature affect within a given time period?
Use User Impact Score to assess volume.
🟦⬜️⬜️⬜️⬜️ 1-50 customers
🟦🟦⬜️⬜️⬜️ 50-100 customers
🟦🟦🟦⬜️⬜️ 100-250 customers
🟦🟦🟦🟦⬜️ 250-500 customers
🟦🟦🟦🟦🟦 All customers
To avoid bias toward features you’d use yourself, estimate how many people each customer journey will affect within a given period. For us, the question is: how many customers will this journey impact over a single quarter?
As much as possible, use real measurements from product metrics instead of pulling numbers out of thin air.
How to score
🟦⬜️⬜️⬜️⬜️ Very Low Reach: less than 30% of our customer base
🟦🟦⬜️⬜️⬜️ Low Reach: 30-50% of our customer base
🟦🟦🟦⬜️⬜️ Medium Reach: 50-70% of our customer base
🟦🟦🟦🟦⬜️ High Reach: 70-90% of our customer base
🟦🟦🟦🟦🟦 Very High Reach: 90% or more of our customer base
Example
- Customer Journey 1: 54% of our customers reach this point in XXX process each month. The reach is Medium.
- Customer Journey 2: Every customer who uses this feature each quarter will see this change. The reach is Very High.
- Customer Journey 3: This change will have a one-time effect on XXX existing customers, with no ongoing effect. The reach is Very Low.
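If you keep these estimates in a spreadsheet or script before entering them, the bands above translate into a simple lookup. Here's a minimal sketch in Python, assuming the percentage thresholds from "How to score"; the function name is illustrative, not a Productboard feature.

```python
def reach_score(pct_of_customer_base: float) -> int:
    """Map the share of the customer base reached (0-100) to a 1-5 reach score."""
    if pct_of_customer_base < 30:
        return 1  # Very Low Reach
    if pct_of_customer_base < 50:
        return 2  # Low Reach
    if pct_of_customer_base < 70:
        return 3  # Medium Reach
    if pct_of_customer_base < 90:
        return 4  # High Reach
    return 5      # Very High Reach

# Customer Journey 1 above: 54% of customers reach this point each month.
assert reach_score(54) == 3  # Medium Reach
```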
How much will this impact individual users?
Use User Impact Score to assess impact.
🟦⬜️⬜️⬜️⬜️ Minimal impact on our product
🟦🟦⬜️⬜️⬜️ Low impact on our product
🟦🟦🟦⬜️⬜️ Medium impact on our product
🟦🟦🟦🟦⬜️ High impact on our product
🟦🟦🟦🟦🟦 Massive impact on our product
Example
- Customer Journey 1: For each customer who sees it, this will have a huge impact. The impact is Massive.
- Customer Journey 2: This will have a smaller impact for each customer. The impact is Low.
- Customer Journey 3: This falls somewhere in between. The impact is Medium.
Create strategic opportunity by serving our clients in better ways.
High-scoring items create agility in our teams and offerings; outcomes are marketable and disruptive but aligned with our core strengths.
🟦⬜⬜⬜⬜ At least one client
🟦🟦⬜⬜⬜ Clients of a certain practice or region
🟦🟦🟦⬜⬜ Core or strategic clients
🟦🟦🟦🟦⬜ All clients of a business
🟦🟦🟦🟦🟦 All clients
High-scoring items reduce friction during the work day; outcomes reduce the cost of doing work without sacrificing quality.
🟦⬜⬜⬜⬜ Automates an unproven process
🟦🟦⬜⬜⬜ Partially automates a proven, specific process
🟦🟦🟦⬜⬜ Automates a proven, specific process
🟦🟦🟦🟦⬜ Partially automates a proven, global process
🟦🟦🟦🟦🟦 Fully automates a proven, global process
High-scoring items make the best use of our human and capital resources; outcomes are maintainable and transferable well into the future.
🟦⬜⬜⬜⬜ Introduces a new technology or skillset, but aligns with our philosophy and best practices
🟦🟦⬜⬜⬜ Compatible with some existing technologies, but requires large investment in technology and skills
🟦🟦🟦⬜⬜ Compatible with most existing technologies, but requires small investment in technology and skills
🟦🟦🟦🟦⬜ Compatible with most existing technologies and some of the team has competency
🟦🟦🟦🟦🟦 Existing technology or team competency
To curb enthusiasm for exciting but ill-defined ideas, factor in your level of confidence about your estimates. If you think a customer journey could have a huge impact but don’t have data to back it up, confidence lets you account for that.
Confidence is a percentage, and we use another multiple-choice scale to help avoid decision paralysis. Be honest with yourself: how much support do you really have for your estimates?
How to score
🟦⬜️⬜️⬜️⬜️ Very Low Confidence (0%): we have no data to back up this journey
🟦🟦⬜️⬜️⬜️ Low Confidence (30%): we only have partial data for each driver
🟦🟦🟦⬜️⬜️ Medium Confidence (50%): we are missing data for two drivers
🟦🟦🟦🟦⬜️ High Confidence (80%): we are missing data for one driver
🟦🟦🟦🟦🟦 Very High Confidence (100%): we have all the data we need
Example
- Customer Journey 1: We have quantitative metrics for reach, user research for impact, and an engineering estimate for effort. This journey gets a 100% confidence score = Very High Confidence.
- Customer Journey 2: We have data to support the reach and effort, but we’re unsure about the impact. This journey gets an 80% confidence score = High Confidence.
- Customer Journey 3: The reach and impact may be lower than estimated, and the effort may be higher. This journey gets a 50% confidence score = Medium Confidence.
- Customer Journey 4: We only have partial data on all three drivers. This journey gets a 30% confidence score = Low Confidence
- Customer Journey 5: We have absolutely no data. This journey gets a 0% confidence score = Very Low Confidence
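Because the scale is fixed, it also maps cleanly to a lookup table. Below is a small illustrative sketch, assuming the three RICE-style drivers (reach, impact, effort) from the examples; none of these names are Productboard APIs.

```python
# Confidence multipliers keyed by the rubric levels above.
CONFIDENCE = {
    "very_low": 0.0,   # no data to back up this journey
    "low": 0.3,        # only partial data for every driver
    "medium": 0.5,     # missing data for two drivers
    "high": 0.8,       # missing data for one driver
    "very_high": 1.0,  # all the data we need
}

# Customer Journey 2 above: data for reach and effort, unsure about
# impact -> one driver missing -> High Confidence.
print(CONFIDENCE["high"])  # 0.8
```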
Compliance
High-scoring items protect our organization and minimize liabilities; outcomes demonstrably meet standards and simplify how we reach them.
🟦⬜⬜⬜⬜ > $0 Liability
🟦🟦⬜⬜⬜ > $250k Liability
🟦🟦🟦⬜⬜ > $500k Liability
🟦🟦🟦🟦⬜ > $1m Liability
🟦🟦🟦🟦🟦 > $10m Liability
"How does this impact our business?" Try to look at it from different angles. If you have your goals or OKRs set you can see how much this task helps you to get closer to completing these.
🟦⬜⬜⬜⬜ Lowest Value
🟦🟦⬜⬜⬜ Low Value
🟦🟦🟦⬜⬜ Medium Value
🟦🟦🟦🟦⬜ High Value
🟦🟦🟦🟦🟦 Highest Value
"Is there a fixed deadline? Can we lose customers if it isn't completed within a certain time?" Tasks with closer deadlines have higher time criticality.
🟦⬜⬜⬜⬜ Lowest
🟦🟦⬜⬜⬜ Low
🟦🟦🟦⬜⬜ Medium
🟦🟦🟦🟦⬜ High
🟦🟦🟦🟦🟦 Highest
"Is there any negative impact if we delay?" This is a great companion to business value.
🟦⬜⬜⬜⬜ No Risk
🟦🟦⬜⬜⬜ Low Risk
🟦🟦🟦⬜⬜ Medium Risk
🟦🟦🟦🟦⬜ High Risk
🟦🟦🟦🟦🟦 Highest Risk
"How difficult is this to deliver?" You can try to estimate how long it would take to deliver this task if one person would work on this in months. But you can also use T-shirt sizing to make it a bit easier.
🟦⬜⬜⬜⬜ XS
🟦🟦⬜⬜⬜ S
🟦🟦🟦⬜⬜ M
🟦🟦🟦🟦⬜ L
🟦🟦🟦🟦🟦 XL
Rubric examples for prioritization scores
At OUR COMPANY, it's calculated with the following weights:
- Reach: X%
- Impact: X%
- Confidence: X%
The weighted total is then divided by the effort estimate to get the final RICE score.
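As a sketch of how that calculation might look in code: the weights below are hypothetical placeholders (the real X% values depend on your workspace), and the function is illustrative rather than Productboard's implementation.

```python
# Hypothetical driver weights; replace with your workspace's X% values.
WEIGHTS = {"reach": 0.4, "impact": 0.4, "confidence": 0.2}

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Combine the weighted drivers, then divide by the effort estimate."""
    weighted = (WEIGHTS["reach"] * reach
                + WEIGHTS["impact"] * impact
                + WEIGHTS["confidence"] * confidence)
    return weighted / effort

# Example: reach 4, impact 3, confidence 0.8, effort 2 (person-months).
print(round(rice_score(4, 3, 0.8, 2), 2))  # 1.48
```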
Prioritization is a difficult problem.
So why is prioritizing a product roadmap so difficult? Let me count the ways:
- It’s satisfying to work on pet ideas you’d use yourself, instead of projects with broad reach.
- It’s tempting to focus on clever ideas, instead of projects that directly impact your goals.
- It’s exciting to dive into new ideas, instead of projects that you’re already confident about.
- It’s easy to discount the additional effort that one project will require over another.
Even if you make it through this mental minefield intact, you’re left with the tough task of consistently combining and comparing these factors across all project ideas. Thankfully, you don’t have to do this in your head.
Using a scoring system for prioritization in product management certainly isn’t new. Systems designed to balance costs and benefits abound. But it can be hard to find one that lets you usefully compare different ideas in a consistent way.
In response, smart people at Intercom began developing their own scoring system for prioritization from first principles. After lots of testing and iteration, they settled on four factors, and a method for combining them called RICE.
WSJF (Weighted Shortest Job First) is a prioritization method that quantifies the financial impact of not finishing a task, or of implementing a solution later rather than sooner. It is typically used in organizations that follow SAFe.
The WSJF model produces a score for each initiative. Sorting initiatives by score gives the sequence in which to tackle them. The calculation takes the cost of delay as input and divides it by the job duration or job size.
WSJF Score = Cost of Delay / Estimated Size
Breaking this down:
Cost of Delay = Business Value + Time Criticality + Risk Reduction
So: WSJF Score = (Business Value + Time Criticality + Risk Reduction) / Estimated Size
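The formula translates directly into code. A minimal sketch, assuming the 1-5 rubric scores from the driver examples above are used as inputs:

```python
def wsjf_score(business_value: int, time_criticality: int,
               risk_reduction: int, job_size: int) -> float:
    """WSJF = (Business Value + Time Criticality + Risk Reduction) / Job Size."""
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

# Example: High value (4), Medium time criticality (3), Low risk (2), size M (3).
print(wsjf_score(4, 3, 2, 3))  # 3.0
```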
The MoSCoW method allows you to figure out what matters the most to your stakeholders and customers by classifying features into four priority buckets. MoSCoW (no relation to the city—the Os were added to make the acronym more memorable) stands for Must-Have, Should-Have, Could-Have, and Won’t-Have features.
- Must-Have: These are the features that have to be present for the product to be functional at all. They’re non-negotiable and essential. If one of these requirements or features isn’t present, the product cannot be launched, thus making it the most time-sensitive of all the buckets.
Example: “Users MUST log in to access their account.”
- Should-Have: These requirements are important to deliver, but they’re not time-sensitive.
Example: “Users SHOULD have an option to reset their password.”
- Could-Have: These are features that are neither essential nor important to deliver within a timeframe. They’re bonuses that would greatly improve customer satisfaction but don’t have a great impact if they’re left out.
Example: “Users COULD save their work directly to the cloud from our app.”
- Won’t-Have: These are the least critical features, tasks, or requirements (and the first to go when there are resource constraints). These features will be considered for future releases.
Example: “Users WON'T be able to share with a single click.”
The MoSCoW model is dynamic and allows room for evolving priorities. A feature that was considered a “Won’t-Have” can one day become a “Must-Have” depending on the type of product.
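If you track MoSCoW buckets outside Productboard, an ordering like the sketch below keeps Must-Haves at the top. This is purely illustrative; the feature names reuse the examples above.

```python
from enum import IntEnum

class MoSCoW(IntEnum):
    MUST = 4
    SHOULD = 3
    COULD = 2
    WONT = 1

backlog = {
    "log in to access account": MoSCoW.MUST,
    "option to reset password": MoSCoW.SHOULD,
    "save work to the cloud": MoSCoW.COULD,
    "one-click sharing": MoSCoW.WONT,
}

# Sort the backlog with Must-Haves first.
for feature, bucket in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{bucket.name}: {feature}")
```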
See also
- Use drivers and prioritization scores
- Understanding how notifications work in Productboard
- Follow entities of interest