At its core, the MoSCoW method is simply a prioritization framework that can be applied to any kind of situation or project, but it works best when a large number of tasks need to be ruthlessly whittled down into a prioritized and achievable to-do list. The core aim of the process is to classify tasks into four buckets: Must, Should, Could and Won't. As you can probably fathom, Must is the highest priority bucket and Won't is the lowest, which is also where the funny capitalization in the term 'MoSCoW' derives from.

One of the primary benefits of a MoSCoW exercise is that it forces hard decisions to be made regarding which direction a digital product project will take. Indeed, the process is usually the first time a client has been asked to really weigh up which functions are absolutely fundamental to the product (Must), which are merely important (Should) and which are just nice-to-haves (Could). This can make the MoSCoW method challenging, but also incredibly rewarding.

It's not uncommon for there to be hundreds of user stories at this stage of a project, as they cover every aspect of what a user or admin will want to do with the digital product. With so many stories to keep track of, it helps to group them into sets. For example, you may want to group all the stories surrounding checkout, or onboarding, into one set.

When we run a MoSCoW process, we use the following definitions:

Must – These stories are vital to the function of the digital product. If any of them were removed or not completed, the product would not function.

Should – These stories make the product better in important ways, but are not vital to its function. We would like to add them to the MVP build, but we'll only start working on them once all the Must stories are complete.

Could – These stories would be nice to have, but do not add lots of extra value for users. They are often related to styling or 'finessing' a product.
Won’t – These stories or functions won’t be considered at this stage as they are either out of scope or do not add value.
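As a rough sketch of the grouping step described above, you might tag each user story with its bucket and pull out the MVP scope (Must stories first, then Should). The story names and buckets here are invented purely for illustration:

```python
# Bucket ranks: lower number = higher priority.
BUCKET_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

# Hypothetical backlog of (story, bucket) pairs.
stories = [
    ("Checkout: pay by card", "Must"),
    ("Onboarding: email verification", "Must"),
    ("Checkout: saved addresses", "Should"),
    ("Dark-mode styling", "Could"),
    ("Multi-currency support", "Won't"),
]

def mvp_backlog(stories):
    """Return stories ordered Must -> Should, dropping Could and Won't."""
    in_scope = [s for s in stories if s[1] in ("Must", "Should")]
    return sorted(in_scope, key=lambda s: BUCKET_ORDER[s[1]])
```

Because `sorted` is stable, Must stories keep their original relative order, which matches the rule above: Should stories are only started once the Musts are done.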
The first two slides of the template are similar in design and structure. These slides can be used to provide general information to the team about the client’s needs. The slides will be useful for the product owner, development team, and scrum master. The next slide groups user stories into vertical columns. You can also set a progress status for each user story. The last slide gives you the ability to specify the time spent on each user story. After summing up the time for each group, the team can understand how long it will take them to complete each group. All slides in this template are editable based on your needs. The template will be useful to everyone who uses the Agile method in their work.
What is MoSCoW prioritization?
MoSCoW prioritization, also known as the MoSCoW method or MoSCoW analysis, is a popular prioritization technique for managing requirements.
The acronym MoSCoW represents four categories of initiatives: must-have, should-have, could-have, and won’t-have, or will not have right now. Some companies also use the “W” in MoSCoW to mean “wish.”
Software development expert Dai Clegg created the MoSCoW method while working at Oracle. He designed the framework to help his team prioritize tasks during development work on product releases.
You can find a detailed account of using MoSCoW prioritization in the Dynamic System Development Method (DSDM) handbook. But because MoSCoW can prioritize tasks within any time-boxed project, teams have adapted the method for a broad range of uses.
Before running a MoSCoW analysis, a few things need to happen. First, key stakeholders and the product team need to get aligned on objectives and prioritization factors. Then, all participants must agree on which initiatives to prioritize.
At this point, your team should also discuss how they will settle any disagreements in prioritization. If you can establish how to resolve disputes before they come up, you can help prevent those disagreements from holding up progress.
Finally, you’ll also want to reach a consensus on what percentage of resources you’d like to allocate to each category.
With the groundwork complete, you may begin determining which category is most appropriate for each initiative. But, first, let’s further break down each category in the MoSCoW method.
MoSCoW prioritization categories.
As the name suggests, this category consists of initiatives that are “musts” for your team. They represent non-negotiable needs for the project, product, or release in question. For example, if you’re releasing a healthcare application, a must-have initiative may be security functionalities that help maintain compliance.
The “must-have” category requires the team to complete a mandatory task. If you’re unsure whether something belongs in this category, ask yourself whether the product would work, and the release would still be useful, without it.
If the product won’t work without an initiative, or the release becomes useless without it, the initiative is most likely a “must-have.”
Should-have initiatives are just a step below must-haves. They are essential to the product, project, or release, but they are not vital. If left out, the product or project still functions. However, the initiatives may add significant value.
“Should-have” initiatives are different from “must-have” initiatives in that they can get scheduled for a future release without impacting the current one. For example, performance improvements, minor bug fixes, or new functionality may be “should-have” initiatives. Without them, the product still works.
Another way of describing “could-have” initiatives is nice-to-haves. “Could-have” initiatives are not necessary to the core function of the product. However, compared with “should-have” initiatives, they have a much smaller impact on the outcome if left out.
So, initiatives placed in the “could-have” category are often the first to be deprioritized if a project in the “should-have” or “must-have” category ends up larger than expected.
One benefit of the MoSCoW method is that it places several initiatives in the “will-not-have” category. The category can manage expectations about what the team will not include in a specific release (or another timeframe you’re prioritizing).
Placing initiatives in the “will-not-have” category is one way to help prevent scope creep. If initiatives are in this category, the team knows they are not a priority for this specific time frame.
Some initiatives in the “will-not-have” group will be prioritized in the future, while others are not likely to happen. Some teams decide to differentiate between those by creating a subcategory within this group.
Although Dai Clegg developed the approach to help prioritize tasks around his team’s limited time, the MoSCoW method also works when a development team faces limitations other than time. For example:
What if a development team’s limiting factor is not a deadline but a tight budget imposed by the company? Working with the product managers, the team can use MoSCoW first to decide on the initiatives that represent must-haves and the should-haves. Then, using the development department’s budget as the guide, the team can figure out which items they can complete.
A cross-functional product team might also find itself constrained by the experience and expertise of its developers. If the product roadmap calls for functionality the team does not have the skills to build, this limiting factor will play into scoring those items in their MoSCoW analysis.
Cross-functional teams can also find themselves constrained by other company priorities. The team wants to make progress on a new product release, but the executive staff has created tight deadlines for further releases in the same timeframe. In this case, the team can use MoSCoW to determine which aspects of their desired release represent must-haves and temporarily backlog everything else.
Although many product and development teams have adopted MoSCoW prioritization, the approach has potential pitfalls. Here are a few examples.
One common criticism of MoSCoW is that it does not include an objective methodology for ranking initiatives against each other. Your team will need to bring that methodology to your analysis; MoSCoW only works to ensure that your team applies a consistent scoring system to all initiatives.
Pro tip: One proven method is weighted scoring, where your team measures each initiative on your backlog against a standard set of cost and benefit criteria. You can use the weighted scoring approach in ProductPlan’s roadmap app.
To know which of your team’s initiatives represent must-haves for your product and which are merely should-haves, you will need as much context as possible.
For example, you might need someone from your sales team to let you know how important (or unimportant) prospective buyers view a proposed new feature.
One pitfall of the MoSCoW method is that you could make poor decisions about where to slot each initiative unless your team receives input from all relevant stakeholders.
Because MoSCoW does not include an objective scoring method, your team members can fall victim to their own opinions about certain initiatives.
One risk of using MoSCoW prioritization is that a team can mistakenly think MoSCoW itself represents an objective way of measuring the items on their list. They discuss an initiative, agree that it is a “should have,” and move on to the next.
But your team will also need an objective and consistent framework for ranking all initiatives. That is the only way to minimize your team’s biases in favor of items or against them.
MoSCoW prioritization is effective for teams that want to include representatives from the whole organization in their process. You can capture a broader perspective by involving participants from various functional departments.
Another reason you may want to use MoSCoW prioritization is it allows your team to determine how much effort goes into each category. Therefore, you can ensure you’re delivering a good variety of initiatives in each release.
If you’re considering giving MoSCoW prioritization a try, here are a few steps to keep in mind. Incorporating these into your process will help your team gain more value from the MoSCoW method.
Remember, MoSCoW helps your team group items into the appropriate buckets—from must-have items down to your longer-term wish list. But MoSCoW itself doesn’t help you determine which item belongs in which category.
You will need a separate ranking methodology, and there are many to choose from.
For help finding the best scoring methodology for your team, check out ProductPlan’s article: 7 strategies to choose the best features for your product.
To make sure you’re placing each initiative into the right bucket—must-have, should-have, could-have, or won’t-have—your team needs context.
At the beginning of your MoSCoW method, your team should consider which stakeholders can provide valuable context and insights. Sales? Customer success? The executive staff? Product managers in another area of your business? Include them in your initiative scoring process if you think they can help you see opportunities or threats your team might miss.
MoSCoW gives your team a tangible way to show your organization how you prioritize initiatives for your products or projects.
The method can help you build company-wide consensus for your work, or at least help you show stakeholders why you made the decisions you did.
Communicating your team’s prioritization strategy also helps you set expectations across the business. When they see your methodology for choosing one initiative over another, stakeholders in other departments will understand that your team has thought through and weighed all decisions you’ve made.
If any stakeholders have an issue with one of your decisions, they will understand that they can’t simply complain—they’ll need to present you with evidence to alter your course of action.
Related Terms
2×2 prioritization matrix / Eisenhower matrix / DACI decision-making framework / ICE scoring model / RICE scoring model
Lisa Dziuba
Updated: August 28, 2024 - 26 min read
One of the most challenging aspects of Product Management is prioritization. If you’ve transitioned to Product from another discipline, you might already think you know how to do it. You choose which task to work on first, which deadline needs to be met above all others, and which order to answer your emails in.
Priorities, right? Wrong!
In product management, prioritization is on a whole other level! The engineers are telling you that Feature A will be really cool and will take you to the next level. But a key stakeholder is gently suggesting that Feature B should be included in V1. Finally, your data analyst is convinced that Feature B is completely unnecessary and that users are crying out for Feature C.
Who decides how to prioritize the features? You do.
Prioritization is an essential part of the product management process and product development. It can feel daunting, but for a successful launch, it has to be done.
Luckily, a whole community of Product experts has come before you. They’ve built great things, including some excellent prioritization frameworks!
Here’s what we’ll cover in this article:
The benefits and challenges of prioritization
The best prioritization frameworks and when to use them
How real Product Leaders implement prioritization at Microsoft, Amazon, and HSBC
Common prioritization mistakes
Frequently Asked Questions
Before we dive into the different prioritization models, let’s talk about why prioritization is so important and what holds PMs back.
Enhanced focus on key objectives: Prioritization allows you to concentrate on tasks that align closely with your product's core goals. For example, when Spotify prioritized personalized playlists, it significantly boosted user engagement, aligning perfectly with its goal of providing a unique user experience.
Resource optimization: You can allocate your team’s time and your company’s resources more efficiently. Focusing on fewer, more impactful projects can lead to greater innovation and success.
Improved decision-making: When you prioritize, you're essentially making strategic decisions about where to focus efforts. This clarity in decision-making can lead to more successful outcomes, avoiding the pitfalls of cognitive biases like recency bias and the sunk cost fallacy.
Strategic focus: Prioritization aligns tasks with the company's broader strategic goals, ensuring that day-to-day activities contribute to long-term objectives.
Consider the example of Apple Inc. under the leadership of Steve Jobs. One of Jobs' first actions when he returned to Apple in 1997 was to slash the number of projects and products the company was working on.
Apple refocused its efforts on just a handful of key projects. This ruthless prioritization allowed Apple to focus on quality rather than quantity, leading to the development of groundbreaking products like the iPod, iPhone, and iPad.
Stress reduction: From customer interactions to executive presentations, the responsibilities of a PM are vast and varied, often leading to a risk of burnout if not managed adeptly. For more on this, check out this talk by Glenn Wilson, Google Group PM, on Play the Long Game When Everything Is on Fire.
Managing stakeholder expectations: Different stakeholders may have varying priorities. For instance, your engineering team might prioritize feature development , while marketing may push for more customer-centric enhancements. Striking a balance can be challenging.
Adapting to changing market conditions: The market is dynamic, and priorities can shift unexpectedly. When the pandemic hit, Zoom had to quickly reprioritize to cater to a massive surge in users, emphasizing scalability and security over other planned enhancements.
Dealing with limited information: Even in the PM & PMM world, a strong data-driven team is more often a dream than a reality. And even when there is data, you can’t know everything. Amazon’s decision to enter the cloud computing market with AWS was initially seen as a risky move, but they prioritized the gamble and it paid off spectacularly.
Limited resources : Smaller businesses and startups don’t have the luxury of calmly building lots of small features, hoping that some of them will improve the product. The less funding a company has, the fewer mistakes (iterations) it can afford to make when building an MVP or figuring out Product-Market Fit.
Bias: If you’ve read The Mom Test, you probably know that people will lie about their experience with your product to make you feel comfortable. This means that product prioritization can be influenced by biased opinions, leaving “nice-to-have” features at the top of the list.
Lack of alignment: Different teams can have varying opinions as to what is “important”. When these differences aren’t addressed, product prioritization can become a fight between what brings Product-Led Growth, more leads, higher Net Promoter Score, better User Experience, higher retention, or lower churn. Lack of alignment is not the last issue startups face when prioritizing features.
There are a lot of prioritization models for PMs to employ. While it’s great to have so many tools at your disposal, it can also be a bit overwhelming. You might even ask yourself which prioritization framework you should…prioritize.
In reality, each model is like a different tool in your toolbox. Just like a hammer is better than a wrench at hammering nails, each model is right depending on the type of prioritization task at hand. The first step is to familiarize yourself with the most trusty frameworks out there. So, without further ado, let’s get started.
Known as the MoSCoW Prioritization Technique or MoSCoW Analysis , MoSCoW is a method used to easily categorize what’s important and what’s not. The name is an acronym of four prioritization categories: Must have, Should have, Could have, and Won’t have .
It’s a particularly useful tool for communicating to stakeholders what you’re working on and why.
According to MoSCoW, all the features go into one of four categories:
Must Have: These are the features that will make or break the product. Without them, the user will not be able to get value from the product or won’t be able to use it at all. These are the “painkillers” that form the why behind your product, and they are often closely tied to how the product will generate revenue.
Should Have: These are important features but are not needed to make the product functional. Think of them as your “second priorities”. They could be enhanced options that address typical use cases.
Could Have: Often seen as nice-to-have items; not critical, but they would be welcomed. These are “vitamins”, not painkillers. They might be integrations and extensions that enhance users’ workflows.
Won’t Have: Similar to the “money pit” in the impact–effort matrix framework, these are features that are not worth the time or effort they would require to develop.
Pros of using this framework: MoSCoW is ideal when looking for a simplified approach that can involve the less technical members of the company and one that can easily categorize the most important features.
Cons of using this framework: It is difficult to set the right number of must-have features and, as a result, your Product Backlog may end up with too many features that tax the development team.
Developed by the Intercom team, the RICE scoring system compares Reach, Impact, Confidence , and Effort.
Reach centers the focus on the customers by thinking about how many people will be impacted by a feature or release. You can measure this using the number of people who will benefit from a feature in a certain period of time. For example, “How many customers will use this feature per month?”
Now that you’ve thought about how many people you’ll reach, it’s time to think about how they’ll be affected. Think about the goal you’re trying to reach. It could be to delight customers (measured in positive reviews and referrals) or reduce churn.
Intercom recommends a multiple-choice scale:
3 = massive impact
2 = high impact
1 = medium impact
0.5 = low impact
0.25 = minimal impact
A confidence percentage expresses how secure team members feel about their assessments of reach and impact. This de-prioritizes features that are too risky.
Generally, anything above 80% is considered a high confidence score, and anything below 50% is unqualified.
Considering effort helps balance cost and benefit. In an ideal world, everything would be high-impact/low-effort, although this is rarely the case. You’ll need information from everyone involved (designers, engineers, etc.) to calculate effort.
Think about the amount of work one team member can do in a month, which will naturally be different across teams. Estimate how much work it’ll take each team member working on the project. The more time allotted to a project, the higher the reach, impact, and confidence will need to be to make it worth the effort.
Now you should have four numbers representing each of the 4 categories. To calculate your score, multiply Reach, Impact, and Confidence. Then divide by Effort.
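Putting the formula above into code makes the arithmetic concrete. This is a minimal sketch; the example numbers are hypothetical, and the impact value uses Intercom's multiple-choice scale described earlier:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: people affected per time period
    impact: 0.25 (minimal) to 3 (massive), per the scale above
    confidence: 0-1 (e.g. 0.8 for 80%)
    effort: person-months of work
    """
    return (reach * impact * confidence) / effort

# e.g. 500 users/month, high impact (2), 80% confidence, 4 person-months
score = rice_score(500, 2, 0.8, 4)  # -> 200.0
```

A higher score means more value per unit of effort, so initiatives are worked on in descending score order.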
Pros of using this framework:
Its spreadsheet format and database approach are awesome for data-focused teams. This method also filters out guesswork and the “loudest voice” factor because of the confidence metric. For teams that have a high volume of hypotheses to test, having a spreadsheet format is quick and scalable.
Cons of using this framework:
The RICE format might be hard to digest if your startup team consists mainly of visual thinkers. When you move fast, it’s essential to use a format that everyone will find comfortable. When there are 30+ possible features for complex products, this becomes a long spreadsheet to digest.
The Impact-Effort Matrix is similar to the RICE method but better suited to visual thinkers. This 2-D matrix plots the “value” (impact) of a feature for the user vs the complexity of development, otherwise known as the “effort”.
When using the impact–effort matrix, the Product Owner first adds all features or product hypotheses. Then the team that executes on these product hypotheses votes on where to place the features on the impact and effort dimensions. Each feature ends up in one of 4 quadrants:
Quick wins: Low effort and high impact. These are the features or ideas that will bring growth.
Big bets: High effort but high impact. These have the potential to make a big difference but must be well planned; if a hypothesis fails here, you waste a lot of development time.
Fill-ins: Low value but also low effort. Fill-ins don’t take much time, but they should still only be worked on once more important tasks are complete. They are good tasks to focus on while waiting for blockers on higher-priority features to be resolved.
Money pit: Low value and high effort. These features are detrimental to morale and the bottom line, and should be avoided at all costs.
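The four quadrants can be expressed as a tiny classifier. As a sketch, this assumes impact and effort have been normalized to 0–1 scores and that 0.5 is the cut-off between "low" and "high" (the threshold is an assumption, not part of the framework):

```python
def quadrant(impact, effort, threshold=0.5):
    """Place a feature in one of the four impact-effort quadrants.

    impact, effort: normalized 0-1 scores (assumed scale)
    threshold: where "low" becomes "high" (assumption for this sketch)
    """
    if impact >= threshold:
        return "quick win" if effort < threshold else "big bet"
    return "fill-in" if effort < threshold else "money pit"
```

For example, `quadrant(0.9, 0.2)` is a quick win, while `quadrant(0.1, 0.9)` lands in the money pit.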
Pros of using this framework: It allows quick prioritization and works well when the number of features is small. It can be shared across the whole startup team, as it’s easy to understand at first glance.
Cons of using this framework: If two product hypotheses are both “quick wins”, which should go first? For this reason, it’s not the best framework when there are a lot of features. Also, beware of “fill-ins”: they can take much more time and resources than expected and cause a loss of focus.
Professor Noriaki Kano, a Japanese educator and influential figure in quality management, developed the Kano model in the 1980s. Since then, it has been widely used by organizations seeking to prioritize customer satisfaction.
Delighters: The features that customers will perceive as going above and beyond their expectations. These are the things that will differentiate you from your competition.
Performance features: Customers respond well to high investments in performance features.
Basic features: The minimum expected by customers to solve their problems. Without these, the product is of little use to them.
The main idea behind the Kano model is that if you focus on the features that come under these three brackets, the higher your level of customer satisfaction will be.
To find out how customers value certain features, use questionnaires asking how their experience of your product would change with or without them.
As time goes on, you may find that features that used to be delighters move closer to basic features as technology catches up and customers come to expect them, so it’s important to reassess periodically.
Pros of using this framework: Because the model differentiates between basic needs and features that can delight customers, it prioritizes more customer-focused products and services.
Cons of using this framework: The categorization of features into Kano’s categories can be subjective, leading to inconsistencies. It doesn't directly address other crucial aspects like cost, time-to-market, or feasibility, which can also significantly impact product success.
Developed by IDEO in the early 2000s, this scorecard takes three core criteria: feasibility, desirability, and viability. It scores each criterion from 1 to 10 for every feature and totals the scores to decide priority.
Feasibility: Can we build this feature with the skills and resources available? Is it possible to build this particular product hypothesis quickly and without hiring extra people? Do you have the tech stack/tools/cloud storage to do it?
Desirability: Does this solve a pain for customers? Do they want this feature enough to consider paying for it?
Viability: How much will users pay for this feature? What’s the return on investment (ROI)? Do the unit economics behind this feature work?
Using this framework, your team creates a spreadsheet with product features and puts a score for each parameter. Another way to use this framework is to evaluate MVP ideas for feasibility, desirability, and viability via a team discussion.
Ideas that have the most support from the team on those parameters can go right into the design sprint . Use the relevant people to help with the evaluation. For example, developers to look at feasibility or Product Marketing Managers to discuss desirability. This scorecard is pretty straightforward with clear pros and cons:
Pros of using this framework: The flexibility of the FDV scorecard means it can be used for evaluating marketing initiatives, hypotheses for customer success teams, or MVP concepts. It works well for teams that don’t find rigid frameworks helpful or for a workshop, or discussion on the executive level.
Cons of using this framework: This approach relies a lot on knowledge of what the customer wants and how complex new features are. That is not always data that is readily available.
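The FDV spreadsheet step described above is easy to sketch in code: score each idea 1–10 on the three criteria and rank by the total. The idea names and scores here are invented for illustration:

```python
def fdv_score(feasibility, desirability, viability):
    """Sum of the three FDV criteria, each scored 1-10."""
    for s in (feasibility, desirability, viability):
        if not 1 <= s <= 10:
            raise ValueError("each criterion must be scored 1-10")
    return feasibility + desirability + viability

# Hypothetical ideas: (feasibility, desirability, viability)
ideas = {
    "export to PDF": (8, 6, 5),   # total 19
    "AI summaries": (4, 9, 8),    # total 21
}
ranked = sorted(ideas, key=lambda k: fdv_score(*ideas[k]), reverse=True)
```

Here `ranked` puts "AI summaries" first; in practice the discussion around the numbers matters as much as the total itself.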
This method follows a similar pattern to other frameworks on this list, but with the significant addition of weighting how much each category counts towards the final total.
The process starts by selecting the criteria/categories you’ll be using to rate the features. For example, you might select “user experience”, “sales value”, “strategic impact”, “user adoption” or any of the Acquisition, Activation, Retention, Referral, Revenue (AARRR) metrics.
Next, you need to decide what importance you give to each category by assigning a percentage value to each criterion (summing to 100%). For example, during the early stages, you might focus on UX features that make an MVP usable. Each feature then gets a score in each category, from 1 (minimum impact) to 100 (maximum impact), and you can calculate the final score for each feature.
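The weighted calculation just described can be sketched directly. The criteria names and all numbers below are illustrative assumptions, not recommendations:

```python
# Hypothetical criteria weights; they must sum to 100% (here, 1.0).
weights = {"user experience": 0.5, "sales value": 0.3, "strategic impact": 0.2}

def weighted_score(scores, weights):
    """Weighted sum of per-criterion scores (each 1-100)."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 100%")
    return sum(scores[c] * w for c, w in weights.items())

feature = {"user experience": 80, "sales value": 40, "strategic impact": 60}
total = weighted_score(feature, weights)  # 80*0.5 + 40*0.3 + 60*0.2 = 64.0
```

Because the weights are explicit, changing strategy (say, shifting weight from UX to sales value) re-ranks the whole backlog consistently.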
Pros of using this framework: The framework is customizable, which allows you to utilize the framework throughout an organization’s lifetime.
Cons of using this framework: Sometimes the weighting percentages can be hard to decide on. It requires PMMs & PMs to understand how each feature will influence user adoption across the whole product ecosystem.
The Cost of Delay framework is unique in that it focuses exclusively on monetary value. The framework is designed to calculate the cost of not producing the feature immediately. It’s relatively straightforward to understand, although the calculation itself does require careful consideration.
The calculation is as follows:
Estimated revenue per unit of time, for example, how much could be billed over a month-long period if the feature existed.
Estimated time it will take to complete the development of the feature.
Divide the estimated revenue by the estimated time to give you the cost of delay.
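The three steps above reduce to one division. As a sketch (the figures are hypothetical):

```python
def cost_of_delay(revenue_per_month, dev_months):
    """Cost of delay per the steps above: estimated revenue per
    unit of time divided by the estimated time to build."""
    return revenue_per_month / dev_months

# e.g. a feature expected to bill $30,000/month, taking 3 months to build
cd = cost_of_delay(30_000, 3)  # -> 10000.0
```

Features with a higher result are delayed at greater cost per month, so they jump to the front of the backlog.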
Pros of using this framework: This is a highly effective way of prioritizing feature backlogs. It is also useful in helping team members align around the value of features in terms of ROI.
Cons of using this framework: For new companies or brand-new features, the revenue estimate is very much based on a gut feeling as there is no hard data to base the estimates on.
Luke Hohmann introduced the concept of ‘Prune the Product Tree’ in his book Innovation Games: Creating Breakthrough Products Through Collaborative Play. During a Product Tree session, stakeholders use stickers, markers, or digital equivalents to place features, ideas, and enhancements on different parts of the tree according to where they think they belong in terms of product development priorities.
Roots: Represent the core technologies, systems, and capabilities that support and enable the product's basic functions. These are fundamental aspects without which the product cannot function.
Trunk: Symbolizes the product's main functionalities or the current set of features. It is the stable and established part of the product that supports further growth.
Branches: Illustrate different areas of the product that can grow and expand, such as new feature sets, product lines, or major enhancements.
Leaves: Stand for specific features, ideas, or small enhancements that can be added to the product. These are often more visible to the end-users and can directly contribute to user satisfaction and product value.
Knowing which prioritization framework to use is tough! The Kano model is useful for making customer-centric decisions and focusing on delight, but it can take time to carry out all the questionnaires needed for your insights to be accurate and fair.
Many people like the RICE scoring system as it takes confidence into account in a qualitative way, but there are still a lot of uncertainties.
MoSCoW focuses on what matters to both customers and stakeholders, which is particularly useful for Product Managers who struggle with managing stakeholder expectations. However, there’s nothing stopping you from putting too many items in ‘Must have’ and overextending your resources.
Of course, these aren’t the only prioritization techniques out there, and many talented Product Managers have their own ways of doing things. All you can do is test, test, and test again!
Microsoft: applying the Eisenhower Matrix to a busy inbox.
Microsoft Product Manager Anusha Bahtnagar uses a prioritization technique called the Eisenhower Matrix to prioritize what comes into her inbox. As a Product Manager working with cross-continental teams, it’s common to wake up to a full inbox.
The Eisenhower Matrix effectively sorts your tasks/emails into four categories, and presents a solution.
Important and Urgent: Top-priority tasks that require your urgent attention (e.g., crisis management tasks).
Urgent and Not Important: Time-sensitive tasks that could be handled by someone else. Delegate these tasks.
Important and Not Urgent: Tasks that you definitely need to do, but they can wait. Schedule these for the future.
Not Important and Not Urgent: Declutter and eliminate tasks.
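The four quadrants above map cleanly onto a small function. A minimal sketch (the action labels paraphrase the list above):

```python
def eisenhower_quadrant(urgent: bool, important: bool) -> str:
    """Map a task's urgency/importance to the recommended action."""
    if important and urgent:
        return "do now"      # top priority, e.g. crisis management
    if urgent:
        return "delegate"    # time-sensitive, but someone else can handle it
    if important:
        return "schedule"    # definitely needed, just not right now
    return "eliminate"       # declutter
```

A usage example: `eisenhower_quadrant(urgent=True, important=False)` recommends delegating the task.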
A common theme across many companies is that the customer comes first. The same goes for prioritization.
Asal Elleuch, a Senior Product Manager for Amazon Prime, calls prioritization “a never-ending and iterative process.”
Focusing on the customer gives you an incredibly useful yardstick for prioritization. After all, your company’s values should already be customer focused. And most of your stakeholders should also be aligned on The Why.
The Product vision should also be heavily influenced by customer needs.
Being customer-focused in your prioritization will help keep your decisions aligned with everything else. Like one big customer-centric puzzle!
Google product teams achieve this by using North Star Metrics. Your North Star Metric can be any metric or action that provides the most value to the customer. For instance, Spotify’s North Star Metric might be clicking ‘play’ on a song. Google Search’s North Star Metric might be clicking on a search result.
You can then base your prioritization decisions around that metric. Whichever updates, features, or bug fixes have the greatest impact on that metric take priority.
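In code, that rule is just a sort keyed on estimated metric impact. A minimal sketch where the backlog items and lift figures are entirely hypothetical:

```python
# Hypothetical backlog items, each with an estimated lift to the
# North Star Metric (e.g. 'play' clicks). Highest estimated lift goes first.
backlog = [
    {"name": "Faster search",        "nsm_lift_pct": 1.2},
    {"name": "Playlist sharing",     "nsm_lift_pct": 0.4},
    {"name": "Crash fix on startup", "nsm_lift_pct": 2.5},
]

prioritized = sorted(backlog, key=lambda item: item["nsm_lift_pct"], reverse=True)
# The crash fix leads: it is estimated to move the metric most.
```

The hard part, of course, is producing honest lift estimates; the sort itself is trivial.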
To help make decisions amid so many outside influences and an interlocking web of considerations, Product Leader Mariano Capezzani came up with his own prioritization system.
Broken down into four steps, it gives you a solid footing for making quality prioritization decisions.
Know the context . Understand things like how this task/feature fits with the KPIs of the company, the market trends, and related upcoming regulations.
Understand the need. Learn to differentiate between what customers are asking for and what they really need.
Consider the execution. Are you aware of the intricate network of interlocking dependencies needed to deliver something?
Arrange the sequence. Apply a quick acid test to ensure it fits your criteria (contributes to company goals, benefits a market, etc.)
Mistake 1: no agreed-upon scoring guide.
What does an impact score of “5” mean? A 1% growth or 10%? In conversion rate or MRR? Do other teammates think the same?
Without an agreed-upon scoring guide, you can’t make an apples-to-apples comparison between initiatives. This makes prioritization pointless. To make matters worse, it increases the likelihood of conflicts between team members, as you are essentially disguising opinions as objective decisions.
How to fix it
Develop a shared scoring guide for your prioritization criteria. Define what each level entails with a concrete description and examples, such as a guide for determining the confidence level.
A scoring guide can be created for any prioritization method, as long as it is:
Specific to your product and team context
Objective and clear
It’s important to point out that even with a guideline, there will still be disagreements — and that’s okay. Team members should be comfortable explaining their decisions and giving feedback to others. These discussions will help your team uncover blind spots and build alignment.
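One way to keep such a guide unambiguous is to encode it alongside your tooling. A hypothetical sketch for a three-level confidence criterion; the level descriptions are illustrative, not a standard:

```python
# Hypothetical shared scoring guide for the 'confidence' criterion.
# Each level carries a concrete description so scores mean the same
# thing to every teammate.
CONFIDENCE_GUIDE = {
    3: "High - validated by experiments or direct customer data",
    2: "Medium - supported by market research or comparable past launches",
    1: "Low - informed opinion only; no supporting evidence yet",
}

def describe_confidence(level: int) -> str:
    """Return the agreed meaning of a confidence score, or fail loudly."""
    if level not in CONFIDENCE_GUIDE:
        raise ValueError(f"level must be one of {sorted(CONFIDENCE_GUIDE)}")
    return CONFIDENCE_GUIDE[level]
```

Failing loudly on unknown levels stops teammates from quietly inventing a "4" that nobody else can interpret.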
Software development isn’t the only thing that takes time when building a product. So do problem analysis and solution design, commonly referred to together as product discovery.
However, discovery tasks usually get either:
Lumped in with development work → Creates messy dependency issues.
Left out of the prioritization process → Introduces a selection bias from the start.
Divide your product development into discovery and delivery, and prioritize the two backlogs separately. This is called Dual Track Development.
Do note that having separate tracks doesn’t mean you should have separate teams. For any given project, the same team should carry out both discovery and delivery work to maximize quality and velocity.
Your team will always add items to the backlog faster than it will clear them. Over time, you will build up a long backlog with items that are a year old or more and feel like they date from the previous century. Because it’s human nature to favor shiny new ideas (a.k.a. recency bias), old items tend to get forgotten for no good reason.
As new evidence emerges, situations change, and your team’s estimation skills improve, you must constantly review old items to correctly prioritize the backlog.
Track the “freshness” of each item. When something has not been updated for longer than X period of time, groom it again using the latest information. If it’s no longer relevant, it’s time to remove it permanently.
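That freshness check is easy to automate. A minimal sketch, assuming a 90-day threshold; the period is hypothetical, so pick whatever fits your team's cadence:

```python
from datetime import date, timedelta

# Hypothetical "freshness" threshold: items untouched for longer
# than this should be groomed again with the latest information.
MAX_AGE = timedelta(days=90)

def needs_regrooming(last_updated: date, today: date) -> bool:
    """True when a backlog item has gone stale."""
    return today - last_updated > MAX_AGE
```

Run a check like this in a weekly script and you get a short "stale items" list instead of relying on anyone's memory.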
Product development is inherently messy. Besides the core value-vs-cost consideration, there are also dependencies, deadlines, skill fit, strategic fit, and other constraints that influence your prioritization decisions.
No matter how ruthless you are with prioritization, you can’t simply dismiss these constraints. However, you also shouldn’t let them override your core prioritization criteria every single time.
Teams that lack a good system to deal with these external factors often end up losing confidence in their prioritization processes altogether.
Define a set of rules to work with these constraints, and make them part of your prioritization system.
Here are a few examples:
Time-sensitive projects → Set aside a fixed amount of resources each month to fast-track projects with non-negotiable deadlines (e.g., scheduled launch events, seasonal campaigns). Everything else will follow the regular process, even if it means not getting done at all.
Dependencies → A project blocked by other tasks will resume its position in the backlog as soon as the blocker is removed. However, it shouldn’t interrupt projects that have already started.
Strategic alignment → Assign more weight to projects that align with the company’s strategic priorities. This can be done with the Weighted Scoring method.
When you have consistent guidelines, people will trust the system, knowing that every decision is made objectively.
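As an illustration of the Weighted Scoring method mentioned above, here is a minimal sketch; the criteria and weights are hypothetical and would be tuned to your own strategic priorities:

```python
# Hypothetical Weighted Scoring setup. Strategic fit gets as much
# weight as customer value; effort counts against a project.
WEIGHTS = {"value": 0.4, "effort": -0.2, "strategic_fit": 0.4}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (e.g. 1-10) into one weighted total."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Example: a high-value, strategically aligned project of moderate effort.
example = weighted_score({"value": 8, "effort": 5, "strategic_fit": 9})
```

Raising the `strategic_fit` weight is exactly the "assign more weight to aligned projects" rule from the example above, made explicit and inspectable.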
Perfect prioritization does not exist. The information you use for prioritization is simply a set of estimations, and estimations are always wrong. There is no need to treat your prioritization process like you’re planning a rocket launch.
Prioritization is an exercise that helps you maximize your execution value. If you constantly direct more resources toward prioritization than execution, you are doing it wrong.
Sometimes product teams spend months debating the relative value between small features when they could have shipped them all in the time lost.
Timebox your prioritization discussion. If your team gets stuck comparing initiatives, introduce a tie-breaker rule. For example, items that entered the backlog first go first.
The point is, trivial differences will not matter in the long run, and if you never decide what goes first you’ll never get started.
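The tie-breaker rule above ("items that entered the backlog first go first") can be expressed as a compound sort key, so stuck comparisons resolve themselves automatically. A minimal sketch with hypothetical items:

```python
# Sort by score (descending), then by backlog entry order (ascending)
# as the tie-breaker. Field names are hypothetical.
items = [
    {"name": "A", "score": 5, "entered": 3},
    {"name": "B", "score": 5, "entered": 1},
    {"name": "C", "score": 7, "entered": 2},
]

ordered = sorted(items, key=lambda i: (-i["score"], i["entered"]))
# C goes first on score; B beats A because it entered the backlog earlier.
```

The rule itself is arbitrary, and that is the point: it ends the debate so the team can start shipping.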
No one gets prioritization right the first time. Even if you are satisfied with your current system, there will always be room for improvement if you look hard enough. Additionally, just because something works today doesn’t mean it’ll continue to work as the company scales. It’s dangerous to think you can create a prioritization system that requires minimal iterations.
Treat your prioritization system (and other internal processes) like your product. Monitor how it’s working and iterate continuously. Because the “users” in this case are your team members, there should be an open channel for everyone to give feedback.
Generally speaking, frequent and small iterations are better than drastic revamps. However, be aware that:
It takes time for a new process to show its effects.
A new process can hurt productivity in the short term.
Not every problem has an easy solution.
To avoid interrupting team momentum with ad-hoc fixes, I recommend doing a quarterly or bi-yearly process review to go over all the feedback and discuss solutions as a team.
Having to rearrange your backlog due to management input, usually without a convincing reason, is one of the most frustrating yet common things that can happen to a product team. This is often due to a disconnect between company strategy and product strategy.
Such a discrepancy exists for a combination of reasons:
Management mistakes tactics for strategies. It dictates solutions instead of creating a direction for potential solutions.
Management doesn’t explain the “why” behind a strategy.
There is no clear process for teams to share learnings and evidence (both horizontally and vertically).
There is no agility in the company strategy, even when it no longer makes sense.
If you are a product leader (CPO, director, team lead, etc.), you have a critical responsibility here to bridge the gap between individual teams and senior management. Make sure to communicate information bi-directionally and fix misalignment proactively. A good way to start is by examining:
How are we sharing insights other teams should know?
Does every team have the same access to key information (ICP, positioning, data dashboard, etc.)?
What information does my team want to know but is out of their reach?
There is no ‘best framework’. There is only the best framework for a given prioritization task. Now that you’re familiar with the frameworks that product experts use day-to-day, look back at your OKRs and decide which model will turn your backlog into the right product at this moment in time.
The Product Manager is typically responsible for finalizing the prioritization, balancing stakeholder interests, user value, and feasibility.
Developers provide input on feasibility and effort estimates to help the PM. Stakeholders help PMs and developers understand business value and promote strategic alignment.
There are tons of great prioritization tools out there, like our free template pack, which includes templates for 5 prioritization models.
Whatever tool you use, the most important thing is to align on a single model, make sure everyone is using it in pursuit of the same OKRs, and clarify priorities within the timeline of your product roadmap so everyone stays aligned.
Follow these general steps whenever using a prioritization model:
Identify the moment: take stock of the tasks in the backlog, your strategy, and current OKRs.
Decide on a framework that will help you reach your team’s goals and apply it to the tasks in the backlog.
Try other frameworks and see if the same features come out in first place.
Your team should review its priorities regularly. The cadence of that review depends on your team’s needs. How often is not important as long as it’s consistent. Always re-evaluate your prioritization framework if business objectives change.
Yes! In fact, some frameworks pair together as well as a nice Chablis and fresh oysters:
Pair subjective and quantitative frameworks for contrast. For example: Cost of Delay + Kano model will balance revenue and customer delight.
Pair bird’s eye views with detailed analysis. Some frameworks are based on a general sense of the market and user trends while others on careful research. Cover your bases by using both. For example: Weighted Scoring + MoSCoW.
Prioritization in product management is less about ticking off tasks and more about leading your product in the right direction. It is a crucial part of framing the priorities within your product roadmap. It is a continuous process of assessment, reassessment, and realignment with your product goals and market needs.
Product School has partnered with Productboard to create a micro-certification on how to build and maintain effective Roadmaps. Enroll for free to learn how to communicate the product vision and strategy to your stakeholders and customers.
Updated: August 28, 2024