# Prioritization methods

### ❶ Reach / Frequency

— Methodology from the company Rambler
Algorithm
1. In a column, list the functionalities that need to be prioritized.
2. Draw a matrix with the Reach / Frequency axes (Fig. 3.1).
The Reach axis shows how many potential users can meet and take advantage of the functionality: “Few,” “Some,” “Most,” “All.”
The Frequency axis shows how often users can use the functionality: “Always,” “Frequently,” “Sometimes,” “Rarely.”
3. Expertly place each functionality into the matrix, one by one.
4. Imagine that each division on the horizontal and vertical scales adds 1 point to the functionality, then sum the points for each functionality. For example, if a functionality falls at the intersection of “Some” and “Frequently,” it scores 2 + 3, that is, 5 points.
5. Rank the functionalities from the highest to the lowest score.
6. Then select a few top choices (the most likely candidates) for more detailed consideration and discard the lowest-ranked candidates as the weakest.
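As a rough sketch, the scoring in steps 4–5 can be expressed in a few lines of Python. The point values follow the 1-point-per-division rule above; the features and their placements on the matrix are invented for illustration:

```python
# Assumed point values for the matrix divisions (1 point per division).
REACH = {"few": 1, "some": 2, "most": 3, "all": 4}
FREQUENCY = {"rarely": 1, "sometimes": 2, "frequently": 3, "always": 4}

def rf_score(reach: str, frequency: str) -> int:
    """Sum the points of the two matrix divisions (step 4)."""
    return REACH[reach] + FREQUENCY[frequency]

# Hypothetical features placed on the matrix by experts (step 3)
features = {
    "dark mode": ("some", "frequently"),   # 2 + 3 = 5
    "export to PDF": ("few", "rarely"),    # 1 + 1 = 2
    "search": ("all", "always"),           # 4 + 4 = 8
}

# Step 5: rank from the highest score to the lowest
ranked = sorted(features, key=lambda f: rf_score(*features[f]), reverse=True)
```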

### ❷ Poker Planning

Algorithm
1. Make a table with 4 columns.
Average usefulness rating
2. In column 1, write down all the features, existing ideas of functionality.
3. Next, select 3–5 experts whom you trust in the particular subject area. They will evaluate the usefulness of each feature.
4. Name the functionality, give it a brief description (a 30-second mini-pitch), and ask all experts to vote simultaneously (on the count of three) on the usefulness of the functionality by showing 1 finger (least useful), 2 fingers, or 3 fingers (most useful).
5. Write down the average usefulness score next to the feature. To calculate the average, add up all the expert valuations and divide the resulting number by the number of experts.
6. Conduct the expert usefulness assessment for each of the features.

Average cost rating
7. As in the previous example, fill in the column for evaluating costs (difficulty) for the implementation of the features. In this case, we need the opinion of technical experts. For this exercise, assemble the development team and maybe even bring someone in from adjacent teams (for example, the CTO of your company).
8. Voice the name of each feature and give a brief description. Then, on the count of three, ask each participant to show 1 finger (easy to implement), 2 fingers, or 3 fingers (a rather complicated and time-consuming feature to implement).
9. For each feature, calculate the average cost estimate (difficulty of implementation).
10. Do this for all features.

Benefit / Cost
11. To fill in column 4, divide the value in column 2 (Benefit) by the value in column 3 (Costs) and record it as a percentage.
12. Rank all the features in descending order − from the largest percentage (a good ratio of benefit to labor cost) to the lowest − and use the highest-ranked features for further analysis.
It is worth noting that a feature experts rank as “easy to implement” may actually take several days or weeks to implement; this does not affect the final ranking when comparing benefits and costs. The main thing is that every feature is evaluated using the same set of rules.
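A minimal Python sketch of the whole four-column table, following the voting and averaging rules described above; the features and votes are invented:

```python
def average(votes):
    """Average rating: sum of all expert valuations over the number of experts."""
    return sum(votes) / len(votes)

# feature -> (usefulness votes, cost votes); each expert shows 1, 2 or 3 fingers
votes = {
    "one-click checkout": ([3, 3, 2], [2, 3, 3]),
    "animated logo":      ([1, 1, 2], [1, 1, 1]),
}

table = []
for feature, (usefulness, cost) in votes.items():
    benefit = average(usefulness)            # column 2
    effort = average(cost)                   # column 3
    ratio = round(benefit / effort * 100)    # column 4, as a percentage
    table.append((feature, benefit, effort, ratio))

# Step 12: descending order by the benefit / cost percentage
table.sort(key=lambda row: row[3], reverse=True)
```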

### ❸ RICE Score

Algorithm
1. Make a list consisting of the features’ names.
2. Make a table with 6 columns (Fig. 3.3 below).
Reach − filled in with how many users may potentially encounter the feature (quantitative assessment).
Impact − filled in with the level of usefulness to users (based on your expert opinion), let’s say − on a scale of 1 to 5 (5 being the highest level of benefit).
Confidence − shows an estimated probability (as a percent) that the feature will be a hit: if there is hard data, the percent should be 90–95%; if the odds are 50/50, then the percent is 50%; and if it is just your imagination, then 10–20%. This gives a coefficient of your belief in the usefulness of the feature.
Effort − evaluate how difficult it is to implement that particular feature, let’s also say on a scale of 1 to 5 (5 being the longest to implement).
RICE Score − calculate the final RICE score according to the formula:
RICE score = (Reach * Impact * Confidence) / Effort
3. Compute the total value in the RICE Score column for each feature and select the top 5 features for implementation.
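The formula translates directly into code. The features, reach numbers, and scores below are invented for illustration (confidence is written as a fraction of 1 rather than a percent):

```python
def rice(reach: int, impact: int, confidence: float, effort: int) -> float:
    """RICE score = (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical features: (reach, impact 1-5, confidence 0-1, effort 1-5)
features = {
    "push notifications": rice(10_000, 4, 0.8, 3),
    "referral program":   rice(2_000, 5, 0.5, 2),
    "new onboarding":     rice(8_000, 3, 0.9, 4),
}

# Highest RICE score first
top = sorted(features, key=features.get, reverse=True)
```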

### ❹ Hierarchy of metrics

Algorithm
1. Global − the top-level metric defined by the company. It indicates how actively users interact with different features: how often they return and how many actions they perform while using the service.
2. Service − Basic service metric: start from this stage and define service metrics that directly affect the upper level metric.
3. LVL 1. Determine the factors that affect the service metric.
4. LVL 2. Dig deeper into each of the identified metrics — look for deeper factors (LVL 2 metrics) that affect LVL 1 metrics.
5. Other. Next, continue to dig deeper until you identify all the underlying / fundamental factors.
6. Do the same with all the metrics.
7. Next, formulate a hypothesis or select a feature that you want to implement, and ask yourself an important question: what metric will this feature affect?
8. Determine the level at which the identified metric is located.
The point of this method is to assess how close or far away the hypothesis / feature is to the main (global) service metric.
The closer a metric sits in the hierarchy to the main one, the more likely it is that the related feature will be useful. If your feature is buried many layers below the main metric, it is less likely to influence it.
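One way to sketch this distance check, assuming a hypothetical hierarchy stored as child-to-parent links (all metric names here are made up):

```python
# Hypothetical metric hierarchy stored as child -> parent links.
parents = {
    "service": "global",
    "retention": "service",                 # LVL 1
    "activation": "service",                # LVL 1
    "onboarding completion": "activation",  # LVL 2
}

def distance_to_global(metric: str) -> int:
    """Number of levels between a metric and the global metric."""
    steps = 0
    while metric != "global":
        metric = parents[metric]
        steps += 1
    return steps

# The smaller the distance, the more likely a feature that moves this
# metric will also influence the global one.
```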
Once you have the hierarchy of metrics, you can move on to ranking features based on key metrics.
Based on the metrics, create a table with features and prioritize them
1. Highlight 2−5 of the most important service metrics (the “Metrics” column). Each “metric − LVL 1 metric” combination has an impact on the main metric of the service.
2. Decide what growth (as a percentage) you want to achieve in these metrics: N% growth for the quarter.
3. Formulate a list of projects to start in order to influence a specific metric.
4. For each feature (inside the corresponding metric), determine its weight, or importance (from 1 to 3). You can start by using the mechanics of Poker Planning.
5. Select the features that have the highest weight (highlighted in blue in Fig. 3.5).
This system gives a two-stage prioritization methodology: at the first stage we focus only on the key metrics and discard the lower-priority functions of the product; at the second stage we prioritize within each key metric.
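The two-stage table can be sketched as a nested mapping; the metrics, features, and weights below are hypothetical:

```python
# Hypothetical key metrics, each with candidate features weighted 1..3.
table = {
    "retention":  {"weekly digest": 3, "streaks": 2},
    "conversion": {"one-page checkout": 3, "coupon banner": 1},
}

# Second stage: within each key metric, keep the feature with the highest weight.
winners = {metric: max(feats, key=feats.get) for metric, feats in table.items()}
```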

### ❺ Risk Based ROI Assessment

This method relies on two basic mechanics:
1. We will break each metric into sub-metrics to get a more accurate estimate.
2. For each sub-metric, we calculate the risks based on three scenarios (optimistic, realistic, pessimistic) and their probabilities. As a result, we will get the average value for the metric − this will help to further evaluate the feature and will act as a type of insurance for the accuracy of the calculations.
Algorithm
1. Compose a formula to evaluate the profitability of features.
Example:
number of users of the feature per month × expected average amount a user will spend on the feature × number of months you will receive profit from this feature × product profitability.
Usually 12 (one year) is the number of months that is used, since after a while the feature becomes outdated and replaced with a new one.
2. Look at which metrics can reasonably be divided into sub-metrics.
For example, the number of users can be divided into two sub-metrics.
Example:
total number of people who visit the relevant section × % of people who, in your opinion, will use / buy the new feature.
3. For each of the resulting metrics and sub-metrics, designate a value.
Estimate 3 scenarios (optimistic, realistic, pessimistic): the value of the metric in each scenario and the probability of each scenario occurring. Then multiply the value for each scenario by its probability and add up the 3 resulting numbers. The result is an updated value of the metric (taking risks into account).
Result_Metric = N_Opt × %_Opt + N_Real × %_Real + N_Pess × %_Pess
4. Substitute the updated metrics in our profitability formula.
5. Consider how much money needs to be invested into the development and implementation of this functionality (for example, you can take an estimate of work hours of the development team and multiply by the average hourly cost of the developers’ work in your company).
6. Calculate the ROI → ROI = Feature Profitability / Development Cost
The higher the ROI, the better.
When using this methodology, it is useful to compare features whose gross profit is of approximately the same scale; otherwise you will mix small features (which might bring in 100 rubles) with large ones (able to bring in several million).
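The risk-weighted metrics and the ROI formula can be put together in a short Python sketch; all the scenario values, probabilities, hours, and rates below are invented:

```python
def expected(scenarios):
    """Risk-weighted metric: sum of value * probability over the 3 scenarios."""
    return sum(value * prob for value, prob in scenarios)

# (value, probability) for optimistic / realistic / pessimistic scenarios
monthly_users = expected([(5_000, 0.2), (3_000, 0.6), (1_000, 0.2)])
avg_spend = expected([(10.0, 0.3), (6.0, 0.5), (2.0, 0.2)])
months = 12          # profit horizon: one year
margin = 0.25        # product profitability

feature_profitability = monthly_users * avg_spend * months * margin

# Development cost: estimated hours times an average hourly rate (assumed)
development_cost = 400 * 50

roi = feature_profitability / development_cost  # the higher, the better
```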

### Summary

1. Choose the top-3 key metrics that you consider the most important.
2. Collect hypotheses for boosting them.
3. If the market is new, use qualitative methods.
4. Perform a quick assessment and discard the “weak” candidates.
5. Make a detailed assessment of the remaining candidates (for example, based on ROI).

### Practical exercise

Take 3 existing features from your service and come up with 3 new additional features. Prioritize the features using the “Hierarchy of metrics” method and the “Risk Based ROI Assessment” method (the metric values can be arbitrary).
Ask a colleague to prioritize the very same features. Compare the results and try to determine the reasons for any differences in prioritization. Make a combined list of prioritized features.