Traditional "project scoring" systems we see look like this: a list of projects in a spreadsheet, scored against a set of measurement criteria. Sometimes the criteria are weighted by importance. Each row (project) is scored against the weighted criteria, the rows are totaled, and each project gets a score. Some have called this "horizontal scoring." When one project's score changes, the rest stay the same.
The problem with this type of simple scoring system is that: (a) you don't see the relative impact new projects have on the current portfolio; (b) you don't see the impact on other projects when one project receives a higher score; and (c) project scores tend to cluster so closely together that there is very little differentiation between them, rendering the scoring system of little value.
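To make the mechanics concrete, here is a minimal sketch of horizontal scoring. The criteria names, weights, rating scale, and project ratings are all invented for illustration; the point is only that each project's score is a weighted sum computed independently of every other project.

```python
# Hypothetical criteria weights and a simple LOW/MED/HIGH rating scale.
WEIGHTS = {"strategic_fit": 0.5, "customer_engagement": 0.3, "risk": 0.2}
RATING = {"LOW": 1, "MED": 2, "HIGH": 3}

def horizontal_score(ratings):
    """Weighted sum of one project's criterion ratings -- no other
    project is consulted, which is the defining trait of this method."""
    return sum(WEIGHTS[c] * RATING[r] for c, r in ratings.items())

projects = {
    "Terra":  {"strategic_fit": "MED", "customer_engagement": "LOW", "risk": "MED"},
    "Vulcan": {"strategic_fit": "HIGH", "customer_engagement": "MED", "risk": "LOW"},
}
scores = {name: horizontal_score(r) for name, r in projects.items()}
# Changing Terra's ratings would change Terra's score and nothing else.
```

Because every row is self-contained, re-rating one project never moves any other row, which is exactly where the three problems above come from.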
Rather, we favor a system called "vertical prioritization." The difference with this approach is that we think of the project portfolio as a zero-sum game: if one project's rating is increased, then other projects' ratings must be proportionally decreased.
Further, if you add new projects to the portfolio, the new ones need to impact the existing portfolio in some way. For example, if Project A is added and rated HIGHEST, taking the number-one position in the ranking, then the other projects need to move down proportionately.
In a project portfolio we are trying to see the relative relationship of each project to the overall mix, and how new projects coming into the pipeline affect the existing mix.
The following are some examples that build on the base model (example) above. They are designed to illustrate the difference between ranking projects individually (horizontally) versus how they interact as a total portfolio (vertically). The first two examples show the impact of changing the score of the "Terra" project and how this affects the other four projects in the portfolio. The third and fourth examples illustrate the horizontal ranking system we have called "grid prioritization." Both are valid methods; the point of this post is to discuss the differences.
This is the model before any changes are made. Note "Terra" is ranked last at 5.5%.
In this next illustration I have changed the "Strong Customer Engagement" rating on the Terra project from LOW to HIGH. Note that this change had a ripple effect across the portfolio: not only did Terra jump up to the #2 project in the portfolio, but all the other projects lost some of their rating value. All the projects' ratings need to add up to one at the end of the day.
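The zero-sum behavior can be sketched in a few lines. The raw scores below are invented (chosen so Terra starts at the 5.5% share mentioned above, not taken from the actual model); the mechanism is simply dividing each raw score by the portfolio total so the shares always sum to 1.

```python
def normalize(raw_scores):
    """Convert raw weighted scores into portfolio shares that sum to 1."""
    total = sum(raw_scores.values())
    return {name: s / total for name, s in raw_scores.items()}

# Invented raw scores for a five-project portfolio (total = 20.0).
raw = {"Terra": 1.1, "Vulcan": 4.2, "Apollo": 5.0, "Orion": 4.5, "Gemini": 5.2}
before = normalize(raw)   # Terra's share: 1.1 / 20.0 = 5.5%

raw["Terra"] = 5.1        # e.g. a LOW -> HIGH change on one criterion
after = normalize(raw)

# Terra's share rises; every other project's share necessarily falls,
# because the shares must still total 1.
```

With these particular numbers Terra climbs to second place, and the drop in the other four projects is the "ripple effect" described above.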
"One's gain is another's loss." This Machiavellian philosophy is key to balancing and aligning a portfolio of projects because, in the end, resources are finite: there are only so many to distribute across the projects, and when they run out, you need to know which projects to kill or delay.
Now I'll turn on "grid prioritization" and we will see the impact of making the same change to the Terra project. This next example shows how the project list is effectively "normalized": each project in the list is considered on its own, and the proportional link between projects is broken.
This final example illustrates what happens when I change the Terra project from LOW to HIGH on the "Strong Customer Engagement" criterion with grid prioritization turned on. Notice that the other projects remain unchanged, while Terra jumps up to 43.1%. The pattern would be the same if a new project were added to the portfolio: because each project is scored on its own, you are unable to see how the new project changes the prioritization of the current project mix.
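The contrast with the zero-sum sketch is easy to show. Assuming grid prioritization expresses each project as a percentage of some fixed maximum attainable score (an assumption for illustration; the actual tool may normalize differently), there is no shared denominator across projects, so one project's change cannot move any other:

```python
MAX_RAW = 3.0  # assumed ceiling: HIGH on every criterion in the toy model

def grid_percentages(raw_scores):
    """Each project against a fixed ceiling -- no shared denominator."""
    return {name: s / MAX_RAW for name, s in raw_scores.items()}

# Invented raw scores, as before.
raw = {"Terra": 1.1, "Apollo": 2.5, "Gemini": 2.6}
before = grid_percentages(raw)

raw["Terra"] = 2.4            # the same LOW -> HIGH change as before
after = grid_percentages(raw)

# Only Terra's figure moves; Apollo's and Gemini's are identical.
```

This is the trade-off in a nutshell: grid prioritization gives each project a stable, standalone figure, while vertical prioritization sacrifices that stability to show relative position within the mix.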
This leads to projects being ranked close to one another, with little differentiation or separation between rankings, and little ability to see how new projects impact the ones already in the pipeline.