The traditional "project scoring" systems we see amount to a list of projects in a spreadsheet, scored against some set of measurement criteria.

Most project ranking schemes stop at the ranked list of projects, at best. The problem is that you never know whether the prioritized list can be completed without knowing what resources it takes to get all the projects done satisfactorily.

These "constraints" must be applied to the project portfolio to determine "what can actually be done." A ranked list of projects is just the first step: it tells you what could be done, but not what can be done given available (and, in some organizations, diminishing) resources. In addition to cost, material/equipment, and people constraints, there are RISKS that need to be considered when selecting the optimal portfolio mix. Further, there are also the MUST HAVES, which tend to override the basic logic of constraints. Without factoring in the constraints, prioritizing the projects in a portfolio is only 50% of the solution.

In the example above you can see a simple illustration of a budget constraint being applied to my little project portfolio. I need $10,150,000, but I only have $9.5M. The model applied the constraint and dropped the lowest-ranking project. It's kept simple to illustrate the point, but this gets more interesting when more projects are added to the mix and other factors such as RISK and MUST HAVES are factored into the model.
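The basic mechanics are easy to sketch in code. Here is a minimal illustration of funding a ranked list in priority order until the budget runs out; the project names and costs are hypothetical stand-ins (chosen to total $10,150,000 against a $9.5M budget), not the actual portfolio from the example.

```python
BUDGET = 9_500_000

# Projects already ranked by score, highest priority first.
# Names and costs are illustrative only; they sum to $10,150,000.
ranked_projects = [
    ("Project A", 3_000_000),
    ("Project B", 2_500_000),
    ("Project C", 2_000_000),
    ("Project D", 1_500_000),
    ("Project E", 1_150_000),  # lowest-ranking project
]

def apply_budget(projects, budget):
    """Fund projects in rank order; drop whatever no longer fits."""
    funded, dropped, spent = [], [], 0
    for name, cost in projects:
        if spent + cost <= budget:
            funded.append(name)
            spent += cost
        else:
            dropped.append(name)
    return funded, dropped, spent

funded, dropped, spent = apply_budget(ranked_projects, BUDGET)
# Project E no longer fits, so it is the one dropped.
```

This greedy pass is all most scoring spreadsheets would need to answer "what fits?"; the harder questions (risk, must-haves) come later.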

Most development portfolios have some form of prioritization applied in order to differentiate the most important projects in the mix. However, we rarely see the real constraints of the organization applied to the portfolio to determine what is actually going to get done within a given planning window (normally 12 months).

Is it that senior managers have their heads in the sand and do not want to consider financial, resource, or risk constraints for fear that the gap between what is expected and what can be done will be "exposed"? We've seen the proverbial 10 lbs of flour crammed into a 5 lb sack, with the 5 lbs that fall on the floor simply ignored; development managers are then asked to "stretch" to the "aggressive" goals. Lots of hand-holding and group-think follows, and then "team-building" sessions in the really good organizations! Result... the gap is still there.

What happens is that either all projects are started and all are late due to insufficient resources, or a few projects are started and even those are delayed as scarce resources are shifted between them, while the un-started projects slip out to the next planning window, becoming possible casualties of technology obsolescence.

The best-practice organizations first prioritize projects assuming NO constraints, to determine the right priority order (as if they had the resources to do them all), and then apply the constraints to determine the optimal mix, i.e., the one with the greatest benefit (biggest bang for the buck). The projects that fall below the line are KILLED or delayed.
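Once constraints enter the picture, "optimal mix" becomes a small knapsack-style selection problem, and MUST HAVES become projects locked in before anything is optimized. The sketch below shows the idea with a brute-force search over a handful of hypothetical projects (the names, costs, and benefit scores are invented for illustration); real portfolios would use a proper solver.

```python
from itertools import combinations

# Hypothetical portfolio: name -> (cost, benefit score)
projects = {
    "A": (3_000_000, 40),
    "B": (2_500_000, 30),
    "C": (2_000_000, 25),
    "D": (1_500_000, 15),
    "E": (1_150_000, 10),
}

def best_mix(projects, budget, must_haves=()):
    """Pick the benefit-maximizing mix under the budget.
    MUST HAVE projects are locked in before optimizing the rest."""
    base_cost = sum(projects[n][0] for n in must_haves)
    base_benefit = sum(projects[n][1] for n in must_haves)
    optional = [n for n in projects if n not in must_haves]
    best = (set(must_haves), base_benefit, base_cost)
    # Small portfolio, so exhaustive search over subsets is fine.
    for r in range(1, len(optional) + 1):
        for combo in combinations(optional, r):
            cost = base_cost + sum(projects[n][0] for n in combo)
            benefit = base_benefit + sum(projects[n][1] for n in combo)
            if cost <= budget and benefit > best[1]:
                best = (set(must_haves) | set(combo), benefit, cost)
    return best

chosen, benefit, cost = best_mix(projects, 9_500_000, must_haves=("E",))
```

Note the effect of the must-have: with "E" forced into the mix, a higher-benefit project gets squeezed out, so the overall benefit is lower than the unconstrained optimum. That is exactly how MUST HAVES "override the basic logic of constraints."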

Following is a great example of this cost-benefit analysis at play. For about $6.5M I can get 80% of the benefit (toward my objectives). So in this case I can actually spend less than $9.5M and still achieve most of my goals.
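The curve behind that kind of statement is just cumulative cost plotted against cumulative benefit along the ranked list. A minimal sketch, using invented numbers arranged so that the 80%-benefit point lands at $6.5M (they are not the actual figures from the example):

```python
# Hypothetical ranked list: (name, cost, benefit points), best first.
ranked = [
    ("A", 3_000_000, 40),
    ("B", 2_000_000, 30),
    ("C", 1_500_000, 18),
    ("D", 1_850_000, 12),
    ("E", 1_150_000, 10),
]

def benefit_curve(ranked):
    """Cumulative spend and cumulative % of total benefit, in rank order."""
    total = sum(b for _, _, b in ranked)
    spent = points = 0
    curve = []
    for name, cost, benefit in ranked:
        spent += cost
        points += benefit
        curve.append((name, spent, round(100 * points / total)))
    return curve

for name, spent, pct in benefit_curve(ranked):
    print(f"through {name}: ${spent:,} buys {pct}% of the benefit")
```

Reading the curve from the top, you can see where the marginal dollar stops buying much benefit; that knee is where "spend less and still achieve most of the goals" comes from.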