Traditional "project scoring" systems we see look like this... a list of projects in a spreadsheet, scored against some set of measurement criteria. Sometimes the criteria are weighted by importance.
Further, the alternatives (or development projects) tend to be scored against simple numeric scales, such as 1-5 (where 1 is low and 5 is high). Although this provides a scoring system of sorts, it falls short of really explaining the measurement criteria each project should be ranked against. It also requires a "look-up table" to cross-reference what "5 - high" really means.
Instead, we add the specific values to the model, such as the actual dollar estimates for each objective/criterion. This improves the accuracy of the ranking model and its value as a communication tool.
In the example above I present five Objectives (e.g. ROI and Margin), each with a different form of RATINGS criteria. For example:
Customer Engagement: a low (1), medium (3), high (9) scale. This weighting is used in QFD to accentuate the high end.
ROI: specific ROI values, for example "24" means a 24-times return on investment
Lead Customer: a linear scale from 0-10, where 10 is good (i.e. a true lead customer) and 0 is bad (a nobody of a customer)
Margin: actual monetary values for the estimated Margin that will be attained once the product is sold
Differentiation: again a QFD rating scale (1, 3, 9)
Once the Rating criteria were defined, each project in the portfolio was ranked by the management team. Based on the weighting of each Objective (e.g. Margin = 14.4%), the projects received scores and were ranked in priority order.
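The weighted scoring described above can be sketched in a few lines of code. This is only an illustration, not the actual model from the example: the project names, ratings, and all weights except the 14.4% Margin figure are made up, and the normalization step (dividing each rating by the highest value seen for that objective) is one assumed way to make dollar values, ROI multiples, and QFD scales comparable before weighting.

```python
# Hypothetical weighted scoring model. Only the 14.4% Margin weight comes
# from the example above; everything else is illustrative.
weights = {
    "Customer Engagement": 0.20,
    "ROI": 0.30,
    "Lead Customer": 0.15,
    "Margin": 0.144,          # the 14.4% weighting mentioned above
    "Differentiation": 0.206,
}

# Each project is rated against every Objective, using that Objective's
# own scale: QFD 1/3/9, ROI multiples, 0-10 linear, actual dollars.
projects = {
    "Project A": {"Customer Engagement": 9, "ROI": 24, "Lead Customer": 8,
                  "Margin": 1_200_000, "Differentiation": 9},
    "Project B": {"Customer Engagement": 3, "ROI": 6, "Lead Customer": 4,
                  "Margin": 450_000, "Differentiation": 3},
    "Project C": {"Customer Engagement": 1, "ROI": 12, "Lead Customer": 10,
                  "Margin": 800_000, "Differentiation": 3},
}

def score(ratings, weights, maxima):
    # Normalize each rating by the best value seen for that Objective, so a
    # $1.2M margin and a 9 on the QFD scale contribute on comparable footing,
    # then sum the weighted, normalized ratings into one score.
    return sum(weights[obj] * ratings[obj] / maxima[obj] for obj in weights)

maxima = {obj: max(p[obj] for p in projects.values()) for obj in weights}
ranked = sorted(projects, key=lambda p: score(projects[p], weights, maxima),
                reverse=True)
for name in ranked:
    print(f"{name}: {score(projects[name], weights, maxima):.3f}")
```

Running this prints the portfolio in priority order, which is exactly the artifact the management team reviews: one score per project, traceable back to the individual Objective ratings.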
Sometimes a simple numeric scheme (e.g. 1-5) provides a quick way to score projects. This can be refined later with actual values to improve the model's accuracy, so "1" could be replaced with <$500k (for example) and "5" replaced with >$1M. In the example above, the team went back and replaced the rating scales (e.g. 1, 3, 9) with specific values that equated to each of the three levels.
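That refinement can be made explicit by binding each score to a value band. A minimal sketch, assuming the <$500k and >$1M endpoints from the text; the intermediate cut-points are purely illustrative, since the source only specifies the two ends of the scale:

```python
def margin_band(margin_dollars):
    """Map an estimated margin in dollars onto the 1-5 score.

    Only the <$500k (score 1) and >$1M (score 5) endpoints come from the
    text above; the intermediate thresholds are assumed for illustration.
    """
    thresholds = [500_000, 650_000, 800_000, 1_000_000]  # upper bounds for scores 1-4
    for band, upper in enumerate(thresholds, start=1):
        if margin_dollars < upper:
            return band
    return 5  # anything over $1M
```

The benefit is that the look-up table now lives in the model itself: anyone reading the scores can see what a "1" or a "5" actually means in dollars.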