Bradford L. Goldense
Jonathan B. Gilmore

Goldense Group Inc. [GGI]
Needham, Mass.

Product development is key to business success. It determines every new product's final cost, what features it will have, and whether the company will make money selling it. And many companies track a variety of metrics or variables in the process so that management can measure and manage development.

But what gets measured and when? Are these measurements consistently updated throughout a project's lifetime, or are they perfunctorily checked off a list and then forgotten? And when a project is completed and the new product launched, does the development team track the product into the marketplace and bring back information that may benefit future endeavors?

These were some of the questions posed by a recent industry survey conducted by our company. Our initial goal was to determine the degree of standardization that exists in project metrics in individual companies, across industries, and across all companies.

Survey findings
Most respondents (66%) claim that some standard measures are taken for all their company's development projects. Of these companies, 93% say their standards have changed over the past 10 years; 89% say standards have changed in the past five years. The vast majority of these companies (94%) also predict their standards will change within the next five years. This suggests that the trend of companies to standardize project measurement is strong and increasing, and that it is an area receiving continual management attention. However, the survey also shows that project standards still vary a great deal.

Not only do the metrics vary, so do the times at which they are taken. The largest group of respondents (45%) claim their companies review projects both at predetermined milestones, such as after product definition or prior to approving development, and on a periodic (monthly or weekly) calendar basis. Almost a third (30%) measure projects at specific Stage/Gate milestones. (These milestones occur after specific, well-defined phases in product development. They were first defined by Robert Cooper during the mid-1980s at McMaster University in Canada.) A quarter of the companies say they measure development projects strictly by the calendar, the same way they track financial status.

This shows that too many companies still rely on outdated calendar reporting, apparently distrusting their employees to keep projects on track. According to Cooper, management should not call anyone in for periodic briefings unless it can add value to the process. It is more natural to hold briefings at well-defined stages or milestones, such as product definition. Then management can add its expertise and insights, and help the project along.

It is sometimes hard to explain to nonengineers and management that product development does not follow a calendar. This makes it hard for accountants and the financial people to get a good understanding of the process and how to control it. Instead, they try forcing the round peg (R&D) into a square hole (accounting metrics). So it's no surprise that most metrics focus on dollars spent and target prices. And there has been virtually no change in these metrics since 1920. All this highlights the fact that managers need a new yardstick to measure R&D.

Another business practice tracked by the survey is postlaunch reviews. Respondents are almost evenly divided in their use of such reviews. A little more than half (53%) say they use postlaunch reviews some of the time, while a little under half (47%) say they don't use them at all. It seems odd and counterproductive that so many companies take the trouble to measure their processes, then never correlate those measurements with actual performance. As Cooper has shown, if management takes the time at the end of a project to discuss milestones missed and met with the entire team, it can calibrate what teams promised against what they actually delivered. This kind of insight leads to better decision making.

Among companies doing postlaunch reviews, the overwhelming majority (80%) conduct them periodically after launch, then hold fewer reviews as time passes. For those that do postlaunch reviews, the average is 2.45 reviews per project.

Interestingly, small companies, defined by annual revenues or number of full-time employees, keep an eye on launched products for a longer time than do larger firms. For example, of companies with less than 1,000 employees, 75% conduct postlaunch reviews at six months, and 44% do it at one year. For comparison, among companies with more than 1,000 employees, only 59% do six-month reviews, and 35% hold them one year after launch. And although few companies review products at end-of-life or obsolescence, a greater percentage of under-1,000-employee companies (13%) conduct them than those with more than 1,000 employees (5%). Perhaps this statistic underscores the difficulty in getting a large staff to conduct more frequent product reviews, but it may also suggest that every new product is financially vital to smaller firms.

Similarly, more than three-quarters (76%) of companies with annual revenues less than $250M do postlaunch reviews on a formal, targeted basis at six months, compared to 60% of companies with revenues above $250M. At one year, those figures are 48% for small companies, and 36% for larger companies. It seems smaller companies, with more of their resources potentially at risk with each new product, must more closely monitor their development processes.

Postlaunch reviews are an opportunity for companies to enhance their organizational knowledge of product development. They give development teams the chance to tie launched products back to initial financial and strategic business goals. They also let everyone learn which projects met their goals, which did not, and why. Monitoring products after launch creates a more effective and efficient feedback loop within a company, improving the likelihood of success for future products. We expect to see greater numbers of companies holding postlaunch reviews as management finally understands the importance of product development.

High and low tech
The survey also detected differences between high and low-tech industries in how they measure product development. (The high-tech group consists of respondents from companies in aerospace, communications, computers, software, defense, medical, semiconductors, telecommunications, and research and national laboratories. The low-tech group consists of respondents from all other companies.) High-tech companies showed greater change in the metrics they use. More than two-thirds (68%) of high-tech and 55% of low-tech firms have changed standard measures. All high-tech and 91% of low-tech firms will change their standard metrics in the next five years. High-tech companies are probably more flexible in their measurement and reporting systems because a greater percentage of their revenues and profits stems from new products, which also have shorter life cycles.

There were also differences between high and low-tech companies in the time intervals at which they measure projects. About a third (35%) of low-tech and a fifth (22%) of high-tech firms tracked development projects on a calendar basis. But more high-tech firms than low-tech ones (32% compared to 21%) use Stage/Gate reporting. About equal percentages of both groups review projects on both a calendar and Stage/Gate basis. As one might expect, high-tech companies are taking the lead in establishing formal Stage/Gate measurement processes.

Differences in responses also appear when companies are divided on the basis of revenues. Two-fifths (40%) of companies with $250M or more in annual revenues track development projects on a calendar basis, while only about a quarter (23%) of companies with annual revenues under $250M do the same. More than half of the over-$250M companies track both calendar and Stage/Gate milestones, while only 38% of those under $250M do both. Larger companies, therefore, seem more likely to conduct dual-track reporting. This could be because they have more ongoing projects and believe they need more elaborate reporting processes. Or it could be bureaucratic inertia in large companies, with no one willing to end a process that has become redundant.

In our opinion, companies using both calendar and Stage/Gate reporting are probably overmeasuring. Such companies probably never really transitioned to Stage/Gate methodology and have let traditional practices remain entrenched.

R&D projects have development schedules that typically don't show usefully measurable changes on a weekly or even a monthly basis. Development cycle times range anywhere from six months (software and computers) to five years or more (aircraft). Tying R&D projects to an optimized measurement process such as Stage/Gate ensures consistency and standardization because it was designed for product development. However, managers still feel they must harmonize processes within a company, forcing R&D into using accounting's periodic reviews.

Not surprisingly, companies with fewer than 1,000 workers showed greater changes in their cross-project measures over time. Companies in this group were more likely to have changed their standard measures in the past five years, and in the past year. In addition, among firms with no standard measures across projects, companies with under 1,000 employees are more likely to set standards within the next five years than those with more than 1,000. Smaller companies, which include newer firms, also show greater variability in standardizing project metrics.

Overall findings
The survey shows that product development metrics are usually calculated at early project stages, such as at the Definition Approved or Development Approved milestones. Companies take careful measurements during these planning phases, but then the metrics go unexamined until after product launch. There are natural exceptions to this. Metrics such as Schedule Slip Rate or Product Specification Changes are tracked more often, as they are management's way of measuring ongoing processes. But this suggests that only requirements and time-based variables are consistently tracked during projects.
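The survey does not spell out a formula for Schedule Slip Rate. A common definition, assumed here for illustration, is the fractional overrun of a project's actual duration against its planned duration. A minimal sketch:

```python
def schedule_slip_rate(planned_days: float, actual_days: float) -> float:
    """Fraction by which a project overran (positive) or beat (negative)
    its planned schedule. Assumes the common (actual - planned)/planned
    definition; the survey itself defines no formula."""
    if planned_days <= 0:
        raise ValueError("planned duration must be positive")
    return (actual_days - planned_days) / planned_days

# A project planned for 200 working days that actually takes 260
# has slipped by 30%.
print(f"{schedule_slip_rate(200, 260):.0%}")
```

Tracked at each review rather than only at planning time, a running slip rate is exactly the kind of ongoing process measure the survey describes.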

But project metrics are becoming standardized within and across companies, according to our research. There are strong trends toward automating the management of product development with software packages, offshoots of PDM and ERP. Yet both this survey and industry experience in general indicate that there are few, if any, true multiproject management systems available. Such systems would track projects and customer orders, then assign resources, so companies could better use their capacity. The next step toward such software is establishing centralized, standardized multiproject metrics.

Metrics are typically estimated in the early planning stages of product development, but tracking seems to break down in the latter stages. For example, more than two-thirds of those who use Target Cost and Target Price calculated them during the first two development phases, Definition Approved and Development Approved. But less than one-half of them continue to track those metrics through subsequent phases. This means that opportunities to be proactive or predictive were lost.

Measurements that can give managers better insight into product strategy or profitability, such as Time-to-Profit or Break-even Time, rank low in both use and frequency of use. Project metrics are, on average, still divorced from the larger strategic and profitability concerns of business. So management methods of measuring business performance are still largely reactive, rather than proactive or predictive. Once a business decides to proceed with a development project, measurements become more tactical and infrequent.
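Break-even Time is commonly understood as the time from project start until cumulative profit from the product covers the cumulative development investment. The survey gives no formula, so the following is only an illustrative sketch under that assumption, using hypothetical monthly figures:

```python
def break_even_time(monthly_outlays, monthly_profits):
    """Return the month (1-indexed) in which cumulative profit first
    covers cumulative investment, or None if it never does within the
    data given. Both inputs are equal-length monthly series."""
    cumulative = 0.0
    months = zip(monthly_outlays, monthly_profits)
    for month, (outlay, profit) in enumerate(months, start=1):
        cumulative += profit - outlay
        if cumulative >= 0:
            return month
    return None

# Hypothetical project: $250K spent over three months of development,
# profits ramping up after launch in month 4.
outlays = [100, 100, 50, 0, 0, 0, 0, 0]   # $K per month
profits = [0, 0, 0, 40, 60, 80, 80, 80]   # $K per month
print(break_even_time(outlays, profits))  # breaks even in month 7
```

Because it can be projected forward from planned outlays and forecast profits, a metric like this is predictive rather than reactive, which is precisely why its low usage in the survey is notable.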

We are still a long way from having the necessary levels of control over R&D projects. But it appears industry is ready for multiproject management and control systems that will push their practices to the next level of excellence.


This table shows what percentage of responding companies use a particular product-development metric. Margin of error is roughly ±11%.

Survey stats
The 1998 survey was conducted by Goldense Group Inc., Cambridge, Mass. The 13-page survey questionnaire was administered through the mail. Over 6,000 questionnaires were distributed and 190 usable forms were returned, a response rate of 3.2%. The data was compiled, analyzed and presented at The Management Roundtable's 3rd Annual Conference On Metrics For Managing Products, Projects, & Resources in Chicago. Subsequently, GGI published three reports of increasing detail and length entitled Survey Highlights, Survey Summary, and Survey Results.

A different way to look at project metrics
This graph divides metrics up into four categories and then charts them according to use. It shows that at least one process measurement is used by more than 80% of the survey's respondents. (Process metrics measure the way people are doing the work; Product metrics measure product specifications.) Of that 80%, two-thirds track a total of three or more process measures, along with one metric from resource capacity, resource cost and sales/profit/contribution.

Traditional basic metrics, those in the middle circle, are tracked by just more than half of respondents. These metrics include marketing/promotion costs and ROI or payback, the financial measures vital to corporate success, and product development accountability. A significant number of other sales/profit/contribution metrics and process metrics are in use, but they're not as widespread.

The most commonly employed measures, such as target product cost, target product price, time to market, and capital, are reactive metrics. They help management look at what has already happened. More sophisticated planning and predictive metrics, such as those measuring planned capacity utilization and schedule slip rate, can help predict outcomes, thus giving management a chance to rectify the situation. These metrics are better at matching product development to the business goals it is supposed to support. Predictive metrics also help management identify past mistakes and avoid them in future projects.