Jeffrey S. Poulin, Ph.D.
Loral Federal Systems
Owego, New York
This tutorial provides an introduction to measuring software reuse. It motivates the importance of metrics both within a reuse program and in evaluating experience reports published by others. The tutorial presents the issues surrounding reuse metrics: the types of metrics, economic models, return-on-investment analysis, and cost-benefit analysis. It contains extensive quantitative data on the relative benefits and costs of reuse. Finally, the tutorial explains the most important issue in reuse measurement: defining what to count as reuse, and why. Without a uniform understanding of what to count, all reports of reuse levels and benefits automatically become suspect. By addressing this issue, this tutorial puts reuse measurement into a reliable and consistent context.
We must first make it clear what we hope to accomplish through reuse measurement. When introducing a reuse program to an organization, a reuse advocate faces an immediate problem the moment the advocate walks into a development manager's office and starts talking about expected benefits. Over the past year or so the manager has heard similar promises from a plethora of other groups: the Malcolm Baldrige team, the ISO 9000 team, the Total Quality Management (TQM) team, the Software Process team, and so on. Every one of those groups promises the same kinds of benefits, yet the manager never sees any significant change. Part of the problem comes from the lack of measurements [Pou93]. Part of the problem comes from the inability to trace a specific action (e.g., reusing software) to a specific benefit (e.g., the bottom line). The manager will say:
"I have heard this all before. If I actually experienced all the cost savings and productivity improvements you guys promised, I wouldn't have a budget left nor people to work for me. Nothing ever comes of this stuff."
The major goals for reuse metrics emphasize two qualities above all: truthfulness and simplicity.
Note that almost every experience report states a level of reuse in terms of "reuse percent." The equation for calculating this percent uses only simple division:

Reuse Percent = (Reused Software / Total Software) * 100
This metric has a number of advantages. First, people find it easy to understand. As you read the experience reports and the discussions about the levels of reuse you probably felt pretty comfortable hearing about reuse levels in terms of "percent." Second, as the equation above shows, anyone can easily calculate a reuse percent value given some pretty basic information and a calculator.
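The calculation really is as simple as the equation suggests. As a sketch, it can be expressed in a few lines of Python; the line-of-code figures in the example are hypothetical, chosen only to illustrate the arithmetic:

```python
def reuse_percent(reused_size: int, total_size: int) -> float:
    """Reuse Percent = (Reused Software / Total Software) * 100.

    Lines of code (LOC) is one common size measure; any consistent
    unit works, as long as 'reused' and 'total' use the same one.
    """
    if total_size <= 0:
        raise ValueError("total size must be positive")
    return reused_size / total_size * 100

# Hypothetical project: 30,000 reused LOC out of 100,000 total LOC.
print(reuse_percent(30_000, 100_000))  # -> 30.0
```

Of course, the hard part is not the division; it is deciding what belongs in the numerator and denominator, which is exactly the definitional problem this tutorial addresses.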
We will see that reporting reuse levels in terms of percents has become the de facto standard in industry. You see it and hear it all the time when people discuss software reuse. However, without any explanation of the data that makes up the metric, the metric has no meaning. Furthermore, without a consistent definition of the data and a repeatable method of obtaining it, a formal metric cannot exist. The experience reports cite pretty specific results based on software reuse. But:
No one defines what they count!
How do these companies measure their level of reuse? When you see these numbers in experience reports, how do you know where they came from? You will feel especially uncomfortable once you have seen some of the ways organizations use metrics to misrepresent what they really do. Furthermore, without a standard definition of what counts and what does not, an analyst cannot compare one report with another. Without this definition, an analyst simply cannot believe many of the reports now in print, and the reader of this tutorial should suspect them as well.
Having established a metrics-based definition of software reuse, the tutorial surveys the major reuse metric and economic models. The tutorial examines major works and how they quantify reuse by such means as: reuse level, reuse ratios, cost-benefit analysis, and reuse return-on-investment. Each method has strengths that apply very well in certain situations; the tutorial includes a summary of the methods and a recommendation as to when to apply each one.
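To illustrate the kind of calculation these economic models enable, the sketch below computes a simple development cost avoidance from reused code. The coefficient values are illustrative assumptions only, not data from this tutorial: a relative cost of reuse (RCR) of 0.2 reflects the commonly cited estimate that reusing a line of code costs roughly 20% of writing it new, and the per-LOC cost is normalized to 1.0.

```python
# Illustrative assumptions (placeholders, not values from this tutorial):
NEW_CODE_COST_PER_LOC = 1.0   # normalized cost of writing one new LOC
RCR = 0.2                     # assumed relative cost of reusing one LOC

def development_cost_avoidance(reused_loc: int) -> float:
    """Cost avoided by reusing code instead of writing it new.

    Each reused LOC avoids the full new-development cost but still
    incurs the (smaller) cost of finding, evaluating, and integrating
    the reused code, modeled here by the RCR coefficient.
    """
    return reused_loc * (1 - RCR) * NEW_CODE_COST_PER_LOC

# Hypothetical project: 30,000 reused LOC.
print(development_cost_avoidance(30_000))  # -> 24000.0
```

A fuller return-on-investment analysis would also charge the extra cost of building components *for* reuse and amortize it over the number of times each component gets reused; the tutorial surveys models that capture those terms.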
This tutorial emphasizes a fundamental truth of this field:
Business decisions drive reuse!
Metrics make those business decisions possible by quantifying and justifying the investments necessary to make reuse happen. Metrics put into numbers the factors that make reuse such a powerful method to improve an organization's software development competitive advantage. Once an organization collects the data and shows the return on investment, the business decision will support the most cost effective way of building software!
The tutorial aims to give the reader the background necessary to implement and understand reuse metrics. Part of that understanding includes an introduction to major metric models. This tutorial adds value to the original presentations of each model by explaining each model within a common framework and by helping to explain when to apply a particular model.
From here, the tutorial provides a discussion of different approaches to software reusability metrics. Although these approaches have shown considerable success and innovation in identifying attributes of reusability, the tutorial explains why a general reusability metric cannot exist. Nonetheless, the reader can use the attributes of reusability in many useful ways.
Finally, the tutorial looks at different metrics to consider when working with reuse libraries. Throughout the early history of software reuse, reuse library issues drove the research and technology in the field. This tutorial explains the metrics that an organization will find useful when evaluating the success and use of their reuse library.
Although metrics aim to objectively quantify the activities of an organization, their application often leads to difficulties that span far beyond the quantifiable. This tutorial addresses the issues that any organization must face when putting together a metrics suite for a reuse program. It defines reuse from a metrics point of view, explains how to use that definition in metric and economic models, and uses real-world situations to show how to put metrics to work in any reuse program.
Jeffrey S. Poulin, Ph.D., (poulinj@lfs.loral.com) MD 0210, Loral Federal Systems, Owego, New York, 13827. Dr. Poulin works with the Loral Advanced Technology Group as the lead software architect on a major Management Information System for the U.S. Army. His responsibilities in Owego have included technical lead for the LFS reuse program, including WWW-based information retrieval and reuse measurement. A former member of IBM's Reuse Technology Support Center (RTSC), Dr. Poulin helped lead the development and acceptance of a major software reuse metrics and return-on-investment (ROI) model. Dr. Poulin has over 30 publications on software metrics and reuse.
In addition to serving on numerous conference committees and panels, Dr. Poulin chaired the IEEE Computer Society 6th Annual Workshop on Software Reuse (WISR'93), served as program chair of the 1995 DoD Domain Scoping Workshop, and served as tutorial chair of the 1996 IEEE International Conference on Software Reuse (ICSR-4). A Hertz Foundation Fellow, Dr. Poulin earned his Bachelor's degree at the United States Military Academy at West Point and his Master's and Ph.D. degrees at Rensselaer Polytechnic Institute in Troy, New York.