Measuring Reuse

Jeffrey S. Poulin
IBM Corporation
P.O. Box 950, A80D/996, Poughkeepsie, NY 12602
Tel: (914) 432-1516, Fax: (914) 432-3601
Email: poulinj@tdcsys2.ibm.com

26 October 1992
Abstract
This position paper describes a framework
for measuring software reuse. The
method relies on readily available software data elements and defines three
metrics derived from the observable data. These metrics may be used to assess
the success of an organizational reuse program by quantifying the reuse
practiced in the organization and estimating the resulting financial benefit.
The key to any reuse metric is the
accurate reflection of effort saved.
The author developed and implemented a definition of reuse that
distinguishes the savings and benefits of reuse from those already gained
through accepted software engineering techniques.
Keywords: Reuse Metrics, Measuring Reuse, Software Metrics, Return on
Investment Analysis.
Workshop Goals:
Learn and exchange information on reuse methods and metrics.
Working Groups:
Design guidelines for reuse, Useful and collectible metrics, Reuse and formal
methods.
The author has been active in the area of
software reuse since 1985 and was key to the development and acceptance of the
IBM software reuse metrics. He has
conducted extensive research in software measurement techniques and implemented
a program measurement tool for the workstation platform. He is the lead technical member of the IBM
Reuse Technology Support Center (RTSC) with responsibility for IBM reuse
standards and metrics.
Software metrics are an important
ingredient in effective software management.
Metrics are a means to measure “the software product and the process by
which it is developed” [Mills88]. The metrics
can then be used to estimate costs, cost savings, or the value of a particular
software practice.
As with general software metrics, reuse
metrics must quantify the effect of the software process and the benefit it
provides. However, to assist in the
technology insertion process, reuse metrics must also encourage the practice of
reuse. Since reuse is bi-directional
(reusing software and contributing reusable software), reuse metrics must
recognize both activities. Finally, the metrics must establish an effective
standard that may be implemented by the development organizations in the
enterprise.
Central to improving the practice of reuse
is the understanding that good design and management are common within
development organizations but less common between organizations. Communication, which is necessary for the
simple exchange of information and critical to sharing software, becomes more
difficult as the number of people involved grows and natural organizational
boundaries emerge. Therefore,
measurements must encourage reuse across these organizational boundaries.
A software component is reused when it is
used by an organization that did not develop or maintain the component. Software development organizations vary, but
for measuring reuse a typical organization is either a programming team,
department, or functional group of about eight people. Also, although organizational size is a good
indicator of how well communication between organizations takes place,
functional boundaries are equally important.
For example, a small programming team may qualify as an organization if
it works independently.
Our experience in IBM is that establishing a realistic return on investment
for a reuse program is essential to inserting reuse into a corporate software
development process. Clearly stating the potential benefits of reuse in
financial terms has proven to be a powerful motivator. However, the business
case given by the return on investment model must be achievable, not merely a
demonstration of the substantial potential benefits of reuse. This position paper describes the reuse
metrics and return on investment process in place at IBM.
Reuse metrics are composite representations of the following observable data
elements:

- Shipped Source Instructions (SSI): the source code an organization itself
  develops and delivers in a product or release.
- Reused Source Instructions (RSI): the source code a product incorporates,
  unchanged, from components the organization did not develop or maintain.
- Source Instructions Reused By Others (SIRBO): the organization's source code
  that other organizations reuse.

Note that alternatives to “lines of code” as the unit of measurement are
equally effective (for example, function points [Banker91a][Dreger89]).
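As a concrete illustration, the data elements for one release might be
captured in a simple record. This is only a sketch using the abbreviations
above; the record itself is not part of the IBM measurement standard.

```python
from dataclasses import dataclass

@dataclass
class ReuseData:
    """Observable data elements for one product or release (illustrative sketch)."""
    ssi: int    # Shipped Source Instructions: code the organization developed and delivered itself
    rsi: int    # Reused Source Instructions: unchanged code taken from components built elsewhere
    sirbo: int  # Source Instructions Reused By Others: this organization's code reused elsewhere
```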
It is absolutely essential when acquiring
the observable data elements, especially RSI, to recognize when reuse actually
saves effort. This requires the
researcher to distinguish reuse from normal software engineering practices
(e.g., structured programming) and to eliminate implementation-dependent
options affecting the observable data elements (e.g., static versus dynamic
subprogram expansion). [Poulin92]
provides a detailed approach to these considerations.
The observable data elements combine to form three derived reuse metrics
[Poulin92]: Reuse Percentage, Reuse Cost Avoidance (RCA), and Reuse Value
Added (RVA).
The purpose of the Reuse Percentage
measurement is to indicate the portion of a product, product release, or
organizational effort that can be attributed to reuse. Reuse Percentage is an important metric
because it is simple to calculate and it is easy to understand. Unfortunately, it is also easy to
misrepresent without a supporting framework. Many companies report their reuse
experiences in terms of “reuse percent,” but few describe how they calculate
the values. They commonly include informal reuse in the value, making it
difficult to assess actual savings or productivity gains. Since RSI is clearly defined, the reuse
percentage metric is a reasonable reflection of effort saved.
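As a minimal sketch, assuming the reuse percentage is RSI divided by the
total product size (SSI + RSI); [Poulin92] gives the normative definitions:

```python
def reuse_percent(ssi: int, rsi: int) -> float:
    """Portion of a product attributable to reuse, in percent.

    Assumes Reuse% = RSI / (SSI + RSI) * 100; what may be counted as
    RSI is governed by the definition of reuse given earlier.
    """
    total = ssi + rsi
    if total == 0:
        raise ValueError("product contains no source instructions")
    return 100.0 * rsi / total
```

For example, a 100 KLOC release containing 30 KLOC of reused code gives
reuse_percent(70_000, 30_000) == 30.0.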
The purpose of the Reuse Cost Avoidance
(RCA) measurement is to quantify the financial benefit of reuse. In addition to historical development data,
RCA is based on a “relative cost of reuse,” and incorporates the effort to search
for, retrieve, and assess the suitability of reusable components for
integration into a product. RCA is a
particularly important metric because it shows the tremendous return on
investment potential of reuse. Because
RCA is a key metric in performing return on investment (ROI) analysis of reuse
programs, RCA also helps with the insertion of reuse technology.
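The sketch below shows the shape of the calculation. The formula, and the
default relative cost of reuse of 0.2 (reusing a line of code costs roughly
one fifth of developing it new), are figures commonly quoted in the reuse
literature rather than values taken from this paper; real parameters come
from an organization's historical development data.

```python
def reuse_cost_avoidance(rsi: int,
                         new_line_cost: float,      # historical cost to develop one new line
                         relative_cost_of_reuse: float = 0.2,
                         error_rate: float = 0.0,   # historical errors per line of new code
                         error_cost: float = 0.0) -> float:
    """Estimate the financial benefit of reuse for one release.

    Development cost avoidance: each reused line costs only a fraction
    (the relative cost of reuse) of a newly developed line.  Service
    cost avoidance: reused code also avoids fixing the errors that
    equivalent new code would have contained.
    """
    development_ca = rsi * (1.0 - relative_cost_of_reuse) * new_line_cost
    service_ca = rsi * error_rate * error_cost
    return development_ca + service_ca
```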
The previous two metrics measure how much organizations reuse software. It is
equally important to motivate organizations to contribute reusable software.
The purpose of the Reuse Value Added (RVA) measurement is to provide a metric
that reflects positively on organizations that both reuse software and help
other organizations by developing reusable code. RVA is a ratio, or productivity index;
organizations with no involvement in reuse have an RVA=1. An RVA=2 indicates the organization is twice
as effective as it would be without reuse. In this case the organization was
able to double its productivity either directly (by reusing software) or
indirectly (by maintaining software that other organizations are using).
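A sketch consistent with the boundary cases above, assuming
RVA = (SSI + RSI + SIRBO) / SSI, so that an organization with no involvement
in reuse (RSI = SIRBO = 0) scores exactly 1:

```python
def reuse_value_added(ssi: int, rsi: int, sirbo: int) -> float:
    """Productivity index crediting both reusing and contributing reusable code.

    Assumes RVA = (SSI + RSI + SIRBO) / SSI, which matches the cases in
    the text: RVA = 1 with no reuse involvement, and RVA = 2 when an
    organization doubles its effective output directly (RSI = SSI) or
    indirectly (SIRBO = SSI).
    """
    if ssi <= 0:
        raise ValueError("SSI must be positive")
    return (ssi + rsi + sirbo) / ssi
```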
There are other reports on reuse
measurements available. Although reuse percent is the most common metric, the
majority of published methods focus on
financial analysis. This is because the
cost benefit of reuse is a highly convincing measure for project managers and
planners. The three derived metrics correspond to measures described in the
references cited in the previous sections. The definition and the collection of the
observable data is ingrained in the IBM programming process; the application of
the observable data items is explained in [Poulin92].
Of the several measurement methods
currently in use, the method in this paper is unique in the attention given to
the definition of RSI and in attempting to present reuse as “real effort
saved.” Although [Banker91a] differentiates between reuse within an
organization and reuse from sources external to the organization, no other
paper addresses the distinction between software engineering techniques and
reuse, nor does any provide a precise definition of RSI.
References

[Banker91a] Banker, Rajiv D. and Robert J. Kauffman, “Reuse and Productivity
in an Integrated Computer Aided Software Engineering (ICASE) Environment: An
Empirical Study at the First Boston Corporation,” First Boston Corporation,
10 July 1991.

[Banker91b] Banker, Rajiv D., Robert J. Kauffman, Charles Wright, and Dani
Zweig, “Automating Output Size and Reusability Metrics in an Object-Based
Computer Aided Software Engineering (CASE) Environment,” First Boston
Corporation, 25 August 1991.

[Dreger89] Dreger, J.B., Function Point Analysis, Prentice-Hall, 1989.

[Gaffney89] Gaffney, J.E., Jr. and T.A. Durek, “Software Reuse - Key to
Enhanced Productivity: Some Quantitative Models,” Information and Software
Technology, Vol. 31, No. 5, June 1989.

[Hayes89] Hayes, W.E., “Measuring Software Reuse,” International Business
Machines, IBM Document Number WEH-89001-2, 2 October 1989.

[Jones91] Jones, Capers, Applied Software Measurement: Assuring Productivity
and Quality, McGraw-Hill, 1991.

[Margano91] Margano, Johan and Lynn Lindsey, “Software Reuse in the Air
Traffic Control Advanced Automation System,” Joint Symposia and Workshops:
Improving the Software Process and Competitive Position, Alexandria, VA,
29 April - 3 May 1991.

[Mills88] Mills, E.E., “Software Metrics,” SEI Technical Report SEI-CM-12-1.1,
1988.

[NATO91] NATO, “Standard for Management of a Reusable Software Component
Library,” NATO Communications and Information Systems Agency, 18 August 1991.

[Poulin92] Poulin, Jeffrey S. and W.E. Hayes, “IBM Reuse Methodology:
Measurement Standards,” International Business Machines Internal Document,
16 July 1992.
Biography

Jeffrey S. Poulin is
an advisory programmer at IBM’s Reuse Technology Support Center, Poughkeepsie,
New York. His primary responsibilities
include corporate standards for reusable component classification,
certification, and measurements. His
interests include object-oriented database systems, semantic data modelling,
CASE, and formal methods in software reuse.
He is currently conducting research into alternative methods for
reusable software distribution and retrieval.
He is a member of the IBM Corporate Reuse Council and the Association for
Computing Machinery, and is Vice-Chairman of the Mid-Hudson Valley Chapter of
the IEEE Computer Society. He received his
Bachelor’s degree from the United States Military Academy at West Point, New
York, and his Master’s and Doctorate degrees from Rensselaer Polytechnic
Institute in Troy, New York.