A REUSE METRICS AND MEASUREMENTS PROCESS

11 September 1992

Jeffrey S. Poulin, Ph.D.
International Business Machines Corporation
Reuse Technology Support Center
PO Box 950
Department A80D, Building 996
Poughkeepsie, New York 12602
PoulinJ at TDCSYS2 (email: poulinj@tdcsys2.vnet.ibm.com)

CONTENTS
________

ABSTRACT
OVERVIEW
USING VERSUS REUSING CODE
  Using old code
  Planned reuse
  How to measure code recovery and planned reuse
MEASURING REUSE
  When to Measure Reuse
  What to Measure
  Establishing Criteria for Reuse Metrics
  Units of Measurement
OBSERVABLE DATA
  Shipped Source Instructions (SSI)
  Changed Source Instructions (CSI)
  Reused Source Instructions (RSI)
  Source Instructions Reused by Others (SIRBO)
  Software Development Cost (Cost per LOC)
  Software Development Error Rate (TVUA rate)
  Software Error Repair Cost (Cost per TVUA)
  Additional Data
DERIVED METRICS
  Reuse Percentage
    Reuse percentage of a product
    Reuse percentage of a product release
    Reuse percentage for an organization
  Reuse Cost Avoidance (RCA)
  Reuse Value Added (RVA)
RELATED WORK
  Percent Metrics
  Cost Benefit Metrics
  Ratio Metrics
  Statistical Methods
FUTURE WORK
CONCLUSION
CITED REFERENCES
BIOGRAPHY

ABSTRACT
________

The key to any reuse metric is the accurate reflection of effort saved. This paper defines reuse metrics that distinguish the savings and benefits of reuse from those already gained through accepted software engineering techniques. These metrics may be used to assess the success of an organizational reuse program by quantifying the reuse practiced in the organization and estimating the resulting financial benefit.

Establishing a realistic return on investment in a reuse program is essential to inserting reuse into a corporate software development process. Traditional methods to improve productivity and reduce costs have been tried and the results realized; management must now seek new techniques and pursue them with limited resources. Our experience is that a powerful motivator is to clearly state the potential benefits of reuse in financial terms.
The following paper describes the reuse metrics and the measurement process in place at IBM. Three metrics that comply with the motivation, goals, and other criteria needed to be useful are derived from readily available and observable software data elements.

KEYWORDS: Reuse Metrics, Measuring Reuse, Software Metrics, Return on Investment Analysis.

OVERVIEW
________

Software metrics are an important ingredient in effective software management. They are a means to measure "the software product and the process by which it is developed." [Mil88] The metrics may then be used to estimate costs, cost savings, or the value of a particular software practice.

The lack of an industry standard for reuse metrics is one of the major inhibitors to a coordinated reuse program [Bow92]. Without a means to quantify the practice, development organizations are unable to judge their return on investment and are therefore reluctant to engage in an active reuse program. However, if metrics are used to verify and demonstrate the substantial benefits of reuse, organizations may be less reticent to pursue that potential. With published productivity gains commonly claimed between 20-40% [Sta84] and occasionally up to an order of magnitude [Ban91a], organizations should be eager to take advantage of the increased output and corresponding lower costs reuse offers.

The traditional role of metrics is to assist management by quantifying the software process [NATO91b]. Specifically, reuse metrics help management in three ways: they measure qualities of reusable components, provide an estimation and justification base in support of a reuse program, and feed back information needed by a reuse library organization.

With an emerging technology, however, metrics must extend beyond their traditional role. Reuse metrics must also encourage the practice of reuse. Most organizations do not practice formal reuse or are reluctant to invest in a formal reuse program. Reuse metrics must assist in the technology insertion process by providing favorable process improvement statistics and by placing emphasis on activity conducive to reuse. For example, since the technology consists of both reusing software and building software for others to reuse, metrics must recognize and reward both activities.

Finally, reuse metrics must establish an effective standard that may be implemented by the development organizations in the enterprise. The data must be easily obtained, meaningful, and possible to collect in a uniform way. Organizations may then set reuse goals that focus on increasing the amount of reuse reflected by the metrics. In summary, reuse metrics must:

1. quantify reuse,
2. encourage reuse, and
3. standardize reuse counting methods.

Following this overview of the motivation and goals of reuse metrics is a discussion on measuring the different classes of reuse. A section on collecting data about the software process and the criteria that sound and useful metrics must possess lays the foundation for the details of the measurements. Finally, the paper presents some related measurement models and a brief description of future work related to these metrics.

USING VERSUS REUSING CODE
_________________________

The most fundamental division in how organizations practice reuse is between simply recovering old code for later use (sometimes referred to as "unplanned reuse") and engaging in a formal, planned reuse program. When the reuse decision is made is what distinguishes these two classes of reuse [STARS89]. Planned reuse starts early in the software lifecycle; evidence of planning for reuse includes a thorough requirements study and a domain analysis of the problem area. The goal of this additional planning and domain analysis is to identify the factors that normally change in later projects, such as:

1. Hardware or System Software,
2. User, Mission, or Installation, and
3. Function or Performance.

Early design and analysis will result in components that can accommodate these changes without modification.

Failing to plan for reuse, however, is the norm in traditional software development. Every new software product is unique except for informally considering existing software for use in the new application. Although this informal use of previously developed software in new applications is widely practiced, the systematic reuse of existing code is not part of traditional software development methods. It is important to distinguish between these two classes, especially when determining the financial return of a reuse program.

USING OLD CODE
______________

Recovering old code, or the copying and modifying of existing code to meet new requirements, is not true reuse. Since code recovery results in additional products to maintain, the benefits are nominal compared to planned reuse. The code recovery processes that accommodate the three change factors listed above are:

Rehosting is modifying existing software to fit new hardware or system software. Rehosting focuses on revising internal interfaces to fit the new environment, effecting minimal change in function.

Retargeting is modifying existing software to fit a specific use or installation. Retargeting focuses on modifying external interfaces and physical configurations of equipment. The code function does not change, although implementation details in the code do.

Salvaging is extracting potentially useful software from an existing system and modifying it to fit a new use. Salvaging is the most basic form of recovery; it relies on a bottom-up strategy of integrating elements from many sources to build a new product.

In each case, the original software is copied and modified, and is therefore effectively new software. This new software is not measured as reused code. However, there is usually a benefit to recovering old code, especially if the cost of modification is low compared to the cost of custom development. This benefit does not extend to service costs (the new software must be maintained), and the cost of modification is incurred for every new application recovering the code.

PLANNED REUSE
_____________

By planning for reuse, it is possible to enable the software component to be more easily reused independent of the three change factors listed above. Planned reuse increases the value of the software by expanding its applicability through careful design and by building into the software tailorable attributes based on a range of potential uses. The planned reuse processes that accommodate the three change factors are:

Porting is moving a software item from one hardware or software system to another. Porting is a well known technique that is usually planned in advance. Ease of porting is the result of design considerations that isolate machine-dependent functions and use standard, virtual interfaces.

Tailoring allows a single software system to adapt to the needs of specific installations, users, or missions. Tailoring is a pre-planned product modification using a controlled customization interface that does not entail direct source code changes.

Assembling is the construction of a software system with pre-built parts to allow for changes in function or performance. Assembly is the most common form of formal reuse; it is a strategy in which software components are designed, coded, tested, and documented for integration.

In planned reuse, there is an increased level of cost and quality that is paid for once, in the initial development of the component. However, this cost is quickly recovered by subsequent projects that are able to reuse the generalized software and by reduced support costs resulting from having only one base product to maintain.
Table 1 is a summary of how system changes affect an enterprise depending on the class of reuse that is in practice.

+---------------------------------------------------------------------------+
| Table 1. How system changes affect the enterprise.                        |
+------------------------+-------------------------+------------------------+
| REQUIRED CHANGE        | CODE RECOVERY ONLY      | WITH PLANNED REUSE     |
+------------------------+-------------------------+------------------------+
| Hardware or System     | Rehosting               | Porting                |
| Software               |                         |                        |
+------------------------+-------------------------+------------------------+
| User, Mission, or      | Retargeting             | Tailoring              |
| Installation           |                         |                        |
+------------------------+-------------------------+------------------------+
| Function or            | Salvaging               | Assembling             |
| Performance            |                         |                        |
+------------------------+-------------------------+------------------------+

HOW TO MEASURE CODE RECOVERY AND PLANNED REUSE
______________________________________________

Over several development cycles, planned reuse provides the greatest cost and productivity benefits because there is only one resulting base product to maintain. Using these criteria, it is clear that metrics should focus on planned reuse. It is also clear that code recovery is not reuse, and therefore the three techniques for code recovery should not factor into reuse metrics.(1)

---------------
(1) Some organizations choose to track the amount of recovered code in their products to emphasize the amount of "total leverage" gained by copying and modifying old software, but code recovery is not included in these reuse metrics.

The current state of reuse technology is in assembling reusable components into new applications; metrics must capture this activity. The next most advanced form of reuse is tailoring. Tailoring normally occurs in organizations with mature, formal reuse programs, and is the result of thorough domain analysis and careful program design. The tremendously successful reuse experiences on the IBM Advanced Automation System (AAS) for the Federal Aviation Administration [Mar91] are an example of tailoring; reuse metrics must also capture this activity.

The third form of planned reuse is porting. However, porting is an anomaly for reuse metrics because it is already a standard part of the business planning of products. The resource estimates for a product are normally for development of the product on one hardware platform or operating system. A relatively nominal amount of resources is then allocated for changes required to adapt to other environments. The anomaly occurs because it appears we are able to achieve reuse in-the-large but still have difficulty practicing reuse in-the-small.

Since porting normally involves adapting a minor portion of a large product, including ported code in reuse metrics would cause misleading results in the form of unrealistically high measures of reuse activity. For example, an organization making small changes to a large base might report levels of "reuse" close to 100%,(2) whereas an organization performing an equal amount of labor on an original project might do very well to demonstrate reuse levels of 5-10%. To prevent this distortion, IBM does not include code porting in these reuse metrics. Organizations tasked with porting software separately track the amount of porting for which they are responsible.

---------------
(2) The theoretical maximum is 85%. [Jon84]

Finally, these reuse measurements exclude not only code porting but also the use of operating system services and prerequisite products (e.g., SQL/DS, GDDM). This is because these services are part of the system environment and not the application product. Clearly, applications accessing database functions via Application Programming Interfaces (APIs) should not claim use of the database manager source code in the reuse measurements. Therefore, organizations track this form of planned reuse with a separate metric.

Table 2 is a summary of how to measure the processes that comprise code recovery and planned reuse.

+-------------------------------------------------------------------------------+
| Table 2. Reuse Techniques and the Metrics                                     |
+---------------------+------------+---------+----------------------+------------+
| CODE RECOVERY       | MEASURED?  |         | PLANNED REUSE        | MEASURED?  |
+---------------------+------------+---------+----------------------+------------+
| Rehosting           | No         |         | Porting              | Separately |
+---------------------+------------+---------+----------------------+------------+
| Retargeting         | No         |         | Tailoring            | Yes        |
+---------------------+------------+---------+----------------------+------------+
| Salvaging           | No         |         | Assembling           | Yes        |
+---------------------+------------+---------+----------------------+------------+
|                     |            |         | System Services and  | Separately |
|                     |            |         | Prerequisite         |            |
|                     |            |         | Products             |            |
+---------------------+------------+---------+----------------------+------------+

MEASURING REUSE
_______________

WHEN TO MEASURE REUSE
_____________________

Differentiating between code recovery, which results in new software to maintain, and planned reuse, in which products are assembled or tailored from building blocks of reusable software, is important. Next, we define reuse based on "who" uses the component.

Any enterprise expects intelligent program design, "good programming practice," and informal reuse within development groups and software products. This informal reuse is the ad hoc, spontaneous sharing of basic algorithms and software parts routinely practiced by software developers, along with the code recovery processes discussed in the previous sections. However, to realize savings and benefits that are not possible simply by good design and management, an enterprise must encourage the formal sharing of software.
Central to improving the practice of reuse is the understanding that good design and management is common within development organizations, but is less common between organizations. Communication, which is necessary for the simple exchange of information and critical to sharing software, becomes more difficult as the number of people involved grows and natural organizational boundaries emerge. Therefore, measurements must encourage reuse across these organizational boundaries.

Software development organizations vary, but for measuring reuse a typical organization is either a programming team, a department, or a functional group of comparable size to a first line department or larger (e.g., about eight people or more). Example organizations are driver teams, component teams, and product teams. Examples that are not development organizations are individual programmers and small interdependent development groups. Also, although organizational size is a good indicator of how well communication between organizations takes place, functional boundaries are equally important. For example, a small programming team may qualify as an organization if it works independently.

For consistency, the type and size of the reporting organization are considered part of the metrics. This provides an informal check on the flexibility allowed in selecting the most appropriate boundary for the organization. Selection of an inappropriately small boundary would distort the value of the metrics upward, and an inappropriately large boundary would result in low reuse values. Changing the organizational boundary between reports would eliminate any possibility for comparison and evaluation of the reuse program. Therefore, the organization is clearly indicated as part of the reuse measurement and is not changed between reporting periods.

Within development organizations, the key to giving credit for reuse is recognizing when reuse actually saves effort. For example, the use of a common math routine already provided by the language compiler does not save development effort because the programmer expects to use the service and would not have to write the routine. Furthermore, software designers normally save effort by implementing often-needed services with macros or procedures; repeated calls to a macro are an example of "good programming practice" and are not reuse.

WHAT TO MEASURE
_______________

PRODUCTS, PRODUCT RELEASES, ORGANIZATIONS
_________________________________________

The most commonly reported reuse measurement is the reuse percentage of a product or a new release of a product. The intent is to determine how much effort (normally expressed in "Lines Of Code," or LOC) was saved by reusing software from other products or product releases. Reuse metrics for a new product are derived from the amount of effort (in LOC) in all source files of the product. For a new release of a product, the metrics are derived from the portion of the source files added or changed since the prior release of the product. In both cases, the effort attributed to reuse comes from completely unmodified reusable components.

Reusable components are easy to identify in new products because the criterion is straightforward: if use of a component saved having to develop a similar component, it is recorded as reuse. However, although a component may be "used" by an organization numerous times, a component can be "reused" by an organization only once.

The above distinction is critical for accurate estimates of the benefits of reuse and return on investment analysis of projects. Since we expect organizations to use components previously developed for a product or previously developed by themselves, we do not credit them with savings resulting from this activity. Therefore, when measuring reuse in a release of a product, all reuse comes from unmodified components that are completely new to both the product and to that release of the product. Of course, all effort from prior releases of a product (the product base) is excluded from the analysis of reuse on a product release.

The second most desired reuse metric is the reuse percentage of an organization. The intent is to determine what portion of the software that an organization is responsible for delivering is actually maintained outside the organization. For example, a programming team might be responsible for a subsystem of a large banking application. If they use standard financial and transaction processing routines rather than write new routines they must maintain, they may greatly increase their effectiveness as a programming team.

ESTABLISHING CRITERIA FOR REUSE METRICS
_______________________________________

Any useful metric must be based on common sense, providing as much useful information at as little cost as possible. There are many specific criteria for metrics, such as [Rei90]:

1. The metrics must be compatible with the existing software development process.

2. The data needed to quantify the metrics must be easy to collect and normalize.

3. The metrics must be easy to understand, analyze, and interpret.

4. The cost of data collection, analysis, and reporting must be kept to a minimum.

5. Collecting the metric data must not adversely impact the process or products being measured.

6. The metrics must be objective and not subject to bias or distortion.

7. The metrics must be independent of implementation-specific details.

8. The metrics should help generate estimates of software cost, productivity, and quality.

9. The metrics must measure what you seek to measure. Reuse measurements must accurately provide an assessment of effort saved due to reuse, not, for example, how well a program adheres to structured programming techniques.
The observable data and metrics used by IBM adhere to these criteria; they are integrated into the development process and use data that have been collected by the corporation for many years.

UNITS OF MEASUREMENT
____________________

These metrics use traditional "lines of code" to quantify the effort in software development. Although lines of code have well known deficiencies as a unit of measure, they are also simple to understand, easy to collect and compare, and difficult to distort. Nonetheless, there are actions that may be taken to increase confidence when using lines of code as a measure. One action is to use a standard code counting tool. Another action, taken with the metrics in this paper, is to use metrics derived from ratios or percentages of effort, thereby eliminating the units of "LOC" from the metrics.

OBSERVABLE DATA
_______________

The reuse metrics presented in the next section are calculated from the following observable data elements, which have been in use within IBM for many years [CPM91]. Observable data may usually be directly measured from the product. For example, the different classes of source instructions are directly measurable. Observable data may also be historical data, collected over time for a variety of reasons related to managing the software development process. Costs for software development and statistical error rates are examples of historical data. Detailed descriptions of each of the required observable data elements follow the summary in Table 3.

Shipped Source Instructions (SSI). The total lines of code in the product source files.

New and Changed Source Instructions (CSI). The total lines of code new or changed in a new release of a product.

Reused Source Instructions (RSI). The total lines not written but included in the source files. RSI includes only completely unmodified reused software components.

Source Instructions Reused By Others (SIRBO). The total lines of code that other products reuse from a product.

Software Development Cost. A historical average required for estimating reuse cost avoidance.

Software Development Error Rate. A historical average required for estimating maintenance cost avoidance.

Software Error Repair Cost. A historical average required for estimating maintenance cost avoidance.

It is absolutely essential when acquiring the observable data elements, especially RSI, to recognize when reuse actually saves effort. This requires the analyst to distinguish reuse from normal software engineering practices (e.g., structured programming) and to eliminate implementation-dependent options affecting the observable data (e.g., static versus dynamic subprogram expansion). For example, the programmer's decision to implement a system service as a subroutine or as a macro should not affect the reuse metric.

+---------------------------------------------------------------------------+
| Table 3. Observable data                                                  |
+------------------+---------------+-----------------+----------------------+
| DATA ELEMENT     | SYMBOL        | UNIT OF MEASURE | SOURCE               |
+------------------+---------------+-----------------+----------------------+
| Shipped Source   | SSI           | LOC             | Direct Measurement   |
| Instructions     |               |                 |                      |
+------------------+---------------+-----------------+----------------------+
| Changed Source   | CSI           | LOC             | Direct Measurement   |
| Instructions     |               |                 |                      |
+------------------+---------------+-----------------+----------------------+
| Reused Source    | RSI           | LOC             | Direct Measurement   |
| Instructions     |               |                 |                      |
+------------------+---------------+-----------------+----------------------+
| Source           | SIRBO         | LOC             | Direct Measurement   |
| Instructions     |               |                 |                      |
| Reused by Others |               |                 |                      |
+------------------+---------------+-----------------+----------------------+
| Software         | Cost per LOC  | $/LOC           | Historical data      |
| Development Cost |               |                 |                      |
+------------------+---------------+-----------------+----------------------+
| Software         | TVUA rate     | TVUA/LOC        | Historical data      |
| Development      |               |                 |                      |
| Error Rate       |               |                 |                      |
+------------------+---------------+-----------------+----------------------+
| Software Error   | Cost per TVUA | $/TVUA          | Historical data      |
| Repair Cost      |               |                 |                      |
+------------------+---------------+-----------------+----------------------+

SHIPPED SOURCE INSTRUCTIONS (SSI)
_________________________________

Shipped Source Instructions (SSI) are the number of non-comment instructions in the source files of a product. SSI does not include Reused Source Instructions (RSI). A call to a reusable part counts as one SSI. When reporting reuse measures for development organizations, SSI are the source instructions the organization maintains. SSI are the lines of code actually written by someone for a product.
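As a rough illustration of the counting rule, SSI for a single source file can be approximated by counting non-comment lines and excluding the lines of completely unmodified reused parts. This is only a sketch, not IBM's counting tool; the comment convention and the reused_line_ranges parameter are illustrative assumptions.

```python
def count_ssi(source_lines, reused_line_ranges, comment_prefix="*"):
    """Approximate Shipped Source Instructions (SSI) for one source file.

    source_lines       -- the file's lines of code
    reused_line_ranges -- (start, end) index pairs, inclusive, covering
                          completely unmodified reused parts (those lines
                          are RSI, so they are excluded from SSI)
    comment_prefix     -- comment marker; '*' is an illustrative convention
    """
    reused = set()
    for start, end in reused_line_ranges:
        reused.update(range(start, end + 1))
    ssi = 0
    for i, line in enumerate(source_lines):
        stripped = line.strip()
        if not stripped or stripped.startswith(comment_prefix):
            continue  # blank and comment lines are not source instructions
        if i in reused:
            continue  # unmodified reused code counts as RSI, not SSI
        ssi += 1
    return ssi
```

Note that a call to a reusable part is just an ordinary written line here, so it naturally counts as one SSI, matching the rule above.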
CHANGED SOURCE INSTRUCTIONS (CSI)
_________________________________

Changed Source Instructions (CSI) are the number of non-comment source instructions that are new, added, or modified in a product release. CSI does not include Reused Source Instructions (RSI) or unchanged base instructions from prior releases of the product. CSI does include source instructions from partially modified components that, had they not been modified, would have been considered "reused." A call to a reusable part counts as one CSI. CSI are the lines of code someone actually had to change or add for a product release.

REUSED SOURCE INSTRUCTIONS (RSI)
________________________________

Reused Source Instructions (RSI) are source instructions that the reporting organization ships but does not develop or maintain. RSI come from completely unmodified components normally located in a reuse library. Base instructions from prior releases of a product and source instructions from partially modified parts are not RSI.

Source instructions from a reused part count once per organization, independent of how many times one calls or expands the part. There are two reasons for this:

1. Metrics must accurately reflect effort saved. Programmers use subroutines and macros because many functions are repetitive. Their use is standard programming practice, not reuse.

2. Metrics must be implementation independent. The choice of using a subroutine versus a macro is a design decision usually resulting from many considerations well outside the realm of reuse. The decision to use macros should not be made because multiple in-line expansions increase the amount of reuse reported on a project.
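The once-per-organization rule can be made concrete with a small sketch; the part names and sizes here are hypothetical, and the function is illustrative rather than IBM's measurement tooling.

```python
def count_rsi(uses):
    """Count Reused Source Instructions (RSI) for one organization.

    uses -- (part_name, part_size_in_loc) pairs, one entry for every call
            or expansion of a reusable part anywhere in the product.
    A part's instructions count once per organization, no matter how many
    times the part is called or expanded.
    """
    sizes = {}
    for name, size in uses:
        sizes[name] = size  # repeated uses overwrite: the part counts once
    return sum(sizes.values())

# Three calls to a hypothetical 500-LOC part plus one call to a 200-LOC
# part still yield only 700 RSI:
print(count_rsi([("sortpart", 500), ("sortpart", 500),
                 ("sortpart", 500), ("logpart", 200)]))   # 700
```

The dictionary deliberately collapses repeated uses of the same part, which is exactly why the metric is independent of the subroutine-versus-macro implementation choice.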
SOURCE INSTRUCTIONS REUSED BY OTHERS (SIRBO)
____________________________________________

Source Instructions Reused by Others (SIRBO) are an organization's source instructions reused by other organizations; SIRBO indicates how much an organization contributes to reuse. SIRBO is important because, for a reuse program to succeed, organizations must not only reuse software but also help other organizations reuse software. SIRBO measures not only the parts contributed for use by others but also the success of those parts. Organizations writing successful reusable parts will have a very high SIRBO because SIRBO increases every time another organization reuses their software. This encourages organizations to generate high quality, well-documented, and widely applicable reusable components. SIRBO is a summation over all parts an organization contributes to a library:

   SIRBO = sum over all contributed parts of
           (source instructions per part) x (number of organizations using the part)

Example: An organization's contributions to a reuse library are: a 10 kloc module in use by 5 other departments, a 25 kloc macro in use by 6 other departments, and an unused 75 kloc macro. The organization's SIRBO is:

   (5 depts. x 10 kloc) + (6 depts. x 25 kloc) + (0 depts. x 75 kloc) = 200 kloc

SIRBO is independent of the number of times the same organization invokes or calls the part. The same rules apply that apply for counting RSI; use of a reusable part saved having to develop the part one time, not one time for every call to the part. SIRBO is also a dynamic measure. As more organizations reuse the components, the SIRBO of the donating organizations increases.

SOFTWARE DEVELOPMENT COST (COST PER LOC)
________________________________________

To determine the financial benefit of reuse, the cost of developing software without reuse must be known.
This new-software development cost is a historical average that is
generally available from the financial planners and management of the
organization. The new-code cost is normally found by adding all the
expenses of the organization, including overhead, and dividing by the total
output (in LOC) of the organization.

SOFTWARE DEVELOPMENT ERROR RATE (TVUA RATE)
___________________________________________

No amount of testing, inspection, or verification can guarantee that a
product is released without errors. Although emphasis on quality and strict
adherence to development processes leads to better products, errors are
inevitably revealed once the product is released to the marketplace. Every
development organization has a historical average number of errors, or
TVUAs ("Total Valid Unique Program Analysis Reports"), uncovered in its
products.

Note that software components built for reuse are usually designed and
tested to standards much stricter than those for normal program product
components. The additional cost of this effort is justified by the savings
gained when other organizations are not required to develop and maintain a
similar component. The additional testing has the further benefit of
increasing the quality of the component, so we may expect fewer errors from
reusable code.

SOFTWARE ERROR REPAIR COST (COST PER TVUA)
__________________________________________

To quantify the benefit of the increased quality of reusable components, we
need the historical average cost of maintaining components built with
traditional development methods. As with software development cost, this
figure is generally available from financial planners and management in the
organization. The figure is found by taking the sum of all costs, including
overhead, of repairing latent errors in software maintained by the
organization and dividing by the number of errors repaired.
Although software maintenance includes enhancements to products, the cost
of increasing function is not included in the software error repair cost
unless the change is the result of an error early in the development cycle,
e.g., in requirements or design.

ADDITIONAL DATA
_______________

Formal reuse programs normally develop a set of quality standards for the
reusable components developed in the program. Standards ensure uniformity
of style, dictate which elements are essential to understanding or reusing
the component, and ensure a certain level of testing and functional
completeness. Applying standards is costly, but it also results in highly
reliable and trusted reusable components. Once standards are in place, it
is good practice to emphasize the use of components that have undergone a
standard certification process. Within IBM, reuse metrics are collected by
three levels of quality: as-is, complete, and certified [IRMQ92].
Reporting reuse by quality level not only emphasizes use of high quality
software but also helps identify good candidates for certification.

DERIVED METRICS
_______________

The observable data elements combine to form three derived reuse metrics.
The first two metrics indicate the level of reuse activity in an
organization as a percentage of products and by financial benefit. The
third metric includes recognition for writing reusable code. The three
metrics, which are summarized in Table 4, are: [Pou92]

1.  Reuse Percentage; the primary indicator of the amount of reuse in a
    product or practiced in an organization. Reuse Percentage is derived
    from SSI, CSI, and RSI.

2.  Reuse Cost Avoidance; an indicator of reduced total product costs as a
    result of reuse in the product. Reuse Cost Avoidance is derived from
    SSI, CSI, RSI, TVUA rates, software development cost (cost per LOC),
    and maintenance costs (cost per TVUA).

3.
    Reuse Value Added; an indicator of the leverage provided by practicing
    reuse and by contributing to the reuse practiced by others. Reuse
    Value Added is derived from SSI, RSI, and SIRBO.

+--------------------+---------------+---------------------+--------------+
| Table 4. Derived Metrics                                                |
+--------------------+---------------+---------------------+--------------+
| METRIC             | SYMBOL        | DERIVED FROM:       | UNIT OF      |
|                    |               |                     | MEASURE      |
+--------------------+---------------+---------------------+--------------+
| Reuse Percentage   | Reuse Percent | SSI, RSI            | Percent      |
|  o for products    |               |                     |              |
+--------------------+---------------+---------------------+--------------+
| Reuse Percentage   | Reuse Percent | CSI, RSI            | Percent      |
|  o for product     |               |                     |              |
|    releases        |               |                     |              |
+--------------------+---------------+---------------------+--------------+
| Reuse Percentage   | Reuse Percent | SSI, RSI            | Percent      |
|  o for organiza-   |               |                     |              |
|    tions           |               |                     |              |
+--------------------+---------------+---------------------+--------------+
| Reuse Cost         | RCA           | SSI or CSI, RSI,    | Dollars      |
| Avoidance          |               | Cost/LOC, TVUA/LOC, |              |
|                    |               | Cost/TVUA           |              |
+--------------------+---------------+---------------------+--------------+
| Reuse Value Added  | RVA           | SSI, RSI, SIRBO     | Ratio        |
+--------------------+---------------+---------------------+--------------+

REUSE PERCENTAGE
________________

The purpose of this measurement is to indicate the portion of a product,
product release, or organizational effort that can be attributed to reuse.
Reuse Percentage is an important metric because it is simple to calculate
and easy to understand. Unfortunately, it is also easy to misrepresent
without a supporting framework. Many companies report their reuse
experiences in terms of "reuse percent," but few describe how they
calculate the values.
They commonly include informal reuse in the value, making it difficult to
assess actual savings or productivity gains. Since RSI is clearly defined,
the Reuse Percentage metric defined here is a reasonable reflection of
effort saved.

REUSE PERCENTAGE OF A PRODUCT

The reuse percentage of a product (or the first release of a product) is:

    Product Reuse percent = (RSI / (RSI + SSI)) x 100%

Example: If a product consists of 65kloc SSI and an additional 35kloc from
a reuse library, then the Reuse Percentage of the product is:

    Reuse percent = (35 kloc / (35 kloc + 65 kloc)) x 100% = 35%

REUSE PERCENTAGE OF A PRODUCT RELEASE

For a new release of a product, RSI comes from reusable components that are
completely new to the product. A call to a component used in a previous
release is a new or changed source instruction (CSI). The reuse percentage
of a product release is:

    Product Release Reuse percent = (RSI / (RSI + CSI)) x 100%

Example: If a release of a product consists of 7k CSI plus 3k "new" RSI
from a reuse library, then the Reuse Percentage for this product release
is:

    Reuse percent = (3 kloc / (3 kloc + 7 kloc)) x 100% = 30%

REUSE PERCENTAGE FOR AN ORGANIZATION

All software developed and maintained by an organization is the
organization's SSI. Any software used by the organization but maintained
elsewhere is RSI. The reuse percentage of an organization is:

    Organizational Reuse percent = (RSI / (RSI + SSI)) x 100%

Example: If a programming team develops and maintains 80k SSI and the team
additionally uses 20k RSI from a reuse library, then the Reuse Percentage
for the team is:

    Reuse percent = (20 kloc / (20 kloc + 80 kloc)) x 100% = 20%

REUSE COST AVOIDANCE (RCA)
__________________________

The purpose of this measurement is to quantify the financial benefit of
reuse.
This is a particularly important metric because it shows the tremendous
return-on-investment potential of reuse. Because RCA is a key metric for
performing return on investment (ROI) analysis of reuse programs, RCA helps
with the insertion of reuse technology.

Reusing software requires fewer resources than new development, but it is
not free. The developer still must search for, retrieve, and assess the
suitability of reusable components before finally choosing the appropriate
part for integration into the product. Although reuse requires this effort
to understand and integrate reusable parts, studies show that the cost of
this effort is only about 20% of the cost of new development. [Tra88]
Based on this relative cost of reuse, the financial benefit attributable to
reuse during the development phase of a project is:

    Development Cost Avoidance = RSI x (1 - 0.2) x (New Code Cost)

However, development is only about 40% of the software life cycle [Gar91];
significant maintenance benefit also results from reusing quality software.
This benefit may be quantified as the cost avoidance of not fixing errors
(TVUAs) in newly developed code. [Gaf88] This savings is:

    Service Cost Avoidance = RSI x (TVUA Rate) x (Cost per TVUA)

The total Reuse Cost Avoidance is then:

    Reuse Cost Avoidance = Development Cost Avoidance
                           + Service Cost Avoidance

Example: If an organization has a historical new code development cost of
$200 per line, a TVUA rate of 1.5/kloc, and a cost to fix a TVUA of $43k,
then the RCA for integrating 20k RSI into a product is:

    Development Cost Avoidance = 20,000 x (1 - 0.2) x $200    = $3,200,000
    Service Cost Avoidance     = 20,000 x (1.5/1000) x $43,000 = $1,290,000
    Reuse Cost Avoidance       = $3,200,000 + $1,290,000       = $4,490,000

REUSE VALUE ADDED (RVA)
_______________________

The previous two metrics measure how much organizations reuse software. It
is also important to motivate contributing software to reuse.
The purpose of the Reuse Value Added (RVA) metric is to reflect positively
on organizations that both reuse software and help other organizations by
developing reusable code.

RVA is a ratio, or productivity index; organizations with no involvement in
reuse have an RVA = 1. An RVA = 2 indicates that the organization is twice
as effective as it would be without reuse. In this case the organization
was able to double its productivity either directly (by reusing software)
or indirectly (by maintaining software that other organizations are using).
The total effectiveness of a development group is therefore:

    RVA = ((SSI + RSI) + SIRBO) / SSI

Example: A programming team maintains 80kloc and uses 20kloc from a reuse
library. In addition, five other departments reuse a 10kloc module the
programming team contributed to the organizational reuse library. The RVA
of the programming team is:

    ((80 kloc + 20 kloc) + (5 depts. x 10 kloc)) / 80 kloc = 1.9

In this example, the RVA of 1.9 indicates the programming team is 1.9 times
more effective than it would be without reuse.

Some organizations organize to obtain the most benefit possible from reuse.
For example, the Mid-Hudson Valley Programming Laboratory and the IBM
Federal Systems Company in Rockville, Maryland dedicate programming teams
to develop and maintain shared software or site-wide reuse libraries.
Corporate parts centers, such as the Boeblingen software center, also
develop and maintain software for IBM-wide use. Experience shows that
although these types of groups may have modest values for the Reuse
Percentage metric, they have extremely high values for the RVA metric. This
high RVA indicates the tremendous programming leverage they provide to
their organizations.
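The three derived metrics can be sketched together in Python, using the
worked examples from this section as inputs. The function and variable
names are illustrative, not part of the IBM measurement standard:

```python
# Sketch of the three derived reuse metrics. Inputs come from the
# observable data: SSI, CSI, RSI, SIRBO, cost/LOC, TVUA rate, cost/TVUA.

def reuse_percent(rsi, ssi):
    """Reuse Percentage of a product or organization (use CSI in place
    of SSI for a product release)."""
    return rsi / (rsi + ssi) * 100.0

def reuse_cost_avoidance(rsi, cost_per_loc, tvua_per_kloc, cost_per_tvua,
                         relative_cost_of_reuse=0.2):
    """RCA = development cost avoidance + service cost avoidance."""
    development = rsi * (1 - relative_cost_of_reuse) * cost_per_loc
    service = rsi * (tvua_per_kloc / 1000.0) * cost_per_tvua
    return development + service

def reuse_value_added(ssi, rsi, sirbo):
    """RVA = ((SSI + RSI) + SIRBO) / SSI; 1.0 means no reuse."""
    return (ssi + rsi + sirbo) / ssi

# Organizational example: 80k SSI, 20k RSI
print(reuse_percent(20_000, 80_000))                  # 20.0

# RCA example: $200/LOC, 1.5 TVUAs/kloc, $43k per TVUA, 20k RSI
print(reuse_cost_avoidance(20_000, 200, 1.5, 43_000)) # ~4,490,000

# RVA example: 80k SSI, 20k RSI, SIRBO = 5 depts x 10 kloc
print(reuse_value_added(80_000, 20_000, 5 * 10_000))  # 1.875 (~1.9)
```

The outputs reproduce the three worked examples above: 20% organizational
reuse, $4.49M cost avoidance, and an RVA of 1.9.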
RELATED WORK
____________

Since quantifying a process is an essential step in assessing its success
and effectiveness, several measurement methods are currently in use.
However, the metrics in this paper are unique in the attention given to the
definition of RSI and in attempting to present reuse as "real effort
saved." Although [Ban91b] differentiates between reuse within an
organization and reuse from sources external to the organization, no other
paper addresses how to measure the classes of reuse, nor do they provide a
concentrated definition of RSI.

Other reports on reuse measurements are available. Although reuse percent
is the most common metric, the majority of published methods focus on
financial analysis. This is because the cost benefit of reuse is a highly
convincing measure for project managers and planners. Another convincing
measure is the effect of reuse on productivity; ratios indicating
productivity increases meet this need.

PERCENT METRICS
_______________

Percent metrics are analogous to the "Reuse Percentage" metric discussed
above, and are the most widely accepted and publicized. This is in part
because "percent" is so easily understood, one of the key criteria for
metrics. The equation for calculating percents requires no explanation.

One variation on measuring percent reuse is to change the unit of measure
where alternatives to lines of code exist. For example, function points
[Alb79, Dre89] or reusable objects [Ban91a] may replace lines of code as
the units of measure without affecting the metrics. The analyst must simply
make all required substitutions in the calculations to allow for the
change, e.g., basing reuse percent on the percent of total function points
in a product that were reused. Where alternate units of measure are in
effect, their use may convey the same information and intent as LOC.
Care should be taken, however, if the alternate units affect the
granularity of measure. Banker [Ban91b] reports reuse percent as
(1 - New Object Percentage):

    Reuse percent = (1 - (new objects / total objects)) x 100%

Treating reusable components as atomic units is consistent with the view of
reuse taken in this paper and is an effective approach when all objects are
fairly uniform in size. However, this unit does not accurately reflect
reuse of entire subsystems or the tailoring of large reusable components.

COST BENEFIT METRICS
____________________

In 1988 Gaffney and Durek [Gaf88] published a comprehensive model
addressing business case analysis of reuse. They premise their model on the
need to amortize the cost of the reuse program, including the additional
cost to build reusable components, across all projects using the component.
When doing cost benefit analysis for software reuse, one needs to consider
the long-term benefits and associated costs, which apply to every project
using the component. A short-term view of these costs greatly
overemphasizes the cost of developing reusable components relative to their
benefit. Gaffney and Durek argue that a better economic estimate includes
the number of times the component is reused.

Gaffney and Durek define the cost of software development with reuse
relative to the cost of software development with all new code. Their
equation for this relative cost, C, is:

    C = (R_U x 1) + R x (b + E/n)

Where:

    R_U is the portion of non-reused (newly written) code.
    R   is the portion of reused code.
    b   is the relative cost of integrating reused code.
    E   is the relative cost of creating reusable code.
    n   is the number of uses over which the reused code is to be
        amortized.

As with the Reuse Value Added metric, if the R value is zero (i.e., there
is no reuse), the value of C is equal to 1. The equation also shows that
for a reusable component to pay off, the component must be reused at least
two times.
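The Gaffney-Durek relative-cost equation can be sketched as follows; the
sample values chosen for b, E, and n are illustrative assumptions, not
figures from their paper:

```python
# Sketch of the Gaffney-Durek relative-cost model: the cost of
# development with reuse relative to all-new development.

def relative_cost(R, b, E, n):
    """C = (1 - R) + R * (b + E / n), with R_U = 1 - R.
    R: portion of reused code, b: relative cost of integrating reuse,
    E: relative cost of creating reusable code, n: number of uses."""
    return (1 - R) + R * (b + E / n)

# No reuse: C is 1 by definition.
print(relative_cost(R=0.0, b=0.2, E=1.5, n=1))   # 1.0

# Half the code reused, amortized over five uses: C drops below 1.
print(relative_cost(R=0.5, b=0.2, E=1.5, n=5))   # 0.75
```

With these assumed coefficients, amortizing the creation cost E over enough
uses (n) is what brings C below 1, which is the model's argument for
counting the number of reuses in the business case.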
The second metric defined by [Gaf88] is the Productivity Index, PI. PI is
the productivity relative to the productivity of creating the software
product without reuse, and is defined as the inverse of C:

    Productivity Index = 1 / C

A PI of 2.5 indicates the measured project was 150% more productive, in
terms of cost, than the project would have been without reuse.

Observing that the coefficient b varies depending on the type of reuse
(recovering, porting), [Mar91] extends the above model by defining
additional values of b for the different types of reuse, R. C becomes:

    C = C + (R_i x b_i), for each (R_i, b_i)

For example:

    R_O is the portion of reused code from other sources.
    b_O is the relative cost of integrating reused code from other
        sources.
    R_R is the portion of code requiring re-engineering (copied and
        modified code).
    b_R is the relative cost of integrating re-engineered code.

An additional cost-benefit model of reuse is presented in [NATO91]. The
NATO model consists of listing the major benefits and costs of reuse, then
applying time-value-of-money formulas to adjust for future values. The
benefits are:

    Saving due to Avoided Cost, S_R. The sum of costs avoided each time
    the component is reused.

    Service Life, L. The useful lifetime, in years, of the component (how
    long it will be maintained in the library).

    Demand. The number of times the component is likely to be reused
    during its service life, L.

The costs of reuse are:

    Cost to Reuse, C_R. The cost incurred each time the component is
    reused, including identification, retrieval, familiarization,
    modification, and installation [this is the relative cost of reuse].

    Accession Time, T_A. The amount of time likely to elapse between the
    decision to acquire the component and its availability in the library.
    Accession Cost, C_A. The cost to add the component to the library,
    including obtaining raw material, developing the complete component,
    and installing it in the library.

    Maintenance Cost, C_M. The cost to maintain the component in the
    library, including maintenance and change distribution.

The Net Saving to the Reuser (NSR) is the difference between the savings
due to avoided cost and the cost to reuse:

    NSR = S_R - C_R

The Net Saving to the Supported Program (NSP) is the total savings from all
instances of reuse of a component, less the accession and maintenance
costs. The total savings from all instances of reuse is the NSR multiplied
by the number of reuses, N:

    NSP = (NSR x N) - (C_A + C_M)

Although the NATO model continues with adjustments for the time value of
money, there is little guidance on collecting the data required for the
model. For example, there are no details on accounting for the savings due
to avoided cost, on how to estimate the number of times a component is
likely to be reused, or on estimating the service life of a product that
does not "wear out." Finally, where data is not available or the analyst
feels there are mitigating factors affecting the risk of the reusable
product, statistical distributions and estimates of risk factors may be
used to adjust the inputs.

An additional cost benefit model for reuse is presented in this issue
[Han92]. The model begins with a thorough explanation of the cost benefit
analysis procedure for reuse, consisting of five steps:

1.  Select the alternatives to be analyzed.

2.  Determine the organizational priorities, goals, requirements, and
    business strategies that will influence the investment decision.

3.  Determine the time period for the analysis.

4.  Identify and quantify the costs and benefits of the alternatives
    identified in step 1.

5.  Perform the cost benefit analysis.
Step four in the above procedure is the key to the model; Hancock provides
an extensive listing of each cost/benefit factor contributing to both
reusing and producing reusable software. Once these factors have been
tallied, the analyst applies the following Present Value (PV) equation over
the time period (n years) identified in step 3 of the model:

    PV = sum, for t = 0 to n, of (B_t - C_t) / (1 + d)^t

where d is the discount rate, B_t is the value of benefits in year t, and
C_t is the value of costs in year t.

RATIO METRICS
_____________

The Full Utility Ratio (FUR) introduced by Hayes [Hay89] is a comprehensive
ratio metric that captures both reuse practiced and contributions to reuse.
Like the Reuse Value Added, a FUR value of one indicates no reuse. The FUR,
which provided the basis for the RVA metric, is:

    FUR = (CSI + Reused + Copied + Ported + (Created x Users)) / CSI

Where:

    CSI are new and changed source instructions.

    Reused is code from the general reuse "pool," including all macro
    expansions.

    Copied is code copied into the product but not counted in the CSI
    [copied and modified code].

    Ported is new code that is used on more than one platform.

    Created is code in the CSI that will go into the general reuse "pool."

    Users are an initial projection of the number of times the newly
    Created code will be used during the 24 months after the code is put
    into the general reuse "pool" [similar to SIRBO].

The FUR includes a multiplier for popular reusable software created by an
organization, thus encouraging quality contributions to a reuse library.
However, the FUR does not distinguish between code recovery and planned
reuse, nor between porting, tailoring, and assembling.

Hayes provides an extensive classification of LOC by source, e.g., changed,
ported, created. When collecting data for each of these classes, however,
every occurrence of a subroutine call or macro expansion contributes to the
total LOC for that class.
This is consistent, for example, with [Ban91b], who considers every macro
call an instance of reuse, but it is inconsistent with the definition of
reuse in this paper. Finally, the number of Users is an initial projection
of the number of times the component will be used. Since this is only an
estimate and is not necessarily updated over time, it does not reflect an
actual value like SIRBO, which is an actual, dynamic measure of real use.

STATISTICAL METHODS
___________________

Several statistical methods were deliberately not used in these metrics.
One method is to adjust LOC measures with coefficients corresponding to the
different expressive powers of programming languages. For example, if
assembly language has an expressive power of 1, then Pascal has a relative
expressive power of 3.5 [Jon84]. This method was not used in order to keep
the metrics simple, especially in situations in which several languages may
be used in a product.

Another statistical method is to weight the metrics to adjust for
situational variables. For example, the value of the different classes of
reuse may be adjusted by weights assigned to each class. Salvaging may
therefore have a 0.6 relative cost of reuse and a 0% support savings. The
metrics may also be adjusted if reusing one type of component is x times
more difficult than reusing another type of component [Ban91b]. The
addition of weighting factors was, like the use of language coefficients,
omitted to keep the metrics simple. Furthermore, data supporting suitable
weighting factors is not available in most organizations.

FUTURE WORK
___________

Future work includes validation of these measures, including the predicted
versus actual costs avoided and the factors comparing increased
productivity rates with the value calculated for Reuse Value Added.
Although the model uses industry experience for default values in the
equations, actual values are used where available.
For example, actual costs to develop new code and standard software
development defect and maintenance data are usually known, and defect data
(TVUAs) for reusable components are routinely gathered. However, data needs
to be continually collected and studied to compare with industry experience
and to maintain the accuracy of the model.

The metrics in this document do not include the additional cost of
developing reusable software. Although this cost becomes insignificant as a
software reuse program matures, organizations may elect to include these
costs when reporting the early benefits of their reuse program. This cost
and the other costs associated with establishing and maintaining an
organizational reuse program are included in the IBM Return On Investment
(ROI) model.

This paper discusses reuse measurements for software only. Future work will
include methods to quantify reuse in areas other than software (e.g.,
design, test cases, information development).

Many other areas for study remain. Much of the relative cost data is based
on industry averages and needs to be captured, tracked, and validated.
Other data of interest relate to the reuse process, to the success of the
reuse library, or to return on investment analysis, including:

    The cost of developing reusable components (estimated at 1.5-2 times
    normal development costs).

    The cost of certifying and testing reusable components.

    Project costs for reuse, including the cost to maintain a reuse
    library and to staff the support personnel.

CONCLUSION
__________

Measurements are essential to the management of any process. With emerging
technologies, such as software reuse, the value of metrics goes beyond the
traditional benefits of assuring the quality of reusable components,
demonstrating the success of a program, and improving the ability to plan
and predict for future projects.
Reuse metrics also serve to encourage reuse by providing feedback on the
results of a reuse program and by highlighting the benefits of an
organizational reuse effort. Metrics further standardize the means by which
organizations report and compare the amount of reuse in their software
development process.

This paper introduces three metrics for software reuse: Reuse Percentage,
Reuse Cost Avoidance, and Reuse Value Added. The metrics rely on
easy-to-collect data and not only provide reasonable representations of
reuse activity but also encourage reuse. Most importantly, by carefully
defining reuse and "work actually saved," the metrics increase the
reliability of the cost and productivity benefits attributed to reuse.

CITED REFERENCES
________________

[Alb79]  Albrecht, A.J., "Measuring Application Development Productivity,"
in Proceedings of the Joint IBM/SHARE/GUIDE Application Development
Symposium, October 1979, pp. 83-92.

[Ban91a]  Banker, Rajiv D. and Robert J. Kauffman, "Reuse and Productivity
in an Integrated Computer Aided Software Engineering (ICASE) Environment:
An Empirical Study at the First Boston Corporation," unpublished
manuscript, 10 July 1991.

[Ban91b]  Banker, Rajiv D., Robert J. Kauffman, Charles Wright, and Dani
Zweig, "Automating Output Size and Reusability Metrics in an Object-Based
Computer Aided Software Engineering (CASE) Environment," unpublished
manuscript, 25 August 1991.

[Bow92]  Bowen, Gregory M., "An Organized, Devoted, Project-Wide Reuse
Effort," Ada Letters, Vol. 12, No. 1, January/February 1992, pp. 43-52.

[Dre89]  Dreger, J.B., Function Point Analysis, Prentice-Hall, Englewood
Cliffs, NJ, 1989.

[Gaf88]  Gaffney, John E., Jr.
and Thomas Durek, "Software Reuse - Key to Enhanced Productivity: Some
Quantitative Models," Software Productivity Consortium, SPC-TR-88-015,
April 1988.

[Gaf89]  Gaffney, J.E., Jr. and Durek, T.A., "Software Reuse - Key to
Enhanced Productivity: Some Quantitative Models," Information and Software
Technology, 31:5, June 1989.

[Gar91]  "Software Engineering Strategies," Strategic Analysis Report,
Gartner Group, Inc., April 30, 1991.

[Hay89]  Hayes, W.E., "Measuring Software Reuse," IBM Internal Document,
Number WEH-89001-2, 2 October 1989.

[CPM91]  "Corporate Programming Measurements (CPM)," V 4.0, IBM Internal
Document, 1 November 1991.

[Han92]  Hancock, Debera R., "Reuse Investment Decisions for Building
Competitive Software," draft submitted to IBM Systems Journal, 31 July
1992.

[IRMQ92]  "IBM Reuse Methodology: Qualification Standards for Reusable
Components," IBM Internal Document, 19 December 1991.

[Jon84]  Jones, T.C., "Reusability in Programming: A Survey of the State
of the Art," IEEE Transactions on Software Engineering, Vol. SE-10, No. 5,
September 1984.

[Jon91]  Jones, Capers, Applied Software Measurement: Assuring
Productivity and Quality, McGraw-Hill, Inc., NY, 1991.

[Mar91]  Margano, Johan, and Lynn Lindsey, "Software Reuse in the Air
Traffic Control Advanced Automation System," paper for the Joint Symposia
and Workshops: Improving the Software Process and Competitive Position,
29 April - 3 May 1991, Alexandria, VA.

[Mil88]  Mills, E.E.,
"Software Metrics," Software Engineering Curriculum Module SEI-CM-12-1.1,
Carnegie Mellon University, Pittsburgh, PA, 1988.

[NATO91a]  "Standard for Management of a Reusable Software Component
Library," NATO Communications and Information Systems Agency, 18 August
1991.

[NATO91b]  "Standard for the Development of Reusable Software Components,"
NATO Communications and Information Systems Agency, 18 August 1991.

[Pou92]  Poulin, Jeffrey S. and W.E. Hayes, "IBM Reuse Methodology:
Measurement Standards," IBM Corporation Internal Document, 16 July 1992.

[Rei90]  Reifer, Donald J., "Reuse Metrics and Measurement - A Framework,"
presented at the NASA/Goddard Fifteenth Annual Software Engineering
Workshop.

[Sta84]  Standish, T., "An Essay on Software Reuse," IEEE Transactions on
Software Engineering, Vol. SE-10, No. 5, 1984, pp. 494-497.

[STARS89]  "Repository Guidelines for the Software Technology for
Adaptable, Reliable Systems (STARS) Program," CDRL Sequence Number 0460,
15 March 1989.

[Tra88]  Tracz, Will, "Software Reuse Myths," ACM SIGSOFT Software
Engineering Notes, Vol. 13, No. 1, January 1988, pp. 17-21.

BIOGRAPHY
_________

JEFFREY S. POULIN is an advisory programmer at IBM's Reuse Technology
Support Center, Poughkeepsie, New York, where his primary responsibilities
include corporate standards for reusable component classification,
certification, and measurements. The author has been active in the area of
software reuse since 1985 and was key to the development and acceptance of
the IBM software reuse metrics.
He has conducted extensive research in software measurement techniques and
implemented a program measurement tool for the workstation platform. His
interests include object-oriented database systems, semantic data
modelling, CASE, and formal methods in software reuse. He is currently
conducting research into alternative methods for reusable software
distribution and retrieval. He is a member of the IBM Corporate Reuse
Council and the Association for Computing Machinery, and Vice-Chairman of
the Mid-Hudson Valley Chapter of the IEEE Computer Society. He received his
Bachelor's degree from the United States Military Academy at West Point,
New York, and his Master's and Doctorate degrees from Rensselaer
Polytechnic Institute in Troy, New York.