Software Project Management
Measures for Improving Performance
Robert Bruce Kelsey, Ph.D.
About the Author
Robert Bruce Kelsey, Ph.D., is well recognized for his expertise in software engineering and project management. He has authored two dozen papers on software metrics, process improvement, and quality assurance. He is on the editorial or review boards of several industry journals and professional organizations, and as a member of the IEEE Standards Association he contributes to the IEEE learning technology and software standards. Also an experienced course developer and instructor with broad interests, Dr. Kelsey has taught in corporate, university, and community college settings on topics ranging from software quality assurance, to astronomy, to logic.
CHAPTER 1
Measures, Goals, and Strategies
Some of us climb mountains because they’re there. Some of us read books to know we are not alone. Some of us measure software just so we and our teams can survive the project, and some of us measure software to ascertain whether the development processes are in control.
MEASURING PERFORMANCE WITHIN A PROJECT
There’s a big difference between measuring the performance of a software development project and measuring performance within a software development project. Software project performance is typically measured in high-maturity organizations. In these organizations, the business processes are documented and audited for efficiency. The development organization has corporate support and funding for formal software quality assurance and process improvement. In high-maturity organizations, the software project as a whole is measured as if it were a complete business process in itself: it’s effective (delivers on time), it’s efficient (meets budget), and it’s profitable (results in margin or profit).
In such organizations, metrics programs are a tool for measuring project capability and compliance. The paradigms for such software metrics programs are the well-established CMMI® level 4 organizations with fully functional organizational process performance and quantitative project management in place. With their substantial historical data and standardized and audited processes, these organizations can make statistically valid inferences from their measures about the project as a whole, identifying deviations and diagnosing process failures.
Much has been written about these formal software metrics programs. Any organization that wants to start an organization-wide, executive-sponsored software metrics program can follow the IEEE standards or the CMMI® model. When it comes to implementation details, dozens of books explain how to integrate measurement across the entire product and project lifecycle or how to use the data to improve your organization. When you need more extensive advice, read the classic texts by Grady, Fenton, Myers, Kan, and others (see Further Reading). They’ll tell you what worked for Hewlett-Packard, the National Aeronautics and Space Administration, and a host of other companies with ample revenue streams and a senior management staff committed to product quality.
The problem is that many software practitioners don’t work for mature, or even maturing, software companies. Some work in situations where the software development organization is one of those “challenged cost centers”—costs too much and needs attention—but none of the executives knows what to do about it. There’s no enlightened senior management to endorse software process improvement initiatives, nor any highly respected and well compensated consultants to guide the program when senior management interest falters because the results are too slow in coming. There’s no Software Quality Assurance or Software Engineering Process Group to bear the brunt of the work of process improvement.
Some software practitioners work in dot-coms or MIS/IT shops, suffocating under the pressure to deliver on impossible schedules without requirements, designs, or even adequate resources. There’s simply no room for documented processes when there are four developers in a one-person cubicle. There’s no time for reviews or audits when just getting the code done will take 12 hours a day, every day, for the next 18 weeks.
You can’t worry about maturity levels when you’re always living on the edge. Project managers, development and test managers, and team leads in situations like these have to find a way around the crisis of the hour. They need software measurement data that they can use in day-to-day decisions, not in the next quarterly business review. They need to measure their performance against the project schedule, not the project’s compliance with historical norms for Estimate at Completion.
Measuring progress entails far more than checking off a line item in the work breakdown structure. Projects need to be viewed as organisms rather than task lists. In the human body, different types of cells perform different tasks, in different locations and at different times. In projects, different people in different roles work individually to complete tasks that in turn trigger other people to begin or complete their tasks. You can’t diagnose and treat a disease in an organism unless you know all the symptoms and how they interact. Similarly, you can’t diagnose and treat performance problems in a project unless you know what symptoms to look for, where to look for them, and what to do about them.
TWO WAYS TO USE MEASUREMENT
Some people use software measurement like they use a daily weather forecast. They want to know how to prepare for the day, so an overview is all they need. Knowing the estimated average temperature and whether it will be rainy or sunny, they can make decisions about how to dress for the day and whether they should try to run an errand over lunch. Similarly, from a few department-wide indicators, the Director of Software Development can tell whether her projects will complete successfully or whether she should contact a headhunter on her lunch break.
For this class of measurement users, the details aren’t particularly important. They know that it will be hotter on the streets downtown than it will be in the shaded streets of the suburbs. The temperature isn’t likely to fluctuate far from the forecast. Of course, there’s always the chance that even a sunny day will suddenly turn nasty if certain conditions develop. If the forecast doesn’t show that as a possibility, they’re content to leave the umbrella behind. Similarly, since the earned value across the projects is tracking close to expectations, our Director of Software Development can go out for lunch and not worry about whether the VP will be waiting in her office when she returns.
Others use software measurement like an archeological dig. They examine papyri bearing curious line glyphs and tablets with weather-worn bar carvings, drawing lessons from the past. For these folks, measurements are useful because they reveal how people and processes and projects really work. Measures tell stories about how teams succeeded or why they failed. What was life like on the streets of Cubicle City when the earthquake struck, the build crumbled, and the Atlantis Project slid beneath the waves of the Sea of Red Ink?
For this class of measurement users, the details are extremely important. They know that software development projects succeed or fail requirement by requirement, code line by code line, defect report by defect report. Trend lines are all well and good so long as the environmental factors don’t change. If any of them do change, the forecast can become invalid, with a sunny morning quickly turning into a dreadful afternoon. This class of measurement users knows that if you have detailed measurement data that shows how different types of events affect people, products, and schedules, then you can improve the durability and reliability of your forecasts and plans.
These two perspectives are not incompatible. In fact, both are necessary for a successful measurement program. Unless senior management can derive some operational and strategic benefit from the indicators you put on their Quarterly Business Review Dashboard, they aren’t likely to give your efforts much financial or logistical support. On the other hand, unless you’ve demonstrated that you can manage the chaos of your day-to-day tasks, you won’t be around to see them write the check.
Nevertheless, that’s no excuse to start counting everything in sight. Improperly conceived, a measurement effort can turn into an expensive exercise in arithmetic that causes more problems than it solves. Development teams wouldn’t think of starting to code without first doing some kind of design work. A measurement effort deserves the same care and preparation.
WHAT IS SOFTWARE MEASUREMENT?
Put aside for a moment everything you know about software. Forget what you learned from Pressman and Kan. Ignore what the tools vendors have told you. With your slate clean, answer the following deceptively simple question: What are we measuring when we measure software, and why do we measure it?
On the face of it, the answer seems straightforward. We are measuring a process—the tasks involved in developing software. We are also measuring a thing—the software product’s functional “content” and its conformance with specifications and quality requirements. This answer, however, merely identifies the two major domains of inquiry: process and product. It tells us the areas we want to measure, but it doesn’t help us decide what exactly we want to measure, why we want to measure that instead of something else, or what we ought to do with the data once we have it.
Those two domains of inquiry are huge, and they span a host of interrelated components. So, there won’t be a simple answer to the question. When we investigate “software,” we are examining design and development processes, validation processes, customer needs and savvy at various times, code, documents, specifications, online help, etc., etc. To make matters more interesting, very few of these components are actually tangible things.
For example, requirements drift is not a thing in itself—it’s a change, a delta. For convenience’s sake, we like to locate drift in the physical difference between a requirements document at time A and time B. That lexical difference isn’t the shift itself, however. It’s the symptom or the trace of the measurement target, which is the event of drift. And that event is very hard to analyze effectively. It might be that the customer simply changed its collective mind. It might be that the systems engineers neglected to probe customer requirements deeply enough to determine the real requirements. It might be that the requirements never really changed, but were just inaccurately documented or inappropriately interpreted during the development cycle.
Similarly, we often speak of the source code as the end product of software development. Source code isn’t a product in the typical sense of the term, and its transference to a CD isn’t the end result of the process. The source code is a code: like any language, it is the result of experience and thinking and analyzing and communicating. Like any language, it only exists as a language when it is used or executed. The process isn’t complete even when the software is first used by the customer’s employees to successfully accomplish some task. It’s an ongoing process with many exit points and many decision milestones. Between the time the request for proposal arrives and the time the customer signs an end-of-warranty agreement, hundreds of factors are involved in specifying, designing, creating, testing, producing, distributing, using, and evaluating “software.”
If “software” is really a collection of multiple attributes evaluated by many people over a long period of time, just what are we supposed to measure? The simple answer is: We measure what will help us get our work done.
All measurement has a rationale, a purpose. It has an audience. It is a means to an end. Someone is going to use the data for some purpose. They will draw conclusions from it. They may change project plans or scope or cost estimates based on those conclusions. Those actions will in turn affect other aspects of the project, maybe even the business itself.
WHAT DO YOU WANT TO ACCOMPLISH WITH MEASUREMENT?
Since all measurement has a purpose, you should start off by deciding what you want to accomplish through measurement:
Do you want to show that your project team really is working at over 100 percent capacity?
Do you want to prove that the product is a quality product, maybe even of higher quality than might be expected given the lack of support you get from senior management and sales?
Do you want to use the data to help change the workload of some of your staff?
Do you want to be able (in some loose sense) to predict whether changes will cause more risk, effort, or cost?
Most likely, you are turning to measurement to help you do one or more of the following:
• Understand what is affecting your current performance.
• Baseline your performance.
• Address deficiencies and get better at what you do.
• Manage risks associated with changes in schedules, requirements, personnel allocations, etc.
• Improve your estimation capabilities.
The one aspect that’s missing from this list, and the one you would do well not to forget, is bringing visibility to your area. Business executives too often treat software development as a black-box component in their organization. They forget to give it the same attention they give more typical segments of operations like manufacturing, packaging, and shipping. Software development organizations are so used to being treated like some third-party order fulfillment service that they forget that their departments need the same level of attention and involvement from senior management that the operations departments enjoy.
As a result, your measures should help you address both the financial and the political aspects of software development. You’ll need the usual bottom-line measures such as “on time” and “at spec.” But you’ll also want to call attention to the operation of the department within the organization as a whole, showing it as a consumer of the work of some business units upstream in the total workflow, and as a supplier to other business units downstream in that workflow.
Of course, we don’t want our measures to be just demarcations, lines on some corporate ruler used to see if we measure up to the CFO’s or COO’s expectations. We also want our measures to be feedback loops. When we do measure up, we want our measurements to show us what we did right. When we don’t meet expectations, we want our measurements to show us why and where we fell short and what to do about it next time.
A COMMITMENT TO THE FUTURE
There’s an even more significant implication for software project management here. Software measurement is not just a tool—it’s a commitment. Measures aren’t tossed away at the end of the project with all the network diagrams and resource graphs that have cluttered up your desk for several months. The measurement data holds keys to your future success.
Looked at day to day, every project seems different. The schedule, the deliverables, the customers, the budget, and the resources aren’t quite the same as they were on the last project. There are new challenges, new obstacles to overcome. Yet, in every project there’s the need to know exactly where you are in the project, how much it cost to get you to where you are today, and how likely it is that you will complete the project on time and on budget.
This is where software measurement becomes a commitment. Put in place the measures you need to be successful, not just in today’s project but in next year’s projects. Don’t assume that software metrics is something a development manager or a Software Engineering Process Group is supposed to do. If there’s no measurement program, implement one. It’s in everyone’s best interest, most of all yours!
THINK GLOBALLY, ACT LOCALLY
It takes a long time to move organizations up the maturity ladder or to get them fully functional as ISO 9001 companies. The entire organization has to change the way it works, and organizations move slowly. If you work in a company that is not interested in IEEE 12207 or CMMI®, there are limits to what you will initially be able to accomplish. For example, many high-maturity, program-level measures require coordinating data across departments, and it is unlikely that you will be able to get this data early in your efforts.
Nonetheless, you should approach your software measurement effort as if it were a company-wide effort. You are obviously the first and most important user of your measures, and your choice of measures and measurement points should improve your chances of success. But your success will be just the beginning. Your peer project managers will want to adopt your best practices. Senior management will want to see more of the quantitative data you are using to manage your project in real time. What starts out as a means to get you through the project might well blossom into a company-wide effort. So plan accordingly:
1. Architect the effort, establishing measurement hierarchies and interdependencies. (Chapter 2 will take you through one such architecture.)
2. Design the effort, so you know why and when you want to collect data. (Chapters 3 and 4 will describe the measures, how they interrelate, and when to collect them.)
3. Use the measures. (Chapters 4 through 6 will guide you through interpreting and displaying the data, and Chapter 6 will offer several examples of using measurement information to solve typical development project problems.)
IDENTIFY AREAS OF OPPORTUNITY
Throughout this book, we’ll focus on getting you the information you need to manage the project’s workload. By workload, I don’t mean just producing code or completing test cases. That’s only the visible, explicitly valued aspect of most workdays. Certainly, a project manager wants to know how well his or her group is performing at the level of the work breakdown structure item. On a day-to-day basis, though, the project manager also needs to have data that he or she can use to make and to justify decisions, to plan and to estimate, and so forth. This data is just as important as productivity or defect density. In practical terms, this means that you also need to take into consideration all the planning, reviewing, bickering, and rework that so quickly fill up the empty slots on everyone’s daily calendar.
When starting up a measurement effort, identify those areas that cause the most surprises and decide how to monitor them. Answering the following questions will typically uncover areas of opportunity for measurement and improvement:
Are the requirements always changing?
Does it take longer than expected to finalize the requirements?
Does the development team make changes to the requirements and/or design documents right up to the last minute?
Does coding take longer than estimated?
Does the development team have to add code tasks to the project plan in the middle of the development phase?
Does the transition to testing often uncover problems with planned tasks or estimates?
Is the test phase always full of unwelcome surprises that force schedule slips?
Is internal documentation delivered on time, and is it accurate?
Is external documentation delivered on time, and is it accurate?
As you consider what needs to be improved in these areas, set your sights on changes you can make within the boundaries of your own group. Focus on things you can control—areas where you’d like to be better prepared, more efficient, or more disciplined—and start there.
SELECT AN OPERATIONAL MEASURE AND STRATEGY
The four key operational measures—defects, time, cost, and risk—will provide ample information for managing and improving your current situation. The measures are interrelated, and it is helpful to select a primary strategy for setting up your measurement effort. Since many project risks and costs are directly affected by variance in defects and time, the choice of strategy is really between product quality and productivity.
Defects take time to fix. Unplanned time quickly turns into schedule risks. Schedule risks can become increased project costs. So if your defect rates are high, focus your attention on defect-related measurements.
If your defect rates are not high, but your estimates are generally off, then time would be the best measure to start with. Inaccurate time estimates increase the risk of missing milestones, which in turn can become additional costs. Moreover, developers and testers working under pressure can make mistakes, which increases product quality risks.
Some of the reasons for inaccurate estimates or plans may be out of your control. Likewise, some of the root causes for defects may fall within departments other than your own. Nonetheless, measuring time or defects will give you enough information about your own organization to improve your performance and lower your risks.
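As a rough sketch of that decision logic, the following Python fragment captures the chain described above. The two input measures and the threshold values are hypothetical placeholders, not recommendations; calibrate them against your own project history before relying on them.

# A minimal sketch of the strategy choice described above.
# Thresholds and field names are hypothetical; calibrate against your own data.

def choose_primary_strategy(defects_per_kloc, estimate_variance_pct):
    """Pick a starting point for the measurement effort.

    defects_per_kloc      -- observed defect density for recent releases
    estimate_variance_pct -- average % gap between estimated and actual durations
    """
    HIGH_DEFECT_DENSITY = 5.0      # hypothetical threshold
    HIGH_ESTIMATE_VARIANCE = 20.0  # hypothetical threshold

    if defects_per_kloc >= HIGH_DEFECT_DENSITY:
        return "quality"       # defects drive unplanned time, risk, and cost
    if estimate_variance_pct >= HIGH_ESTIMATE_VARIANCE:
        return "productivity"  # time and estimation drive risk and cost
    return "either"            # no dominant pain point; start with the cheaper one to measure

print(choose_primary_strategy(defects_per_kloc=7.2, estimate_variance_pct=12.0))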
A STRATEGY FOR ADDRESSING PRODUCT QUALITY
Suppose you decide to start your measurement effort by measuring defects because there are a lot of them and they take time that’s not included in the project plan. This is really two issues: (1) defect rates, and (2) estimating the effort required to correct defects. Better information related to the first issue will help resolve the second issue, so focus first on defects rather than estimates of time to fix.
Before you can glean any useful information from defect rates, you have to know what kinds of defects affect your projects. Ask yourself the following questions:
Do you know where defects are introduced in the development process?
Do you know where most of your defects are first discovered?
What kinds of defects occur (e.g., functional, usability, data integrity)?
What kind of defect appears most often?
How do you decide whether a defect has to be fixed right away or not?
If your bug tracking system categorizes defects by severity and priority, are those categories really helpful in determining the nature of the defect and its impact both on the release and on the customer?
What kind of testing generates the most defects?
Is this testing related to validating customer requirements or to proving the product works on specific environments, or is it another kind of testing?
You probably won’t have answers to all these questions immediately. In fact, you may have to use the measurement effort to provide you with some of the answers. The point to remember is that defects don’t just happen; they are caused. Once you know the conditions under which they appear, you can start managing the project environment to limit their effect on the project.
Using your answers to the questions above, set up a defect profile:
1. Identify the characteristics of the defects.
2. Group the defects according to some classification scheme.
3. Identify the root cause(s) for each class of defect.
4. Determine if different classes of defect typically have different impacts on the project and the customer.
Now that you have the profile, you can look for appropriate measurement points:
5. Identify the typical project milestones or phases of development and test efforts.
6. Make a list of the questions you would like answered at these measurement points. Don’t worry about the data itself yet. The goal of this step is to identify as best you can the information you want to extract from the data.
Once you have set up your problem tracking system (discussed in detail in Chapter 4), this approach is fairly easy to implement. It will give you some insight into where your own activities may be producing defects or not catching them. It will also give you visibility into where other organizations may be contributing to the defect rate. Only indirectly will it improve your productivity or streamline your workflow, because it is focused on obstacles rather than opportunities for efficiency. For that information, you’ll need to implement another kind of strategy.
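To make the profiling steps concrete, here is a minimal sketch in Python of how exported defect records might be grouped into such a profile. The record fields (category, phase_introduced, phase_found, root_cause, impact) are assumptions about what a problem tracking system could export, not the schema of any particular tool.

from collections import Counter, defaultdict

# A minimal sketch of building a defect profile from tracking-system exports.
# The record fields below are hypothetical; map them to whatever your own
# problem tracking system actually stores.
defects = [
    {"category": "functional", "phase_introduced": "requirements",
     "phase_found": "system test", "root_cause": "ambiguous requirement",
     "impact": "release slip"},
    {"category": "data integrity", "phase_introduced": "coding",
     "phase_found": "integration test", "root_cause": "missing validation",
     "impact": "rework only"},
    # ... one record per defect report
]

def build_profile(records):
    """Group defects by class and summarize where they come from and what they cost."""
    profile = defaultdict(lambda: {"count": 0,
                                   "root_causes": Counter(),
                                   "impacts": Counter(),
                                   "found_in": Counter()})
    for r in records:
        entry = profile[r["category"]]
        entry["count"] += 1
        entry["root_causes"][r["root_cause"]] += 1
        entry["impacts"][r["impact"]] += 1
        entry["found_in"][r["phase_found"]] += 1
    return profile

for category, summary in build_profile(defects).items():
    print(category, summary["count"],
          summary["root_causes"].most_common(1),
          summary["found_in"].most_common(1))

A profile built this way answers the first four steps directly: each class shows its count, its most common root causes, and where its defects are typically found, which is exactly the information you need when choosing measurement points.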
A STRATEGY FOR ADDRESSING PRODUCTIVITY AND WORKFLOW
In many development environments, the observed defect rate is low; the most visible problems are schedule overruns and resource cost overruns. In this situation, your measures need to focus on how the developers and testers get their work done: where the process is efficient and where it is inefficient. Just as the defect strategy began with a profile, the productivity strategy needs to begin with a profile. In this case, you want to know what “effective use of time” means in your organization and how that currently maps to your project plans and work breakdown structures.
The first step is to recognize where your organization’s primary focus lies. Ask yourself the following questions:
Is your corporate climate laissez-faire as long as the dates are met? If your team is left pretty much alone so it can try to meet impossible deadlines, then you’ll want to focus on the actual time to complete tasks. This will give you quantitative support when you escalate problems or when your management makes unrealistic commitments on your behalf. It will also allow you to plan more accurately for, and manage risks related to, the commitments you currently have.
Alternatively, does your corporate environment revere billable hours? If so, do you need to demonstrate that your resources are fully utilized on their assigned projects?
Do you also need to know how much time is spent doing other tasks that are mission-critical to some stakeholders in your project but are not critical to your project per se? If your resources are “publicly” allocated to your project, while in reality they are often pulled off to help someone else, you may want to track time misspent on interruptions as well as direct labor hours.
What level of granularity is required to meet the organization’s expectations?
What level of granularity do you think you need as a project manager?
Do you need to track time by resource and by task, or just by total charged hours and time period (say, all hours charged to the project by all team members for the previous week)?
Do you need to track time to a line item on a work breakdown structure, or to the various development activities that contribute to that task?
Once you’ve decided whether your goal is to measure time spent, time misspent, or both, and once you’ve decided what level of granularity is required, you can look for appropriate measurement points:
1. Categorize the project tasks into types of activities and/or into phases of project activity.
2. Identify the significant obstacles or interruptions that your project team will face.
3. Using any existing data you may have and interviews with the project team members, establish the duration range (the anticipated minimum and maximum) for the categories of tasks and the obstacles or interruptions.
4. Make a list of the questions you would like answered at these measurement points. Different tasks will probably prompt different questions, as will different kinds of obstacles. Don’t worry about the data itself yet; the goal of this step is to identify as best you can the information you want to extract from the data.
A time-focused approach will give you some insight into where you can streamline your own activities. It will also give you visibility into time constraints or time conflicts within your project team. The information garnered from the analysis phase can be turned around immediately into better planning for the next development cycle in the current project.
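As one way to picture steps 1 through 3, the sketch below records a duration range per activity category and flags actuals that fall outside it. The categories and hour values are invented for illustration; substitute your own task types and historical data.

# A minimal sketch of the time-profile bookkeeping described above.
# Activity categories, hour ranges, and actuals are invented for illustration.

duration_ranges = {
    # activity category: (anticipated minimum hours, anticipated maximum hours)
    "design review":         (2, 6),
    "coding (per WBS item)": (16, 40),
    "unit test":             (4, 12),
    "interruption/support":  (0, 4),   # time misspent, tracked separately
}

actuals = [
    ("coding (per WBS item)", 52),
    ("unit test", 6),
    ("interruption/support", 9),
]

def flag_outliers(ranges, observed):
    """Yield activities whose actual duration falls outside the anticipated range."""
    for activity, hours in observed:
        low, high = ranges[activity]
        if not (low <= hours <= high):
            yield activity, hours, (low, high)

for activity, hours, expected in flag_outliers(duration_ranges, actuals):
    print(f"{activity}: {hours}h actual vs {expected} anticipated")

Run against a week of actuals, a report like this points directly at the tasks and interruptions whose estimates need revisiting in the next planning cycle.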
APPLYING WHAT YOU’VE LEARNED
The following exercises will help you position and define your measurement effort within the rest of the organization.
Exercise 1-1: A Clear Rationale
1. What does your company value (innovation, revenue, quality, professional development, etc.)?
2. What are your department’s specific business goals? If your department doesn’t have any explicit business goals, what goals do you think it should have?
3. Why do you want to measure your processes and products? Phrase your answer so it addresses both the values and goals.
Exercise 1-2: A Strategy to Meet Real Needs
1. If you could change three aspects of your own work environment, what would you change?
2. If you could change three aspects of your team’s or department’s work environment, what would you change?
3. What processes in the organization have an impact on these six aspects? (The processes don’t have to be documented, formal processes; they could be just “how things get done around here” or, even more informally, patterns of behavior.)
4. Rank the six aspects from highest to lowest priority.
5. What should you measure to leverage change in these six areas? How can these measures help improve the processes or behaviors that affect these six areas?
6. As you think about what you would like to change in the way you manage projects, think also about how this change can be motivated and who owns making the actual change. For example, if you need to motivate people to implement a new process, you may have to focus first on what the lack of a process costs and only later on how well people are performing the process.