Writing code can be hard; building great solutions is even harder. Couple that with the fact that you probably don't work alone but must collaborate with other developers, designers, testers, and project managers, all while getting everything done within strict budget limits, and the task of building a great-quality piece of software becomes "interesting," to say the least.
We all aspire to build great software, but the reality is that, during the past couple of decades, most organizations have focused not on quality but on cost reduction, including driving down the cost of IT. Organizations have been fairly successful at this, but at what cost?
Nor have the past two decades seen a significant upward trend in project success rates. According to the Standish Group's Chaos Report (2009) [1], while projects regarded as outright failures have consistently trended down over the past several years, projects regarded as challenged have edged up again. Only about 35 percent of projects are regarded as successful. Does this mean that the singular focus on cost reduction has effectively made it "cheaper to fail"? And if that's the case, have we really saved money?
Focus on Success
With a success rate of about 35 percent, we're batting roughly as well as professional baseball players, but in most cases we're paid significantly less. If we want to be successful in the field of software engineering, we need to raise our average significantly. Now is the time to put the cost reduction mentality behind us and to focus on successfully delivering the software we're asked to build. In the long run, that will be a much more efficient way of saving money.
To become successful, it's important to understand exactly what success looks like. Many projects classed as successful by an IT department (i.e., delivered on time, on budget, and with the specified features) may not be seen in the same light by the customer. At the end of the day, the only measure of success that matters is whether or not the customer (whoever that might be) is happy with the final product. This generally means that the product does what the customer wants (provides greater efficiency and effectiveness in their daily work) and is free of defects that detract from the software's effectiveness. Even though a project meets the standard criteria for success as judged by the IT department, it might not be judged successful from the customer's viewpoint for many reasons: the customer's needs might have changed since the inception of the project; the designed system might "work" but be considered "unusable"; or any of myriad other reasons might make the project a technical success but a failure for the customer.
Many barriers can hinder us as we strive for success, but probably the single most important barrier that we as software engineers directly control is quality. Traditionally, quality is thought of in terms of bugs, but quality issues can take many forms, including poor user interaction or interface design; inappropriate or missing features; poor performance, scalability, or security; and poor maintainability.
As you've seen, what defines quality can be difficult to quantify, but an even more difficult question is what level of quality is acceptable. As in all things, compromise is inevitable. An interesting way to think about quality is to consider the level of acceptable risk: if the software being developed is a flight control system for a commercial airliner, the level of acceptable risk with regard to quality issues is far lower than for, say, a community website. Regardless of how you approach quality or what you consider an acceptable level of risk, you can't deny that users of computer systems are becoming far less tolerant of software failures, poor user experience and interaction, or poor performance and security.
Flaws Aren't Acceptable
Traditionally, computer users have been willing to accept, or even expect, that computer software will have issues. Those of us old enough to remember a time before computers grew up in the infancy of the computer age, when the marvel of what computers could do outweighed any issues that might occur. The same is not true of today's younger generations, who have grown up with computers and expect them to work. We're fast reaching a turning point where quality issues, whether bugs, design, UI, or anything else, will quickly kill any software we build, because people will no longer accept the flaws.
Problems also exist with the way most of the software industry has traditionally approached quality. Even in the most advanced IT departments with dedicated QA teams, it's almost expected that the QA department's role is to find all the bugs and all the UX, performance, and security issues, sending each one back to be fixed as it's found. Remember what QA stands for: Quality Assurance. The role of the QA department should be to assure that no serious defects make it through the process, not to catch the simplest of bugs because no checks were done before the hand-off to QA. If any other industry approached quality assurance in this manner, nothing would ever get finished.
So how do you become successful? Primarily, success is defined by meeting your customer’s needs. To meet those needs you must build what your customer wants, deliver it in an appropriate time frame, and achieve the quality expected.
No single silver bullet can drive up success rates while keeping costs under control, but you can take steps in the right direction by using tools and processes that give you the best chance of avoiding failure. This is where Application Lifecycle Management (ALM) tools come into the picture. There are four steps to achieving this outcome, and they must be coordinated across all the team members responsible for delivering the finished product. Quality starts with design: understanding what your customer is looking for and ensuring, throughout the process of building the software, that what you're building is still what the customer wants or needs. Requirements management and traceability help ensure that you're always building what the customer needs, and living architectural diagrams keep everyone in touch with the end goal throughout the process.
Quality continues with development, where many of the new technologies (e.g., code analysis, code metrics, and unit testing with code coverage) can help you write the best code. Performance and load tests ensure that you're writing efficient code, and new debugging technologies like IntelliTrace™, Microsoft's latest innovation in Visual Studio 2010, ensure that if you find a bug you can always reproduce it so it can be fixed. Quality is then verified by testers, where new tooling for manual testing (which accounts for about 70 percent of all testing) can dramatically increase its efficiency and, used in conjunction with technologies like IntelliTrace™, can ensure that the bugs found are "actionable." In other words, when a developer gets the bug, he or she can reproduce it, which is not always the case. The final step is giving the customer and other team members visibility into what is going on, so that issues can be caught and course corrections made early, with the least impact on the project. This is also where the new wave of Agile techniques can significantly affect a project's chance for success.
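To make the idea of unit testing with code coverage concrete, here is a minimal sketch. The `parse_price` function and its tests are hypothetical illustrations, not from the article; the point is that tests exercising both the normal path and the error path are exactly what a coverage tool would measure.

```python
import unittest

# Hypothetical function under test: parses a price string like "$1,234.50".
def parse_price(text: str) -> float:
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)

class ParsePriceTests(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_price("19.99"), 19.99)

    def test_currency_symbol_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

    def test_empty_string_raises(self):
        # Covering the error branch is what pushes coverage to 100 percent.
        with self.assertRaises(ValueError):
            parse_price("  ")

# Run with: python -m unittest <module>, optionally under a coverage tool
# (e.g., coverage.py) to report which lines and branches the tests hit.
```

The same pattern applies whatever the language or framework: each branch of the code under test gets at least one test, and the coverage report tells you which branches the suite never reached.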
Quality matters. Focusing on quality and having the right tools will increase your chance of being successful. Take a look at this month's feature article to see how Visual Studio 2010 can help you achieve these goals by winning the battle against bugs—and watch for future articles for details and insight into other areas of Visual Studio 2010 that can help you be more successful.
Matt Nunn (email@example.com), a senior product manager with Visual Studio, has led technical product management efforts for over four years, with a focus on the Microsoft ALM products and the business of application lifecycle management with Microsoft tools.
[1] Standish Group, Chaos Report, 2009.