
Maurice Wilkes, designer and builder of EDSAC, the first computer with an internally stored program, once said, “…the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.” [1]

I'm sure that, for Mr. Wilkes, this was a fairly depressing realization, and it's one I think applies to most computer programmers. After all, most of us got into this business to create, not to debug. However, we all know that part of bringing a new piece of software to life is ensuring that it's bug-free, or at least free of serious risk. We also know that long periods of debugging can get in the way of creating our masterpiece.

As software development tools have become more advanced over the years, debugging and debuggers have seen some of the more significant enhancements. If you once debugged by inserting print statements in your code so you could see the value of your variables, then you know that F5 debugging, watch windows, call stacks, and the like are light years beyond those days. But one fundamental debugging feature has remained constant: debugging is a linear process and you can only go forward.

In Visual Studio, a typical debugging session involves using the debugger to step through the code execution while you watch variable states and execution flow to try and identify the bug's cause. For a developer this means setting a breakpoint at some point in the code prior to the error occurring.

The trick is to set the breakpoint close enough to the error that you don’t spend hours stepping through code that has nothing to do with the error, while hoping that you set it early enough to catch the cause of the error so that you don’t have to restart the debugging process. As you know, this is a fairly "hit and miss" exercise and although you might set the breakpoint so that you can step through the actual error, the root cause of the error might have happened many steps before your breakpoint. This leads to a frustrating dance with the debugger: every time you make a wrong step, you must reset and start again. And if you're lucky enough to determine the location of an error's root cause, extracting enough information from the code surrounding the error to understand exactly what went wrong can be difficult.

More Power to Debug

What if you could debug a program in the same way that you watch a movie or TV show on a DVR? If something doesn’t interest you, you can fast forward to a new spot and if you miss something you can rewind. And what if you had a "More Info" button that you could press and find out more details about what was happening at the point in time where the error started? Visual Studio 2010 Ultimate gives you that power.

IntelliTrace is a new capability in Visual Studio 2010 Ultimate that lets you record the execution of your code and then interact with the recording in a number of ways to make it easier to find and fix bugs. IntelliTrace uses one of a number of diagnostic data adapters (DDAs) to record information about your application's execution. The level of event data collection is adjustable through DDA configuration and can range from fairly coarse-grained to much more fine-grained collection. You'll learn more about DDAs later in this article.

Before you can use IntelliTrace, you must make sure that it's turned on and that it's collecting the information you want to review. You do this by going to Tools | Options and selecting the General category under the IntelliTrace node. The General screen lets you turn IntelliTrace on or off and select what you want to capture. The more information you capture, the more likely you are to degrade the application's performance during the debug run, but that's often a worthwhile trade-off to get the information you need to find the bug.

Collecting Data

Once IntelliTrace is enabled, it collects data every time you do a debug run. You'll interact with the IntelliTrace results in a couple of ways. If you hit a bug during your test run, you'll be dropped into the code at that point. The IntelliTrace window will show all the information the DDAs collected during the run up to this point. You'll be able to move through the code execution just as you would with a DVR: forward, backward, pause, stop, rewind, and so on.

Similar to a DVR, you can’t change the code execution that occurred—after all, it already happened and you can’t change the past—but you're able to examine the code execution more easily. If you miss something, simply "rewind" to the event to see what happened. This enables you to more easily and more quickly identify what caused the bug.

Figure 1 shows an example of an IntelliTrace run and the results that you can expect to see as a developer. In this particular instance, you can see that simply by looking through the log you can pinpoint where a user control was being loaded and that it's reading from the cookie information, which in this case it shouldn’t be.

Another way to interact with an IntelliTrace log is to load it manually into the debugger. Whenever IntelliTrace runs, it saves the information log as a file. You, or anyone else with IntelliTrace, can attach the Visual Studio debugger to this file and replay the entire application execution, with all the IntelliTrace navigation features—even if you're on a different machine. This feature is incredibly powerful; it lets you hand off IntelliTrace results for someone else to look at, which is particularly helpful when you're suffering from "debugger's block." The video "IntelliTrace—Getting Started" shows how IntelliTrace can help you find and fix bugs.

IntelliTrace is great for an individual developer who needs to eliminate bugs so the coding can continue, but it gets even better. Most software today is developed by teams. Teams come in many shapes and sizes, but a typical software development team consists of, at least, developers and testers. As soon as you divide responsibility for bugs between multiple roles, resolution can become more difficult, even as you find more of the bugs, because the focus is no longer solely on bugs that happen on the developer's machine.

Fixing More Bugs

As a software development team member, what is the most common bug resolution used by your development team? Is it "Cannot Reproduce," "Not a Bug," or "Won’t Fix?" If you ranked all the bug resolutions by how commonly they were used, where would "Fixed" appear? Imagine that you could move "Fixed" to the top of the list. How would your software development process change if resolutions like "Not a Bug" and "Won’t Fix" became rarities? What if "Cannot Reproduce" became obsolete?

To move "Fixed" to the top of the resolution list, you have to look at how software is tested and how defects are reported. In most environments today, the test or QA team uses a set of tools that enable them to do their job, but these tools rarely integrate with the tools used by the development team. This is the first fracture in the system and a contributor to costly bug ping-pong (the constant sending back and forth of a bug as a tester finds it, a developer can’t reproduce it, a tester finds it again, and so on). In an ideal world, a development team and a test team would use tools that integrated with each other, enabling the seamless flow of information between the two teams.

The majority of testing done today is functional testing. Functional testing refers to testing a specific action or function of the code. Typically, a functional test is designed to identify whether a particular requirement has been fulfilled, a feature works, or a user story can be completed without error. Currently, about 70 percent of functional testing is done manually—that is, a software tester follows a script to execute a series of steps to verify the outcome of a test. This means that we rely on the individual tester to gather enough information about the bug to let a developer fix it.

Visual Studio 2010 introduces a brand new product specifically targeted at testers to try and combat these problems. Microsoft Test Manager 2010 is to testers what Visual Studio is to developers. That is to say, where Visual Studio is an IDE—an integrated development environment—Test Manager is an ITE—an integrated test environment. Figure 2 shows the Testing Center interface that a tester will use to create test cases, organize test plans, track test results, and file bugs when defects are found. Test Manager, integrated with Team Foundation Server, is designed to improve the productivity of testers.

The Key Is Diagnostics

At the heart of Test Manager’s ability to gather information about bugs are diagnostic data adapters (DDAs—see the sidebar below, "Diagnostic Data Adapters"). Earlier I discussed how IntelliTrace relies on one of these DDAs. Test Manager uses this and many other DDAs to make the most of test case execution and to ensure that when a tester files a bug it's actionable with very little work on the tester’s part.

To define an actionable bug, let's examine a typical bug coming from a tester. It's important to remember that, in many cases, the tester filing the bug is not technical; a tester records the information he thinks is important or helpful for whoever is assigned the bug. The problem starts at this point. You might say that developers are from Mars and testers are from Venus. The roles of these individuals, what's important to them, and the language they speak are so different that they might as well be from different planets. How can anyone expect a tester to accurately provide the information a developer needs to do his job? As a result, a typical bug fails to contain the detail necessary for a developer to understand the bug, find the root cause, and fix it.

An actionable bug, on the other hand, is a bug full of data that helps a developer understand the bug and enables him to take immediate action toward fixing it. An actionable bug is the result of executing a test case using Microsoft Test Manager 2010, discovering a defect, and using the tool to create and file a new bug. To begin, a tester launches Test Manager and selects a test case to run. The test case describes a series of steps to perform and any validation to be done on one or more of those steps. When the test case is run, the Test Manager interface changes to support the Test Runner activity (Figure 3). The tool becomes a dockable sidebar that enables the tester to easily step through the test case while maintaining focus on both the Test Runner and the application being tested.

Beginning a Test

As you begin the test case, you're given the option to create an action recording. This recording—captured by one of the DDAs—collects and records all the tester's mouse clicks and keyboard strokes. On subsequent runs of the test case, the action recording may be used to fast-forward through the test case.

When the test run is started, the Test Runner displays the sequence of steps to be executed. Each step can be marked as Passed or Failed after the step is completed. As you go through the steps you can indicate the result of each step, which provides granular-level detail about what happened during the test run.

If you indicate that a step has failed, you're prompted to add a note about the failure—that is, what happened that made you mark this as failed. Additionally, Test Manager provides the capability to easily capture the screen, or part of it, to show what went wrong during that step—a great feature for capturing error messages and dialog boxes.

When a defect is found and a step is marked as "Failed," the next thing to do is create a new bug. This, too, can be done directly through the Test Manager interface. When a new bug is created, all the artifacts collected by the DDAs are automatically attached to the bug. These include the steps performed in the test case and system information for the test environment, and may also include a video capture of the tester's screen, any screenshots captured during the test execution, and an IntelliTrace file that the developer can use to see exactly what events took place during the test execution. All the tester needs to do is enter a title for the new bug and click Save and Close; everything else is done for him. You can see a demo, "Creating Actionable Bugs with Microsoft Visual Studio Test Professional 2010," here.

A Helpful Bug

An actionable bug is tremendously helpful. No longer do you have to hope that the tester properly documented the steps to reproduce the bug, or the system information—those things are collected automatically and added to the bug. No longer do you have to walk down the hall and ask the tester to run the test case again so that you can watch what happened with your own eyes—the screen capture and video capture do that for you. And with the IntelliTrace log attached to the bug, you can replay exactly what the tester experienced and step through the code as if you had been there when the test was run. You can see a demo, "Consuming an Actionable Bug," here.

So, we can find bugs on our own and have a wealth of information at our fingertips that allows us to quickly find the root cause. We can have testers find bugs and provide the detail we need to reproduce them so that we can fix them. Are we done? Do we know that, as we continue to add code, the bugs we have found and fixed stay fixed? The only way to truly know is to constantly run tests that confirm the status of each bug.

Testing designed to ensure that fixed bugs do not recur is referred to as regression testing. A regression test is designed to detect whether a previously working behavior, such as a bug fix, has started to fail again. With a regression test (and even other functional tests), you're repeatedly testing something that was known to work at one point. Just as with functional tests, regression tests are mostly a manual exercise. This is a massive resource hit; testers spend countless hours testing functionality that works, solely to ensure it still works. Imagine how much more productive testers could be if they could focus their efforts on creating and running new test cases that cover parts of the system not currently tested, instead of spending their time testing the same thing over and over again.

Coded UI Tests

Visual Studio 2010 introduces a new capability called a Coded UI Test, which automates testing an application's user interface. Coded UI Tests work similarly to unit tests, in that they are written in a programming language such as Visual Basic or C#, and they perform some action or series of actions, validate one or more assertions, and are easily repeatable with predictable results. The difference from a unit test is that, while a unit test is intended to test an individual unit of code, such as a single method call, the Coded UI Test is intended for automating a functional test, including the validation of the outcomes within the test.

Visual Studio makes it easy for you to create Coded UI Tests. You can either use the action recorder, which lets you capture your actions, keyboard strokes, mouse clicks, and so on, or you can import an existing action recording created with Test Manager 2010 as part of a test case execution. In either case, the actions recorded are converted into code, Visual Basic or C#. Once you've captured the recording, you can go into the code and add assertions to the test using the Coded UI test builder. This very simple tool lets you select areas of the UI that you want to validate and then provides you with the list of properties for that UI control so that you can choose which properties you want to check. You can see a demo "Creating a Coded UI Test" here.
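To give a sense of what the generated code looks like, here is a rough, hand-written sketch of a C# Coded UI Test for a hypothetical Windows application. The application path, window, and control names are invented for illustration, and code produced by the recorder is actually routed through a generated UIMap class rather than written inline like this; the shape of the test, however, is the same: find controls, replay actions, assert on UI properties.

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class AddItemTests
{
    [TestMethod]
    public void AddingAnItemUpdatesTheTotal()
    {
        // Launch the application under test (path is hypothetical)
        ApplicationUnderTest app =
            ApplicationUnderTest.Launch(@"C:\Demo\Inventory.exe");

        // Locate the main window and its controls by automation properties,
        // the same way recorder-generated code does
        WinWindow mainWindow = new WinWindow();
        mainWindow.SearchProperties[WinWindow.PropertyNames.Name] = "Inventory";

        WinEdit quantityBox = new WinEdit(mainWindow);
        quantityBox.SearchProperties[WinEdit.PropertyNames.Name] = "Quantity";

        WinButton addButton = new WinButton(mainWindow);
        addButton.SearchProperties[WinButton.PropertyNames.Name] = "Add";

        // Replay the actions a tester would perform manually
        quantityBox.Text = "3";
        Mouse.Click(addButton);

        // An assertion of the kind the Coded UI Test Builder generates:
        // validate a chosen property of a chosen UI control
        WinText total = new WinText(mainWindow);
        total.SearchProperties[WinText.PropertyNames.Name] = "Total";
        Assert.AreEqual("3", total.DisplayText);
    }
}
```

Because the test is ordinary code, it runs like any other test in Visual Studio and can be executed on every build, which is what makes it suitable for automating the regression tests discussed earlier.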

By automating functional tests, including regression tests, you free up testing resources to focus on creating and executing new tests to ensure complete test coverage of your software. It's like adding an automated tester to your team—one who is never late for work, never complains, works at incredible speed, and takes on additional testing work as fast as you can create the Coded UI Tests.

Bugs have plagued us since people first started working with computers. In the early days of computing, bugs were easier to find because they were sometimes literal insects. But as computer systems have become more complex, bugs have become increasingly difficult to find and fix. Visual Studio 2010 takes a big step forward by providing you with the tools and information you need not only to find bugs, but also to work with other members of your team to fix them efficiently and effectively. Once the bugs are fixed, Coded UI Tests let you make sure that bugs don't come back—and if they do, you'll know about them early.

Matt Nunn (matt.nunn@microsoft.com), a senior product manager with Visual Studio, has led technical product management efforts for over four years, with a focus on the Microsoft ALM products and the business of application lifecycle management with Microsoft tools.

 


[1] Maurice V. Wilkes, Memoirs of a Computer Pioneer, MIT Press, 1985.

 

Diagnostic Data Adapters

Visual Studio Agents 2010 is a set of technologies—included with Visual Studio 2010—that enable tasks to be performed on behalf of one or more users. In the context of testing and debugging, the most important agent is the Test Agent, which includes a set of diagnostic data adapters (DDAs).

There are several DDAs, each of which serves a different purpose. During test execution, you can use DDAs to collect:

  • a video capture of the tester’s machine.
  • the steps performed during the test execution.
  • system information from any or all of the machines involved in the test execution.
  • an action recording—a recording of the keyboard strokes and mouse clicks performed by the tester.
  • an IntelliTrace file that can be used later to replay the events that occurred during the test run.

While this is not an exhaustive list of the DDAs and their capabilities, these are the ones that provide the most value when filing a bug: they automate the collection of the information that turns a normal, somewhat helpful bug into a rich, actionable bug.