Why NOT to fix a bug

We testers love to have our issues/bugs fixed, especially Sev 1 (i.e. crashing or data loss) ones. But sometimes we love it when they DON’T fix a bug. Say what? Yes, I once fought to NOT fix a crashing bug. But I’m getting ahead of myself.

Whenever we find a bug, we assign a number to it denoting its severity. Maybe it’s a trivial issue the customer would likely never notice. Maybe it’s a must-fix bug such as a crash, data loss, or security vulnerability. At Microsoft, we generally assign each bug two numbers when we enter it: Severity and Priority. Severity is how bad the bug is: a crash = 1, a button border color off by a shade = 4. Priority is how soon the bug should be fixed: search does nothing so I can’t test my feature = 1, searching for ESC-aped text doesn’t work = 4.
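To make the two axes concrete, here’s a minimal sketch of a bug record and how a triage queue might order it. The field names and sort rule are my own illustration, not the schema of Microsoft’s actual bug tracker:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    title: str
    severity: int  # how bad: 1 = crash/data loss ... 4 = cosmetic
    priority: int  # how soon: 1 = fix immediately ... 4 = fix when convenient

bugs = [
    Bug("Search crashes the backend", severity=1, priority=1),
    Bug("Button border color off by a shade", severity=4, priority=4),
    Bug("ESC-aped search text not handled", severity=3, priority=4),
]

# Triage typically works the queue by priority first, then severity.
triage_queue = sorted(bugs, key=lambda b: (b.priority, b.severity))
print([b.title for b in triage_queue])
```

Note that the two numbers are independent: a bug can be severe but low priority (the ESC-aped search), or mild but urgent.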

Once we enter a bug, it’s off to Bug Triage, a committee made up of representatives from most of the disciplines. At the start of a project, there is a good chance all bugs will be fixed. We know, though, from mining our engineering process data, that whenever a bug is fixed, there is a non-zero chance the fix won’t be perfect or something else will be broken. Early in the project, we have time to find those new bugs. As we get closer to release, there may not be time to find the few cases where we broke the code.

One more piece to this puzzle: Quality Essentials (QE). It is the list of practices and procedures – the requirements – that our software or service must meet in order to be released. It could be as simple as verifying the service can be successfully deployed AND rolled back. It could be as mundane as zeroing out the unused portions of sectors on the install disk.

Now, that bug I told you about at the beginning. We have an internal web site that allows employees to search for and register for trainings. We had a sprint, a four-week release cycle, at the end of the year in which we had to make the site fully accessible to those with disabilities. This was a new QE requirement. We were on track to ship on time…as long as we skipped our planned holiday vacations. While messing around with the site one lunch, I noticed we had a SQL injection bug: I could crash the SQL backend. The developer looked at the bug, and the fix was fairly straightforward. The regression testing required, though, would take a couple of days, and that time was not in the schedule. Our options were:
• Reset the sprint, fix the new bug, and ship late. We HAD to release the fix by the end of the year, so this wasn’t an option.
• Bring in more testing resources. With holiday vacations already underway, this wasn’t a good option.
• Take the fix, do limited testing, and be ready to roll back if problems were found. Since this site has to be up 99.999% of the time, this wasn’t a legitimate option.
• Not fix the bug. This is the option we decided to go with.

Why did we go with the last option? There were a couple of reasons:
1) The accessibility fix HAD to be released before the end of the year due to a Quality Essentials requirement.
2) The SQL backend was behind a load balancer, with a second server and one standby. One SQL server was usually enough to handle the traffic.
3) The crashed SQL server was automatically rebooted and rejoined the load balancer within a minute or two, so the end user was unlikely to notice any performance issues.
4) The web site is internal only, and we expect most employees to be well behaved…the project tester, me, being the exception.

So, the likelihood of a crash was small and the impact of a crash was small, so we shipped it. After a few days off, the next sprint, a short one, was carried out just to fix and regress this one bug. According to the server logs, the SQL server was crashed once between the holidays and the release of the fix. It was noted by our ever-diligent Operations team. But, hey, I was testing the logging and reporting system. 🙂
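For anyone curious why the developer called the fix straightforward: the standard cure for SQL injection is to stop building queries by string concatenation and use parameterized queries instead. A minimal sketch using Python’s sqlite3 as a stand-in for the real backend (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trainings (title TEXT)")
conn.execute("INSERT INTO trainings VALUES ('SQL Basics')")

def search_unsafe(term):
    # Vulnerable: the search term is spliced directly into the SQL text,
    # so a stray quote can change the query's meaning or crash it outright.
    return conn.execute(
        f"SELECT title FROM trainings WHERE title LIKE '%{term}%'").fetchall()

def search_safe(term):
    # Parameterized: the term is passed as data and never parsed as SQL.
    return conn.execute(
        "SELECT title FROM trainings WHERE title LIKE ?",
        (f"%{term}%",)).fetchall()

print(search_safe("SQL"))  # behaves identically to the unsafe version for honest input
```

The code change really is small; as the story shows, it was the regression-testing time around the change that drove the decision.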

I would be remiss if I didn’t add that each bug is different and must be examined as part of the whole system. The fix decision would have been very different if this were an external facing service, or something critical such as financial data was involved.

The End of a Bug’s Life

I recently wrote an article describing five best practices that will increase the odds of your bugs getting fixed. If you were killing time at work, you might have read it. One practice I suggested was closing your Resolved bugs as early as possible. Although I follow this practice now, I didn’t always do so.

Not long after I became a Microsoft SDET, I was assigned to a project that was approaching its release date. Two weeks before our deadline, it looked like we were in trouble. (I’ve since learned it always looks like you’re in trouble two weeks before a deadline.) There were a lot of bugs that had been Resolved by Dev, but not validated and Closed by Test.

A couple of days before our deadline, the Program Manager emailed the team a status update. We were making great progress, and there were only a few bugs left to Resolve and Close. The email had a chart that looked like this:

[Chart: Resolved-but-not-Closed bug count dropping from over 300 to almost zero in two weeks]

The graph showed we went from over 300 Resolved, but un-tested, bugs to almost zero in two weeks. How did we do this? A lot of long days. Working over the weekend. Meals at the office. Little sleep. Lots of stress. As good teams do, we rallied together to meet our deadline. It felt good.

It felt good until our next deadline, when we repeated this cycle all over again.

I’ve since learned there’s a simple solution to avoid the stress that comes with validating so many bug fixes just before a deadline: don’t wait until the last minute to close your bugs!

Closing your bugs should be part of your weekly routine, not something you do in batch before a deadline. This approach has six advantages.

  1. You won’t have to work long hours to close bugs at your deadline.
  2. You’ll never be tempted to cut corners to close bugs at your deadline.
  3. Bugs are easier to close when they’re fresh in your mind; if you wait too long, you’ll have to re-learn them.
  4. If you find a problem with the bug fix, it gives the developer more time to resolve it.
  5. If you have to re-open a bug because it still doesn’t work, it’s less likely to be rejected because the “bug bar” has since been raised.
  6. When a bug isn’t closed, it’s not known whether it’s fixed or not, or whether the fix breaks something else. The more Resolved bugs you have, the less you know about the real state of your software.

Since there are so many advantages to closing your bugs in real time, rather than in batch, why don’t more testers do it? Let’s create another list.

  1. It’s not very interesting to close your bugs. After automating a test, finding the bug, and investigating the root cause, all the interesting work is done. Verifying the fix is either a matter of running an automated test or manually testing the feature again, neither of which is a favorite pastime of most testers.
  2. Although many organizations encourage the timely resolution of bugs, few encourage the timely closing of bugs. I’ve worked on many teams that had guidelines on how quickly bugs must be Resolved, such as P0 bugs within 24 hours, P1 bugs within a week, and P2 bugs within the current milestone. However, there were rarely similar guidelines for testers to verify and Close these bugs. It seems reasonable that testers should be held to the same standards as developers, and have to Close P0 bugs within 24 hours of them being Resolved. One way I’ve seen teams encourage the Closing of bugs is by having a “bug jail”: once a tester or team exceeds a pre-defined number of Resolved bugs, they have to close those bugs before moving on to other tasks.
  3. Some testers work on teams that reward employees based on the number of bugs they find. What effect does this have on testers? It encourages the finding of bugs, not the Closing of them. If anything, testers should be rewarded based on the number of bugs fixed and closed, which would at least encourage the resolution of quality bugs.
  4. Sometimes testers don’t Close their bugs in a timely manner because they’re disorganized. Most testers have a hectic schedule, and it can be hard to keep track of which bugs you need to Close. Fortunately, Microsoft testers have access to a tool called Bugger. Bugger docks on your desktop, queries the bug database, and displays the number of bugs assigned to you. Clicking on Bugger shows the details. With Bugger docked on your desktop, it’s almost impossible to forget which bugs are assigned to you.

[Screenshot: Bugger docked on the desktop, showing the number of bugs assigned to you]

If you work at Microsoft, install Bugger and avoid some of the stress that comes at project deadlines. If you don’t work at Microsoft, consider writing your own version of this tool. Your manager and co-workers will appreciate it.
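If you do write your own version, the core of such a tool is just a query and a count. A minimal sketch (the bug records and field names here are hypothetical; a real version would poll your team’s bug-tracker API on a timer and display the result on screen):

```python
def resolved_bug_count(bugs, owner):
    """Count bugs assigned to `owner` that are Resolved but not yet Closed."""
    return sum(1 for b in bugs
               if b["assigned_to"] == owner and b["state"] == "Resolved")

# Hypothetical snapshot of a bug database query.
bugs = [
    {"id": 101, "assigned_to": "alice", "state": "Resolved"},
    {"id": 102, "assigned_to": "alice", "state": "Closed"},
    {"id": 103, "assigned_to": "bob",   "state": "Resolved"},
]

print(resolved_bug_count(bugs, "alice"))  # 1
```

Everything else in a Bugger-style tool, such as docking and click-through to details, is UI polish around this one number.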

How to Get Your Bugs Fixed


One of the worst-kept secrets in Test is that all released software contains bugs. Some bugs are never found by the Test team; others are found, but the cost of fixing them is too high. To release good-quality software, then, you not only have to be proficient at finding bugs, you also have to be proficient at getting them fixed. Unfortunately, some testers are good at the former but not the latter.

A project I worked on a couple of years ago illustrates the point. We were working hard to resolve our large backlog of active bugs as the release deadline approached. Just before our deadline, I received an email declaring that we were successful, and our backlog had been resolved. This meant every bug had either been fixed, postponed, marked as a duplicate, etc. Congratulations were seemingly in order–until I looked more closely at the chart included in the email, which looked like this:

[Pie chart: how we resolved our backlog of active bugs]

I noticed the size of the green Fixed wedge was rather small. The chart represented 150 bugs, so only 4 or 5 were fixed. I wondered why it wasn’t more. Perhaps many of those logged “bugs” weren’t product bugs at all. The small size of the Not Repro, By Design, and Duplicate wedges, however, shows this wasn’t the case.

Now look at the size of the Postponed and Won’t Fix wedges; they make up almost the entire pie! It turned out that although most of the bugs were “real”, they were too trivial to fix that late in the development cycle. For example, there were misaligned UI buttons and misspellings in the product documentation.

I agreed we made the right decision by not fixing these bugs. However, each legitimate bug that was reported, but not fixed, suggests that some amount of test effort didn’t affect the released product.

It takes a lot of effort to log a bug! You have to set up the test environment, learn the product, find the bug, check if it was already logged, investigate the root cause, and file the bug. Let’s be pretentious and call this effort E. Since we had 150 bugs, we had:

150 * E = 5 bugs fixed

In a perfect world, this would be:

150 * E = 150 bugs fixed

I’m not saying that our test effort wasn’t valuable. If the Fixed wedge contained just one security flaw, it would easily justify our work. It did, however, seem as if the ratio of test-effort to bugs-fixed was less than ideal.
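Put concretely, the yield on that release’s test effort is easy to compute:

```python
total_logged = 150  # bugs represented by the pie chart
fixed = 5           # the green Fixed wedge

yield_pct = 100 * fixed / total_logged
print(f"{yield_pct:.1f}% of logged bugs were fixed")  # 3.3%
```

A 3% yield is the number to keep in mind while reading the five practices below; each one is a way to push it upward.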

Some testers might blame this low ratio on the program manager who rejected their bugs. Others might blame it on the developers who either caused the bugs or never got around to fixing them. These are excuses.

Testers should take responsibility for their own bugs-logged to bugs-fixed ratio. If you think one of your bugs should be fixed, it’s up to you to make sure this happens. You’ll probably never get to the point where every one of your bugs has a positive impact on the product, but here are five “best practices” we can follow to improve our odds.

Account for Bugs in the Schedule

All software contains bugs. Your Test schedule should account for the time it’ll take to find and investigate these bugs. Similarly, the Dev schedule should account for the time it’ll take to fix them. If you don’t see this time in the Dev and Test schedules, kindly call it out.

Start Testing as Soon as Possible

The earlier you report a bug, the better the chance it’ll be fixed. Not only will Dev have more time to fix your bugs, but it’s also less likely they’ll be rejected because the “bug bar” is too high. A common mistake is waiting until the end of the release cycle to schedule your bug bashes or performance testing. If you schedule these activities close to the release date, you’ll end up logging a bunch of bugs that never get fixed.

Log Detailed Bugs

When you log a bug, make sure you include all the information necessary to reproduce the issue. This should include repro steps, environment information, screenshots, videos, or anything else that may help. Be sure you also describe the customer impact in as much detail as possible; this is often the biggest factor in the decision to fix or defer the bug. For example, detailing a real exploit for a security bug, instead of a hypothetical one, can help push the fix through. Finally, use reasonable spelling, grammar, and punctuation; it doesn’t have to be perfect, but it has to be understandable by the triage team. Poorly filed bugs are often resolved as Not Repro, Won’t Fix, or Postponed because the triage team doesn’t understand the true impact of the issue.

Close Your Resolved Bugs as Soon as Possible

Although testers love finding bugs, they don’t like verifying bug fixes. Some testers consistently keep a large backlog of resolved bugs, and have to be “asked” by management to close them as the release date approaches. Their theory is that it’s better to spend time finding new bugs than closing old ones. The problem is that by the time they get around to verifying the bug fix, it’s often too late to do anything if the fix didn’t work.

Push Back on Bugs You Feel Strongly About

If you believe a bug should be fixed, don’t be afraid to fight for it. Don’t put the blame on the triage team that closed your bug as Won’t Fix or Postponed. Take ownership of the issue and present your case. If you can successfully push the bug through, both you and the customer win. But even if you can’t push it through, you’ll be respected for fighting for what you believe in and being an advocate for the customer. (Just be smart about which battles you pick!)

If you have other best practices that improve the odds of your bugs being fixed, please share them below. I would love to hear them.

Should We Combine Similar Bugs?

There are two schools of thought when it comes to logging bugs. One is that every product defect should be logged and tracked as its own bug. The other is that multiple similar defects should be combined into a single bug. Which process is more efficient, and is the answer always the same?

Combining Similar Bugs

Let’s start with an example. Not long after I learned about accessibility testing, I was asked to take part in a bug bash. Accessibility testing ensures customers with physical limitations can use the product. It includes verifying that the UI works correctly with large fonts, that all controls are accessible without a mouse, that the tab order is correct, etc. I decided to try out my new skills and test the UI.

I logged more than 50 defects in the first day of the bash. To my surprise, this did not go over well.

Triage was not happy about evaluating 50 fairly similar bugs. They claimed it would’ve been more efficient to track some of these defects together in a single bug. My fellow testers weren’t too happy either; the bug bash organizer was awarding a prize to the tester who logged the most bugs. It looked to them as if I padded my bug count to win a $10 Panera gift card.

I was told that combining similar defects into a single bug had the following advantages:

  1. The triage process goes quicker. Some of the issues I logged were so similar, there was no need to evaluate them individually.
  2. The bug-fix process goes quicker. If I had combined these defects based on the UI screen where they were found, the developers could have fixed several of them at once. Instead, they checked out, modified, and checked in the same file several times.
  3. The testing process goes quicker. Since developers resolve all the similar defects at once, testers can validate all the fixes at once. This is a tremendous time saver, especially if validating a defect requires deploying a new server or topology.

I was convinced. When we had our next bug bash, I combined my similar defects. I quickly found that although this saved time for the triage team, it added overhead for me.

Every time I logged a bug, not only did I have to check if the same issue was already logged, but I had to check if any similar issue was logged. And what exactly defines similar when it comes to combining bugs? If misaligned text appears on two different UI screens, should they be combined? Do the defects have to be in the same area of the application? Do they need to have the same root cause?

Once these bugs went to triage, more problems were revealed. Several of my combined bugs were closed as “Won’t Fix”. I found that if I combined four defects into one bug, and the triage team determined we shouldn’t fix one of the defects, they rejected the whole bug. It was then my responsibility to create a new bug with just the defects that would be fixed.

There were other issues. Even if the defects were similar, I had no way of knowing the same developer would be assigned to fix all of them. In some cases, the work had to be load-balanced between two developers. These bugs were assigned back to me to split up. I also had to split bugs if triage wanted to assign a different priority or severity to each defect.

Combining bugs didn’t have the expected benefits for the developers, either. When several defects were tracked in the same bug, they all had to be resolved and checked in together. Developers were no longer able to fix one defect immediately and save the rest for later.

This leads us back to our original question. Is it better to track multiple similar defects together, or is it better to log them separately? Some defects lend themselves to being combined, such as when the same UI screen has three hardcoded strings. Others don’t, such as when two sets of repro steps with different call stacks both result in a “can’t connect to web service” error.

I suggest you first get your team on the same page regarding the criteria for combining bugs. I’ve found, however, that in almost all cases, it’s better to log defects individually. The only time it’s more efficient to combine defects is when you know all of the following are true. (Here’s a tip: you’ll never know if all of the following are true.)

  1. Triage will assign the same status (accept, reject, postpone, etc.) to all the defects.
  2. Triage will assign the same priority and severity to all the defects.
  3. The same developer will be assigned to fix all the defects.
  4. The developer will be making all the fixes at the same time.
  5. The same tester will be assigned to validate all the fixes.

Some bug-tracking systems allow related bugs to be “linked” together, which is a great compromise. Now when I log bugs, I almost always follow the “one defect per bug” rule, but if I log similar bugs, I use this feature to link them together. The triage team might not be crazy about it, but it saves me and the company time. As a side benefit, I also have a wallet full of Panera gift cards.

Video Is Worth A Thousand Words

When filing a bug, some people are not the best at explaining the issue (you know who you are), and time is lost by triage trying to understand it, as well as by the filer answering the developers’ questions. There is also the risk that the bug will be mismarked as ‘no repro’ or ‘by design’ if it is not well understood.

Therefore, your best friend is a good set of repro notes explaining how a developer can reproduce the same issue in their own environment. It is also highly advisable to attach a picture of the issue to the bug, giving readers a quick and easy way to understand it at a glance. If you simply take a screenshot (PrtScr key), paste it into Paint, and attach that file, you are ahead of the curve… but really, the “Snipping Tool” that ships with Windows 7 and later is far easier to use and lets you annotate the image before saving it.

Yeah, smarty pants… but what about an issue that involves a set of complicated Repro Steps?

If your issue involves a series of steps, or the reactions are hard to describe, what do you do then? A picture of one event will not suffice. This is when you go to the movies. No, not the new action flick that was a remake of a much better foreign film… I mean MAKE a movie of the bug!

There are several options for recording your mouse and screen actions as a movie. I’ve tried quite a few, but the one that beats all the others, in my opinion, is unfortunately available only to Microsoft employees… so it won’t help most readers. Still, I’ll detail the features that are essential to me in a screen recorder and let you evaluate some of the options available out there.

[Screenshot: Screen Recorder’s simple UI]

Screen Recorder is a product created by a developer here at Microsoft, and it meets and exceeds my expectations for a bug-reporting tool. Unfortunately, it is not yet available to the general public, but it does illustrate what a good app for bug reporting should look like.

This app does lots of things correctly:

  • very simple UI (see above)
  • allows you to easily configure where the output file is written
  • outputs to a Windows Media file (with some apps you have to do the final encoding yourself)
  • allows audio recording (configured from the file output menu)
  • allows pausing and resuming
  • allows recording the full screen or a single running application

This method is perfect for attaching a WMV to a bug or presentation to easily portray the issue. It is also great for tutorials, how-to wikis, and PowerPoint presentations.

One publicly available option I’ve tried is “My Screen Recorder”. It has most of the features detailed above, though its UI is not as simple; still, I’d say it’s the best option at present. Another was Microsoft’s Expression Encoder, which was very versatile but way too complex for this purpose, and it did not encode the video in the same step as the recording, which was very time consuming.

Lastly, I should mention that Windows 7 ships with its own recording tool for reporting bugs to Microsoft. This software, called “Problem Steps Recorder”, can be used to create a detailed HTML page that includes step-by-step screenshots of the repro, which can be an option on a machine where you cannot install additional software. A detailed walkthrough of this option is shown in this TechRepublic blog post.

It would be great to hear if any of you have found screen-casting software you think is superior, and why. Please feel free to leave suggestions in the comment section so that we can all test them out, pun intended.

Please remember that videos make excellent supplements to a bug, but they do not substitute for good, searchable text describing the issue and its repro steps. The bug should include good setup, initialization, and execution steps for the issue; most teams would consider these mandatory.

For bugs to be most useful, they should be reported containing all investigatory evidence:

  • Expected result/actual result notes
  • Source data/files
  • Exception details
  • Stack trace (if possible)
  • And of course… THE MOVIE!
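A bug-tracker gatekeeper could even enforce that checklist mechanically before a bug reaches triage. A sketch, where the field names are my own and not any real tracker’s schema:

```python
# Checklist of investigatory evidence a complete bug report should carry.
REQUIRED_EVIDENCE = ("expected_result", "actual_result", "source_files",
                     "exception_details", "stack_trace", "video")

def missing_evidence(bug):
    """Return the checklist items a bug report is still missing."""
    return [field for field in REQUIRED_EVIDENCE if not bug.get(field)]

report = {
    "expected_result": "Search returns matching trainings",
    "actual_result": "HTTP 500 from the web service",
    "video": "repro.wmv",
}

print(missing_evidence(report))  # ['source_files', 'exception_details', 'stack_trace']
```

In practice you would treat some items (like the stack trace) as optional, exactly as the list above hedges with “if possible”.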

So let’s go to the movies people… make bug resolution more efficient in the process!

Enjoy!