The Case for Fewer Test Cases


Testers are often encouraged to automate more and more test cases. At first glance, the case for more test cases makes sense—the more tests you have, the better your product is tested. Who can argue with that? I can.

Creating too many test cases leads to the condition known as “test case bloat”. This occurs when you have so many test cases that you spend a disproportionate amount of time executing, investigating, and maintaining these tests. This leaves little time for more important tasks, such as actually finding and resolving product issues. Test case bloat causes the following four problems:

1. Test passes take a long time to complete.

The longer it takes for your test pass to complete, the longer you have to wait before you can begin investigating the failures. I worked on one project where there were so many test cases that the daily test pass took 27 hours to finish. It’s hard to run a test pass every day when it takes more than 24 hours to complete.

2. Failure investigations take a long time to complete.

The more tests you have, the more failures you have to investigate. If your test pass takes a day to complete, and you have a mountain of failures to investigate, it could be two days or longer before a build is validated. This turn-around time may be tolerable if you’re shipping your product on a DVD. But when your software is a service, you may need to validate product changes a lot faster.

For example, the product I’m working on is an email service. If a customer is without email, it’s unacceptable for my team to take this long to validate a bug fix. Executing just the highest-priority tests to validate a hot-fix may be a valid compromise. If you have a lot of test cases, however, even this can take too long.

3. Tests take too much effort to maintain.

When your automation suffers from test case bloat, even subtle changes in product functionality can cause massive ripples in your existing test cases, drastically increasing the amount of time you spend maintaining them. This leaves little time for other, more valuable tasks, such as testing new features. It’s also a morale killer. Most testers I know, the really good ones at least, don’t want to continually maintain the same test cases. They want to test new features and write new code.

4. After a certain threshold, more test cases no longer uncover product bugs; they mask them.

Most test cases only provide new information the first time they’re run. If the test passes, we can assume the feature works. If the test fails, we file a bug, which is eventually fixed by development, and the test case begins to pass. If it’s written well, the test will continue to pass unless a regression occurs.

Let’s assume we have 25 test cases that happily pass every time they’re run. At 3:00 a.m. an overtired developer then checks in a bug causing three tests to fail. Our pass rate would drop from 100% to an alarming 88%. The failures would be quickly investigated, and the perpetrator would be caught. Perhaps we would playfully mock him and make him wear a silly hat.

But what if we had 50 test cases? Three failures out of 50 test cases is a respectable 94% pass rate. What about a hundred or two hundred tests? With this many tests, it’s now very possible that there are a handful of failures in every pass simply due to test code problems; timing issues are a common culprit. The same three failures in two hundred tests is a 98.5% pass rate. But were these failures caused by expected timing issues, or a real product bug? If your team was pressed to get a hot-fix out the door to fix a live production issue, it may not investigate a 98.5% pass rate with as much vigor as an 88% pass rate.

Bloat Relief

If your automation suffers from test case bloat, you may be able to refactor your tests. But you can’t simply mash four or five tests with different validation points into a single test case. The more complicated a test, the more difficult it becomes to determine the cause and severity of failure.

You can, however, combine test cases when your validation points are similar, and the severity of a failure at each validation point is the same. For example, if you’re testing a UI dialog, you don’t need 50 different test cases to validate that 50 objects on the screen are all at their expected location. This can be done in one test.
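
As a rough illustration, here is a minimal sketch of what that single test might look like. The get_element_position helper, the element names, and the coordinates are hypothetical stand-ins for whatever your UI-automation framework actually provides; the point is one test that checks every location and reports all mismatches at once.

```python
import unittest

# Hypothetical expected layout: element name -> (x, y) position.
EXPECTED_POSITIONS = {
    "ok_button": (400, 300),
    "cancel_button": (480, 300),
    "title_label": (20, 10),
    # ... the remaining dialog elements would be listed here ...
}

def get_element_position(element_name):
    # Placeholder for a real UI-automation call; here it simply echoes the
    # expected value so the sketch runs. Replace with your framework's lookup.
    return EXPECTED_POSITIONS[element_name]

class DialogLayoutTest(unittest.TestCase):
    def test_all_elements_at_expected_positions(self):
        # Collect every mismatch instead of stopping at the first one,
        # so a single run still reports the full picture.
        mismatches = []
        for name, expected in EXPECTED_POSITIONS.items():
            actual = get_element_position(name)
            if actual != expected:
                mismatches.append(f"{name}: expected {expected}, got {actual}")
        self.assertEqual([], mismatches,
                         "Misplaced elements:\n" + "\n".join(mismatches))

if __name__ == "__main__":
    unittest.main()
```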

You can also combine tests when you’re checking a single validation point, such as a database field, with different input combinations. Don’t create 50 different test cases that check the same field for 50 different data combinations. Create a single test case that loops through all combinations, validating the results.
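
A minimal, self-contained sketch of that data-driven approach is below. The save_contact and load_display_name helpers, the in-memory store, and the sample combinations are hypothetical; substitute your real code under test and database query.

```python
import unittest

# Hypothetical input combinations and the value the stored field should hold.
TEST_COMBINATIONS = [
    # (first_name, last_name, expected_display_name)
    ("Ada", "Lovelace", "Lovelace, Ada"),
    ("Grace", "Hopper", "Hopper, Grace"),
    ("", "Turing", "Turing"),
    # ... remaining combinations ...
]

_DB = {}

def save_contact(first_name, last_name):
    # Placeholder for the code under test: build and store the display name.
    display = f"{last_name}, {first_name}" if first_name else last_name
    _DB[(first_name, last_name)] = display

def load_display_name(first_name, last_name):
    # Placeholder for reading the stored field back from the database.
    return _DB[(first_name, last_name)]

class DisplayNameFieldTest(unittest.TestCase):
    def test_display_name_for_all_combinations(self):
        # One test case loops through every combination; subTest keeps each
        # combination's result separate in the report.
        for first, last, expected in TEST_COMBINATIONS:
            with self.subTest(first=first, last=last):
                save_contact(first, last)
                self.assertEqual(expected, load_display_name(first, last))

if __name__ == "__main__":
    unittest.main()
```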

When my test pass was taking 27 hours to complete, one solution we discussed was splitting the pass based on priority, feature, or some other criteria. If we had split it into three separate test passes, each would have taken only nine hours to finish. But this would have required three times as many servers. That may not be an issue if your test pass runs on a single server or on virtual machines, but I’ve worked on automation that required more than twenty physical servers; tripling your server count is not always an option.

In addition to the techniques discussed above, pair-wise testing and equivalence class partitioning are tools that all testers should have in their arsenal. The ideal solution, however, is to prevent bloating before it even starts. When designing your test cases, it’s important to be aware of the number of tests you’re writing. If all else fails, I hear you can gain time by investigating your test failures while travelling at the speed of light.
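
As a small illustration of equivalence class partitioning (pair-wise testing needs a covering-array tool, so it isn’t shown here), below is a sketch assuming a hypothetical is_eligible function that should accept ages 18 through 65. Instead of writing a test for every possible age, one representative per class plus the boundary values covers the same behavior with a handful of checks.

```python
import unittest

def is_eligible(age):
    # Placeholder for the code under test: valid ages are 18 through 65.
    return 18 <= age <= 65

# Each tuple: (representative input, expected result, class it represents).
REPRESENTATIVES = [
    (-1, False, "negative age"),
    (17, False, "just below the valid range (boundary)"),
    (18, True,  "lower boundary of the valid range"),
    (40, True,  "middle of the valid range"),
    (65, True,  "upper boundary of the valid range"),
    (66, False, "just above the valid range (boundary)"),
]

class EligibilityTest(unittest.TestCase):
    def test_one_representative_per_equivalence_class(self):
        for age, expected, description in REPRESENTATIVES:
            with self.subTest(age=age, cls=description):
                self.assertEqual(expected, is_eligible(age))

if __name__ == "__main__":
    unittest.main()
```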

Death by a Thousand Little Bugs


Minor product defects that take only a few minutes to resolve are often never fixed; it seems there are always more important tasks to work on. If this sounds familiar, your test team may suffer from morale issues. And your product may suffer from “death by a thousand little bugs”. Fortunately, these problems can be fixed as easily as these bugs can.

Once testers get their hands on a feature, it doesn’t take long for low-priority defects to pile up in their bug-tracking database. These may include, for example, minor UI issues such as missing punctuation, inconsistent fonts, or grammar errors. These bugs tend to linger because they are primarily cosmetic. Teams resolve the highest-priority bugs first, and often rightly so. We should fix bugs that greatly affect functionality, performance, or security before fixing a spelling typo in the UI.

What can happen, however, is that we never fix many of these low-priority bugs. There are often more critical defects being discovered, so we continuously postpone the low-priority ones.

Unfortunately, some of the bugs left behind are those that were logged the earliest. There are few things I find more frustrating than reporting a simple bug that doesn’t get fixed. My typical complaint sounds something like this: “Why hasn’t this bug been fixed? I logged it weeks ago. It’s a one-line change that will take only two minutes to fix!”

A previous project I worked on provides a perfect example. Not long after I was given the first working build of the UI, I logged two minor bugs. One issue was logged because two buttons on the same page were not aligned properly. The other bug was simply that a sentence ended with an extra period. When the product was released more than four months later, the misaligned buttons and the extra period were still there.

Another problem is that even if these low-impact bugs don’t affect functionality, they can greatly affect the customer’s perception of the product. How can a customer fully trust a product, no matter how well it actually works, if there are mountains of minor defects? This is the “death by a thousand little bugs” syndrome.

Before I came to Microsoft, I ran an online store. One night I modified the shopping cart page, and the next day sales plummeted. When I reviewed the changes I had made, I realized that I misspelled two words and added a broken image link. I fixed these issues and sales quickly went back to normal.

The functionality of the page hadn’t changed at all. But potential customers saw the “minor” errors and assumed the entire shopping cart was of poor quality. They certainly didn’t rationalize, “They must have spent all their effort making sure the functionality was solid. That’s why they postponed these obvious, but low-priority bugs.”

The “death by a thousand little bugs” syndrome exists because most teams evaluate each bug individually, and individually each of these bugs is trivial. In the aggregate, however, they are not: collectively, they make users skeptical of your product.

The solution is that we shouldn’t always address high-priority bugs before low-priority bugs. But when do we make the exceptions? Here are three strategies that I think could help solve these problems.

  1. Set aside one day each month for developers to address the low-priority, low-hanging-fruit bugs. This is a great way to fix a lot of bugs in a short amount of time. It can also prevent your product from suffering from “death by a thousand little bugs.”
  2. Put aside one day every month to fix the defects that have been in the bug database the longest, regardless of priority. This helps prevent testers from becoming demoralized because bugs they logged months ago still haven’t been fixed.
  3. Once a month, increase the priority of all bugs that are at least 30 days old (a minimal sketch of this aging rule follows the list). Developers can continue to pull bugs out of the queue in priority order, but the difference is that after one month, a bug that was logged as P4 (lowest priority) becomes a P3. After three months, it becomes a high-priority P1 bug. It may initially sound odd that low-priority defects, such as a misspelled word in a log file, will eventually be classified as highest priority. But doing so forces some action to be taken on the bug. As a P1, it now must either be fixed or closed by the Program Manager as “Won’t Fix”.
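
For what it’s worth, here is a minimal sketch of the aging rule from strategy 3. The Bug record, the priority scale, and the monthly run are stand-ins for whatever your bug-tracking system actually exposes; the logic is simply “every 30 days of waiting moves a bug up one priority level, capped at P1.”

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Bug:
    bug_id: int
    priority: int        # 1 = highest priority, 4 = lowest
    opened_on: date

def age_priorities(open_bugs, today=None, days_per_bump=30):
    # Run once a month: every bug that has waited at least `days_per_bump`
    # days moves up one priority level, so a P4 becomes P1 after three runs.
    today = today or date.today()
    for bug in open_bugs:
        if today - bug.opened_on >= timedelta(days=days_per_bump):
            bug.priority = max(1, bug.priority - 1)  # P4 -> P3 -> P2 -> P1
    return open_bugs

if __name__ == "__main__":
    # Example: the 45-day-old P4 gets bumped to P3; the 5-day-old P2 is untouched.
    bugs = [Bug(101, 4, date.today() - timedelta(days=45)),
            Bug(102, 2, date.today() - timedelta(days=5))]
    for bug in age_priorities(bugs):
        print(bug.bug_id, f"P{bug.priority}")
```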

You may be thinking, “But I’m a tester, and these solutions have nothing to do with testers.” When I started in Test, that’s how I thought. I now realize that my primary responsibility is to make my product better, not just to log bugs. If these strategies would work well for your team, then you should lobby for them; they may even improve your own morale along the way.

Do you think any of these strategies work well for your team? What strategies have you tried in the past, and how have they worked? I’m very interested in hearing your comments.
