UI Testing Checklist

When testing a UI, it's important not only to validate each input field, but to do so using interesting data. There are plenty of techniques for doing so, such as boundary value analysis, decision tables, state-transition diagrams, and combinatorial testing. Since you're reading a testing blog, you're probably already familiar with these. Still, it's nice to have a short, bulleted checklist of all the tools at your disposal. When I recently tested a new web-based UI, I took the opportunity to create one.

One of the more interesting tests I ran was a successful HTML injection attack. In an input field that accepted a string, I entered: <input type="button" onclick="alert('hi')" value="click me">. When I navigated to the web page that should have displayed this string, I instead saw a button labeled "click me". Clicking it produced a pop-up with the message "hi". The web page was rendering all HTML and JavaScript I entered. Although my pop-up was fairly harmless, a malicious user could have used this same technique to be, well, malicious.
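
If you want a regression check for this class of bug, the sketch below shows the general idea in pytest style. The render_comment function is a hypothetical stand-in for whatever your application does with user input before writing it into a page; the point is simply that the payload must come back with its angle brackets encoded.

```python
import html

def render_comment(user_input: str) -> str:
    """Hypothetical stand-in for the app's rendering path."""
    # Escaping turns <, >, &, and quotes into entities so the browser
    # displays the text instead of interpreting it as markup.
    return "<p>" + html.escape(user_input) + "</p>"

def test_html_injection_payload_is_escaped():
    payload = '<input type="button" onclick="alert(\'hi\')" value="click me">'
    rendered = render_comment(payload)
    # The raw tag must not survive into the page...
    assert "<input" not in rendered
    # ...and the escaped form is what the user should actually see.
    assert "&lt;input" in rendered
```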

Another interesting test was running the UI in non-English languages. Individually, each screen looked fine. But when I compared similar functionality on different screens, I noticed some dates were formatted mm/dd/yyyy and others dd/mm/yyyy. In fact, the most common type of bug I found was inconsistency between screens. The headings on some pages were title-cased, while others were lower-cased. Some headings were butted against the left side of the screen, and others had a small margin. Different fonts were used for similar purposes.
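
If you want a quick reference for how the same date should look in each locale you support, a few lines of Python will print one. This is only a sketch: it assumes the third-party Babel library is installed, and the locale list is just an example.

```python
from datetime import date
from babel.dates import format_date  # assumes Babel is installed (pip install Babel)

# Pick a date where day and month differ, so mm/dd vs dd/mm mix-ups are obvious.
d = date(2024, 3, 4)  # 4 March 2024

for loc in ["en_US", "en_GB", "de_DE", "ja_JP"]:
    print(loc, format_date(d, format="short", locale=loc))
```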

Let's get back to boundary value analysis for a minute. Assume you're testing an input field that accepts a value from 1 to 100. The obvious boundary tests are 0, 1, 100, and 101. However, there's another, less obvious, boundary test. Since this value may be stored internally as an integer, a good boundary test is a number too large to be stored as an int (for a signed 32-bit integer, anything above 2,147,483,647).
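
Here's how those boundaries might look as a parametrized pytest test. The is_valid_quantity function is a hypothetical stand-in for the field's validation logic; the 32-bit limits are the interesting extra cases.

```python
import pytest

def is_valid_quantity(value: int) -> bool:
    """Hypothetical validator for a field that accepts 1 to 100."""
    return 1 <= value <= 100

@pytest.mark.parametrize("value, expected", [
    (0, False),             # just below the lower bound
    (1, True),              # lower bound
    (100, True),            # upper bound
    (101, False),           # just above the upper bound
    (2**31, False),         # too big for a signed 32-bit int
    (-(2**31) - 1, False),  # too small for a signed 32-bit int
])
def test_quantity_boundaries(value, expected):
    assert is_valid_quantity(value) == expected
```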

My UI Testing Checklist has all these ideas, plus plenty more: accented characters, GB18030 characters, different date formats, leap days, etc. It's by no means complete, so please leave a comment with anything you would like added. I can (almost) guarantee it'll lead you to uncover at least one new bug in your project.

 

Click this image to download the UI Testing Checklist

State-Transition Testing

One of our goals at Expert Testers is to discuss practical topics that can help every tester do their job better. To this end, my last two articles have been about Decision Table Testing and Being an Effective Spec Reviewer. Admittedly, neither of these topics breaks new ground. That doesn't mean, however, that most testers have mastered these techniques. In fact, almost 50% of the respondents to our Decision Table poll said they've never used one.

Continuing the theme of discussing practical topics, let's talk about State Transition Diagrams. State Transition Diagrams, or STDs as they're affectionately called, are effective for documenting functionality and designing test cases. They should be in every tester's bag of tricks, along with Decision Tables, Pair-Wise analysis, and acting annoyed at work to appear busy.

STDs show the state a system will move to, based on its current state and other inputs. These words, I understand, mean little until you’ve seen one in action, so let’s get to an example. Since I’m particularly busy (i.e., lazy) today, I’ll use a simple example I found on the web.

Below is a Hotel Reservation STD. Each rectangle, or node, represents the state of the reservation. Each arrow is a transition from one state to the next. The text above the line is the input: the event that caused the state to change. The text below the line is the output: the action the system performs in response to the event.

[Figure: Hotel Reservation state-transition diagram]

Pasted from <http://users.csc.calpoly.edu/~jdalbey/SWE/Design/STDexamples.html>
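
An STD also translates almost directly into code: a table keyed by (state, event) that yields the next state and the action. The state and event names below are my own illustrative assumptions, not taken verbatim from the diagram.

```python
# (current state, event) -> (next state, action)
TRANSITIONS = {
    ("Pending",    "pay_deposit"): ("Confirmed",   "send_confirmation"),
    ("Confirmed",  "check_in"):    ("Checked In",  "assign_room"),
    ("Confirmed",  "cancel"):      ("Cancelled",   "refund_deposit"),
    ("Checked In", "check_out"):   ("Checked Out", "produce_bill"),
}

def next_state(state: str, event: str):
    """Return (new_state, action), or raise if the event isn't valid here."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"No transition from {state!r} on {event!r}")

# Walking one path through the table:
state = "Pending"
for event in ["pay_deposit", "check_in", "check_out"]:
    state, action = next_state(state, event)
    print(f"{event} -> {state} ({action})")
```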

One benefit of State Transition Diagrams is that they describe the behavior of the system in a complete, yet compact and easy-to-read, way. Imagine describing this functionality in sentence format; it would take pages of text to describe it fully. STDs are much simpler to read and understand. For this reason, they can reveal paths that were missed by the PM or Developer, or paths the Tester forgot to test.

I learned this when I was testing Microsoft Forefront Protection for Exchange Server, a product that protects email customers from malware and spam. The product logic for determining when a message would be scanned was complicated; it depended on the server role, several Forefront settings, and whether the message was previously scanned.

The feature spec described this logic in sentence format, and was nearly impossible to follow. I took it upon myself to create a State Transition Diagram to model the logic. I printed it out and stuck it on my office (i.e., cubicle) wall. Not a week went by without a Dev, Tester, or PM stopping by to figure out why their mail wasn’t being scanned as they expected.

If you read my article on Decision Tables (DTs), and I'm sure you didn't, you may be wondering when to use an STD and when to use a DT. If you're working on a system where the order of events matters, use an STD; Decision Tables only work if the order of events doesn't matter.

Another benefit of STDs is that we can use them to design our test cases. To test a system completely, you’d need to cover all possible paths in the STD. This is often either impractical or impossible.

In our simple example, there are only four paths from the start of the STD to the end, but in larger systems there can be too many to cover in a reasonable amount of time. For these systems, you can use multiple STDs for sub-systems rather than trying to create a single STD for the entire system. This will make the STDs easier to read, but it will not lower the total number of paths. It's also common to find loops in an STD, resulting in an infinite number of possible paths.

When covering all paths is impractical, one alternative is to ensure each state (node) is covered by at least one test. This, however, would result in weak coverage. For our hotel booking system, we could test all seven states while leaving some transitions and events completely untested.

Often, the best strategy is to create tests that cover all transitions (the arrows) at least once. This guarantees you will test every state, event, action, and transition. It gives you good coverage in a reasonable number of tests.
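
A quick way to check that strategy is to record the (state, event) pairs each test exercises and diff them against the full set of arrows. The names below reuse the illustrative ones from the earlier sketch; they are assumptions, not the actual diagram labels.

```python
# Every arrow in the diagram, as (state, event) pairs.
ALL_TRANSITIONS = {
    ("Pending",    "pay_deposit"),
    ("Confirmed",  "check_in"),
    ("Confirmed",  "cancel"),
    ("Checked In", "check_out"),
}

# Each test case, recorded as the sequence of transitions it exercised.
TEST_PATHS = [
    [("Pending", "pay_deposit"), ("Confirmed", "check_in"), ("Checked In", "check_out")],
    [("Pending", "pay_deposit"), ("Confirmed", "cancel")],
]

covered = {step for path in TEST_PATHS for step in path}
missing = ALL_TRANSITIONS - covered

print(f"Transition coverage: {len(covered & ALL_TRANSITIONS)}/{len(ALL_TRANSITIONS)}")
print("Untested transitions:", sorted(missing) or "none")
```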

If you’re interested in learning more about STDs (it’s impossible to cover them fully in a short blog article) I highly recommend reading A Practitioner’s Guide to Software Test Design. It’s where I first learned about them.

The next time you’re having trouble describing a feature or designing your tests, give a State Transition Diagram or Decision Table a try. The DTs and STDs never felt so good!

 

Decision Table Testing

Decision tables are an effective tool for describing complex functional requirements and designing test cases. Yet, although they're far from a new concept, I've only seen a handful of functional specs and test plans that included one. The best way to illustrate their effectiveness is by regaling you with a tale of when one wasn't used.

The year was 2009. Barack Obama had recently been inaugurated as the 44th President of the United States, and the Black Eyed Peas' hauntingly poetic "Boom Boom Pow" was topping the Billboard charts. I was the lead tester on a new security feature for our product.

Our software could be installed on six server types. Depending on the server type and whether it was a domain controller, eleven variables would be set. The project PM attempted to describe these setting combinations in paragraph format. The result was a spec that was impossible to follow.

I tried designing my test cases from the document, but I wasn't convinced all possible configurations were covered. You know how poor programming can have "code smell"? Well, this had "spec smell". I decided to capture the logic in a decision table.

According to the always-reliable Wikipedia, decision tables have been around since ancient Babylon. However, my uncle, Johnny “Dumplings”, who is equally reliable, insists they’ve only been around since Thursday. I suspect the true answer lies somewhere in between.

If you’re not familiar with decision tables, let’s go through a simple example. Assume your local baseball squadron offers free tickets to kids and discounted tickets to senior citizens. One game a year, free hats are given to all fans.

To represent this logic in a decision table, create a spreadsheet and list all inputs and expected results down the left side. The inputs in this case are the fan’s age and sex. The expected results are ticket price and hat color.

Each row of a decision table should contain the different possible values for a single variable. Each column, then, represents a different combination of input values along with their expected results. In this example, the first column represents the expected results for boys under 5 — “Free Admission” and “Blue Hat”. The last column shows that female senior citizens get $10 tickets and a pink hat.

                     Rule 1  Rule 2  Rule 3  Rule 4  Rule 5  Rule 6
Inputs
  Age < 5            Y       Y
  5 <= Age < 65                      Y       Y
  Age >= 65                                          Y       Y
  Sex                M       F       M       F       M       F
Results
  Free Admission     Y       Y
  $10 Admission                                      Y       Y
  $20 Admission                      Y       Y
  Blue Hat Giveaway  Y               Y               Y
  Pink Hat Giveaway          Y               Y               Y

There are three major advantages to decision tables.

  1. Decision tables define expected results for all input combinations in an easy-to-read format. When included in your functional spec, decision tables help developers keep bugs out of the product from the beginning.
  2. Decision tables help us design our test cases. Every column in a decision table should be converted into at least one test case. The first column in this table defines a test for boys under 5. If an input can be a range of values, however, such as "5 <= Age < 65", then we should create tests at the high and low ends of the range to validate our boundary conditions (see the sketch after this list).
  3. Decision tables are effective for reporting test results. They can be used to clearly show management exactly what scenarios are working and not working so informed decisions can be made.
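
To make the second point concrete, here's a minimal sketch of turning each column of the baseball table into a parametrized pytest case, with boundary ages for the middle range. The ticket_price_and_hat function is a hypothetical stand-in for the system under test.

```python
import pytest

def ticket_price_and_hat(age: int, sex: str):
    """Hypothetical implementation of the box-office rules above."""
    if age < 5:
        price = 0
    elif age < 65:
        price = 20
    else:
        price = 10
    hat = "blue" if sex == "M" else "pink"
    return price, hat

# One test per column (rule), using boundary values for the age ranges.
@pytest.mark.parametrize("age, sex, price, hat", [
    (4,  "M", 0,  "blue"),   # Rule 1: boy under 5
    (4,  "F", 0,  "pink"),   # Rule 2: girl under 5
    (5,  "M", 20, "blue"),   # Rule 3: low edge of 5 <= Age < 65
    (64, "F", 20, "pink"),   # Rule 4: high edge of 5 <= Age < 65
    (65, "M", 10, "blue"),   # Rule 5: male senior
    (65, "F", 10, "pink"),   # Rule 6: female senior
])
def test_ticket_rules(age, sex, price, hat):
    assert ticket_price_and_hat(age, sex) == (price, hat)
```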

It's important to note that decision tables only work if the order in which the conditions are evaluated, and the order of the expected results, doesn't matter. If order matters, use a state transition diagram instead. I'll blabber about them in a future article.

Getting back to my story, I used the spec to fill in a decision table representing all possible server combinations and expected results. The question marks in the table clearly showed that results weren't defined for several installation configurations. I brought this to the Program Manager's attention, and we were able to fill in the blanks and lock down the requirements.

              Case 1  Case 2  Case 3  Case 4  Case 5  Case 6  Case 7  Case 8  Case 9  Case 10
Inputs
  Server Type Type 1  Type 1  Type 2  Type 2  Type 3  Type 3  Type 4  Type 4  Type 5  Type 6
  DC          Y       N       Y       N       Y       N       Y       N       *       *
Results
  Setting1    Y       Y       Y       Y       Y       ?       N       N       N       N
  Setting2    Y       ?       Y       Y       Y       ?       N       N       N       N
  Setting3    Y       Y       Y       Y       Y       ?       N       N       N       N
  Setting4    Y       Y       Y       Y       Y       ?       N       ?       N       N
  Setting5    Y       Y       Y       Y       Y       ?       N       N       N       N
  Setting6    Y       Y       Y       Y       Y       ?       N       N       N       N
  Setting7    ?       ?       ?       ?       ?       ?       ?       ?       ?       ?
  Setting8    N       N       N       N       N       ?       Y       Y       N       N
  Setting9    ?       N       N       N       N       ?       Y       N       N       N
  Setting10   N       N       ?       Y       Y       ?       N       N       ?       N
  Setting11   N       N       Y       Y       Y       ?       N       N       N       N

This easy-to-understand table took the place of two full pages of spaghetti text in the functional spec. As a result, the developers had a comprehensive set of requirements to work from, and I had a thorough set of test cases to validate their work. Boom Boom Pow, indeed!

Are Test Plans a Waste of Time?

If you’re like many testers I know, you hate test plans. Most testers fall into three groups: Group A doesn’t like writing test plans, Group B thinks test plans are a waste of time, and Group C thinks groups A and B are both right.

Let's start with Group A. Some testers don't like writing test plans because they simply don't enjoy writing. Their dislike has less to do with what they are writing than with the fact that they are writing at all. Perhaps writing does not come naturally to them. One tester recently told me, "I write like a third grader." It's no surprise he doesn't like writing test plans; we like to do things we're good at.

Some testers don't enjoy writing test plans because they are not really testers at all; they are developers working in Test. They would much rather have someone else analyze the functional specifications, write the test plan, and tell them exactly what they need to automate.

Like many testers, I started out in development. I thought writing a test plan would be easy, until I saw my first test plan template. Performance testing? Stress testing? Long-haul testing? I quickly learned that I had no idea how to write a test plan, and I dreaded writing one. But I started reading plans from testers I respected, and I started reading books on testing. I slowly made the transformation from developer to tester. Now I enjoy writing test plans because I know it makes my testing better.

Perhaps the most interesting reason testers dislike test plans is because they don’t think the plans are useful. They point to the fact that most test plans are never updated. As the product evolves, test plans become out of sync with the current functionality. I’ve found this to be true. Of all the test plans I’ve written in my career, I know exactly how many I updated after the product changed: zero.

This is a problem for two reasons. First, it would be one thing if test plans were quick and easy to write; they are not. Depending on the feature, it can take me a week to write a solid detailed test plan. Some would argue this time could be better spent automating test cases or performing exploratory testing.

Even worse, test plans that are out of sync with product functionality give inaccurate information to the reader. Recently, I worked on a product that was being updated for its next release, and I was assigned to test a feature I was completely unfamiliar with. The first thing I did was review the original test plan to learn how the feature worked and how it was first tested. I assumed, incorrectly, that the document was up to date. As a result, I reported bugs even though the feature was working properly.

James Whittaker, Test Director at Google, recently debated the value of creating test plans on the Google Testing blog:

As to whether it is worth doing, well, that is another story entirely. Every time I look at any of the dozens of test plans my teams have written, I see dead test plans. Plans written, reviewed, referred to a few times and then cast aside as the project moves in directions not documented in the plan. This begs the question: if a plan isn’t worth bothering to update, is it worth creating in the first place?

Other times a plan is discarded because it went into too much detail or too little; still others because it provided value only in starting a test effort and not in the ongoing work. Again, if this is the case, was the plan worth the cost of creating it given its limited and diminishing value?

Some test plans document simple truths that likely didn’t really need documenting at all or provide detailed information that isn’t relevant to the day to day job of a software tester. In all these cases we are wasting effort.

I agree that there may be some wasted effort in writing a test plan. For instance, I’m guilty of spending too much time tweaking the wording of my documents. It’s a test plan, not a novel. Bullet points and sentence fragments are fine–you shouldn’t be spending time using a thesaurus to find synonyms for the word “feature”.

But that doesn’t mean it’s all wasted effort. In fact, I believe the benefit far outweighs the effort. This is true even if the test plan quickly becomes obsolete.

Consider feature specification documents: much like test plans, feature specifications often become outdated as the product evolves. This doesn't mean we shouldn't write feature specifications. Potential document "staleness" is not a valid argument against writing spec documents, or against writing test plans. Just don't make the mistake of assuming old specifications are still accurate.

One of the most important reasons for creating a test plan is to get your thoughts about testing the feature onto paper and out of your head. This unclutters your mind and helps you think clearly. It also documents every idea you’ve had about testing the feature, so none are forgotten.

The writing process often leads you to think of both more and better ways to test the feature. Your first pass at writing a test plan may include mostly positive functional test cases, as well as a handful of negative functional tests. But the process of refining your document leads you to consider more negative tests, more boundary tests, more concerns around scalability and security. The more time you spend planning your testing, the more complete your testing will be.

Detailed test plans can also help you find bugs in the feature before the code has even been implemented, when they are much less costly to address. Two techniques that are excellent for finding design holes during the test planning phase are Decision Tables and State Transition Diagrams. I remember creating a large Decision Table as part of a test plan for a security feature which uncovered that nearly 10% of the possible input combinations didn’t have an expected result in the design specification document.

Test documents are also valuable for conducting test plan reviews. After creating your test plan, it’s important to have it reviewed by other testers–no single tester will ever think of all possible test cases. It’s also valuable to have your plan reviewed by both the developers and program managers. In my most recent test plan review, a program manager told me that one of the features I planned on testing had been cut. In the same review, a developer informed me about existing test hooks that saved me hours of development time.

When testers say they don’t want to write a test plan, I can sympathize. Most of us got into this business either because we like to program or because we like to break things, not because we like to write documents. But when testers say they don’t want to write a test plan because it’s not valuable, I have to disagree. Good test plans make your product better. So what if they become obsolete? As both Dwight D. Eisenhower and Wile E. Coyote once said, “plans are useless, but planning is indispensable.”