Being an Effective Spec Reviewer

The first time testers can impact a feature is often during functional and design reviews. This is also when we make our first impression on our co-workers. If you want to make a great initial impact on both your product and peers, you have to be an effective reviewer.

In my seven years in testing, I’ve noticed that many testers don’t take advantage of this opportunity. Most of us fall into one of four categories:

  1. Testers who pay attention during reviews without providing feedback. This used to be me. I learned the feature, which is one of the goals of a review meeting. A more important goal, however, is to give feedback that exposes defects as early as possible.
  2. Testers who push back (argue) on seemingly every minor point. Their goal is to increase their visibility and prove their worth as much as it is to improve the product. They learn the feature and can give valuable feedback. However, while they think they’re impressing their teammates, they’re actually frustrating them.
  3. Testers who attend reviews with their laptops open, not paying attention. If this is you, please stop; no one’s impressed with how busy you’re pretending to be.
  4. Testers who pay attention and learn the feature, while also providing constructive feedback. Not only do they understand and improve the feature, but they look good doing it. This can be you!

How do you do this, you ask? With a simple recipe that took me only four years to learn, I answer.

1. Read the Spec

Before attending any functional or design review, make sure you read the documentation. This is common sense, but most of us are so busy we don’t have time to read the specs. Instead, we use the review itself to learn the feature.

This was always a problem for me because although I learned the feature during the review, I didn’t have enough time to absorb the details and give valuable feedback during the meeting. It was only afterwards that I understood the changes we needed to make. By then it was too late: decisions had already been made, and it was hard to change people’s minds. Or coding had begun, and accepting changes meant wasted development time.

A great idea, suggested by Bruce Cronquist, is to block out the half hour before the review meeting to read the spec. Put this time on your calendar to make sure you don’t get interrupted.

2. Commit to Contributing

Come to every review with the goal of contributing at least one idea. Once I committed to this, I immediately made a bigger impact on both my product and peers. This strategy works for two reasons.

First, it forces you to pay closer attention than you normally might. Knowing you’ll be speaking during the meeting keeps you focused on the details.

Second, it forces you to speak up about ideas you might otherwise have kept to yourself. I used to keep quiet in reviews if I wasn’t 100% sure I was right. Even if I was almost positive, I would still investigate further after the meeting. The result was that someone else would often mention the idea first.

It took me four years to realize how effective this simple commitment is.

3. Have an Agenda

It’s easy to say you’ll give a good idea during every review, but how can you make sure you’ll always have a good idea to give? For me, the answer was a simple checklist.

The first review checklist I made was designed to make sure features are testable. Not only are testers uniquely qualified to enforce testability, but if we don’t do it, no one will. Bringing up testability concerns as early as possible will also make your job of testing the feature later on much easier. My worksheet listed the key tenets of testability, had a checklist of items for each tenet, and room for notes.

At the time, I thought the concept of a review checklist was revolutionary. So much so, in fact, that I emailed Alan Page about it no less than five times. I’m now sure Alan must have thought I was some kind of stalker or mental patient. However, he was very encouraging and was kind enough to give the checklist a nice review on Toolbox, a Microsoft internal engineering website. If you work at Microsoft, you can download my testability checklist here.

I now know that not only are checklists the exact opposite of revolutionary, but there are also plenty of other qualities to look for beyond testability.

Test is the one discipline that knows about most (or all) of the product’s features. It’s easy for us to spot inconsistencies between specs, such as when one PM says the product should do X while another says it should do Y. It’s also our job to be a customer advocate. And we need to enforce software qualities such as performance, security, and usability. So I decided to expand my checklist.

My new checklist includes 50 attributes to look for in functional and design reviews. It’s in Excel format, so you can easily filter the items by Review Type (Feature, Design, or Test) and Subtype (Testability, Usability, Performance, Security, etc.).

[Image: Review Checklist. Click the image to download the checklist.]
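If you’d rather slice the checklist programmatically than filter it in Excel, the structure is simple enough to model in a few lines. Below is a minimal Python sketch of the Review Type / Subtype filtering described above. The item texts here are hypothetical stand-ins, not entries copied from the actual 50-attribute spreadsheet (only the performance question comes from the real checklist, as quoted in the comments below).

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    review_type: str  # "Feature", "Design", or "Test"
    subtype: str      # "Testability", "Usability", "Performance", ...
    question: str     # the question to ask during the review

# Hypothetical sample items; the real checklist has ~50 of these.
CHECKLIST = [
    ChecklistItem("Feature", "Testability",
                  "Can every feature state be observed and verified?"),
    ChecklistItem("Feature", "Performance",
                  "Are all performance goals defined and measurable?"),
    ChecklistItem("Design", "Security",
                  "Are inputs validated at every trust boundary?"),
    ChecklistItem("Test", "Usability",
                  "Can common tasks be completed without documentation?"),
]

def filter_items(items, review_type=None, subtype=None):
    """Return the items matching the given Review Type and/or Subtype."""
    return [i for i in items
            if (review_type is None or i.review_type == review_type)
            and (subtype is None or i.subtype == subtype)]

# Example: pull just the testability questions before a feature review.
for item in filter_items(CHECKLIST, review_type="Feature",
                         subtype="Testability"):
    print(item.question)
```

This two-field filter is all that Excel’s AutoFilter is doing for you anyway; the point is that the checklist is just data, so you can keep it in whatever form your team will actually use.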

If there are any other items you would like added to the checklist, please list them in the comments section below. Enjoy!

10 Responses

  1. Any chance you could make the checklist available to people who work outside MS? You’ve bothered to write about what you do, made it into a public blog post, tweeted about it, and most people can’t access the information…

  2. I’d like to add an item to your checklist, probably under SubType “Deployment”.
    How will the feature be undeployed or uninstalled? What’s expected to be cleaned up and what’s expected to be left behind? I’ve found designers often don’t pay enough attention to this part of the feature life cycle, which means there may be ambiguity or bugs here that can be found in a review. In addition, being able to undeploy easily and correctly may be a great testability aid.

  3. This is a comprehensive checklist and I will definitely try this out in my product. Thanks for sharing.

  4. Overall, I think this is pretty good. A nice explanation for junior testers, with a handy tool, to boot. Well done!

    A few off-the-cuff thoughts:
    -In your spreadsheet, “SubType” implies that there is a “Type”. Is there?
    -Your blog post mentions “consistency”, but your spreadsheet lacks it. Add it!
    -Can required (test) data be found or created? If not, you might have a problem later.
    -Does the spec describe something that will require “special knowledge” to use? (For example, in order to test localization, the tester must know Chinese). If so, can you get that “special knowledge”? If not, you might have trouble later.
    -Does the spec describe something that can be measured? (For example, watch for terms like “good” and “quick”)
    -You said, “50 attributes to look for in functional and design reviews”. I’d suggest that this list might be useful for many other things/times, as well.
    -If you get an unfavorable response to any of your list items, what is the mitigation plan? Something to think about.
    -This list could grow (and grow, and grow). The larger the list, the more unwieldy it becomes (and therefore, less useful). Be careful.

    -Damian

    • Andrew, excellent resource. Thanks.
      Part of an e-mail review is to be picky, so forgive me if I am:
      1. Run the list through a spell checker
      2. “Are all performance goals defined and measurable?”. Goals are usually taken as “Best Effort” and do not have to be measured. I would change to “goals and limits”. Also you might mention Performance behavior when stated limits are exceeded. (Examples are crash, stop gracefully, degrade gracefully.)
      3. “Does the UI work for cultures other than English?” Sometimes this is referred to as Internationalization (I18N) and Localization (L10N), see http://stackoverflow.com/questions/506743/localization-and-internationalization-whats-the-difference

  5. Hi Andrew,

    Thanks for this. It helps a lot. 🙂

    Jarisse

  6. I updated the downloadable checklist based on all the suggestions I received. Thanks for all the feedback, and keep it coming!
