“I see test as a role going away – and it can’t disappear fast enough”

In case you missed it, Alan Page dropped a bombshell in his last post of 2013 — and then immediately (and literally) got on a plane and left. Definitely worth a read.


The Evolution of Software Testing

As I sat down to eat my free Microsoft oatmeal this morning, I noticed how dusty my Ship-It plaque had become. I hadn’t given it much attention lately, but today it struck me how these awards illustrate the evolution of software testing.

Since I started at Microsoft seven years ago, I’ve earned nine Ship-It awards — plaques given to recognize your contribution to releasing a product. The first Ship-It I “earned” was in 2006 for Microsoft Antigen, an antivirus solution for Exchange.

I put “earned” in quotes because I only worked on Antigen a couple of days before it was released. A few days into my new job, I found myself on a gambling cruise — with open bar! — to celebrate Antigen’s release. I was also asked to sign a comically over-sized version of the product DVD box. Three years later I received another Ship-It — and signed another over-sized box — for Antigen’s successor, “Forefront Protection 2010 for Exchange Server”.

Fast-forward to 2011, when I received my plaque for Microsoft Office 365. This time there was no DVD box to sign — because the product wasn’t released on DVD. It’s a cloud-based productivity suite featuring email and Office.

This got me thinking. In 2006, when we shipped Antigen, all the features we wanted to include had to be fully developed, and mostly bug-free, by the day the DVD was cut. After all, it would be another three years before we cut a new one. And it would be a terrible experience for a customer to install our DVD, only to then have to download and install a service pack to address an issue.

By 2011, however, “shipping” meant something much different. There was no DVD to cut. The product was “released” to the cloud. When we want to update Office 365, we patch it in the cloud ourselves without troubling the customer; the change simply “lights up” for them.

This means products no longer have to include all features on day one. If a low-priority feature isn’t quite ready, we can weigh the impact of delaying the release to include it against shipping immediately and patching it in later. The same holds true if and when bugs are found.

What does this all mean for Test? In the past, it was imperative to meet a strict set of release criteria before cutting a DVD. For example, no more code churn, test-pass rates above 95%, code coverage above 65%, etc. Now that we can patch bugs quicker than ever, do these standards still hold? Have our jobs become easier?
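
To make that concrete, here’s a minimal sketch of what a DVD-era release gate might look like in code. The function name and thresholds are illustrative (borrowed from the example criteria above), not an actual release process:

```python
# A minimal sketch of a DVD-era release gate. The metric names and
# thresholds are illustrative, based on the example criteria above.

def ready_to_ship(churned_files: int, pass_rate: float, coverage: float) -> bool:
    """Return True only if every release criterion is met."""
    return (
        churned_files == 0      # no more code churn
        and pass_rate >= 0.95   # test-pass rate at or above 95%
        and coverage >= 0.65    # code coverage at or above 65%
    )

print(ready_to_ship(churned_files=0, pass_rate=0.97, coverage=0.70))  # True
print(ready_to_ship(churned_files=3, pass_rate=0.97, coverage=0.70))  # False
```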

We should be so lucky.

In fact, our jobs have become harder. You still don’t want to release a buggy product — customers would flock to your competitors, regardless of how quickly you fix the bugs. And you certainly don’t want to ship a product with security issues that compromise your customers’ data.

Furthermore, it turns out delivering to the cloud isn’t “free.” It’s hard work! Patching a bug might mean updating thousands of servers and keeping product versions in sync across all of them.

Testers now have a new set of problems. How often should you release? Which bugs need to be fixed before shipping, and which can be fixed after? How do you know everything will work when it’s deployed to the cloud? If the deployment fails, how do you roll it back? If the deployment works, how do you know it continues to work — that servers aren’t crashing, slowing down, or being hacked? Who gets notified when a feature stops working in the middle of the night? (Spoiler alert: you do.)
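
To illustrate just one of those questions, “how do you know it continues to work,” here’s a toy health-check loop. The endpoint URL, latency threshold, and “alert” are all placeholders; a real service would rely on proper monitoring and alerting infrastructure:

```python
# A toy post-deployment health check. The URL, threshold, and alert
# mechanism are placeholders, not a real monitoring system.
import time
import urllib.request

HEALTH_URL = "https://example.com/health"  # hypothetical endpoint
MAX_LATENCY_SECONDS = 2.0

def healthy() -> bool:
    """One probe: the service answers 200, and answers quickly."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:  # covers URLError, timeouts, connection failures
        ok = False
    return ok and (time.monotonic() - start) <= MAX_LATENCY_SECONDS

while True:  # run until interrupted
    if not healthy():
        print("ALERT: service unhealthy -- paging the on-call tester")
    time.sleep(60)  # probe once a minute
```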

I hope to explore some of these issues in upcoming articles. Unfortunately, I’m not sure when I’ll have time to do that. I’m on-call 24×7 for two weeks in December.

Being an Effective Spec Reviewer

The first time testers can impact a feature is often during functional and design reviews. This is also when we make our first impression on our co-workers. If you want to make a great initial impact on both your product and peers, you have to be an effective reviewer.

In my seven years in testing, I’ve noticed that many testers don’t take advantage of this opportunity. Most of us fall into one of four categories:

  1. Testers who pay attention during reviews without providing feedback. This used to be me. I learned the feature, which is one of the goals of a review meeting. A more important goal, however, is to give feedback that exposes defects as early as possible.
  2. Testers who push back (argue) at seemingly every minor point. Their goal is to increase their visibility and prove their worth as much as it is to improve the product. They learn the feature and can give valuable feedback. However, while they think they’re impressing their teammates, they’re actually frustrating them.
  3. Testers who attend reviews with their laptops open, not paying attention. If this is you, please stop; no one’s impressed with how busy you’re pretending to be.
  4. Testers who pay attention and learn the feature, while also providing constructive feedback. Not only do they understand and improve the feature, but they look good doing it. This can be you!

How do you do this, you ask? With this simple recipe that only took me four years to learn, I answer.

1. Read the Spec

Before attending any functional or design review, make sure you read the documentation. This is common sense, but most of us are so busy we don’t have time to read the specs. Instead, we use the review itself to learn the feature.

This was always a problem for me because although I learned the feature during the review, I didn’t have enough time to absorb the details and give valuable feedback during the meeting. It was only afterwards that I understood the changes we needed to make. By then it was too late: decisions had already been made, and it was hard to change people’s minds. Or coding had begun, and accepting changes meant wasted development time.

A great idea Bruce Cronquist suggested is to block out the half hour before the review meeting to read the spec. Put this time on your calendar to make sure you don’t get interrupted.

2. Commit to Contributing

Come to every review with the goal of contributing at least one idea. Once I committed to this, I immediately made a bigger impact on both my product and peers. This strategy works for two reasons.

First, it forces you to pay closer attention than you otherwise might. If you know you’ll be speaking during the meeting, you can’t let your mind wander.

Second, it forces you to speak up about ideas you might otherwise have kept to yourself. I used to keep quiet in reviews if I wasn’t 100% sure I was right. Even if I was almost positive, I would still investigate further after the meeting. The result was that someone else would often mention the idea first.

It took me four years to realize what an effective tool this is.

3. Have an Agenda

It’s easy to say you’ll give a good idea during every review, but how can you make sure you’ll always have a good idea to give? For me, the answer was a simple checklist.

The first review checklist I made focused on making sure features are testable. Not only are testers uniquely qualified to enforce testability, but if we don’t do it, no one will. Bringing up testability concerns as early as possible also makes your job of testing the feature later on much easier. My worksheet listed the key tenets of testability, with a checklist of items for each tenet and room for notes.

At the time, I thought the concept of a review checklist was revolutionary. So much so, in fact, that I emailed Alan Page about it no fewer than five times. I’m now sure Alan must have thought I was some kind of stalker or mental patient. However, he was very encouraging and was kind enough to give the checklist a nice review on Toolbox, a Microsoft internal engineering website. If you work at Microsoft, you can download my testability checklist here.

I now know that not only are checklists the exact opposite of revolutionary, but there are plenty of other qualities to look for besides testability.

Test is the one discipline that knows about most (or all) of the product’s features. That makes it easy for us to spot inconsistencies between specs, such as when one PM says the product should do X while another says it should do Y. It’s also our job to be a customer advocate. And we need to enforce software qualities such as performance, security, and usability. So I decided to expand my checklist.

My new checklist includes 50 attributes to look for in functional and design reviews. It’s in Excel format, so you can easily filter the items based on Review Type (Feature, Design, or Test) and Subtype (Testability, Usability, Performance, Security, etc.).
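
If you’d rather slice the checklist programmatically than in Excel, a few lines of pandas will do it. The file name below is hypothetical, and the column names are assumed from the description above:

```python
# A minimal sketch of filtering the review checklist outside Excel.
# "ReviewChecklist.xlsx" and the column names are assumptions based
# on the description above (Review Type and Subtype columns).
import pandas as pd

checklist = pd.read_excel("ReviewChecklist.xlsx")

# Pull only the security items that apply to design reviews.
design_security = checklist[
    (checklist["Review Type"] == "Design")
    & (checklist["Subtype"] == "Security")
]
print(design_security)
```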

Review Checklist (click the image to download)

If there are any other items you would like added to the checklist, please list them in the comments section below. Enjoy!

Qingsong’s Random Thoughts

I would like to share my random thoughts with you in the presentation below.

Qingsong’s Random Thoughts, from Qingsong Yao

A couple of notes for the presentation:
  • Stevesi’s Blog is our president Steven Sinofsky’s internal blog, where he frequently (about weekly) shared his thoughts with us.
  • My previous manager, Roger Fleig, maintains a list of books on different topics and keeps it updated with the book he is reading now. His reading list encouraged me to think that a great leader should be a great thinker.
  • In the last slide, I list a couple of hot areas inside Microsoft; you can search for “Testing in Production” and “Garage” to get more details.
  • When I say the SDET and SDE roles will be the same, I mean the SDET role inside Microsoft; I do not mean QA. The same is true for my suggestion that SDETs should do more coding.

Ten Firsts, Part 2

As promised, here is part two of the ten ‘firsts’ in my career at Microsoft.  Click here to read the first five.  Now, on with the show!

First Bad Manager.  The culprit shall remain nameless.  The person was a long-time Microsoft developer.  He was a great developer, but lacked people skills.  At that time, Microsoft had limited training for managers.  Management felt that since the person was a great developer/tester, they’d make a great manager!  Once he became our manager, he would pop into our offices unannounced at a random time during the day and say “Status Me, Baby.”  You had to stop what you were doing and tell him.  We all know how important uninterrupted time is.  Well, almost all of us do.

Lesson #6: Not all great {insert the discipline of your choice} make great managers.  The skillsets are different.  The skillset that made you successful as an IC (Individual Contributor) won’t necessarily make you successful as a manager.  Like anything, management is a skill you need to learn.

First Automation System.  The project was Word for DOS 5.  Our automation system was called VCR.  You recorded keystrokes and mouse movements on one computer, and they were played back into another computer through the keyboard and mouse ports.  These were some of the first automated BVTs (Build Verification Tests) I had ever worked on.

Lesson #7: Automation matters.  It is the future of the discipline.  Hopefully your automation system is more robust than VCR, but any extensible, easy-to-maintain system is a good start.
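
VCR itself is long gone, but the record-and-playback idea is easy to picture.  Here’s a minimal sketch of the “playback” half using the pyautogui library; the recorded steps and coordinates are invented for illustration:

```python
# A minimal playback sketch in the spirit of VCR: replay recorded
# keystrokes and mouse clicks against a running application.
# The steps and coordinates below are invented for illustration.
import time
import pyautogui

recorded_steps = [
    ("click", (200, 150)),     # e.g., focus the document window
    ("type", "Hello, Word!"),  # type some text
    ("press", "enter"),        # press a single key
]

for action, arg in recorded_steps:
    if action == "click":
        pyautogui.click(*arg)
    elif action == "type":
        pyautogui.write(arg, interval=0.05)  # small delay between keys
    elif action == "press":
        pyautogui.press(arg)
    time.sleep(0.5)  # pacing between steps, like the original tape
```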

First Black Hat.  After shipping Word for DOS 6.0, we came into the WinWord ship cycle close to RTM (Release to Manufacturing).  The six of us formed a black hat team.  We would read up on something for an hour, and then test the heck out of Word; lather, rinse, repeat.  We found twelve recall-class bugs during the RTM process.

Lesson #8: You don’t need to be an expert to test.  We are all intelligent folks.  Give the tester some information to get them started, and then let them run with it.  Their innate tester instincts will do the rest.

First Feature Cut During RTM.  I was the tester for a cutting-edge feature in V1 of Office Web Server.  We provided the UI and relied on another team for the code that did the actual work.  The feature never gelled through the milestones: it would just crash from time to time.  We talked with the other team and always walked away thinking “it will work fine with the next release.”  The next release would have another major bug.  When we hit RC (Release Candidate), I ran some automation over the weekend in the automation lab.  We crashed one out of three machines.  Needless to say, we had to cut the feature.  I spent the rest of the ship cycle verifying we removed every trace of the UI from the OS and Office apps.  Of all the products I’ve worked on over the years, this is the only one that never shipped.

Lesson #9: I learned back then what is obvious today:

  • Automation is important.
  • Cut your losses early.
  • Cross-team collaboration is important.

First Addiction.  Word and web servers are fun, but one product turned out to be the “crack” of my Microsoft career.  I was part of the team for V1 of Microsoft OneNote.  All of the disciplines worked side-by-side to develop this product.  We met with customers, developed personas, ran usability studies, and so on; all in addition to our regular duties.  OneNote turned out to be one of Microsoft’s hidden gems.  We liked to joke, “The first hit is free.”  After we shipped, the disciplines came together again to provide product support for our customers.

Lesson #10: Jobs can be fun, but some are more fun than others. Find your ‘crack’ and enjoy your job just that much more.
