As I sat down to eat my free Microsoft oatmeal this morning, I noticed how dusty my Ship-It plaque had become. I hadn’t given it much attention lately, but today it struck me how these awards illustrate the evolution of software testing.
Since I started at Microsoft seven years ago, I’ve earned nine Ship-It awards — plaques given to recognize your contribution to releasing a product. The first Ship-It I “earned” was in 2006 for Microsoft Antigen, an antivirus solution for Exchange.
I put “earned” in quotes because I only worked on Antigen a couple of days before it was released. A few days into my new job, I found myself on a gambling cruise — with open bar! — to celebrate Antigen’s release. I was also asked to sign a comically over-sized version of the product DVD box. Three years later I received another Ship-It — and signed another over-sized box — for Antigen’s successor, “Forefront Protection 2010 for Exchange Server”.
Fast-forward to 2011, when I received my plaque for Microsoft Office 365. This time there was no DVD box to sign — because the product wasn’t released on DVD. It’s a cloud-based productivity suite featuring email and Office.
This got me thinking. In 2006, when we shipped Antigen, all the features we wanted to include had to be fully developed, and mostly bug-free, by the day the DVD was cut. After all, it would be another three years before we cut a new one. And it would be a terrible experience for a customer to install our DVD, only to then have to download and install a service pack to address an issue.
By 2011, however, “shipping” meant something much different. There was no DVD to cut. The product was “released” to the cloud. When we want to update Office 365, we patch it in the cloud ourselves without troubling the customer; the change simply “lights up” for them.
This means products no longer have to include all features on day one. If a low-priority feature isn’t quite ready, we can weigh the impact of delaying the release to include the feature, versus shipping immediately and patching it later. The same holds true if and when bugs are found.
What does this all mean for Test? In the past, it was imperative to meet a strict set of release criteria before cutting a DVD. For example, no more code churn, test-pass rates above 95%, code coverage above 65%, etc. Now that we can patch bugs quicker than ever, do these standards still hold? Have our jobs become easier?
We should be so lucky.
In fact, our jobs have become harder. You still don’t want to release a buggy product — customers would flock to your competitors, regardless of how quickly you fix the bugs. And you certainly don’t want to ship a product with security issues that compromise your customers’ data.
Furthermore, it turns out delivering to the cloud isn’t “free.” It’s hard work! Patching a bug might mean updating thousands of servers and keeping product versions in sync across all of them.
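To make the version-sync problem concrete, here is a minimal Python sketch of a fleet audit. The server names and build numbers are hypothetical, and a real system would pull versions over the network rather than from a dictionary; the point is simply that "in sync" is something you can check mechanically before and after a rollout:

```python
from collections import Counter

def audit_versions(server_versions):
    """Given a mapping of server name -> deployed build version,
    report the target version and which servers still lag behind."""
    counts = Counter(server_versions.values())
    # Treat the most widely deployed build as the target version.
    target, _ = counts.most_common(1)[0]
    stragglers = sorted(s for s, v in server_versions.items() if v != target)
    return target, stragglers

# Hypothetical fleet state mid-rollout:
fleet = {"ex01": "15.0.847", "ex02": "15.0.847", "ex03": "15.0.712"}
target, stragglers = audit_versions(fleet)
print(target)      # 15.0.847
print(stragglers)  # ['ex03']
```

In practice the straggler list would feed the next wave of the rollout, or trigger an alert if a server refuses to converge.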
Testers now have a new set of problems. How often should you release? Which bugs must be fixed before shipping, and which can wait until after? How do you know everything will work when it’s deployed to the cloud? If the deployment fails, how do you roll it back? If the deployment succeeds, how do you know it continues to work — that servers aren’t crashing, slowing down, or being hacked? Who gets notified when a feature stops working in the middle of the night? (Spoiler alert: you do.)
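One way to frame the "does it continue to work" question is as a health gate that a monitoring loop evaluates after each deployment. The sketch below is illustrative only: the metric names and thresholds are invented for the example, not drawn from any real service’s release criteria:

```python
def deployment_healthy(metrics, max_error_rate=0.01, max_p95_latency_ms=500):
    """Decide whether a freshly deployed build should stay or be rolled
    back, based on a snapshot of service metrics. The thresholds are
    illustrative defaults, not real release criteria."""
    if metrics["error_rate"] > max_error_rate:
        return False, "error rate above threshold: roll back"
    if metrics["p95_latency_ms"] > max_p95_latency_ms:
        return False, "latency regression: roll back"
    return True, "healthy: keep the new build"

# A monitoring system would feed in live numbers; here, a sample snapshot:
ok, reason = deployment_healthy({"error_rate": 0.002, "p95_latency_ms": 310})
print(ok, reason)  # True healthy: keep the new build
```

The interesting testing work is in choosing the metrics and thresholds, and in making sure the rollback path itself is tested before you need it at 3 a.m.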
I hope to explore some of these issues in upcoming articles. Unfortunately, I’m not sure when I’ll have time to do that. I’m on-call 24×7 for two weeks in December.
Filed under: Careers in Test, Software as a Service | Tagged: career
It’s not that testing becomes easier or harder (I don’t think), nor does it remain “about the same”; it becomes a different beast, with its own unique dangers, strengths, and weaknesses. I have been begging my company to move us to a state where our client-side software can be controlled such that when we identify a bug, we can “release a fix” and push it to all the clients in our system. I have yet to see that come to fruition, but I am confident that if we did, we would have a greater degree of control over the quality of our product. In the past this was much more difficult: users had to decide to download updates, or updates had to be pushed using some sort of complicated, heavy client-management suite.
So what does that mean for testing? It means that testing doesn’t end when the product releases… or dare I say it… the product doesn’t get “cut” at all. It is simply brought to life, and once it is in its life cycle it constantly needs to be improved — and the improvement needs to be completely seamless, all but transparent to the end user.
“How do you know everything will work when it’s deployed to the cloud?”
Oh, how I wish I could answer this question with any sort of confidence. Scalability, load, restricted server-side resources, and a whole slew of related problems have caused us such headaches… But hey! The US Government is worse off in this regard than most of us; just look at healthcare.gov 😉
“Patching a bug might mean updating thousands of servers and keeping product versions in sync across all of them.”
I have a feeling this can be automated; if you haven’t tried using PowerShell in your test environment, I highly recommend it. I’ve seen it automated (hell, I have automated it), but not across thousands of servers, only across thousands of clients. It’s been some of the most challenging work I’ve ever done. But once it is automated, it’s very, very satisfying.
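The fan-out this comment describes can be sketched in Python (on Windows, the same pattern would typically wrap PowerShell remoting). Here `patch_host` is a hypothetical stand-in for whatever remoting call actually applies the update; the sketch shows only the concurrency-and-failure-collection skeleton:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def patch_host(host):
    """Placeholder for the real remoting call (e.g. PowerShell
    Invoke-Command or SSH); here we just pretend it succeeded."""
    return host, True

def patch_fleet(hosts, max_workers=32):
    """Fan the patch out with bounded concurrency and collect failures,
    so one slow or dead host doesn't stall the whole rollout."""
    failures = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(patch_host, h): h for h in hosts}
        for future in as_completed(futures):
            host, ok = future.result()
            if not ok:
                failures.append(host)
    return sorted(failures)
```

Bounding the worker count matters at fleet scale: it keeps the rollout incremental, so a bad patch is caught after dozens of machines rather than thousands.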
Also, on security: in my experience, the biggest threat is the people, not the code. You might have something to say to that, and I’d love to hear it. But I feel like companies pour all this money into trying to make their product super secure, when really the issue isn’t going to be that the product is insecure; it’s going to be that someone decides to do something that breaks its security.
I’m glad to see some of the focus moving off of a big “ship it” milestone. The focus of developers and testers needs to be more spread out across the lifecycle of the software. For many in the test community, there is still a long way to go to set the right goals and pay appropriate attention to design, service, migration, and end-of-life issues.
[…] The Evolution of Software Testing – Andrew Schiano – https://experttesters.com/2013/12/05/the-evolution-of-software-testing/ […]