Ten Firsts, Part 2

As promised, here is part two of the ten ‘firsts’ in my career at Microsoft.  Click here to read the first five.  Now, on with the show!

First Bad Manager.  The culprit shall remain nameless.  The person was a long-time Microsoft developer.  He was a great developer, but lacked people skills.  At that time, Microsoft had limited training for managers.  Management felt that since the person was a great developer/tester, they'd make a great manager!  Once he became our manager, he would pop into our offices unannounced at a random time during the day and say “Status Me, Baby.”  You had to stop what you were doing and tell him.  We all know how important uninterrupted time is.  Well, almost everyone.

Lesson #6: Not all great {insert the discipline of your choice} make great managers.  The skillsets are different.  The skillset that got you where you are as an IC (Individual Contributor) won't necessarily serve you as a manager.  Like anything, management is a skill you need to learn.

First Automation System.  The project was Word for DOS 5.  Our automation system was called VCR.  You recorded keystrokes and mouse movements on one computer, and they were played back into another computer through the keyboard and mouse ports.  These were some of the first automated BVTs (Build Verification Tests) I had ever worked on.

Lesson #7: Automation matters.  It is the future of the discipline.  Hopefully the automation system is something more robust than VCR, but any extensible easy-to-maintain system is a good start.
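
For the newer folks: a BVT is just a small suite of quick smoke checks run against every fresh build before deeper testing begins.  Here's a rough sketch in Python of what one might look like today; the binary path and flags are made-up placeholders, not anything from the actual VCR system.

```python
# bvt.py -- a minimal build verification test (BVT) sketch.
# The binary path and flags below are hypothetical placeholders,
# not the real Word/VCR harness described in the post.
import subprocess
import sys

CHECKS = [
    (["build/app", "--version"], "binary launches and reports a version"),
    (["build/app", "--selftest"], "built-in self test passes"),
]

def main() -> int:
    failures = 0
    for cmd, description in CHECKS:
        try:
            result = subprocess.run(cmd, capture_output=True, timeout=60)
            ok = result.returncode == 0
        except (OSError, subprocess.TimeoutExpired):
            ok = False  # binary missing or hung counts as a failure
        print(f"{'PASS' if ok else 'FAIL'}: {description}")
        failures += 0 if ok else 1
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

The specific checks don't matter; what matters is that the suite is small, fast, and runs automatically against every build, so a broken build is caught before anyone wastes a day testing it.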

First Black Hat.  After shipping Word for DOS 6.0, we came into the WinWord ship cycle close to RTM (Release to Manufacturing).  The six of us formed a black hat team.  We would read up on something for an hour, and then test the heck out of Word; lather, rinse, repeat.  We found twelve recall-class bugs during the RTM process.

Lesson #8: You don’t need to be an expert to test.  We are all intelligent folks.  Give the tester some information to get them started, and then let them run with it.  The innate tester will do the rest.

First Feature Cut During RTM.  I was the tester for a cutting-edge technology feature in V1 of Office Web Server.  We provided the UI and relied on another team for the code that did the actual work.  The feature never gelled through the milestones: it would just crash from time to time.  We talked with the other team and always walked away thinking “it will work fine with the next release.”  The next release would have another major bug.  When we hit RC (Release Candidate), I ran some automation over the weekend in the automation lab.  We crashed one out of three machines.  Needless to say, we had to cut the feature.  I spent the rest of the ship cycle verifying we had removed every trace of the UI from the OS and Office apps.  Of all the products I've worked on over the years, this is the only one that never shipped.

Lesson #9: I learned back then what is obvious today:

  • Automation is important.
  • Cut your losses early.
  • Cross-team collaboration is important.

First Addiction.  Word and web servers are fun, but one product turned out to be the “crack” in my Microsoft career.  I was part of the team for V1 of Microsoft OneNote.  All of the disciplines worked side-by-side to develop this product.  We met with customers, developed personas, ran usability studies, and so on; all in addition to our regular duties.  OneNote turned out to be one of Microsoft's hidden gems.  We liked to joke “The first hit is free.”  After we shipped, the disciplines came together again to provide product support for our customers.

Lesson #10: Jobs can be fun, but some are more fun than others. Find your ‘crack’ and enjoy your job just that much more.


Who Tests the Watchers?

Back in February, in his blog post titled “Monitoring your service platform – What and how much to monitor and alert?”, Prakash discussed the monitoring of a service running in the cloud, the multitude of alerts that could be set up, and how testers need to trigger and test the alerts.  Toward the end he said:

Once it’s successful to simulate [the alerts], these tests results will provide confidence to the operations engineering team that they will be able to handle and manage them quickly and effectively.

When I read this part of his posting, one thing jumped into my Tester’s mind: Who is verifying that there is a working process in place to deal with the alerts once they start coming through from Production?  If you have an Operations team, do they already have a system in place that you can just ‘plug’ your alerts into?  If you don’t have a system in place, are they responsible for developing the system?

If you, the tester, have any doubt about the alert-handling process, here are a few questions you might want to ask.  Think of it as a test plan for the process.

  • Is the process documented someplace accessible to everyone it affects?
  • Is there a CRM (Customer Relationship Management) or other tracking system for the alerts?
  • Is a system like SCOM (System Center Operations Manager) being used that can automatically generate a ticket for each alert, and handle some of the alerts without human intervention?
  • Is there an SLA (Service Level Agreement) or OLA (Operational Level Agreement) detailing turn-around time for each level of alert?
  • Who is on the hook for the first level of investigation?
  • What is the escalation path if the Operations team can’t debug the issue?  Is it the original product team?  Is there a separate sustainment team?
  • Who codes, tests, and deploys any code fix?

You might want to take an alert from the test system and inject it into the proposed production system, appropriately marked as a test, of course.  Does the system work as expected?
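
If it helps, here's a rough sketch in Python of what that end-to-end check might look like.  Every URL, payload field, and API below is a hypothetical placeholder; substitute whatever your monitoring and ticketing systems actually expose.

```python
# Sketch: inject a synthetic, clearly-marked test alert and verify the
# handling pipeline files a ticket for it. All endpoints, fields, and
# the ticket-lookup API are hypothetical placeholders.
import json
import time
import uuid
import urllib.request

ALERT_ENDPOINT = "https://monitoring.example.com/alerts"    # hypothetical
TICKET_ENDPOINT = "https://tickets.example.com/api/search"  # hypothetical

def inject_test_alert() -> str:
    """Send a synthetic alert; return its correlation id for lookup."""
    correlation_id = str(uuid.uuid4())
    payload = json.dumps({
        "severity": "sev3",
        "source": "alert-pipeline-test",
        "is_test": True,  # appropriately marked as a test, of course
        "correlation_id": correlation_id,
    }).encode()
    req = urllib.request.Request(
        ALERT_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(req, timeout=30)
    return correlation_id

def ticket_exists(correlation_id: str) -> bool:
    """Ask the (hypothetical) ticketing system whether a ticket was filed."""
    url = f"{TICKET_ENDPOINT}?correlation_id={correlation_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return bool(json.load(resp).get("results"))

def main() -> None:
    cid = inject_test_alert()
    deadline = time.time() + 15 * 60  # give the pipeline 15 minutes
    while time.time() < deadline:
        if ticket_exists(cid):
            print(f"PASS: ticket filed for test alert {cid}")
            return
        time.sleep(30)
    print(f"FAIL: no ticket for test alert {cid} within the window")

if __name__ == "__main__":
    main()
```

A check like this exercises the whole chain at once: the alert transport, the ticket generation, and (if you time it against your SLA or OLA) the turn-around commitment too.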

More information on monitoring and alerts can be found in the Microsoft Operations Framework.

Ten Firsts

First, a quick Hello. You can read my bio for all the details, but suffice it to say I’ve been testing at Microsoft for a while. My present job is to train the new SDETs joining Microsoft. A lot has changed over the years here at Microsoft. To paraphrase a famous quote: To know where we’re going, we must know where we’ve been. I thought I’d share a list of ten ‘firsts’ in my career at Microsoft. They are in rough chronological order. The catch is I include what I learned in each of them. Let’s get started at the start.

First Interview. I was a nerdy kid with programming experience on punch cards. I had the right words on my résumé to match the experience they needed, and that got me in the door. My first interviewer sat me in front of the keyboard, mouse, and monitor. I said “Cool, interactive!” He sighed, and then decided on another tactic. He handed me one of the Highlights magazines in his office. If you aren't familiar with them, it's a kids' magazine. On the back is a picture that at first glance looks normal. On closer inspection you'll notice things that are wrong. He asked me to do the puzzle: identify what was wrong. I was smart enough to know that the purpose of his question wasn't that simple. So, I started by saying: “I'll use present-day Earth as a reference. Based on that, the ‘wrong' things are…”

At that point he took the magazine away from me and we chatted for the rest of the hour. Did I fail? Nope. In fact, I had inadvertently asked one of the most important questions of the time: “Where’s the spec?”

Lesson #1: It's the person and their abilities that count, not necessarily the experience. We all know that now. If we see an innate test ability in a person, we can teach them how to formally test software or services. I've gambled on a few of the folks I've interviewed, and they've all worked out just fine.

First Recall-Class Bug I Missed. Anyone remember Microsoft Opus (Word for Windows 1.0)? Well, I was one of the reasons we had to ship Bill the Cat (Word for Windows 1.1). I was responsible for testing the WordPerfect text converter. It allowed us to read and write their file format. I manually verified that character and paragraph formatting was preserved, file properties were properly updated, and so on. I was still new to networked PCs and shared printers, so I didn't really ask why there was an extra sheet of paper between the printout separator page and my document. After we shipped, this extra page was brought to our attention by one of our larger customers. It turned out our converter created a mild corruption in the document on import. Everything looked fine on the screen, but the corruption caused an extra blank page to print. The large customer printed a LOT of documents, so this bug was going to cost them a lot of money. We quickly patched this bug (and others; thankfully mine wasn't the only one), and shipped the updated version.

Lesson #2: Question everything. If in doubt, ask.

First Business Trip. Some of our Word text converters were written by a third-party company. When we found a bug, we would email it to them. They would fix the code, and then mail a disk with the new code back to us; the other company didn't have FTP. This long turnaround time was causing problems as we neared code complete, so I traveled to their corporate office with three ‘luggables' (the laptop of the day) and sat in a conference room for a week. As I found a bug, I would write it on a 3×5 card and hand it to the developer; he would hand me back a disk with the card(s) attached for the bugs fixed in that version. I would run the fix through our tests (no unit tests back then), and repeat the process.

Lesson #3: Quick turnaround is important. If you work with people onsite, get up and go talk with them in person. There is a reason for open workspaces like cubicles. If the people are offsite, use instant messaging, phone, and email to keep in contact. The act of getting up and networking was recently identified as one of the Personal Effectiveness vital attitudes and behaviors of senior engineers here at Microsoft.

First Great Manager. I've worked under Jeanne Sheldon from time to time over the years. She is now a Corporate Vice President, but when I was hired she was a Test Manager. She taught me the importance of the Test discipline. It didn't matter whether you were in the Beizer, Juran, or Deming camp of testing (those were the arguments back then). We were there to test the product, verify it matched the spec, and help prevent bugs. How we did it didn't matter as much.

Lesson #4: A good tester is a good tester. Our common goal is a better product for the customer.

First Big Decision. There are two defined career paths at Microsoft: individual contributor (IC) or management. My lead at the time asked me if I wanted to be a test lead. I talked with her about it, talked with my mentor, and talked with some of my growing circle of friends at work.

Back then, managers had very little time to be hands-on with the product. Most of their time was spent in meetings. I tend to fall asleep during meetings, so the IC path was looking good. Also, politics wasn't my game: I acknowledged it was there, but I didn't want to play. I decided to follow the IC path.

Lesson #5: Build up your network of people, inside and outside of your company. You can’t know everything. Keep your network active. Use your network to help you make decisions, answer questions, and so on. And reciprocate and help people in your network when they ask for it.

Tune in for part two, coming soon!
