Transitioning from testing one sort of application to another, and general advice on job hunting

August 27, 2008

I’ve just started my second round of teaching Black Box Software Testing Foundations for AST members. Besides the obvious benefits of what I consider to be a very well designed and rigorous course, there are the side benefits of meeting interesting folks from all over the world, and the discussions that arise. One student, who’s currently between jobs and has extensive experience testing embedded software but is looking for a job testing web applications, asked me for advice.

Looking back on my answer to her, it seems like something others might appreciate, so I decided to publish it here as well.

Specific ideas for transitioning from testing one type of software to another:

  1. Consider finding a relevant open source project to help test. I can’t think of an open source web app right now, but certainly experience testing Firefox or Watir sounds relevant to me…and even experience testing something further afield like Open Office shows that your experience is applicable beyond embedded software. The bug reports you file become a publicly visible portfolio that you can link to in your resume.
  2. Consider reading/learning more about web testing in particular. You might add some of what you’ve read to your cover letter or resume directly, or perhaps it will just inform how you answer questions when you get called for an interview.
  3. Finally, if you’re having a hard time getting the permanent job you want, consider getting a short-term contract to get web testing onto your resume.

And then there’s the general advice:

  1. Make your resume & cover letter shine. When I’m hiring, cover letters that convey personality, communication skill, and intelligence are few and far between. I am *much* more likely to call a candidate with holes in her/his resume if their cover letter is strong – which certainly includes enough personalization that I can tell s/he has thought about working at *this* company in particular.
  2. Are you not getting called for interviews? Ask friends or colleagues who you can trust to be thoughtfully candid to review your resume and cover letters.
  3. Are you getting interviews but not getting offers? Ask the hiring manager why. Five years ago, when I was transitioning from my first testing job and looking for my second, I got turned down for a gig I was interested in after the second interview. I worked up my courage, called the hiring manager up, and said I wanted to know why I hadn’t gotten the position so that I could learn and improve myself. He was very impressed…and in fact after talking for 10 minutes on the phone told me he’d changed his mind and made me an offer. While I can’t guarantee that that’ll happen for you, I think there’s at least a decent chance you’ll learn something. As a hiring manager, I’ve had a lot of candidates who turned me off because of something they wrote or said, or who I turned down because of a skill they appeared to lack. I don’t believe it’s my business to point this sort of thing out unsolicited, but if someone takes the initiative to ask I will often be happy to offer a friendly tip or two for the next place they apply.

Article on Working With Inattentional Blindness In Software Testing

August 22, 2008

David Christiansen, the editor of the AST Update, has put out a nice new issue…including an essay of mine on working with Inattentional Blindness. It’s available in print and as a PDF, and there’s a lot of good in it. Check it out!

Conferences are for Conferring

August 6, 2008

I’ve been home from the 3rd annual Conference of the Association of Software Testing in Toronto for three weeks now, and am still thinking about all I’ve learned.

This was my 3rd time attending CAST. As always, the keynotes, tutorials and track sessions were excellent…and as always, even better than that was the conferring. You see, CAST knows that it attracts testers with an impressive array of experience, and that making time for them to riff off of each other is very valuable. Toward that end, every single track session and keynote included substantial time devoted to discussion, with an explicit welcome to challenge the presenter or to otherwise test their assertions. Inevitably, I found the lively conversation spilling over into dinner, or following us to the pub.

While I’ve been testing for eight years now, and have worked on a range of web and client-server applications, all of my work so far has been limited to the San Francisco Bay Area. It’s a tremendous pleasure discussing testing with folks from vastly different industries, and from around the world.

One story of many: Over dinner I mentioned a problem that I thought was best solved by starting a conversation with the programmer. Scott Barber responded with something like “Providing the tester doesn’t get in trouble for talking to her.” I was surprised, and asked if that’s really something that happens much in 2008…and was informed by folks at the table from more regulated industries or on government projects that that’s not uncommon. Partly this made me very happy to be a tester in the very human culture we have at Freebase, but it also served to reinforce how easy it is to overgeneralize about the field of testing, when I’m really thinking about testing in a particular context or contexts.

Discussing our particular challenges, sharing stories, and questioning testers with very different perspectives really made CAST 2008 a joy for me…and I’m looking forward to 2009 in Colorado Springs!

Ergonomic Options That Keep Me Computing

July 8, 2008

This is only loosely related to software testing…being some of the things that keep me healthy while I do it. This started as an email template, which I’ve sent to many friends and colleagues at their request – enough that I decided it was worth posting here:

The Goldtouch Adjustable Keyboard is what I use at work, and I like it a lot. I found it to have next to no learning curve. I know others who prefer the Kinesis Contoured Keyboard, which friends say takes up to a week to learn to use. My keyboard did nothing for a couple of folks I know, and they love the Kinesis.

My biggest advice for folks, though, is to try several options out, and (for folks in the SF Bay Area) Office Relief is a great place in San Leandro to do so.

Mice are again very personal, and trying out options is definitely the way to go. What I’ve settled on is a fairly traditional mouse on my left, and an Evoluent VerticalMouse 3 on my right. Some days I use mostly one or the other; most days I switch between them. Note: I was excited to try Evoluent’s left-handed version, but it is still at v2 and I really didn’t like the way it fit in my hand.

Taking Breaks
The last thing I’ll say is that the single most helpful ergonomic adjustment I’ve made is software that helps me to pace my breaks. There are good free options for PC (Workrave) or Mac (Anti-RSI). Both are very configurable.

Why Would You Call Yourself THAT?

May 7, 2008

Scott Barber just wrote an amusing and insightful reflection on what sorts of titles testing professionals have, entitled Software Testers: Identity Crisis Or Illusions of Grandeur?.

I personally am inclined toward simple, descriptive titles, and think that ‘Software Tester’ (including variants like ‘Senior’, ‘Lead’, etc.) describes what I do pretty well. Several times at previous companies I’ve discussed changing the titles we use from Software Quality Assurance Engineer to Software Tester. I’ve gotten several interesting reactions, including:

  • “SQAE is a more impressive title. Why would you want a title that makes you sound low-skill?”, and
  • “But we want you to assure the quality here. Don’t try to back out of your responsibilities!”

Both of these objections seem to be compelling reasons to raise the conversation. Do folks around you not understand what software testing is, and why it’s a challenging, high-skill activity? Well, that’s a good conversation to get to have.

More concerning to me is when folks think that my team should “assure quality”. At this point I really want to make my perspective clear: Everyone in software development should consider quality to be their job. I am here to discover and communicate important quality-related information about the product and project – but I cannot and shouldn’t try to “assure quality”.

Rationales that I’m more sympathetic to for having high-falutin’ titles include:

  • “SQAE is a form of shorthand to communicate to budget setters that testing is a high-skill activity”,
  • “It’s the standard for companies like ours. Job applicants will know what we mean”, and the related
  • “Some applicants might think that Software Tester is a lower-pay position, and so listing the job as such might turn off some.”

Scott ends with

Since there seems to be a prevalent desire for software testers to have fancy sounding titles, maybe we should consider “Software Quality Forecaster” instead. At least that would help our teammates better understand what we really do.

Maybe. I like Forecast way better than I like Assure. Perhaps “Software Quality Investigator”? For someone who doesn’t understand software testing, each title communicates what the job means imperfectly. To capture the flavor and variety of what testing can be, I expect that no title will be a substitute for a good conversation.

Certainly, I would love to see Software Tester considered an esteemed title, and to see the practice of mislabeling testing as ‘quality assurance’ fade away. For now I’m flexible as to what I’m called, but believe it’s important to consider what inaccuracies any given title communicates – and what biases it represents – and then to use the title as a springboard for conversation.

Why Do You Use Watir?

April 15, 2008

Bret Pettichord, lead Watir developer, just announced on the watir-general discussion list that he’s taken a new job “working full-time with Prototest and Pete Dignan (CEO of Prototest) to accelerate the development of new versions of Watir.”

One of his top priorities is getting FireWatir and Watir in sync with each other – something I’m very excited about.

He just asked for stories regarding “Why do you use Watir?”

I haven’t blogged for a while, and I’ve meant to tell this story here for some time, so here goes:

I had just started a new job as the first dedicated tester at a company with an established product. In my first few months, the testing that made the most sense to do was hands-on, sapient testing…but I’d heard some buzz about a new open source tool under development, nearing a 1.0 release, and I was itching to try it.

One afternoon a report came in from the field: Someone had been attempting to import members to a project, and had seen “Permission Denied”…with someone else’s name up in the right hand corner! Had their members been imported into some stranger’s project? They tried again and succeeded, but were understandably spooked.

I asked around, “Have we ever heard reports like this before?”

“You know, now that you mention it, once or twice a year someone makes a similar complaint. Each time, a coder has poked about in the code, changed something, and said ‘I bet that’ll fix it.'” The problem was no one had ever succeeded at reproducing the problem, and so no one had a way to test if it was fixed.

I had a hunch that there was some concurrency / race condition going on, and that Watir might help me diagnose it. Watir turned out to be simple enough that in less than a day, and without any previous Ruby experience, I wrote a script that performed these actions over and over, concurrently, as 5 different users. Lo and behold, I could consistently reproduce this issue in under 30 minutes.
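The shape of that script was simple: several threads, each playing one user, repeating the same workflow in a loop. Here’s a minimal sketch of that structure – not the original script. The Watir calls appear only as comments, and the names here (`import_members`, the workflow details) are hypothetical placeholders so the skeleton runs standalone:

```ruby
require 'thread'

USERS      = 5    # concurrent simulated users
ITERATIONS = 10   # repetitions per user

results = Queue.new  # thread-safe collector for outcomes

# Placeholder for one pass through the workflow. In the real script this
# drove a browser, roughly: open a Watir browser, log in as this user,
# import members into a project, and record whose name appears in the
# corner of the resulting page.
def import_members(user, run)
  "user#{user}/run#{run}"
end

# One thread per user, each looping through the workflow.
threads = (1..USERS).map do |user|
  Thread.new do
    ITERATIONS.times { |run| results << import_members(user, run) }
  end
end
threads.each(&:join)

puts "performed #{results.size} concurrent import attempts"
```

With real browser-driving code in place of the placeholder, a harness like this just has to run until the cross-user symptom (someone else’s name in the corner) shows up in the collected results.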

The coders I worked with were thrilled to have a consistent repro case, and after a few false leads, they had a fix.

Since then, I’ve used Watir for tasks including concurrency testing (above), creating large data sets, and measuring end-to-end page performance across revisions, not to mention regression testing. Inspired by some of Harry Robinson’s work doing high-volume, semi-random testing of Google directions using Watir, my latest project is venturing into using Watir for some automated model-based testing. Along the way I’ve learned a real language (not a vendorscript) and am part of a friendly and thriving community of users and contributors…to whom I’m very grateful.

AST’s Online Testing Courses, and Power of Two Bugs in Excel 2007

September 24, 2007

The Association for Software Testing has recently started offering the Black Box Testing Course, designed by Cem Kaner, James Bach, and Becky Fiedler, as a free benefit to AST members. I took the first round (taught by Cem, Scott Barber, and Jon Hagar) and was very impressed. I expected that any course designed by Cem would reflect a deep understanding of testing – and it did – but what surprised me was the deep understanding of teaching. Pedagogically, this course stood head-and-shoulders above the previous online courses I’ve taken from UC Berkeley Extension and Foothill College.

I’ll write more about it soon, but for now I want to highlight one exercise in particular. We were told:

You are testing a program that includes an Integer Square Root function. The function reads a 32-bit word that is stored in memory, interprets the contents as an unsigned integer and then computes the square root of the integer, returning the result as a floating point number.

And then asked a number of thoughtful questions about what test strategy we would take, how many square roots we would(n’t) test, etc. It’s an interesting problem, and many different suggestions were made. One common thread was the importance of testing around powers of two, especially just over and under boundaries of 8, 16, and 32-bit integers. Now, Doug Hoffman has an excellent article showing that interesting errors can occur far from any of these boundaries…but testing around powers of two was one of our baselines.

That said, I’m fascinated to learn that Excel 2007 has a major calculation error at 2**16-1. Many (or all?) formulas that should evaluate to 65535 instead evaluate as 100000. So for example:

  • 850 x 77.1 – 1 = 65534
  • 850 x 77.1 = 100000

As a test engineer, I fully understand that not every bug will be caught…but it fascinates me that Microsoft’s test team doesn’t seem to have thoroughly tested the boundary between 16-bit and 32-bit integers. This is of course being discussed on Slashdot, Digg, and many other places. I’m curious how Microsoft’ll react, and what the repercussions will be.
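The baseline checklist from the exercise can be sketched in a few lines: for each word size, probe just under, at, and just over the power-of-two limit. This is a hypothetical helper of my own, not something from the course materials:

```ruby
# For an n-bit unsigned integer, return the values straddling its
# power-of-two boundary: two below, the limit minus one (the maximum
# representable value), the limit itself, and one above.
def boundary_values(bits)
  limit = 2**bits
  [limit - 2, limit - 1, limit, limit + 1]
end

[8, 16, 32].each do |bits|
  puts "#{bits}-bit boundary: #{boundary_values(bits).inspect}"
end
```

The Excel value sits exactly on this checklist: 65535 is 2**16 − 1, the largest 16-bit unsigned integer, so any boundary-driven pass over the 16-bit limit would have exercised it.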

The 2nd Annual CAST Tester Competition

August 24, 2007

I had the pleasure of competing in CAST‘s testing competition as captain of Team “Hey, David”, and I’m proud (and a bit stunned) to say that we won 1st or 2nd place in all four categories!

Needless to say, it was a 100% team effort, and I was lucky to work with my teammates Dee Pizzica and Grace Hensley. James Bach talks about certifying testers he has worked with closely, and (while the Jeff Fry certification may not be as broadly recognized) I have to say I am proud to certify that Dee and Grace are stellar folks to pair with for a 6 hour testing frenzy!

I should say as well that Team Canada would have knocked us to 2nd place for the top prize, if they hadn’t been disqualified for having Paul Holland, an AST facilitator, on their team…a rule which wasn’t entirely clear to them. I’ve seen bits of Team Canada’s record and heard more. They clearly did a phenomenal job. Hopefully this loss’ll teach the rest of them not to associate with that shady Paul Holland character. ;0)

All that said, let me explain how things worked. David Gilbert of SiriusSQA generously offered to expose some alpha quality software he’d been working on in his spare time to our testing. This meant we got to test real software…with the real developer/project owner on site. Specifically, we tested ShapeUp, an installed Windows exercise management program. James Bach ran the show, and explained that they didn’t want to have to police for cheating, and so they were going to allow as much as possible: One could ask for help from anyone they wanted to, talk to anyone they wanted to, and could listen in on each other if they chose to…but the first team who logged a bug was going to be the only one who got credit for it for the Best Bug List category.

The four categories, and the associated prizes were:

Team Canada, (Paul Holland, Captain)
Team Hey David, (Jeff Fry, Captain)
Team Louise, (Louise Perod, Captain)

Team Blue, (Martin Taylor, Captain)
Team Hey David, (Jeff Fry, Captain)

Bug 110, Team Hey David, (Jeff Fry, Captain)
Bug 117, Team Crazy Canuck, (Jason Coutu, Captain)

DEVELOPERS CHOICE: The cash in James’ wallet ($120)
Team Hey David, (Jeff Fry, Captain)
Team Louise (Louise Perod, Captain)

The first two categories are described a bit more on the Association for Software Testing site.

One could test solo or in teams, and I believe teams ranged anywhere from one to five testers. Prizes were to be split amongst the team, regardless of its size.

I teamed up with Grace Hensley and Dee Pizzica, and while we rotated roles, at most times two of us were pair-testing on my Mac using Parallels, and a third was entering bugs in the bug tracker on Grace’s Mac (which lacks Windows and so couldn’t run ShapeUp). If I had to point to one decision that led to our victory, it was choosing to sit an arm’s length from the developer. The second we got our install CDs, most testers ran for a corner or another room, presumably to keep their finds hidden. We chose the opposite strategy.

Now, the job of a tester is to provide important quality-related information about the state of the product, yes? And the importance of the information is defined by the developers and project owners. I appreciate Gregory Bateson’s definition of information as “a difference that makes a difference”, and it always helps me to know what makes a difference to the people I’m testing for. From a different angle, let’s take James Bach’s definition of testing as “Questioning a product in order to evaluate it.” Many of those questions I explore through experimentation, but (especially when I am first exploring an application) others are best answered by talking to a developer, customer or stakeholder. Here where the developer and product owner were the same person, we very quickly started asking David if and how he preferred to be approached with questions. As it happened, he wasn’t coding during the contest, he was mostly evaluating bug reports and answering questions, so he invited us (and anyone else that thought to ask) to go ahead and pepper him.
A final benefit of sitting by the developer, one I’ve enjoyed every single time I’ve had the pleasure of working side by side with the folks whose work I was testing: Developers have all kinds of important conversations that they don’t pass along to the test team. This isn’t necessarily about being non-communicative; throughout the day developers will have conversations with each other or others on the project team, and those conversations are often peppered with helpful nuggets for the test team. In this case, when other testers came to ask David questions, we often kept an ear open, and got useful clues as a result.

There was of course much more to our 6 hour testing spree than colocation. I found that pairing for the testing was fantastic. It kept our spirits up as the night wore on. It helped us focus our testing, and enabled us to catch more problems and to brainstorm more experiments than any one of us testing alone. For next year’s contenders, I strongly recommend:

  • Colocation – I think that it’ll be quite crowded around the developer next time around.
  • Testing as a team – Grace, Dee and I didn’t know each other at all before the start of the weekend. Testing as a team helped me to test better, learn more about testing, build community, and have a much better time.
  • Test reports – I learned a lot about this from Team Canada, who turned in an amazing pile of test artifacts including video captures of bug reports, video demonstrations of testing techniques (both ones that found bugs and those that didn’t), and a voluminous description of all their testing.

Most of all I recommend joining the fun next year! I believe we had around fifteen teams competing this year, and from the conversations I’ve had it seems folks in general had a great time.

Definitions of Testing

June 21, 2007

Elisabeth Hendrickson has just posted a great blog entry about definitions of testing. She lists several older definitions of testing, and then gives one she and Dale Emery crafted:

Dale Emery and I discussed this at length and decided that we agree that: Testing is a process of gathering information by making observations and comparing them to expectations.

I added two others that work well for me, from James Bach and Cem Kaner respectively, taken from this post on James Bach’s blog:

Testing: questioning a product in order to evaluate it (Bach version); technical investigation of a product, on behalf of stakeholders, with the objective of exposing quality-related information of the kind they seek (Kaner version).

I am a big fan of these three definitions, and think that both their similarities and their differences are quite interesting. I like that they emphasize testing as fundamentally a learning / questioning / investigative process. I like the Hendrickson/Emery emphasis on “…comparing them to expectations”, and the open-endedness of ‘expectations’ versus the narrower ‘requirements’. This phrase reminds me that there are many documented and undocumented ways any software or system is expected to work, and harkens back to Jerry Weinberg’s definition of quality as “Value to some person”. This in turn reminds me that we need to think about whose expectations matter to us, and what might be missing from our current map of the (typically vague, often conflicting) expectations that matter to our stakeholders.

Kaner approaches a very similar notion with a very different flavor when he says (my emphasis) “on behalf of stakeholders, with the objective of exposing quality-related information of the kind they seek.” Kaner is much more explicit that there is a certain set of stakeholders who determine what kind of information we are trying to discover. The set of expectations that we want to compare against are the set of expectations that matter to the stakeholders. These might be the stakeholders’ expectations…or they might be the expectations of others, e.g. valued customers, who matter to the stakeholders.

The origin of Elisabeth’s post was a question from a group of testers who’d been discussing the definitions of a test and testing. In the end, I appreciate knowing several definitions, precisely because of the conversation and reflection that the set as a whole generates.

Adding A Little Ambiguity To Your Plan

May 31, 2007

I was just talking with coworkers about the pros and cons of test plans with expected results. I tend to organize my testing with a series of questions to investigate and/or a rough map (often in .xls format) of areas to explore. Here we have some very well thought out test cases, including clear steps to follow and clear expected results.

Cem Kaner refers to having a test script with expected results as inattentional blindness by design. I agree wholeheartedly. In discussing this, I was reminded of a 2004 Wired article about Hans Monderman, a Dutch traffic planner who has succeeded in making roads safer by making what to do more ambiguous. Here they discuss his solution to a frantic intersection that was the site of many accidents:

It’s the confluence of two busy two-lane roads that handle 20,000 cars a day, plus thousands of bicyclists and pedestrians. Several years ago, Monderman ripped out all the traditional instruments used by traffic engineers to influence driver behavior – traffic lights, road markings, and some pedestrian crossings – and in their place created a roundabout, or traffic circle. The circle is remarkable for what it doesn’t contain: signs or signals telling drivers how fast to go, who has the right-of-way, or how to behave. There are no lane markers or curbs separating street and sidewalk, so it’s unclear exactly where the car zone ends and the pedestrian zone begins. To an approaching driver, the intersection is utterly ambiguous – and that’s the point.

The general goal is to keep the driver’s brain engaged. If drivers enter the intersection thinking “I know just what needs to be done here”, they risk running on autopilot. If they enter and think “Whoa. This is a bit confusing, I better pay attention!” there are a lot fewer accidents.

I think this has clear parallels to testing. If we go to a section of the application and think “Ahh. I know just what needs to be checked here. I’ve followed this script before.” we are much more likely to miss the elephant in the middle of the room.

To quote Monderman,

“The trouble with traffic engineers is that when there’s a problem with a road, they always try to add something,” Monderman says. “To my mind, it’s much better to remove things.”

In many cases, I couldn’t agree more.

[Edit, 2007, June 1]

I originally titled this “Why You Should Make Your Test Plans Less Clear”, which at the moment seemed like an interesting teaser/title. I just changed it to “Adding A Little Ambiguity To Your Plan”. Why? “…You Should…” was meant tongue-in-cheek, but I read it again today and it irked me. Who is this yahoo telling me what I should do? Oh yeah, it’s me. And so I changed the title.