Archive for April, 2007

Why Do You Enjoy Testing?

April 30, 2007

My first career was as an educator. I worked for 10 years with youth of all ages, in and out of the school system. One of my favorite jobs was running leadership training programs for teenagers at Hidden Villa. I had some very gratifying times, but was feeling ready for a change just as a grant-funded position of mine was drying up. That was September of 2000, at the tail end of that wacky boom when someone could get a testing job just by being bright and willing to learn.

When I started testing, I decided to try it out for a year – pay off my tenacious student loan debt – and then decide what was next. I knew very little about what to expect, and what I discovered surprised me. I’ve now been a software tester for five and a half years and my satisfaction with the work continues to grow. Many folks in my life see that I seem pretty pleased with my work, but continue to be perplexed as to why exactly this is so stimulating for me. Recently a friend who doesn’t work in technology asked “…But don’t you miss interacting with people?”

To this friend, I started by saying that a great deal of the work I do is interactive – I commonly spend much of the day talking to other testers, to software developers, to the folks who asked for the software or who represent our users…and in fact just about everyone in the company. In my current position, I may well have regular interactions with more folks in the company than anyone else. Along the way, I get to ask myself and others thorny questions. I challenge myself to seek out and illuminate unexamined assumptions (my own first, and then those of everyone else connected to the project). I imagine what the potential risks are in the product we are building. (What might be broken? What might have unintended consequences? What proposed solution might not really solve our users’ problems?) I think as creatively and as strategically as I’m able about how to explore those risks (What else haven’t I considered yet? What’s another angle this could be approached from?) and then, as I explore them, I keep thinking, generating new test ideas and refining my strategy. Along the way I am learning constantly – about the product, underlying technologies, the users we want to serve, etc.

To me, this is in many ways a dream job. My friend clearly didn’t understand. She is both a voracious reader and a writer, so my next tack was describing the books I’m currently reading to learn and grow as a tester. While I’ve learned a good deal from testing and programming books, that’s not what I’m reading at the moment. I recently finished The Logic Of Failure (mentioned in an earlier post) – a fascinating study of how our thinking can break down in the face of complex systems, often leading to dire results. The next (barely begun) book is Jerry Weinberg’s An Introduction To General Systems Thinking…which I can already tell is one for me to read slowly and to reread – it is dense with insight into how complex systems work.

This meant a bit more to her. She still couldn’t quite picture what I did (which is fine) but was intrigued that social psychology and general systems theory were on a tester’s reading list, and decided based on that that whatever-it-is-I-do must be more than she thought it was.

I’ve been thinking about the job of a tester a lot recently, partly because I’ve been hiring (or attempting to hire) testers…and having a hard time. I know there’s a marketing problem here, because (a) so few folks (outside of tech companies) seem to have even heard of testing as a job, and (b) those who have heard of it tend to have heard either that it’s “a job for programmers” or that it’s “boring and repetitive”. Now, programming skills will almost always help (and sometimes are necessary), but frankly I think that the technical skills involved in testing are often easier to train folks on than the just-as-crucial creativity, organization, communication, and strategy. As far as repetitiveness: I know that every job contains repetitive elements, but I would suggest that testing well minimizes the repetitive aspects while maximizing coverage of new territory…because covering new territory (or finding ways to cover old territory in a new way) tends to provide more useful information about the state of the product to its stakeholders.

All that said, do you enjoy testing? If you do, why? And if you’re feeling bold enough, how do we get the word out to smart, creative, organized folks that exploring software is a fascinating and lucrative way to make a living?


A Rose By Any Other Name

April 6, 2007

There was an interesting conversation about test terminology a few weeks back on the Agile Testing list. It started with Chris McMahon forwarding an amusing post looking for a Non-Functional Tester…and led to an interesting conversation about variation amongst test terminology, and whether we should be trying to standardize it. I’m feeling the urge to sum up and synthesize what I’m taking away from that conversation.

First, let me go on record stating that I think trying to hire a “non-functional tester” is a painful misuse of language. It reminds me of a white fellow I knew in college who commented after being at a black event that “It was interesting to be the only majority in the room”.

Regarding what language to use – There are no meaningful standard definitions for testing terms of which I am aware. There was an interesting conversation about this on the software-testing yahoo group a few weeks back. Matt posted a bit about it here.

When folks talk of non-functional or parafunctional testing, I think they tend to mean “all forms of testing other than testing a particular function of the software.” This tends to include some combination of: performance/load/stress, scalability, integration, usability, and security testing…and probably a good deal more.

Someone on the Agile Testing list suggested we find a way to name it positively rather than negatively. It’s a good challenge. For a term that’s used as a catch-all for “everything other than ____”, can there be a way to state it positively, other than to use a list? I tend to think that the only thing that ties these classes of tests together is that they aren’t functional tests. That in turn makes me wonder if it’s a concept with much value, whatever name we give it.

I think the real issue is that, like many terms for “everything other than ____”, it’s a funny bucket to try to define. People seem to want the bucket, though, and given that, I think using a term that describes it as accurately as possible is a good thing. “Para-” can mean “beside” or “in addition to” (as in paramedic). For that reason, plus the fact that I have yet to hear a more descriptive term suggested for it, and because it’s starting to get (at least a bit of) acceptance amongst testers, parafunctional works for me.

If we go back to the description above, Parafunctional is (to me) at least as clear as non-functional, and has the additional virtue of not sounding foolish.

The Logic of Failure, Part 1

April 5, 2007

I’ve been reading Dietrich Dorner’s The Logic of Failure recently. I am not quite done and will post more soon, but there are a few quotes that I keep coming back to. Here’s one:

An individual’s reality model can be right or wrong, complete or incomplete. As a rule it will be both incomplete and wrong, and one would do well to keep that probability in mind. But this is easier said than done. People are most inclined to insist that they are right when they are wrong and when they are beset by uncertainty. (It even happens that people prefer their incorrect hypotheses to correct ones and will fight tooth and nail rather than abandon an idea that is demonstrably false.) The ability to admit ignorance or mistaken assumptions is indeed a sign of wisdom, and most individuals in the thick of complex situations are not, or not yet, wise. (p. 42)

Dorner gives a fascinating analysis of the reasonable decisions which led to the Chernobyl disaster. It’s interesting to note that there was no glaring mistake here – no one fell asleep on the job or did something ‘insane’. There were a number of places where folks acted in violation of safety regulations. Why? In most cases, it was an experienced operator who knew that the rule was a bit more conservative than it needed to be in this case, for an operator of his/her skill, and who had broken that safety rule before with positive results (e.g. avoiding overtime without causing any problems). In fact, Dorner points out that the nature of safety regulations is that we tend to be rewarded for violating them most of the time. If I choose not to wear my bike helmet, the main effects are that I can enjoy the wind in my hair and don’t look quite as silly as usual…most of the time. If we break internal rules before releasing a product, we may get similarly positive results…most of the time. Happily, on the projects I’ve been a part of, the potential negative consequences of releasing buggy software tend to be less frightening than a nuclear meltdown, or a bike accident without a helmet. There are plenty of bugs in the world, though, that have cost lives, and many more that have cost jobs and money.

I am a big believer in having a group of informed stakeholders assess the relative risks of releasing v. waiting. I believe that my job as a tester is to make that discussion as informed as it can be, but not to try to control or block the release myself. Dorner is a humbling reminder for me that while I attempt to shed light on the current state of the application under test, my model can be assumed to be incomplete and incorrect.

I believe that one of the major challenges a tester faces is how to communicate the perceived quality of the software, including a map of what is not known, and what may be incomplete and incorrect.