
Jesper Ottosen @jlottosen owns a hoodie with the statement “As context-driven as context allows…” printed on the front.

The statement brilliantly phrases a dilemma I find testers often face: having to compromise between following their professional values and beliefs and acting in contexts which do not share them.

There are always people around who matter; people who have opinions about what testing is and how it should be performed; opinions that do not necessarily align with the tester’s own; and sometimes they are not even good opinions.


As professionals, we have values and beliefs. But sometimes we end up in situations where we are tasked to do something that does not match them. It is easy then to end up feeling that we are doing work that no one needs or wants.

This feeling points to something which can be a daily ethical dilemma: While our personal and professional values may be challenged, there is still a job to be done.

How can a tester make that compromise, accepting the challenge, while staying true to her values and beliefs? How can she find the energy to work on improving things?

 

Social responsibility

To me, one of the most important things about being professional is taking social responsibility. This is an important value to me.

Social responsibility means that I keep myself open to what others have to say. I try always to respect the people I work with, even when we disagree about some things, or many.

Social responsibility is not about self-critique. My doubts and worries are more often right than wrong.

Instead it is about trying to give people what they need by understanding their situation, and helping them get better with what we are doing together.

That requires what I call personal leadership. But foremost, it requires conversation and negotiation.

 

“Yes, but…”

In his “Improv(e) your testing” talk at Let’s Test 2016, Damian Synadinos @dsynadinos reminded me of a simple and effective strategy for opening conversations. In improv, a golden rule is to start replies with “yes, and…”. This helps add to whatever is happening on stage.

In professional situations, sometimes we need to subtract, however:

Yes, I will perform the test and report to you about it, but could you please explain how the test case and bug count metrics you are asking for will be useful?

The “yes” part is about accepting the challenge. The “but” implies that I’m going to stay true to my knowledge, experience, values and beliefs, and raise professional doubts about the methods I’m asked to use, the things I’m asked to report, and the processes I’m asked to follow.

It is not about asking rhetorical questions. Rhetorical questions shut my windows to the world and enclose me in my own thoughts and ideas.

Rather it is about keeping a thought in the back of my mind when I’m asked to do something in a certain way: “Is this really in the best interest of the people who matter: The project stakeholders?”

Replying “yes, but…” can enable me to act on my personal values in contexts which have values of their own.

 

Masterclass in New York City

On September 26th, during Test Masters Academy’s REINVENTING TESTERS WEEK in New York City, I will be doing a workshop on values and personal leadership titled “Act on your values!”.

As testers and IT professionals we have to quickly recognize and adapt to ever-changing contexts in order to produce value for our employers, clients and diverse customers. This can be challenging, both on the personal and the professional level. As leaders, team members and individuals we often have to lead ourselves.

The workshop will focus on how our personal and shared values can guide us. It will be based on the principles of protreptic dialogue, a facilitated philosophical conversation revolving around the values embedded in what we say, do and think. Protreptic dialogue was first described in ancient Greece in the fourth century BC; professor Ole Fogh Kirkeby of Copenhagen Business School has revived it as a concept, a leadership tool, and a coaching principle with the objective to “turn us towards ourselves”.

I plan for the workshop to be a safe space for exploration and learning. Participants are expected to share opinions, thoughts and ideas, and to treat others’ opinions, thoughts and ideas in a respectful and appreciative manner. No prior knowledge of leadership, dialogue, philosophy, or protreptic dialogue is required.

Key takeaways

  • Awareness of personal values and of the values of the contexts we work in
  • Strategies for dealing with the dilemmas we face as testers
  • An introduction to protreptic concepts and dialogue

Get tickets here: Early bird has just been extended to July 15th.


 

I’m beginning to get quite excited about speaking at CPHContext about “Value Centered Dialogue in Context Driven Testing”. It’s not the first time I’ve spoken at a testing conference, but I am going to demonstrate a type of dialogue for which there is no firm recipe and which I can therefore only plan for mentally. And that is of course a bit exciting :)

To settle my nerves, I’m writing this blog post to reveal something about what I’m going to tell people.

Recently, a good friend asked me: “What is leadership to you?”

My answer came quicker than I thought it would: “It is about setting people free to do their best,” I said.

We were talking about personal leadership values.

Heuristics and values

There are many ways to lead people – we could call them leadership heuristics – and while you and I can attend the same courses or read the same books and therefore learn the same leadership heuristics, our personal values shape our actions and therefore the way we apply these heuristics.

Everything I’m going to say in the session will be about basic human values and how I have found a special type of dialogue can bring new energy into context driven testing leadership.

I have my slides ready, and I hope it will be a good experience for everyone attending my session.

A protreptic dialogue

I’d like to show something about how a protreptic dialogue between me (the guide) and you would start out. I might start with a question to you:

What does it mean to be context driven?

I’ll listen carefully to your answer and depending on what you answer (there is no right or wrong here as it is about you) I might tell you something about the origins of the word context. Words are important in protreptic dialogue.

The word context comes from the Latin contextus, which means a joining together. The Danish word for context is sammenhæng, which means the same, so context is something we are joined to, or maybe even woven into, as the Latin origin actually indicates.

Then what does it mean to be context driven? Can something that we are joined to or even woven into drive us? It might, if there is motion in it; so if we want to understand something about how the context is driving us, we should look at the dynamics in it. But perhaps the driving could be reversed: Can our testing set the context in motion?

This question was for you, and again I’ll listen carefully to what you say. If it was me, I might answer myself like this:

Of course we can set the context in motion, and we do, as testers. After all, testers discover stuff other people have not yet discovered, we build trust, create business value, spoil illusions and other things that send motion back into the context.

This is interesting. As a guide, I’ll listen to your value laden words: discovery, trust, value, illusions. In a human value-perspective they have meanings related to the four basic human values: The Good, The Beautiful, The Just and The True.

In the ongoing protreptic dialogue, we will explore these values together, getting very close to what they really mean to you. We might talk about your work or other things in your life, but only if you want to and bring it up. This is not a therapy session.

Protreptic dialogue is meant to be a pleasant and respectful experience for both of us. There are no roles to play; we are both “ourselves”, but we are taking a journey together to discover something about ourselves, in this case about context driven testing.

Software testers evaluate quality in order to help others make decisions to improve quality. But it is not up to us to assure quality.

Projects need a culture in which people care for quality and worry about risk, i.e. threats to quality.

Astronaut and first man on the moon Neil Armstrong talked about the reliability of components in the spacecraft in the same interview I quoted from in my last post:

Each of the components of our hardware were designed to certain reliability specifications, and far the majority, to my recollection, had a reliability requirement of 0.99996, which means that you have four failures in 100,000 operations. I’ve been told that if every component met its reliability specifications precisely, that a typical Apollo flight would have about 1,000 separate identifiable failures. In fact, we had more like 150 failures per flight, substantially better than statistical methods would tell you that you might have.

Neil Armstrong not only made it to the moon, he even made it back to Earth. The whole Apollo programme had to deal very carefully with the chance that things would not work as intended in order to make that happen.

In hardware design, avoiding failure depends on hardware not failing. To manage the risk of failure, engineers work with reliability requirements, e.g. in the form of a required MTBF – mean time between failures – for individual components. Components are tested to estimate their reliability in the real system, and a key part of reliability management is then to tediously combine all the estimated reliability figures to get an indication of the reliability of the whole system: in this case a rocket and spacecraft designed to fly men to the moon and bring them safely back.

But no matter how carefully the calculations and estimations are done, it will always end up as an estimate. There will be surprises.
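To illustrate the kind of arithmetic involved, here is a minimal sketch in Python. The component reliability and the expected and observed failure counts come from Armstrong’s quote above; the number of component operations per flight is my own assumption, picked only so the numbers line up:

    # Back-of-the-envelope reliability arithmetic, using only the figures
    # Armstrong mentions in the quote above.

    reliability_per_operation = 0.99996                   # component reliability requirement
    failure_probability = 1 - reliability_per_operation   # 4 failures per 100,000 operations

    # Assumed number of component operations per flight; this figure is chosen
    # purely so the expected failure count matches the "about 1,000" in the quote.
    operations_per_flight = 25_000_000

    expected_failures = failure_probability * operations_per_flight
    observed_failures = 150   # what Armstrong says the flights actually saw

    print(f"Expected failures per flight: {expected_failures:.0f}")   # ~1000
    print(f"Observed failures per flight: {observed_failures}")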

The Apollo programme turned out to perform better than expected. Why?

When you build a system, be it an IT system or a spacecraft, how do you ensure that things work as intended? Following good engineering practices is always a good idea, but relying on them is not enough. It takes more.

Armstrong goes on in the interview (emphasis mine):

I can only attribute that to the fact that every guy in the project, every guy at the bench building something, every assembler, every inspector, every guy that’s setting up the tests, cranking the torque wrench, and so on, is saying, man or woman, “If anything goes wrong here, it’s not going to be my fault, because my part is going to be better than I have to make it.” And when you have hundreds of thousands of people all doing their job a little better than they have to, you get an improvement in performance. And that’s the only reason we could have pulled this whole thing off.


In his post “Why the testing/checking debate is so messy – a fruit salad analogy”, my good friend Joep Schuurkes presents an absurd dialogue in which two people become confused because they cannot distinguish between apples and fruit. He claims the dialogue could still happen if “apples” were replaced with “checking” and “fruit” with “testing”.

He is trying to show that in the same way that apples are a sort of fruit, checking is a sort of testing. And that discussing testing *versus* checking is bullshit.

I think Joep is wrong, and I shall discuss why and how here.

A little “versus”

The core of the discussion is the little “versus” between testing and checking, which Bolton and Bach insist on. And I insist on it too: it introduces a dichotomy which is not only important, it is even necessary.

And it is necessary because it shapes our thoughts about testing.

To be precise, it leads us to think on a conceptual level instead of just an activity level. Once we accept the little “versus” between the two, accept the dichotomy, we can start thinking about our craft. We are no longer forced to think only about the activities we do.

And just as important: We can distinguish our craft from something that it is not.

It’s like the way more and more people discriminate between leadership and management. Once you accept that the two are conceptually different, something interesting happens: A whole new understanding of the act of playing ”the boss” reveals itself.

In the same way, when we start discriminating between testing and checking, the way we talk about what we do as testers changes. And we change.

A humanistic and value producing view on testing has revealed itself to us through this dichotomy:

Testing was, but is no longer…

Testing is no longer a necessary evil, only done because programmers are sloppy, don’t read requirements and make mistakes. Instead, testing has become a craft, carried out by humans. A craft that adds value to the product, the organisation and society as a whole.

We are no longer little machines working under detailed instruction. We are testers, and therefore everything we do, our job satisfaction and even the value we produce, depends on this very dichotomy.

I will not let the confusion confuse me

So why the confusion? Well, I think the confusion arises because we confuse concepts with activities when we speak our daily, ambiguous language.

As a tester, I carry out checks when I test, but when I do, the checks I am doing are elements in the testing and the whole activity is testing, not checking.

But if, on the other hand, I program a computer to run through a number of input combinations to a software program, have my program verify the results by comparing them to something “expected”, and produce a report of boolean results on that basis, then the whole activity of running that and distributing the report from the computer program is checking, not testing.
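To make the distinction concrete, here is a minimal sketch of such a computer-run check, in Python. The add() function and the table of expected results are hypothetical, purely for illustration:

    # A minimal, hypothetical example of a check: run through input combinations,
    # compare actual output to "expected" output, and produce a boolean report.

    def add(a, b):
        """The (hypothetical) function under check."""
        return a + b

    # Input combinations paired with their "expected" results.
    cases = [
        ((1, 2), 3),
        ((0, 0), 0),
        ((-1, 1), 0),
        ((10, -3), 7),
    ]

    # The boolean report: did each case match its expected result?
    report = [(inputs, add(*inputs) == expected) for inputs, expected in cases]

    for inputs, passed in report:
        print(f"add{inputs}: {'PASS' if passed else 'FAIL'}")

Running this and distributing the report is checking; deciding which cases matter, interpreting surprising results, and exploring beyond the table is where the testing happens.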

However, letting this confusion lead us to discard the difference between testing and checking would be a pity. The dichotomy is core to Bolton’s and Bach’s testing philosophy. If I reject it, I have to reject more or less everything they say about testing.

And worse: I will have to give up my profession.


Some test managers and test consultants are very busy pointing out the right processes, organisational structures and methods to use in software testing.

But no methods, processes and structures can assure great testing. Great testing is created by people.

This quote by Neil Armstrong, which I came across a couple of years ago, is worth remembering whenever we lead people in testing:

“The way […] that made [the Apollo project] different from other sectors of the government to which some people are sometimes properly critical is that this was a project in which everybody involved was, (1) interested, (2) dedicated, and, (3) fascinated by the job they were doing. And whenever you have those ingredients, whether it be government or private industry or a retail store, you’re going to win.”

To me, his message is that as leaders, our aim should be to do whatever we can to make people just that: Interested, dedicated and fascinated by the job we are doing.

Source: Transcript of Neil Armstrong Interview with Stephen Ambrose and David Brinkley

Neil Armstrong, first man to walk on the moon. Photo: NASA.


There’s something about life that you won’t find anywhere else.

– Ole Brunsbjerg, headmaster.

The Copenhagen Context Driven Testing meetups are becoming a tradition thanks to the work of Carsten Feilberg and Agniezka Loza. In June, I chaired a workshop in Ballerup near Copenhagen during one of the meetups. Sixteen testers shared ideas about values in software testing.

There are four or five basic human values which everyone shares: the good, the beautiful, the true and the just, with freedom relating to these four. We express and rate them differently, and they are intrinsic to us, subjective, but still shared among humans.

My personal human values shape my actions, words and thoughts, and thus also the words and expressions I use in my daily language. My language can tell you about my values and therefore something about who I am.

Workshop and procrastination

In the workshop I chaired in June, I asked the participants to pick picture cards to illustrate their thoughts about testing. Then they spoke about the pictures and about testing. We shared our words and statements on post-its, and I collected them.

I was busy at work after the workshop, and the box with the words ended up on my desk. Summer and vacation came, and I procrastinated opening it. On one of the last days of vacation, I finally read the words on the post-its.

Here are the words:

Knowledge; Information; Curiosity; Exploration; Investigation; Fight:-); Courage; Confidence; Balance; Collaboration; Evolvement; Surprise; Order; Performance; Discovering stuff, that others have not (yet) discovered; beautiness. Usability (easy/better ways of using stuff). Universatility. User experience design; Good (better) end user experience; user needs; user satisfaction; Sustainability. Creativity. Responsibility. Curiosity; Easeing somebody else’s job; Striving; Alertness; Communication; Added communication; added collaboration; information sharing; Building bridges; (Make it) fun; Excellence; Any word / anything; Getting a kick; Covering / exploring; Contradictions / paradoxes; Building trust; Finding (new) ways; Getting to know; Helping; Revealing; Avoiding losses; Whole solutions; Support descisions; Transparancy; Quality; Assessing quality; Avoid scandals; Improvement; Business needs; Filling gaps; People; To spoil illusions (own and others’); Digging for something deeper; Truth; Structure; Growth; Responsibility; Team work; Exploration; Progress; Seeing/finding possibilities; Erkendelse/Erkenntnis/realisation; Business value; Honesty.


Truth and testing

It’s interesting to note that many of the words above relate to the value ‘truth’. Testing implies curiosity, gives a kick, spoils illusions, happens through exploration, etc.

I consider ‘truth’ to be the fundamental core value in testing. Truth as a term is a complex thing, but when we use words that relate to the value ‘truth’, it’s easier to see.

As a tester, I prefer things that are true and don’t accept stories that can’t be verified. I value things that are more true over things that are less so. For example, I tend to dislike reducing truth to numbers, and prefer a more nuanced understanding of subjects.

I do have beliefs, hypotheses, and test ideas, but at the end of the day, the ideas only prove their worth when they have been evaluated.

More than truth?

But look again.

Many (most?) of the words deal with things that are not related to ‘truth’: responsibility, easing other people’s jobs, evolvement, user experiences, whole solutions, improvement, business value etc.

This reminds us that testers are not just concerned with ‘truth’, i.e. testing, but also value how testing is used and the results that the whole team or company achieves.

What does this tell me as a testing leader?

It tells me that in my leadership, I cannot focus only on testing ideas, spoiling illusions, and telling the truth if I wish to motivate and encourage our teams to work efficiently and independently in their testing.

I have to consider how the testing contributes to achieving other goals and higher goals.

I have to consider that cooperation with colleagues works well. That the product we somehow help with is something that makes users happy. That there are bottom-line results because of our testing. That disasters are prevented.

These things are not just ‘context issues’. They are core to testing leadership.

Word play

I have played with the words on the cards and come up with a mission statement for a hypothetical testing team. The mission statement somehow expresses this.

Feel free to play with the words yourself.

We are testers. We are ready to spoil illusions, both our own and others’. We have the courage to do so and generally like to be surprised. So we always dig for something deeper, a deeper understanding, a realization, an ‘erkenntnis’. We get a kick when that happens. Through testing, we seek truth, but we also feel a responsibility to make our testing useful to create user-friendly and whole solutions, and to support growth, improvement, and sustainability. Our testing thus aims to assist the creation of pleasing and aesthetic solutions, to serve other people’s needs and hopes, and in general to do good.

PS: The quote from my uncle Ole Brunsbjerg at the top of this article is to remind us that there is more to life than testing. Or anything else. Life is very rich and as humans, we value all of it.

I think most (if not all?) testers have witnessed situations like this: a new feature of the system is put into production, only to crash weeks, days or just hours later.

”Why didn’t anybody think of that?!”

Truth is, quite often, somebody did actually think about the problem, but the issue was not realised, communicated or accepted.

Below is the story about the space shuttle Challenger accident in 1986.

Disaster…

Twenty-nine years ago, the space shuttle Challenger exploded 73 seconds into the flight, killing the seven astronauts aboard.

Theoretical physicist Richard Feynman was a member of the accident commission. During the hearings he commented that the whole decision making in the shuttle project was “a kind of Russian roulette”.

The analogy is striking. Russian roulette is only played by someone willing to take the risk to die.

I don’t know anyone who deliberately wants to play Russian roulette, so why did they play that game?

Feynman explains: [The Shuttle] flies [with O-ring erosion] and nothing happens. Then it is suggested, therefore, that the risk is no longer so high for the next flights. We can lower our standards a little bit because we got away with it last time…. You got away with it but it shouldn’t be done over and over again like that.

The problem that caused the explosion was traced down to leaking seals in one of the booster rockets. On this particular launch, ambient temperatures were lower than usual, and for that reason the seals failed. The failed seals allowed very hot exhaust gases to leak out of the rocket combustion chamber, and eventually these hot gases ignited the many thousand litres of highly explosive rocket fuel.

Challenger blew up in a split second. The seven astronauts probably didn’t realise they were dying before their bodies were torn in pieces.

It was a horrible tragedy.

Chapter 6 of the official investigation report is titled: ”An accident rooted in history.”

The accident was made possible by consistent misjudgements, systematically ignored issues, poor post-flight investigations, and ignored technical reports. It was caused by three seals failing on this particular launch, but the problem was known, and the failure became possible because it was systematically ignored.

The tester’s fundamental responsibilities

As a tester, I have three fundamental responsibilities:

  1. Perform the best possible testing in the context
  2. Do the best possible evaluation of what I’ve found and learnt during testing. Identify and qualify bugs and product risks.
  3. Do my best to communicate and advocate these bugs and product risks in the organisation.

The Challenger accident was not caused by a single individual who failed to detect or report a problem.

The accident was made possible by systemic factors, i.e. factors outside the control of any individual in the programme. Eventually, everyone fell into the trap of relying on what seemed to be “good experience”. The facts should have been taken seriously.

A root cause analysis should never only identify individual and concrete factors, but also systemic factors which enabled the problem to survive into production.

Chapter 6 of the Challenger report reminds me that, when something goes wrong in production, performing a root cause analysis is a bigger task than just establishing the chain of events that led to the problem.

Many thanks to Chi Lieu @SomnaRev for taking the time to comment on early drafts of this post.

Photo of the space shuttle Challenger accident Jan. 28, 1986. Photo credit: NASA

