Thursday, December 01, 2016

Test Manager vs. Project Manager: Changing Motives and Perspectives

I am running a project as a test manager - a typical one: requirements changing every day, a committed go-live date, huge code churn, weekends included in the schedule. This project has it all - fun, frustration, excitement, emotions galore.

As a test manager, what drives my energy is finding problems and reporting them in the best possible way. Honestly, seeing more problems in the application drives me and my team. We get excited when more bugs are discovered. We celebrate every new find, and I can see my team members' faces shine. Any news of erratic behavior, an application crash, unstable code, or an environment going down makes us happy. Often I wonder: are we testers sadists?

Sitting next to me is my friend and colleague - the PM. He is a worried man. Every time someone in my team stands up and asks for a clarification, this PM's heartbeat goes up; he must be thinking - oh no... one more bug !!!  During our bug triage meetings, I proudly announce, "40 new bugs today, which takes this week's overall tally to 370; 80 of these are critical". My PM friend, after regaining his calm, says "OK - how many fixed bugs have been retested? Which areas of the application are relatively stable? What positive news can we take to our stakeholders?"

See the clear change in perspective? The PM wants to see what is working, what is working fine, what positive news can be reported. The test manager wants to boast about the new problems the testing team has found. It makes sense for testers and test managers to step into the shoes of PMs or the dev team once in a while to understand how these folks think.

While testers should not lose sight of finding problems and making sure they are reported well, collaborating with the PM, developers, and stakeholders to converge the code towards the release/go-live date can often be very useful from an overall project standpoint.

More often than not - due to changing requirements, unstable code, and challenging deadlines - everyone in the team except the testers loses sight of go-live. It is like being in a tunnel with no light at the other end. The PMs and the dev team watch with clenched fists, waiting for the testing cycle to end.

The friction between Dev, Test, and PM is often due to these differences in perspectives and motives, and a lack of communication about the big picture - the go-live date.

Dear testers - when you find yourself in such situations, show empathy towards your fellow team members. Pause sometimes and ask: can I see the project through their eyes? What are their worries, and how can I help?

This will go a long way towards good team bonding, and you will be called a "mature tester".

Sunday, November 27, 2016

Tester or Laborer?

A friend of mine sent me a link to an article on PMP and project managers that beautifully brings out an aspect of our profession - testing. Are we knowledge workers paid for our expertise, or laborers?

How does what Stuart is saying about PMP and project management apply to testing? I believe that, more than certification, the testing profession is hurt by the way we define testing poorly and adopt a model of testing that eliminates the need for skill and focuses on mindless repetition of documented procedures.

Time to reflect. If we define and accept a definition of testing that systematically undermines the skill element and focuses on process, tools, metrics, and so on, there is no doubt that we will become laborers.

Is testing rule based?

How much of good testing is rule based?

Saturday, July 09, 2016

Testers are human ... so are Programmers

One important aspect of us humans as testers or programmers is how our day-to-day happenings impact our work at the office. While this is no different for software folks than for any other profession that requires "presence of mind", being occupied with thoughts about the past or the future can lead "knowledge workers" to make mistakes and/or forget things.

An incident that occurred this evening made me realize how important it is for testers to be "present" while testing, so that they do not miss things or make mistakes. I went to a nearby shopping mall with my family. While entering, I had an argument with a fellow who, while reversing in the parking lot, happened to hit my car. With that incident fresh in my mind, I passed the parking ticket counter, collected the ticket (my mind still full of the car incident from a little while ago), and gave it to my wife. I generally have a designated place in the car where I keep such tickets. This time my wife kept the ticket in a place that I generally cannot reach from the driver's seat. I did not mindfully register where it went, nor did my wife clearly remember where she had kept it. A few hours passed. While returning, my wife and kids went to a nearby place and asked me to get the car and pick them up. While walking back to the parking lot, I was confused about where the parking ticket was - the thought of it was all over my mind. When I reached the car, I searched my usual places and did not find the ticket. I panicked at the prospect of paying almost a full day's parking fee instead of a few hours'. I did a few more rounds of checking around the driver's seat, the usual places I keep the ticket, and the passenger seat - no ticket. Finally I called my wife to check if the ticket was with them. I was told that the ticket should be in the car. I gave up, paid the full-day fare, and came out of the parking lot. When my wife got into the car, she reached into the glove box on the passenger side and handed the ticket to me.

Why did it not occur to me to check the glove box? Why did my blocked mind not contemplate the various possible locations for the ticket - after all, a car is not such a big place? I guess two things happened. One, the argument with the other driver at the mall entrance filled my mind, so I did not mindfully register where my wife kept the ticket. Two, I gave up easily before exploring my options.

What did I learn from this incident that I can apply to testing?

Good testing is about having a wide range of test ideas to cover the mistakes that other folks make while constructing software. Programmers, business analysts, and others can make mistakes like I did. I urge testers to be mindful while testing and designing test cases, and to watch out for the mistakes and misses that might lead to bugs. Every now and then, put your mental abilities of test-idea generation to the test, and develop the skill of looking for misses and mistakes. Practice mindfulness and be vigilant at all times. This helps in your personal life outside the office as well - you yourself become less likely to make mistakes.
This will save time, money, and rework, and will give you a peaceful life. What's more - you can do the same for others.

The role of human emotions in software development and testing has been a point of discussion at many tester meets and conferences. I guess the larger software community needs to acknowledge this and develop measures to be mindful.

I suggest mindfulness meditation and concentration exercises to testers with a high level of mental activity (more often than not, noise) - like me. Being mindful and vigilant at all times now seems to be a skill and capability for testers.

Sunday, May 22, 2016

Chocolate and Prayer - An Anti-Pattern for BDD

In a school, there was a daily morning practice, a ritual, for the first graders. The kids would assemble and sit in a designated place as they came in. They needed to say a prayer with closed eyes. When they finished the prayer, each kid would find a chocolate bar in front of her. The kids would happily take it, eat it, and proceed to their classes. This ritual ran for several years. The kids thought that the chocolate was a prize they earned for saying prayers, and none questioned the ritual. Years passed by. The prayer became shorter; the kids got their share of the prize - the chocolate - nonetheless. One day, the kids assembled in their usual place and were preparing to say the prayer when they saw chocolate bars already in front of each of them. With no one around, a few kids took the initiative and grabbed the chocolate, while a few sincere ones proceeded with the prayer as usual. After a few days, following the law of diminishing returns, these sincere kids too started to skip the prayer and focused only on eating the chocolate.
After several years of this ritual, one curious kid, unable to contain his curiosity about why they got a chocolate every morning (note - the prayer is long forgotten), asked his friend. "No one knows why; my elder brother tells me there used to be some prayer before they got their chocolate," said the friend, not so interested in the question.

Now, imagine this was a multi-year social experiment conducted by the school authorities in collaboration with educationists - what would you infer? You might say that initially the kids got their prize after the prayer (a good and recommended activity to start the school day), and when the chocolate was given before any prayer, the kids simply forgot or dropped the idea of the prayer. Economists would call this an "incentive" to elicit a specific behavior from a group of people.

Let us come back to our world and try to map the prayer and the chocolate to BDD (behavior-driven development) and automation. The original proponents of BDD wanted it to solve certain problems, and automation apparently turned out to be the chocolate - the prize that follows doing BDD.

As I understand it, BDD was intended to bring business analysts into the party, develop a common vocabulary between Dev, BAs, testers, and stakeholders, and address some of the perceived problems of BDD's close cousin - TDD, test-driven development. Dan North explains the background and history of how he arrived at the idea of BDD. As Dan narrates, the practice of BDD proposes to focus on the behavior (a change from the keywords "test" or "requirement") that software should demonstrate for a feature the client wants. In order to develop a common vocabulary, BDD needed to restrict the representation of this behavior to a set of keywords, and the behavior had to be expressed in a non-technical language (remember, they needed to bring non-technical BAs into the party). Thus, using a language like Gherkin - a type of DSL (domain-specific language) - BDD ushered in a practice where the intended software behavior and a corresponding scenario or example were represented in a format like the one below:

As [Role/Stakeholder]
I want to [A feature or behavior]
So that [business outcome that is worth paying for]

Given [Initial or Preconditions]
When [Action performed to invoke the feature]
Then [Expected result that software needs to demonstrate]

As Liz Keogh, one of the early collaborators with Dan on BDD's development, says, the key challenge BDD was intended to solve (broadly, among other things) was to facilitate and improve communication, discussion, and debate about what the behavior should be, among developers, testers, business analysts, and stakeholders.

That was the prayer - BDD's objective of effective communication.

After looking at the format of a BDD scenario/user story, full of keywords, a smart developer would have thought: "I can parse this and generate skeleton code which can later be implemented as an automated test". This is the chocolate that was promised to everyone in the team. Thus a strong distraction from the original objective of BDD was born, in the form of automated tests generated from BDD stories/scenarios.
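To make the "parse this and generate skeleton code" idea concrete, here is a minimal, hypothetical sketch in Python - not JBehave or Cucumber, just an illustration of the kind of parsing a developer might do: read the Given/When/Then lines of a scenario and emit stub step functions to be filled in later as automated tests.

```python
import re

def generate_skeletons(scenario_text):
    """Turn each Given/When/Then line into a stub step-function definition."""
    stubs = []
    for line in scenario_text.strip().splitlines():
        match = re.match(r"\s*(Given|When|Then)\s+(.*)", line)
        if not match:
            continue  # ignore non-step lines (scenario titles, blanks)
        keyword, text = match.groups()
        # Build a function name from the step text, e.g. step_given_an_account...
        name = re.sub(r"\W+", "_", f"{keyword} {text}".lower()).strip("_")
        stubs.append(f"def step_{name}():\n    pass  # TODO: implement\n")
    return stubs

scenario = """
Scenario: Withdraw cash
Given an account with a balance of 100
When the user withdraws 40
Then the balance should be 60
"""

for stub in generate_skeletons(scenario):
    print(stub)
```

The mechanical part is trivially easy to automate - which is exactly why, as the rest of this post argues, the generated stubs so easily steal attention from the conversation the scenario was meant to provoke.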

The theme of automation attached to BDD became so powerful, with loads of frameworks such as JBehave, Cucumber, and others, that it overshadowed everything else related to BDD. At some point, doing BDD meant using JBehave or Cucumber and creating automated tests.

The powerful distraction of automation (the chocolate from our story) quickly hijacked the communication and discussion about behavior (the prayer), and BDD practitioners started doing only automation. This is the anti-pattern that I wanted to highlight in this post. I have seen several instances where testers, developers, and BAs worried only about which tool or framework to use for BDD and which automation framework/library to adopt. The stakeholders, on their part, were sold on the idea that they would get "executable specifications" with a dual benefit: a representation of the behavior and an automated test. They could not ask for more.

Alas, in the process, BAs, testers, and developers - instead of sitting together and discussing what "Given" should lead to what "Then", or what "When" leads to which "Then"s - sat in silos and happily created loads of BDD stories, and some tester or developer jumped straight to implementing automation.

I am not complaining about the automation embedded in BDD per se - I would like people to reinstate the prayer, the focus on cross-functional collaboration. You can have your chocolate (automation) anyway.

Time to read Dan's post introducing BDD, and also Liz's posts on the aspect of communication?

Wednesday, April 08, 2015

What do you call something - Names matter !!!

In recent times, I came across two instances where the names and phrases we use in our daily lives as software people - programmers and testers - make a huge impact on what we do. The names we use create objects and actions larger than life.

Unit testing is something that only developers do
A colleague of mine recently demonstrated to me a testing framework (some code/library that drives some portion of an application under test) as a unit testing framework. I applied some knowledge I had gained by reading about unit testing to this framework and realized that it did not do or support unit testing. A unit test, by definition, is self-contained and attempts to validate the logic supposedly implemented by the piece of code under test, with all other dependencies mocked out. I confronted my colleague about why he was calling it a unit testing framework. His answer surprised me. While agreeing with the definition of a unit test that I quoted here, he said, "If I do not use the phrase unit testing here, developers would not use this, saying it is the testers' job". Here the term "unit testing" is used inappropriately to effect a change in the behavior of programmers/developers. I can sympathize with my colleague - such is the power of the names, words, and phrases that we have created.
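To illustrate the definition quoted above, here is a small sketch using Python's standard unittest and unittest.mock; the function and its "tax service" dependency are hypothetical, invented only for this example. The test is self-contained: the external dependency is mocked out, so only the logic of the code under test is exercised.

```python
import unittest
from unittest.mock import Mock

# Hypothetical code under test: computes an invoice total by consulting an
# external tax service - the dependency we will mock out in the unit test.
def invoice_total(amount, tax_service):
    return amount + tax_service.tax_for(amount)

class InvoiceTotalTest(unittest.TestCase):
    def test_total_includes_tax(self):
        # Replace the real dependency with a mock that returns a fixed tax,
        # so the test validates only invoice_total's own logic.
        tax_service = Mock()
        tax_service.tax_for.return_value = 18.0
        self.assertEqual(invoice_total(100.0, tax_service), 118.0)
        tax_service.tax_for.assert_called_once_with(100.0)
```

A framework that drives a live application through its real dependencies, whatever its merits, does not fit this definition - which was precisely the point of disagreement with my colleague.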

Behavior is a more useful word than Test
Dan North, in his introductory article on behavior-driven development, says people misunderstood the word "test" in TDD. He observed that removing the word "test" from TDD and replacing it with "behavior" made the whole activity more acceptable to programmers. While there is more to BDD than TDD minus the word "test", this instance struck me as yet another case of how names create effects with farther-reaching impact than we seem to realize. I guess the very name "test" makes some programmers think "not my job". All of a sudden the wall between dev and test goes up, and we have stereotypical developers and testers out there.

We need to be more careful while creating or using words and phrases - development, testing, unit testing, etc. are a few examples of names creating practices that inhibit effective collaboration between the various functions in a software team. Who said "what is in a name"? We know now that there is something in a name !!!

Saturday, December 13, 2014

Being away from blogging

Ten years since my first post - it has been a long journey. Some years were very active with many posts, and some very lean - like this year. I want to avoid creating the weird record of having exactly one post in both the 1st and the 10th year. This is not a happy state to be in.

Work-wise, this has been a very hectic year for me. I get very little time (including weekends) to reflect and write. Some of it is attributable to writer's block, and some of it to the puzzle of what to write.

Recently I spoke at the QAI STC conference on "Feynmanism for testers" - a phrase to indicate the "Feynman" way of thinking for testers. I had about 30 minutes to cover an idea like this, and I surely struggled to do justice to the topic. However, I had some very interesting discussions and met many nice people after the talk. So my talk did touch a few of those people, who overcame their hesitation to come up to me and talk.

It is nice to see many of these conferences posting their talks on YouTube. While I wait for this year's STC video to appear, you can check out my 2012 talk here.

I am planning to start small 3-5 minute video podcast sessions on testing topics as an alternative way to keep this blog going. One very personal reason for this is to improve my presentation skills. Watching yourself give a talk can teach you a lot about how to improve it.

Let us see how this goes... I thank my readers for the interest they have shown in me.

Sunday, June 22, 2014

There is no such thing called Agile Testing

I have long struggled to find a reasonable meaning and definition for the phrase "Agile Testing". So far I have been unsuccessful in finding one definition that can stand my scrutiny. Probably there is no such thing as "Agile Testing". Possibly, yes.

Before I proceed, let me make a distinction between "Agile" and "agile". James Bach has long suggested this difference. The word "agile" is a dictionary word meaning "swift", "quick" - when applied to software, it simply means what it means in the dictionary. Good and reasonable software people have been attempting to be "agile" in their project contexts, as demanded by stakeholders, since long before the industry invented the buzzword "Agile" (note the capital "A" here). The word "Agile" is more of a marketing term invented to describe a ceremony-laden model of developing software. It promises continuous, small, iterative, quickly deployable pieces of software - straight to market. It is the fashion of the day, often seen as a panacea for all the problems of slow, buggy, boring, year-long projects draining millions of dollars, where the first 4-6 months would be spent agreeing upon the requirements or the initial design. In today's world, the market demands speed and flexibility from businesses making or using software - the days of big upfront design and year-long software projects are ending.

You can consider "agile" as the drinking water that you get from the tap, and "Agile" as your favorite brand of mineral water, bottled specially and sold for a price with the promise of a certain level of purity.

Also, let me define, for the purpose of this post, what testing is. Testing is an open-ended activity of evaluation, questioning, investigation, and information gathering around software and its related artifacts. It is typically done to inform stakeholders about potential problems in the product and to advise them about risks of failure, as quickly and as cheaply as possible. There is NO one "right" (certified) way to do testing, and no one right time in the project lifecycle to start it. The context of the project, defined by the people in it including the stakeholders, dictates the form and essence of testing. Testing does not assure ANYTHING; it informs (to the best of the tester's ability and intent) about problems in the software that can threaten its value. Given the constraints of time and money, testing (even though it is an open-ended evaluation/investigation activity) constantly seeks to optimize its course, to find problems faster and report them in the right perspective. This requires testers to be good, quick learners, skeptics, and thinkers, with a diverse set of skills in business, technology, economics, science, philosophy, and maths/statistics, among others. In some sense testing is like a sport or a performing art that becomes better with practice and improvisation. A professional tester needs to practice (meaning do) testing, like a professional musician or sportsperson.

Good testing thus -

  • Focuses on working closely with programmers
  • Uses tools/automation to perform tasks that are best done by a computer
  • Favors a lightweight bug-tracking process, primarily focused on a faster feedback cycle to developers and speedy fixing of important bugs (important to stakeholders)

When books, blog posts, articles, and conference presentations talk about "Agile testing", it is always in contrast with so-called "traditional testing". Any meaning or interpretation of traditional testing assumes a stereotypical "traditional" tester. So, let me attempt to define one.

A traditional tester is one who has worked on a waterfall software project as part of a dedicated (independent) testing team. There would be a wall between the development and testing teams, and code to be tested would be thrown over the wall. Testers used heavily documented test cases and relied on elaborate requirement documentation. Bugs were reported in a formal bug tracking system, and it was a tester's pride to defend every bug logged. Testers resisted changes to requirements in the middle of the project, insisting that such changes would force them to rework test cases and retest the application, and hence add to the overall cost of the project. Testers assumed the role of quality police and took pride in being the final arbiters of the "ship" decision.

For the uninitiated, a few examples of what (I believe) is NOT Agile testing:

  • Writing unit tests in an xUnit framework - you are not testing
  • Doing xDD - there is a host of 3/4-letter acronyms along the lines of /something/-driven development. As many agile folks admit, these are development methodologies - let me not go deep into explaining why they are not related to testing.
  • If you are working with continuous integration tools and your automation is kicked off in response to a new build/check-in - you are not testing
  • If you are writing stories or participating in scrum meetings - you are not testing

Finally, here are three reasons why I believe there is no such thing called "Agile Testing":

Agile Testing people do not talk about testing skills

If you know what testing is and you do it, it is obvious that you know what skills you need and how to improve them. Agile people are often confused about what testing is and what it is not. Hence you cannot expect them to articulate testing skills. You typically hear things like "collaboration", "programming skills", "think like a customer", etc. I strongly feel that these folks have no clue about testing or testing skills. I bet they are just making it up. Software testing is a special skill in itself. Many people study and practice it as a profession and a lifetime pursuit. There are testing conferences happening all over the world. There is a growing body of knowledge about the craft of software testing.

It is sad that Agile folks have no idea about these skills. All they talk about is how developers or team members in Agile projects work and what they believe. This is what really bothers me about the idea of Agile testing: the idea is badly articulated.

"Something that everyone in the team does" - that is how Agile folks define testing. While everyone in a project team owning responsibility for the project's success is a noble and unquestionable idea, making testing everyone's responsibility is shooting yourself in the foot. Very soon we get into the "everybody-anybody-somebody" type of problem. Expecting developers to excel in their bit of testing is OK; expecting business analysts/story writers to capture requirements well is fine too. But making everyone responsible for testing means turning a blind eye to the skills required of professional testers. This idea of everyone-does-testing is rampant in Agile teams. Why call this testing by a special name, "Agile testing"? In terms of roles, since everyone does testing, you may not have a designated role called tester.

Agile Testing is different from Traditional Testing – but not quite

Inevitably, I now need to introduce the term "traditional testing". Agile folks would argue that the testing that happens in an Agile project is different from "traditional testing" - they point to testing against user stories as opposed to detailed requirements. Wow - if your testing basis is a story instead of a detailed requirement document, you are doing Agile testing. But how different is that?

Much of the trouble for testers transitioning to agile projects stems from their dominant beliefs about testing. For someone who worked in a typical outsourced IT environment, it was difficult to work with stories instead of elaborate requirement documents. It would be challenging to work closely with developers/programmers and speak their language, when all along they had worked with a wall between them and the development team. Automation, for these testers, was something along the lines of QTP or some other GUI automation tool, whereas agile teams used the likes of Selenium and API or unit testing.

  • Many testers cannot work with leaner documentation (requirements)
  • When requirements constantly change, they are thrown off track - they cannot test without test cases
  • The wall between dev and test is gone, so a tester is expected to work directly with dev. Some are intimidated by this possibility
  • Testers who are familiar with GUI automation tools like QTP are suddenly exposed to tools that work under the skin - the expectation is to understand and work with formal programming languages. This is terrifying to many testers.

So, there is no such thing called "Agile Testing", but there is "good testing". If you are a "good tester" and are asked to work on an Agile project, what do you do? Fit yourself into the project context and keep doing the good testing that you always did. Do not get distracted by the jargon and marketing terms that you might find people and consultants throwing around.

I think there are some ideas that I have not touched upon - the Agile testing quadrants, and why exploratory testing is such a hit with "Agile" people - well, that is for part 2. Let's see how this pans out.