Wednesday, 25 April 2018

How do you choose a test automation tool?

When you’ve agreed what you want to automate, the next thing you’ll need to do is choose a tool. As a tester, I’ve observed many conversations from a distance between managers and people selecting a tool, and most focused on only one aspect.

Cost.

Managers do care about money, don’t get me wrong. But choosing a tool based on cost alone is foolish, and I believe that most managers will recognise this.

Instead of making cost the primary decision-making attribute, where possible defer the question of cost until you’re asking for investment. Your role as the testing expert in a conversation about choosing a tool is to advocate for the other aspects of the tool, beyond cost, that are important to consider.

Support

You want to choose a tool that is well supported so that you know help is available if you encounter problems.

If you’re purchasing a tool from a vendor, is it formally supported? Will your organisation have a contractual arrangement with the tool vendor in the event that something goes wrong? What type of response time can you expect when you encounter issues, or have questions? Is the support offered within your time zone? In New Zealand, there is rarely support offered within our working hours, which makes every issue an overnight feedback loop. This has a big impact in a fast-paced delivery environment.

If it’s an open source tool, or a popular vendor tool, how is it supported by documentation and the user community? When you search for information about the tool online, do you see a lot of posts? Are they from people across the world? Are their posts in your language? Have questions posted in forums or discussion groups been responded to? Does the provided documentation look useful? Can you gauge whether there are people operating at an expert level within the toolset, or is everything you see online about people experimenting or encountering problems at an entry level?

The other aspect of support is to consider how often you’ll need it. If the tool is well designed, hopefully it’s relatively intuitive to use. A lack of online community may mean that the tool itself is of a quality where people don’t need to seek assistance beyond it. They know what to do. They can reach an expert level using the resources provided with the tool.

Longevity

How long has the tool been around? There’s no right or wrong with longevity. If you’re extending a legacy application you may require a similarly aged test tool. If you’re working in a new JavaScript framework you may require a test tool that’s still in beta. But it’s important to go into either situation with eyes open about the history of the tool and its possible future.

Longevity is not just about the first release, but how the tool has been maintained and updated. How often is the tool itself changing? When did the last update occur? Is it still under active development? As a general rule, you probably don’t want to pick a tool that isn’t evolving to test a technology that is.

Integration

Integration is a broad term and there are different things to consider here.

How does the tool integrate with the other parts of your technology? Depending on what you’re going to test, this may or may not be important. In my organisation, our web services automation usually sits in a separate code base to our web applications, but our Selenium-based browser tests sit in the same code base as the product. This means that it doesn’t really matter what technology our web services automation uses, but the implementation of our Selenium-based suite must remain compatible with the decisions and direction of developers and architects.

What skills are required to use the tool and do they align with the skills in your organisation? If you have existing frameworks for the same type of testing that use a different tool, consider whether a divergent solution really makes sense. Pause to evaluate how the tool might integrate with the people in your organisation, including training and the impact of shifting expectations, before you look further afield.

Integration is also about whether the test tool will integrate with the development environment or software development process that the team use. If you want the developers in your team to get involved in your test automation, then this is a really important factor to consider when choosing a tool. The smaller the context switch for them to contribute to test code, the better. If they can use the same code repository, the same IDE on their machine, and their existing skills in a particular coding language, then you’re much more likely to succeed in getting them active in your solution. Similarly, if the tool can support multiple people working on tests at the same time, merge cleanly from multiple authors, and offer a mechanism for review or feedback, these are all useful points to consider when integrating the tool into your team.

Analyse

In a conversation with management about choosing a tool, your success comes from your ability to articulate why a particular tool is better than another, not just in its technical solution. Demonstrate that you’ve thought broadly about the implications of your choice, and how it will impact your organisation now and in the future.

It’s worth spending time to prepare for a conversation about tools. Look at all the options available in the bucket of possible tools and evaluate them broadly. Make cost the lesser concern, by speaking passionately about the benefits of your chosen solution in terms of support, longevity and integration, along with any other aspects that may be a consideration in your environment.

You may not always get the tool that you’d like. But a decision made by a manager based on information you’ve shared, drawn from a well-thought-through analysis of the options available, is a lot easier to accept than a decision made blindly or solely on cost.

Wednesday, 11 April 2018

Setting strategy in a Test Practice

Part of my role is figuring out where to lead testing within my organisation. When thinking strategically about testing I consider:

  • how testing is influenced by other activities and strategies in the organisation,
  • where our competitors and the wider industry are heading, and
  • what the testers believe to be important.

I prefer to seek answers to these questions collaboratively rather than independently and, having recently completed a round of consultation and reflection, I thought it was a good opportunity to share my approach.

Consultation

In late March I facilitated a series of sessions for the testers in my organisation. These were opt-in, small group discussions, each with a collection of testers from different parts of the organisation.

The sessions were promoted in early March with a clear agenda. I wanted people to understand exactly what I would ask so that they could prepare their thoughts prior to the session if they preferred. While primarily an information gathering exercise, I also listed the outcomes for the testers in attending the sessions and explained how their feedback would be used.

***
AGENDA
Introductions
Round table introduction to share your name, your team, whereabouts in the organisation you work

Transformation Talking Points
8 minute time-boxed discussion on each of the following questions:
  • What is your biggest challenge as a tester right now?
  • What has changed in your test approach over the past 12 months?
  • How are the expectations of your team and your stakeholders changing?
  • What worries you about how you are being asked to work differently in the future?
  • What have you learned in the past 12 months to prepare for the future?
  • How do you see our competitors and industry evolving?
  
If you attend, you have the opportunity to connect with testing colleagues outside your usual sphere, learn a little about the different delivery environments within <organisation>, discover how our testing challenges align or differ, understand what the future might look like in your team, and share your concerns about preparation for that future. The Test Coaches will be taking notes through the sessions to help shape the support that we provide over the next quarter and beyond.
***

The format worked well to keep the discussion flowing. The first three questions targeted current state and recent evolution, to focus testers on present and past. The second three questions targeted future thinking, to focus testers on their contribution to the continuous changes in our workplace and through the wider industry. Every session completed on time, though some groups had more of a challenge with the 8 minute limit than others!

Just over 50 testers opted to participate, which is roughly 50% of our Test Practice. There were volunteers from every delivery area that included testing, which meant that I could create the cross-team grouping within each session as I intended. There were ten sessions in total. Each ran with the same agenda, a maximum of six testers, and two Test Coaches.

Reflection

From ten hours of sessions with over 50 people, there were a lot of notes. The second phase of this exercise was turning this raw input into a set of themes. I used a series of questions to guide my actions and prompt targeted thinking.

What did I hear? I browsed all of the session notes and pulled out general themes. I needed to read through the notes several times to feel comfortable that I had included every piece of feedback in the summary.

What did I hear that I can contribute to? With open questions, you create an opportunity for open answers. I filtered the themes to those relevant to action from the Test Practice, and removed anything that I felt was beyond the boundaries of our responsibilities or that we were unable to influence.

What didn't I hear? The Test Coaches regularly seek feedback from the testers through one-on-one sessions or surveys. I was curious about what had come up in previous rounds of feedback that wasn't heard in this round. This reflected success in our past activities or indicated changes elsewhere in the organisation that should influence our strategy too.

How did the audience skew this? Because the sessions were opt-in, I used my map of the entire Test Practice to consider whose views were missing from the aggregated summary. There were particular teams who were represented by individuals and, in some instances, we may seek a broader set of opinions from that area. I'd also like to seek non-tester opinion, as in parts of the organisation there is shared ownership of quality and testing by non-testers that makes a wider view important.

How did the questions skew this? You only get answers to the questions that you ask. I attempted to consider what I didn't hear by asking only these six questions, but I found that the other Test Coaches, who didn't write the questions themselves, were much better at identifying the gaps.

From this reflection I ended up with a set of about ten themes that will influence our strategy. The themes will be present in the short-term outcomes that we seek to achieve, and the long-term vision that we are aiming towards. The volume of feedback against each theme, along with work currently in progress, will influence how our work is prioritised.

I found this whole process energising. It refreshed my perspective and reset my focus as a leader. I'm looking forward to developing clear actions with the Test Coach team and seeing more changes across the Test Practice as a result.

Friday, 16 February 2018

How do you decide what to automate?

When you start to think about a new automated test suite, the first conversations you have will focus on what you're going to automate. Whether it's your manager that is requesting automation or you're advocating for it yourself, you need to set a strategy for test coverage before you choose a tool.

There are a lot of factors that contribute to the decision of what to automate. If you try to decide your scope in isolation, you may get it wrong. Here are some prompts to help you think broadly and engage a wide audience.

Product Strategy

The strategy for the product under test can heavily influence the scope of the associated test automation solution.

Imagine that you're part of a team who are releasing a prototype for fast feedback from the market. The next iteration of the product is likely to be significantly different to the prototype that you're working on now. There is probably a strong emphasis on speed to release.

Imagine that you're part of a team who are working on the first release version of a product that is expected to become a flagship for your organisation over the next 5 - 10 years. The product will evolve, but in contrast to the first team it may be at a steadier pace and from a more solid foundation. There is probably a strong emphasis on technical design for future growth.

Imagine that you're part of a team who will deliver the first product in a line of new products that will all use the same technology and software development process. An example may be a program of work that switches legacy applications to a new infrastructure. The goal may be to shift each piece in a like-for-like manner. There is probably a strong emphasis on standard practices and common implementation.

What you automate and the way that you approach automation will vary in each case.

You need to think about, and ask questions about, what's happening at a high-level for your product so that you can align your test automation strategy appropriately. You probably won't invest heavily in framework design for a prototype. Your automation may be shallow, quick and dirty. The same approach would be inappropriate in other contexts.

At a slightly lower level of product strategy, which features in your backlog are most important to the business? Are they also the most important to your customers? There may be a list of features that drive sales and a different set that drive customer loyalty. Your automation may target both areas or focus only on the areas of the application that are most important to one set of stakeholders.

Can you map the upcoming features to your product to determine where change might happen in your code base? This can help you prepare appropriate regression coverage to focus on areas where there is a high amount of change.

Risk Assessment

When testers talk about the benefits of automation, we often focus on the number of defects it finds or the time that we’re saving in development. We describe the benefits in a way that matters to us. This may not be the right language to talk about the benefits of automation with a manager.

According to the 2016 World Quality Report, the number one reason that managers invest in testing is to protect the corporate image. Testing is a form of reputation insurance. It’s a way to mitigate risk.

When you’re deciding what to automate, you need to think about the management perspective of your underlying purpose and spend your effort in automation in a way that directly considers risk to the organisation. To do this, you need to engage people in a conversation about what those risks are.

There are a lot of resources that can help support you in a conversation about risk. James Bach has a paper titled Heuristic Risk-Based Testing that describes an approach to risk assessment and offers common questions to ask. I've used this as a resource to help target my own conversations about risk.

Both the strategy and the risk conversation are removed from the implementation detail of automation, but they set the scene for what you will do technically and your ongoing relationships with management. These conversations help you to establish credibility and build trust with your stakeholders before you consider the specifics of your test automation solution. Building a strong foundation in these working relationships will make future conversations easier.

How do you consider what to automate at a more technical level?

Objective Criteria

As a general rule, start by automating a set of objective statements about your application that address the highest risk areas.

Objective measures are those that can be assessed without being influenced by personal feelings or opinion. They’re factual. The kind of measure that you can imagine in a checklist, where each item is passed or failed. For example, on a construction site a hard hat must be worn. When you look at a construction worker it is easy to determine whether they are wearing a hat or not.

On the flip side, subjective measures are where we find it difficult to articulate the exact criteria that we’re using to make a decision. Imagine standing in front of a completed construction project and trying to determine whether it should win the “Home of the Year”. Some of the decision making in this process is subjective.

If you don’t know exactly how you’re assessing your product, then a tool is unlikely to know how to assess the product. Automation in a subjective area may be useful to assemble a set of information for a person to evaluate, but it can’t give you a definitive result like an objective check will do.

A common practice to support conversations about objective criteria is the Gherkin syntax for creating examples using Given, When, Then. A team that develop these scenarios collaboratively through BDD, or Three Amigos, etc. are likely to end up with a more robust set of automation than a person who identifies scenarios independently.
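To make the idea concrete, here is a minimal sketch of how an objective Given, When, Then scenario could map onto an executable check. The ShoppingCart class and its behaviour are hypothetical, invented purely for illustration; in a real suite the steps would drive your actual product.

```python
# A sketch of an objective scenario expressed as an executable check.
# ShoppingCart is a hypothetical stand-in for a product feature.

class ShoppingCart:
    """Minimal stand-in for a feature under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    # Given an empty shopping cart
    cart = ShoppingCart()

    # When the customer adds a book costing 25.00
    cart.add("book", 25.00)

    # Then the cart total is 25.00 -- an objective, pass/fail outcome
    assert cart.total == 25.00
```

The value is less in the code than in the conversation that produces the scenario: each step states a fact that the whole team can agree is either true or false.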

Repetitive Tasks

Repetition can indicate an opportunity for automation, both repetition in our product code and repetition in our test activities.

In the product, you might have widgets that are reused throughout an application e.g. a paginated table. If you automate a check that assesses that widget in one place, you can probably reuse parts of the test code to assess the same widget in other places. This creates confidence in the existing use of the widgets, and offers a quick way to check behaviour when that same widget is used in future.
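As a rough sketch of that reuse, the check below captures objective rules for a paginated table once, then applies them wherever the widget appears. The dictionary structure and the rules are assumptions made up for this example; in a real suite the same idea might sit behind a Selenium page object instead.

```python
# A reusable check for a hypothetical paginated table widget.
# The table data shapes below are invented for illustration.

def check_paginated_table(table, page_size):
    """Objective rules that apply to any instance of the widget.

    Returns a list of failure descriptions; an empty list means the
    widget passed every rule.
    """
    failures = []
    if len(table["rows"]) > page_size:
        failures.append("more rows shown than the page size allows")
    if table["current_page"] > table["total_pages"]:
        failures.append("current page is beyond the last page")
    return failures

# The same check reused on two different screens
orders_table = {"rows": ["a", "b"], "current_page": 1, "total_pages": 5}
users_table = {"rows": ["x"], "current_page": 7, "total_pages": 3}

assert check_paginated_table(orders_table, page_size=10) == []
assert check_paginated_table(users_table, page_size=10) == [
    "current page is beyond the last page"
]
```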

In your testing, you might perform repetitive data setup, execute the same tests against many active versions of a product, or repeat tests to determine whether your web application can be used on multiple browsers. These situations offer an opportunity to develop automation that can be reused for many scenarios.

By contrast, if we have a product that is unique, or an activity that’s a one-off, then we may not want to automate an evaluation that will only happen once. There are always exceptions, but as a general rule we want to seek out repetition.

There is benefit to collaboration here too, particularly in identifying repetitive process. Your colleagues may perform a slightly different set of tasks to what you do, which could mean that you're unaware of opportunities to automate within their area. Start conversations to discover these differences.

Opportunity Cost

When you automate, it’s at the expense of something else that you could be doing. In 1998 Brian Marick wrote a paper titled When should a test be automated?. In the summary at the conclusion of the paper, he says:

“The cost of automating a test is best measured by the number of manual tests it prevents you from running and the bugs it will therefore cause you to miss.”

I agree this is an important consideration in deciding what to automate and believe it is often overlooked.

When you decide what to automate, it’s important to discuss what proportion of time you should spend in automating and the opportunity cost of what you're giving up. Agreeing on how long you can spend in developing, maintaining, and monitoring your automation will influence the scope of your solution.

Organisational Change

So far we've considered five factors when deciding what to automate: product strategy, risk, objectivity, repetition, and opportunity cost. All five of these will change.

Product strategy will evolve as the market changes. Risks will appear and disappear. As the features of the product change, what is objective and repetitive in its evaluation will evolve. The time you have available, and your opportunity costs, will shift.

When deciding what to automate, you need to consider the pace and nature of your change.

Is change in your organisation predictable, like the lifecycle of a fish? You know what steps the change will take, and the cadence of it. In 2 weeks, this will happen. In 6 months, that will happen.

Or is change in your organisation unpredictable, like the weather? You might think that rain is coming, but you don’t know exactly how heavy it will be or when it will happen, and actually it might not rain at all.

The pace and nature of change may influence the flexibility of your scope and the adaptability of your implementation. You can decide what to automate now with a view to how regularly you will need to revisit these conversations to keep your solution relevant. In an environment with a lot of change, it may be less important to get it right the first time.

Collaboration

Deciding what to automate shouldn’t be a purely technical decision. Reach out for information from across your organisation. Establish credible relationships with management by starting conversations in their language. Talk to your team about objective measures and repetitive tasks. Discuss time and opportunity costs of automation with a wide audience. Consider the pace of change in your environment.

It is important to agree the scope of automation collaboratively.

Collaboration doesn’t mean that you call a meeting to tell people what you think. It isn't a one-way information flow. Allow yourself to be influenced alongside offering your opinion. Make it clear what you believe is suited to automation and what isn’t. Be confident in explaining your rationale, but be open to hearing different perspectives and discovering new information too.

You don't decide what to automate. The decision does not belong to you individually. Instead you lead conversations that create shared agreement. Your team decide what to automate together, and regularly review those choices.

Tuesday, 30 January 2018

A stability strategy for test automation

As part of the continuous integration strategy for one of our products, we run stability builds each night. The purpose is to detect changes in the product or the tests that cause intermittent issues, which can be obscured during the day. Stability builds give us test results against a consistent code base during a period of time that our test environments are not under heavy load.

The stability builds execute a suite of web-based user interface automation against mocked back-end test data. They run to a schedule and, on a good night, we see six successful builds:


The builds do not run sequentially. At 1am and 4am we trigger two builds in close succession. These execute in parallel so that we use more of our Selenium Grid, which can give early warning of problems caused by load and thread contention.

When things are not going well, we rarely see six failed builds. As problems emerge, the stability test result trend starts to look like this:


In a suite of over 250 tests, there might be a handful of failures. The number of failing tests, and the specific tests that fail, will often vary between builds. Sometimes there is an obvious pattern e.g. tests with an image picker dialog. Sometimes there appears to be no common link.
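To illustrate how varying failures between builds can be triaged, here is a sketch that separates consistent failures from intermittent ones across a night of stability builds. The build results and test names are invented for the example.

```python
# Separating consistent failures from intermittent ones across a night
# of stability builds. Each build is represented as a set of failing
# test names; the data here is invented for illustration.

def classify_failures(builds):
    """builds: non-empty list of sets of failing test names, one per build."""
    failed_somewhere = set().union(*builds)
    failed_everywhere = set.intersection(*builds)
    return {
        "consistent": failed_everywhere,                        # likely a real break
        "intermittent": failed_somewhere - failed_everywhere,   # likely flaky
    }

night = [
    {"test_image_picker", "test_login"},
    {"test_login"},
    {"test_image_picker", "test_login", "test_pagination"},
]
result = classify_failures(night)
assert result["consistent"] == {"test_login"}
assert result["intermittent"] == {"test_image_picker", "test_pagination"}
```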

Why don't we catch these problems during the day?

These tests are part of a build pipeline that includes a large unit test suite. In the build that is run during the day, the test result trend is skewed by unit test failures. The developers are actively working on the code and using our continuous integration for fast feedback.

Once the unit tests are successful, intermittent issues in the user interface tests are often resolved in a subsequent build without code changes. This means that the development team are not blocked; once the build executes successfully, they can merge their code.

The overnight stability build is a collective conscience for everyone who works on the product. When the build status changes state, a notification is pushed into the shared chat channel:


Each morning someone will look at the failed builds, then share a short comment about their investigation in a thread of conversation spawned from the original notification message. The team decide whether additional investigation is warranted and how the problem might be addressed.

It can be difficult to prioritise technical debt tasks in test automation. The stability build makes problems visible quickly, to a wide audience. It is rare that these failures are systematically neglected. We know from experience that ignoring the problems has a negative impact on cycle time of our development teams. When it becomes part of the culture to repeatedly trigger a build in order to get a clean set of test results, everything slows down and people become frustrated.

If your user interface tests are integrated into your pipeline, you may find value in adopting a similar approach to stability. We see benefits in early detection, raising awareness of automation across a broad audience, and creating shared ownership of issue resolution.

Thursday, 18 January 2018

Three types of coding dojo for test automation

The Test Coaches in my organisation provide support for our test automation frameworks. We create tailored training material, investigate new tools in the market, agree our automation strategy, monitor the stability of our suites, and establish practices that keep our code clean.

A lot of this work is achieved in partnership with the testers. We create opportunities for shared learning experiences. We facilitate agreement of standards and strategy that emerge through conversation. I like using coding dojos to establish these collaborative environments for test automation.

When I run a coding dojo, all the testers of a product gather with a single laptop that is connected to a projector. The group set a clear objective for a test automation task that they would like to complete together. Then everyone participates in the code being written, by contributing verbally and taking a turn at the keyboard. Though we do not adopt the strict principles of a coding dojo as they were originally defined, we operate almost exactly as described in this short video titled 'How to run a coding dojo'.

When using this format for three different types of test automation task - training, refactoring, and discovery - I've observed some interesting patterns in the mechanics of each dojo.


Training

Where there are many testers working on a product with multiple types of test automation, individuals may specialise e.g. user interface testing or API testing. It isn't feasible for every person to be an expert in every tool.

Often those who have specialised in one area are curious about others. A coding dojo in this context is about transfer of knowledge to satisfy this curiosity. It also allows those who don't regularly see the code to ask questions about it.

The participants vary in skill from beginner to expert. There may be a cluster of people at each end of the spectrum based on whether they are active in the suite day-to-day.

Training dojo

Communication in this dojo can feel like it has a single direction, from expert to learner. Though everyone participates, it is comfortable for the expert and challenging for the beginner. In supporting the people who are unfamiliar, the experts need to provide a lot of explanation and direction.

This can create quite different individual experiences within the same shared environment. A beginner may feel flooded by new information while an expert may become bored by the slow pace. Even with active participation and rotation of duties, it can be difficult to facilitate this session so that everyone stays engaged.
 

Refactoring

When many people contribute to the same automation suite, variation can emerge in coding practices through time. Occasionally refactoring is required, particularly where older code has become unstable or unreadable.

A coding dojo in this context is useful to agree the patterns for change. Rather than the scope and nature of refactoring being set by the first individual to tackle a particular type of problem, or dictated by a Test Coach, a group of testers collectively agree on how they would like to shape the code.

Though the skill of the participants will vary, they skew towards expert level. The audience for refactoring work is usually those who are regularly active in the code - perhaps people across different teams who all work with the same product.

Refactoring dojo

Communication in this dojo is convergent. There are usually competing ideas and the purpose of the session is to reach agreement on a single solution. As everyone participates in the conversation, the outcome will often include ideas from many different people.

In this example I've included one beginner tester, who might be someone new to the team or unfamiliar with the code. Where the context is refactoring, these people can become observers. Though they take their turn at the keyboard, the other people in the room become their strong-style pair, which means that "for ideas to reach the computer they must go through someone else's hands".

Discovery

As our existing suites become outdated, or as people hear of new tools that they would like to experiment with, there is opportunity for discovery. An individual might explore a little on their own, then a coding dojo is an invitation to others to join the journey.

The participants of this dojo will skew to beginner. The nature of prototyping and experimentation is that nobody knows the answer!

Discovery dojo

Communication in this dojo is divergent, but directed towards a goal. People have different ideas and want to try different things, but all suggestions are in the spirit of learning more about the tool.

The outcome of this dojo is likely to be greater understanding rather than shared understanding. Though we probably won't agree, but we'll all know a little bit more.

For training, refactoring, and discovery, I enjoy the dynamics of a coding dojo format. I would be curious to know how these experiences match your own, or where you've participated in a dojo for test automation work in a different context.

Thursday, 4 January 2018

30 articles for tech leaders written by women

When I was first promoted to a leadership role in tech, I looked for leadership resources that were written by women with advice targeted to a tech environment.

It took some time to discover these articles, which resonated with me and have each contributed to my leadership style in some way. They are written by a variety of women in the US, UK, Europe and New Zealand; many have ties to the software testing community.

This list includes several themes: leadership, communication, learning, inclusion, and recruitment. I would love your recommendations for other articles that could be added.

Why we should care about doing better - Lynne Cazaly
Follow the leader - Marlena Compton
Put your paddle in the air - Lillian Grace
Dealing with surprising human emotions: desk moves - Lara Hogan
You follow the leader because you want to - Kinga Witko
Entering Groups - Esther Derby
Agile Managers: The Essence of Leadership - Johanna Rothman
Recovering from a toxic job - Nat Dudley
Yes, and... - Liz Keogh
Ask vs. Guess cultures - Katherine Wu
"I just can't get her to engage!" - Gnarly Retrospective Problems - Corinna Baldauf
Eight reasons why no one's listening to you - Amy Phillips
Don't argue with sleepwalkers - Fiona Charles
What learning to knit has reminded me about learning - Emily Webber
Effective learning strategies for programmers - Allison Kaptur
Five models for making sense of complex systems - Christina Wodtke
The comfort zone - Christina Ohanian
WTF are you doing? Tell your teams! - Cassandra Leung
We don't do that here - Aja Hammerly
Here's how to wield empathy and data to build an inclusive team - Ciara Trinidad
Tracking compensation and promotion inequity - Lara Hogan
The other side of diversity - Erica Joy
Hiring isn't enough - Catt Small
'Ladies' is gender neutral - Alice Goldfuss
Where does white privilege show up? - Kirstin Hull
Better hiring with less bias - Trish Khoo
1000 different people, the same words - Kieran Snyder

Wednesday, 13 December 2017

Pairing for skill vs. Pairing for confidence

I went to a WeTest leadership breakfast this morning. We run in a Lean Coffee format and today we had a conversation about how to build confidence in people who have learned basic automation skills but seem fearful of applying those skills in their work.

I was fortunate to be sitting in a group with Vicki Hann, a Test Automation Coach, who had a lot of practical suggestions. To build confidence she suggested asking people to:
  • Explain a coding concept to a non-technical team mate
  • Be involved in regular code reviews
  • Practice the same type of coding challenge repeatedly

Then she talked about how she buddies these people within her testing team.

Traditionally when you have someone who is learning you would buddy them with someone who is experienced. You create an environment where the experienced person can transfer their knowledge or skill to the other.

In a situation where the person who is learning has established some basic knowledge and skills, their requirements for a buddy diversify. The types of activities that build confidence can be different to those that teach the material.

Confidence comes from repetition and experimentation in a safe environment. The experienced buddy might not be able to create that space, or the person who is learning may have their own inhibitions about making mistakes in front of their teacher.

Vicki talked about two people in her organisation who are both learning to code. Rather than pairing each person with someone experienced, she paired them with each other. Not day-to-day in the same delivery team, but they regularly work together to build confidence in their newly acquired automation skills.

In their buddy session, each person explains a piece of code that they’ve written to the other. Without an experienced person in the pair, both operate on a level footing. Each person has strengths and weaknesses in their knowledge and skills. They feel safe to make mistakes, correct each other, and explore together when neither know the answer.

I hadn’t considered that there would be a difference in pairing for skill vs. pairing for confidence. In the past, I have attempted to address both learning opportunities in a single pairing by putting the cautious learner with an exuberant mentor. I thought that confidence might be contagious. Sometimes this approach has worked well, and other times it has not.

Vicki gave me a new approach to this problem, switching my thinking about confidence from something that is contagious to something that is constructed. I can imagine situations where I’ll want to pair two people who are learning, so that they can build their confidence together. Each person developing a belief in their ability alongside a peer who is going through the same process.