Answering Anand Bagmar’s questions on Functional Automation

Anand Bagmar is a student of the craft. He is an example of what I would have loved to become had I not chosen to wear a business hat a decade ago. Anand published this post on his blog and asked some pretty good questions at the end.

How do I know? I have had an opportunity to see Anand in close action. He nailed it.

He then took to Linkedin to ask people those questions. I put in as sincere an attempt as possible to reply to each of Anand’s questions, and I wanted to capture it in a blog post here.

Does your functional automation really add value?

I love automation that helps add value. 

80% of the automation I have personally seen did not add test value.

I qualify this as test value because I believe automated consistency checks help testers progress and move on to fresh code that might carry unidentified and as-yet-unknown risks.

20% did add test value.

  • The 20% doesn’t mean 20% in every project. 
  • Some projects had 100% zero test value automation.
  • Some had 90% useful test value automation.
  • Also, within the 20%, some of the automation was not about automating tests.

It was instead about automating:

  • Setup
  • Test Data
  • Environment
  • Reporting

These numbers are approximate; I haven’t measured them precisely.

Do you “rerun” the failing tests to see if this was an intermittent issue?

While re-running tests looks obvious, it remains obvious only until a test is qualified as a “stable test”. After that, people become oblivious to it unless an event makes them look at it again.

I have seen automation with built-in re-runs before concluding whether a failure was intermittent. The cost of the re-run plus the value of the test matters: re-running a useless test because it failed won’t help anyone. Not that people don’t know this.
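
Where re-runs are built in, the mechanics are usually simple; the useful part is recording whether the failure was consistent or intermittent, so the re-run produces information rather than just a green. A minimal sketch in Python, assuming plain test functions (the attempt count, delay and classification labels are illustrative, and a real framework such as pytest-rerunfailures would do more):

```python
# A minimal sketch of a re-run wrapper, assuming plain Python test functions.
# The attempt count, delay and outcome labels are illustrative only.
import time

def rerun_on_failure(test_fn, attempts=3, delay_seconds=2):
    """Re-run a failing check a few times before calling it a real failure."""
    failures = []
    for attempt in range(1, attempts + 1):
        try:
            test_fn()
        except AssertionError as exc:
            failures.append(f"attempt {attempt}: {exc}")
            time.sleep(delay_seconds)
            continue
        # Passed now; if it failed earlier, the failure was intermittent.
        return ("intermittent" if failures else "pass"), failures
    # Failed on every attempt: a consistent failure worth investigating.
    return "fail", failures
```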

What is the test passing percentage?

This question deserves to be called the question of the century in the space of test automation (largely front end).

Taking inspiration from what Marcus Merrell of Sauce Labs mentioned at the Test Warez 2019 conference: Sauce Labs, which hosts a significant share of Selenium Grid usage, has seen that an incredibly high percentage of tests don’t pass consistently.

I know orgs that want to achieve 90% stability with their tests. The orgs that achieve it have test and dev bound together very well.
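
A pass percentage from a single run hides flakiness; stability only shows up when the same test is tracked across many runs. A small sketch of that bookkeeping, with an assumed results structure and the 90% figure above used as an illustrative threshold:

```python
# A small sketch of how pass percentage / stability could be tracked across runs.
# The run_results structure and the 90% threshold are assumptions for illustration.
from collections import defaultdict

def stability_report(run_results, threshold=0.90):
    """run_results: list of dicts mapping test name -> True (pass) / False (fail)."""
    outcomes = defaultdict(list)
    for run in run_results:
        for test_name, passed in run.items():
            outcomes[test_name].append(passed)

    report = {}
    for test_name, results in outcomes.items():
        pass_rate = sum(results) / len(results)
        report[test_name] = {"pass_rate": round(pass_rate, 2),
                             "stable": pass_rate >= threshold}
    return report

# A test that flips between pass and fail shows up as unstable,
# even though each individual run may have looked "mostly green".
runs = [{"login": True, "search": True},
        {"login": False, "search": True},
        {"login": True, "search": True}]
print(stability_report(runs))
```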

How long does it take to add a new test?

Good question. Instead of talking about time to code, I want to talk about how much time people take to realize that they need to add new tests.

  • People feel they are done with tests.
  • I think the test of the usefulness of automation begins when people think they are done with it.
  • How often are previous tests retired?
  • How often are we adding new tests?
  • How often are we testing our own test’s usefulness?

Answers to these questions help us understand whether people are thinking about new tests, and in what way.

How long does it take for tests to run and generate reports?

In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
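
On the “once for all platforms, or once per platform” part: one common shape is to write the scenario once and push the platform differences into a driver factory and page objects. A sketch assuming pytest; make_driver and SearchPage are hypothetical placeholders, not a specific framework:

```python
# Sketch: one scenario, parameterized per platform. pytest is assumed;
# make_driver() and SearchPage are hypothetical placeholders to be wired
# into your own Appium/Selenium setup.
import pytest

PLATFORMS = ["android", "ios", "web"]

class SearchPage:
    """Hypothetical page object: locators differ per platform, behaviour does not."""
    def __init__(self, driver, platform):
        self.driver = driver
        self.platform = platform

    def search_for(self, term):
        raise NotImplementedError("platform-specific locators go here")

    def result_count(self):
        raise NotImplementedError

def make_driver(platform):
    """Hypothetical factory returning an Appium/Selenium session for the platform."""
    raise NotImplementedError("wire this to your own capabilities/config")

@pytest.mark.parametrize("platform", PLATFORMS)
def test_search_returns_results(platform):
    driver = make_driver(platform)        # session setup differs per platform
    page = SearchPage(driver, platform)   # locators differ per platform
    try:
        page.search_for("selenium grid")
        assert page.result_count() > 0    # the scenario and assertion stay the same
    finally:
        driver.quit()
```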

Just yesterday (7th July 2020) someone told me that a particular Android App Automation (front end) was being run for 16 hours. I was like, “Really? Where is the fast feedback loop?”

The team has fantastic engineers and poor management. The team built all the scripts the management asked for, but hey – the management just wanted to see 100% automation. Sure enough, they got it. After hearing that, my hunger to move towards a more minimalistic approach to testing has increased manifold.

Do your tests run automatically via CI on a new build, or do you need to “trigger” the same?

In the cases I have seen:

  • The aspiration is to get it to run “automatically”.
  • This is achieved mostly with web automation [80% of cases].
  • Mobile automation still needs a trigger [20% of cases].
  • High-maturity tech companies achieve automatic runs via CI.

Automation is getting the same treatment testing got a few years ago. Automation and tooling need hardcore technology and software engineering skills, not just tools and frameworks.

How easy is it to debug and get to the root cause of failures?

It varies across the projects I have seen. It also depends on the culture of the org.

Best case: teams that have some sort of RCA / logging / observability built into their automation that helps isolate the issue.

Average case: teams taking time to figure out whether it was a product failure or an automation failure.

Worst case: people first fixing their script to make it a Pass – so that they get time to investigate what the real issue is.
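
That best case usually comes down to capturing evidence automatically at the moment of failure, so the RCA starts from artefacts rather than memory. A conftest.py-style sketch, assuming pytest and a Selenium “driver” fixture; the fixture name and output paths are illustrative:

```python
# conftest.py sketch: save a screenshot and page source when a test fails.
# Assumes pytest with a Selenium "driver" fixture; names/paths are illustrative.
import os
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")   # present only if the test used it
        if driver is not None:
            os.makedirs("artifacts", exist_ok=True)
            # Screenshot + page source give a first cut at separating
            # "product failure" from "automation failure".
            driver.save_screenshot(os.path.join("artifacts", f"{item.name}.png"))
            with open(os.path.join("artifacts", f"{item.name}.html"), "w",
                      encoding="utf-8") as fh:
                fh.write(driver.page_source)
```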

What makes you say it does add value / or does not add value?

Testing has two aspects:

  • Unknown or unexplored territory.
  • Known or previously explored territory.

Automation is a great candidate for known or previously explored territory.

Automation, tooling and testability are great candidates to aid new, unexplored territory. Yet 98% of people (that I have seen) claim an interest in automation and build skills to automate the “running of tests”, which is a subset of known or previously explored territory.

Within that, the obsession to show a pass or a green is a big driver (due to the culture of the org), PLUS it is done on the front end with all its flakiness, PLUS there is a lack of testability, PLUS it is done as a silo activity – all of which takes value and time away from things that could have been done instead.

The world is moving from “Anyone can test” to “Anyone can automate”.

Good questions, right? What questions do you have? Send me those on Linkedin and I will try to answer them and keep this post updated.
