Let’s Talk Testing with Paul Holland – Part 1

Paul Holland, one of the few Rapid Software Testing instructors, visited Moolya. Saying we had the privilege to interview him would be a cliché, so let's say we chatted with him about several aspects of software testing: why testing is not dead yet, why testers alone get blamed for software quality issues even though we didn't put the bugs in the code, the pros and cons of relying on only one form of testing, test automation, conference experiences, and networking with the global community of testers.

You are in for a treat as this Q&A has left us enlightened and encouraged to continue to be one of the torchbearers of good testing practices in India.

Q: Question, A: Answer

Q: What's it like to be a former pilot and then make the transition to testing?

A: That was a bit of a shock. Actually, that's a good question; I've never been asked that before. There is a bit of a mental transition, as piloting is not a nine-to-five job. I'd be up at 3 am to go looking for a submarine, or I might not fly for a few days because the seas were too rough. And then I switched to a nine-to-five job that definitely doesn't carry the same public opinion, going from 'oh, you're a pilot, that's impressive' to 'you're a software tester, who cares'. That was a bit of a struggle, but I think the lack of emergencies and the fact that I no longer almost died every day made the transition easier. The biggest thing, settling into a nine-to-five job, was a bit of a struggle, but not too hard.

The other big difference is that when I was a pilot I went through almost two full years of training: how to be a pilot, how to talk on the radio, how to control the plane, how weather works, how aerodynamics works, all of the components of the plane. Then I became a software tester, where it was 'here is a product, go test it.' No training, no support system; I was the only tester. The small division I was in had no other testers, so that was it. You're on your own. So that was a big difference as well: from the very structured military to incredibly unstructured testing, the least structured testing I have ever done. That was a big transition, but it was also freeing. I could just test, and it turns out I was pretty good at finding bugs just by doing what I thought I should do, as opposed to trying to follow anything structured.

Q: Why, according to you, is software testing not dead yet?

A: Software testing as it was done in the '90s, where you take your requirement, write a script, and execute the script over and over again for regression testing: that approach to software testing is dead. And it should be dead, as it is horribly ineffective.

The testing that we do at Moolya and Medidata is not dead. And the people who are trying to replace thoughtful, creative testing with full automation don't understand testing and are going to end up in trouble. There are lots of big companies that have tried to go the full-automation route, and they have stopped and gone back to having testers as well as automation. You need the combination. I have a sticker on my laptop that says 'only one type of testing is bad'. It doesn't mean that some particular type of testing is bad; it means that if you rely on only one type of testing, you are not doing good testing. You need good unit-test automation and integration-level automation, followed by exploratory testing and creative thinking at the upper level, plus security testing, performance testing, and all the quality criteria that are defined in Rapid Software Testing (RST). That approach to testing is not dead.

Q: Why are testers alone blamed when the product quality declines?

A: I think that depends a lot on the company you are at. The testers should be blamed if there are obvious bugs that they haven't caught; in my mind, that's the only time the testers should be blamed. And again, blame isn't the right term. But you're right, testers do get blamed at some companies. The way I push back against that is to say: we didn't put the bugs in the product, we didn't decide to ship before the testing was complete or with incomplete information, and we didn't decide not to fix the bugs and keep them on the backlog. It's the developers who put the bugs in the software in the first place through their coding. The project manager is trying to keep us to a schedule and pushes the product out the door before we have a complete picture of its quality from the test team. The product owners at many companies tend to de-emphasize fixing the bugs; they will say 'we will just ship with that bug, it won't bother our customers', which of course it does. And then the testers can be at fault if they haven't tested the product effectively enough to find the bugs that the customers find. So it is definitely a team effort; the entire team is responsible.

The testers tend to get blamed mainly because they often have the weakest voice at the table. And if you have the weakest voice, you are the one who gets blamed. Relatively speaking, we now have a strong voice in testing: Su Finlay has a good position with the CTO, Julie, and they have good respect for each other. As a result, at Medidata we testers tend not to get blamed. We often get asked 'why didn't you find that?', but that's a big difference from being blamed. If you are working at a company where the testers get blamed, then just go back to this: the developers put the bugs in the code, the project manager pushes it out the door, and the product owner de-emphasizes the need to fix the bugs that have already been identified.

Q: Why is it that the testers have a weak voice?

A: I believe that the weak voice comes from a history of testing being thought of as something that can be done by anybody.

Testing got commoditized back in the late '90s and early 2000s by big outsourcing companies. If they commoditized it and said 'you are just a cog in the wheel, I can replace you with any other cog; you can take this script and execute it', then they found a way of essentially printing money by doing horrible testing. As a result, the good testers got pushed out. Some companies that did a lot of outsourcing to commoditize testing have since stopped and started bringing the work back to actual creative testing, performing exploratory testing to find the bugs. That is why having companies like Moolya as an option to get good quality testing is really valuable.

The weak voice, I think, also comes from the fact that testing is viewed as an entry-level position: if you do OK in testing, then you can move into development or become a product owner. It is possible to be a good tester without a lot of formal training.

But typically, people who are good testers have an inquisitive, creative mind that they have developed throughout their childhood and their whole life. Good testers, when they see something broken, try to fix it. They tend to be helpful, looking to fix problems and trying to contribute in any way they can to make the situation better. That's the type of person who typically makes a good tester, regardless of official training. People who tend to be bad testers are the ones who have had formal training in testing but not the life training to go with it.

Q: What is the role of a tester as a firefighter when testing troubles arise?

A: I guess there are two parts to this question. If our customers find something that we should have found, then there should be an education process across the team: here's a type of bug, or a specific bug, that we missed, so that the whole team is aware of it and can be looking for that type of bug next time.

As far as the role during firefighting goes, when a customer reports an urgent bug and we have to get something out of the door quickly, the role is really to help the development team identify and recreate the bug with minimal reproduction steps, work with the developers to help identify a fix, and be available to quickly test new code as it comes out. And then not just testing that the code fixed the problem, but testing that it didn't introduce something else.

Q: Isn't it more professional to work without contributing to a blame culture, or without investing in discomfort between the programmers and the testers?

A: Thankfully, at least in the New York office where I work, there tends not to be blame for bugs that haven't been found, as long as an action plan is presented. If it's something that we as a testing team feel we should have caught, we can present: this is what we are doing to catch this type of bug in the future.

The solutions:

Something that should also be presented as a solution is: what unit tests can the development team put in place to catch that type of bug in the future? Because as testers we can't test everything all the time. But if we have a unit test in the system that will find that type of bug, then that type of bug shouldn't occur again because we have a good unit test to catch it.

So really, part of the solution should be to generate unit tests to cover it, along with education on the part of the developers to say this type of bug is something that we all missed. The way I like to think of the role of the tester is this: we have to test all aspects of the code, but the developers should write sufficient unit tests to show that their code behaves as expected, and the types of bugs the testers should be finding are the unexpected ones that come from interacting with other pieces of our own software and the platforms we depend on.
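To make that concrete, here is a minimal sketch of what such a regression-catching unit test might look like. It is purely illustrative and not from the interview: the apply_discount function, the boundary-value bug it guards against, and the use of pytest are all assumptions chosen for the example.

```python
# Minimal sketch: a unit test added after a bug escaped to customers,
# so the same class of bug is caught automatically in the future.
# The function, the bug scenario, and pytest are hypothetical assumptions.

import pytest


def apply_discount(order_total: float, threshold: float = 100.0) -> float:
    """Apply a 10% discount to orders at or above the threshold."""
    # Hypothetical fix: the original code used '>' and missed the boundary case.
    if order_total >= threshold:
        return round(order_total * 0.9, 2)
    return order_total


def test_discount_applies_exactly_at_threshold():
    # Regression test for the reported boundary bug:
    # an order of exactly 100.0 must receive the discount.
    assert apply_discount(100.0) == 90.0


def test_no_discount_below_threshold():
    # Guard against over-correcting: orders below the threshold stay unchanged.
    assert apply_discount(99.99) == 99.99


if __name__ == "__main__":
    # Run with: pytest this_file.py
    raise SystemExit(pytest.main([__file__, "-q"]))
```

Once a test like this is in the suite, the whole team, not just the testers, is guarding against that class of bug on every build, which is the point Paul makes about unit tests being part of the solution.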
