QA Myths


There are a handful of misconceptions about QA that can linger and embed themselves into the culture of a company.  Whether they stem from purely financial reasons or a firm belief that QA simply is not needed, they persist and silently erode software quality.  Below are some of the more common myths that need clarifying.


Testing causes bottlenecks
Back in the day this may have been true.  Releases would stop, perhaps for days or weeks, while they were thoroughly tested.  Testers would have loads of requirements and test plans to go through and would only supply feedback on failures at the very end of their testing.  Those bugs would get fixed and testing would commence once again.  Modern testing, however, should be baked in throughout the entire process.  With unit testing, API testing, automated frameworks, and more emphasis on proven testing principles, QA can spend a minimal amount of time focusing on specific problem areas, which enables releases to be deployed faster.
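To illustrate what "baked in" looks like in practice, here is a minimal sketch of a unit test that runs on every commit rather than at the end of a release cycle.  The `apply_discount` function is hypothetical, invented purely for the example:

```python
# A tiny, hypothetical business rule plus the unit checks that guard it.
# Checks like these run in seconds on every commit, so regressions are
# caught long before a dedicated end-of-release test pass begins.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Happy-path checks.
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(19.99, 0) == 19.99

# A guard-rail check: out-of-range discounts are rejected.
try:
    apply_discount(50.0, 150)
    raised = False
except ValueError:
    raised = True
assert raised
```

Because feedback arrives continuously instead of all at once, the big testing bottleneck at the end of the release never gets a chance to form.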


QA is expensive
Yes, there is an associated cost with maintaining a qualified testing team.  They are employees and they require payment.  Their focus is on testing, and because they are independent from the developers they can work without bias.  However, if managed correctly and given room to grow, they can someday help the company on a broader scale by becoming developers themselves or even program managers.  Also, brand reputation can be hurt quicker than ever these days by bugs and security risks, which affects the bottom line and causes many other financial headaches.  QA is a preemptive strike against those potential financial losses.


Developers can do their own testing
This is not as black and white as it may seem.  There are some developers who are great testers too. I've known and worked with them.  But they are still only one person, and a relatively biased person at that.  Having a second pair of skilled testing eyes helps uncover blind spots that may have been glossed over.  The developer also may have interpreted the requirements wrong, which can lead to a bad UI experience.  A knowledgeable QA engineer helps smooth out those UI kinks so they do not reach the customer.  Also, developers are expensive; they should be focusing most of their time on coding...and writing unit tests.


All testing can be 100% automated
While you can create functional automation tests that cover almost all of the happy-path scenarios in a piece of software, it is still nearly impossible to cover every negative scenario customers can wrangle themselves into.  And even if you could come up with all those scenarios, they would add loads of time to the test runs.  Manual exploratory testing helps cover those scenarios, which leads to the UI/UX/usability aspect of testing.  Thinking individuals who are familiar with the product are still your best defense against annoying UI bugs.  There are also software applications that still can't be automated.  Programs dealing with a map of some sort, which utilize a canvas, can be extremely hard to automate: verifying points of data on a map takes an enormous amount of legwork just to figure out where those objects are placed.
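The contrast between the closed happy path and the open-ended negative space can be sketched in a few lines.  The `validate_username` function below is hypothetical, made up for illustration only:

```python
# Illustrative sketch: automating a happy path is cheap, but the space
# of negative inputs is open-ended.  `validate_username` is a made-up
# example rule: accept 3-20 character alphanumeric usernames.

def validate_username(name: str) -> bool:
    """Return True if the username is 3-20 alphanumeric characters."""
    return 3 <= len(name) <= 20 and name.isalnum()

# The happy path: a handful of cases covers it completely.
for good in ["alice", "bob99", "x" * 20]:
    assert validate_username(good)

# The negative space: empty strings, whitespace, too-long input,
# injection attempts, emoji, control characters... the list never
# really ends, which is where manual exploratory testing earns its keep.
for bad in ["", "ab", "x" * 21, "bad name", "drop;--"]:
    assert not validate_username(bad)
```

Automation locks down the cases you thought of; exploratory testers go looking for the ones you didn't.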

The myths discussed above should not stop a company from investing in a QA staff.  Money will always play a big part in which positions a company hires for, but there is real value in having this layer of defense in the war against bugs.