What do these Tests do?

John Paliotta

January 26, 2016

Our engineering team is constantly re-evaluating how we work to make our jobs as friction-free as possible. I often tell customers that we are not perfect, and that we share many of their challenges in getting new software features to customers as quickly as possible. In addition to improving our products, it is critical that we improve our development processes to reduce time to market. I think the beginning of a new year is a great time to target the parts of your development process that are the 'most broken', and often the most broken area is testing.

Socrates famously said, 'The unexamined life is not worth living.' With this in mind, my question today is: 'Is the unexamined test case worth maintaining?'

We all have thousands of test cases for our applications, but are they the right tests? Do we evaluate our existing tests and prune the ones that no longer make sense? Or do we keep maintaining the old tests because we don't really understand what they do and are afraid to delete them? Just like refactoring a code base, refactoring your test cases is critical to maintaining efficiency, and the first step is making sure you understand those tests.

Last month we decided to revisit our automated GUI tests. These tests had been built over the last 15 years and had become a maintenance nightmare: they were fragile and ran slowly. About half of them frequently broke due to timing issues but never actually caught a real bug. For our team, this meant that a lot of test automation time was spent running tests that provided limited value, and a lot of engineering time was spent manually evaluating test failures to convince QA that they were not real bugs. After a month of analysis we split our legacy GUI tests into three buckets:

      1. Tests that frequently broke but found no bugs
      2. Tests that frequently broke but did find bugs
      3. Tests that were stable

In our case, we stopped running the category 1 tests and assigned the category 2 tests to our test automation engineers to refactor. We expect the category 2 tests to be refactored within three months, our total GUI test time to decrease by 50%, and the engineering time spent interpreting test failures to drop to near zero. I'm sure some of you face these same challenges, and I encourage you to take some time to understand your existing tests and think about how they could be improved.
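The bucketing described above can be sketched as a simple pass over per-test run history. This is a minimal illustration, not our actual tooling: the record fields, test names, and the 10% flakiness threshold are all hypothetical assumptions chosen for the example.

```python
# Hypothetical sketch of the three-bucket triage: classify each test by
# how often it failed and whether its failures ever exposed a real bug.
from dataclasses import dataclass

@dataclass
class TestHistory:
    name: str
    runs: int        # total automated runs
    failures: int    # runs that failed for any reason
    real_bugs: int   # failures confirmed as genuine defects

def triage(tests, flaky_rate=0.10):
    """Sort tests into retire / refactor / keep buckets."""
    buckets = {"retire": [], "refactor": [], "keep": []}
    for t in tests:
        failure_rate = t.failures / t.runs if t.runs else 0.0
        if failure_rate >= flaky_rate and t.real_bugs == 0:
            buckets["retire"].append(t.name)    # category 1: broke often, found nothing
        elif failure_rate >= flaky_rate:
            buckets["refactor"].append(t.name)  # category 2: broke often, but found bugs
        else:
            buckets["keep"].append(t.name)      # category 3: stable
    return buckets

# Illustrative history records (invented numbers):
history = [
    TestHistory("login_dialog", runs=200, failures=60, real_bugs=0),
    TestHistory("report_export", runs=200, failures=40, real_bugs=3),
    TestHistory("menu_navigation", runs=200, failures=2, real_bugs=0),
]
print(triage(history))
```

In practice the hard part is the `real_bugs` column: it requires tracking which failures were ultimately confirmed as defects, which is exactly the manual evaluation effort the triage is meant to eliminate going forward.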

If you are overwhelmed with where to start, give us a call. Our Global Services group would be happy to help you get started.