Saturday, September 16, 2006

The Testing Anti-Patterns Drafts Vol. 1 (draft2)

George and I both consider ourselves fortunate to work as members of teams where TDD is practised by default.

There are two reasons, beyond the obvious, why we find it relatively harder to code in a non-TDD approach. Test driving our code sharpens our vision of how to design/model/instrument the application's universe. Also, starting with a test means work begins and ends with coding, not with meetings, discussions or modelling our vision in pictures that are bound to be proven unrealistic when the first coding bottlenecks arise.

This is not to say that discussions or modelling are intrinsically bad things; a good, brisk whiteboard discussion can really help – but never forget that it is a relatively abstract activity.

Tests have a much closer relationship with your code, one that allows you to discover and document the behaviour of your system in much greater detail. In fact, it is this close relationship between tests and code that can lead to problems. The relationship must be kept in balance if the process is to be successful.

Production code is the breadwinner. This is where your business value lies. However, due to the nature of the TDD process, the production code is also somewhat dim-witted; it doesn't really have a clear vision or motivation in life. The test code is there to explain to the production code what is expected of it, to highlight any mistakes it makes by identifying precisely why it doesn't quite do the right thing, and to point it in the correct direction. In some respects this could be likened to the relationship between a boxer and his trainer. Ultimately, it is the boxer who has to go into the ring and win the fight, but it is the trainer who gets the boxer into the zone mentally and physically.

Another equally important role of the test code is to document the expected behaviour of the production system. This is fantastic! Suddenly, the technical documentation is no longer a static and dusty tome on a shelf, but an ever-accurate, clear guide to what the code really does. If your code is not fit for purpose, this should be made glaringly obvious in your tests, and by changing your documentation you should be able to change the behaviour of your code to make it do the right thing in the right way.

However, we don't live in a perfect world, and testing properly is not easy, despite the presence of opposable thumbs and shiny new Macs. Once you get past the simplistic examples and into the real world you find that it is difficult to write effective tests, and often the tests are much harder to craft than the actual code. In many ways, this should not come as a surprise. When you write a test, you are attempting to satisfy a number of aspects of the system at once: quality, design, documentation. And all of this expressed through the medium of a programming language!

So, what makes our tests go bad? A number of patterns seem to be emerging, but to list them all here would take too long – I think George and I are going to have to break this out (hopefully with lots of assistance from our friends ;-)). In my mind the coarse categories of TDD anti-patterns are:

  1. Driving the code the wrong way.
  2. Hiding deficiencies in the code.
  3. General test smells.

Test Driving The Wrong Way

This is the situation where your code starts to bend towards the test code, meeting the requirements of the testing framework rather than the needs of the application. Examples of this include:
  • Adding calls to your production data layer code to allow setup/teardown/query operations which are not required by the application.

  • Providing access to properties which would not normally be made visible (this is what George is getting at with Design Pervasive Testing).

  • Forcing the use of IOC where in fact it makes more sense to create objects internally or access static methods.

Clearly there is a sliding scale of smelliness here. Overuse of IOC is not really such a bad thing – the code will still function as required. It just leaves you thinking that there has to be a better way.
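
To make this concrete, here is a minimal sketch of the forced-IOC smell – all class names are hypothetical, invented purely for the illustration:

    // A trivial, deterministic helper with no external dependencies.
    class LineFormatter {
        String format(String invoiceNumber, double total) {
            return invoiceNumber + ": " + total;
        }
    }

    // Smell: the formatter is injected anyway, purely so a test can
    // substitute a mock. Every caller now has to wire one in.
    class InvoicePrinter {
        private final LineFormatter formatter;

        InvoicePrinter(LineFormatter formatter) {
            this.formatter = formatter;
        }

        String print(String invoiceNumber, double total) {
            return formatter.format(invoiceNumber, total);
        }
    }

    // The straightforward alternative: create the helper internally and
    // test the printer's observable output directly.
    class SimplerInvoicePrinter {
        private final LineFormatter formatter = new LineFormatter();

        String print(String invoiceNumber, double total) {
            return formatter.format(invoiceNumber, total);
        }
    }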

Adding a deleteAllClients() method to your production code is a big deal though!
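
Here is a hedged sketch of what that looks like – ClientRepository and its methods are made up for the example, but the shape should be familiar:

    import java.util.ArrayList;
    import java.util.List;

    class Client {
        final String name;
        Client(String name) { this.name = name; }
    }

    class ClientRepository {
        private final List<Client> clients = new ArrayList<Client>();

        // Used by the application.
        void save(Client client) { clients.add(client); }

        // Used by the application.
        Client findByName(String name) {
            for (Client client : clients) {
                if (client.name.equals(name)) return client;
            }
            return null;
        }

        // Smell: no application code ever deletes every client - this
        // exists purely so tests can reset state between runs.
        void deleteAllClients() { clients.clear(); }

        // Smell: exposes internal state purely so tests can assert on it.
        List<Client> getClientsForTesting() { return clients; }
    }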

Sometimes this can be a sign that the test framework you are using is not suitable for the application, and switching to a different technology can help get around the smell.

For example, using DbUnit may make it easier to put your database into the correct state for a functional test and remove the need for the dodgy setup code in your data layer. Newer mocking libraries like JMockit can remove the absolute requirement for IOC-based design patterns by allowing you to hook the creation of new objects and calls to static methods.
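
A sketch of the DbUnit half of this (the connection details, fixture file and test class are all invented for the example, and the API shown is the DbUnit 2.x style):

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import junit.framework.TestCase;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSet;
    import org.dbunit.operation.DatabaseOperation;

    public class ClientFunctionalTest extends TestCase {

        protected void setUp() throws Exception {
            // An in-memory database keeps the example self-contained.
            Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
            IDatabaseConnection connection = new DatabaseConnection(jdbc);
            IDataSet dataSet = new FlatXmlDataSet(new FileInputStream("clients.xml"));

            // Wipe the affected tables and load a known fixture - no
            // deleteAllClients() needed in the production data layer.
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        }

        public void testFindsTheClientLoadedByTheFixture() {
            // ... exercise the real data layer against the known state ...
        }
    }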

Hiding Deficiencies In The Code

This is where we seem to be suffering the most. It is very easy to hide problems in the code base instead of actually driving them out with your tests. Examples of this phenomenon:
  • Hidden behaviour. The production code performs a variety of complex activities in a given situation, but this is not clear because the test has buried the behaviour deep in a number of (often obscurely named) helper functions. This is often a sign that you have too many complex relationships between your objects, or that the conversations the component has with its collaborators are overly chatty.

  • Supporting actors stealing the show. The test is so polluted with setup code that you can't actually make out what the test is trying to show you. This can often be a sign that the setup is too complex (see the sketch below).
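
A minimal sketch of how both smells look in practice – all names are hypothetical:

    import junit.framework.TestCase;

    class Client {
        private final String name;
        private boolean active;

        Client(String name) { this.name = name; }
        void process() { active = true; }
        boolean isActive() { return active; }
        String name() { return name; }
    }

    public class ClientTest extends TestCase {

        // Smell: the test reads as three opaque lines; both the scenario
        // and the expectations are buried in vaguely named helpers.
        public void testProcess() {
            Client client = setUpStandardClient();
            client.process();
            checkClient(client);
        }

        // Better: the behaviour is stated where the reader is looking,
        // and the test name documents it.
        public void testProcessingMarksTheClientActive() {
            Client client = new Client("ACME");
            client.process();
            assertTrue(client.isActive());
        }

        private Client setUpStandardClient() { return new Client("ACME"); }

        private void checkClient(Client client) {
            assertTrue(client.isActive());
            assertEquals("ACME", client.name());
        }
    }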

General Test Smells

  • Badly named tests. This is especially bad when you consider the need to document the code through the medium of tests.

  • Inappropriate use of stubs. For example, stubbing a simple data type (see the sketch after this list).

  • Etc.
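
Two of those smells in one hedged sketch – Account and Money are invented domain types:

    import junit.framework.TestCase;

    class Money {
        private final int amount;
        Money(int amount) { this.amount = amount; }
        int amount() { return amount; }
    }

    class Account {
        private Money balance;
        Account(Money balance) { this.balance = balance; }
        boolean withdraw(Money requested) {
            if (requested.amount() > balance.amount()) return false;
            balance = new Money(balance.amount() - requested.amount());
            return true;
        }
    }

    public class AccountTest extends TestCase {

        // Smell: the name documents nothing about the expected behaviour.
        public void testAccount2() {
            assertFalse(new Account(new Money(10)).withdraw(new Money(20)));
        }

        // Better: the name reads as a line of the documentation we want
        // our tests to be. Note also that Money is a simple value type,
        // so we just construct it - stubbing it (say, with an
        // EasyMock-style createMock(Money.class)) would only add noise.
        public void testWithdrawalIsRejectedWhenBalanceIsInsufficient() {
            Account account = new Account(new Money(10));
            assertFalse(account.withdraw(new Money(20)));
        }
    }
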
In summary, I think we really need to care about the quality of our test code and learn to treat it quite differently to the production code, realising that the two portions of our code base serve different purposes in our development activities.

So...over to George to fill in some more blanks ;-)

“The Testing Anti-Patterns Drafts” is a collaborative effort between George “spring is overrated” Malamidis and myself which aims to identify cases of testing gone bad. It consists of a single document that will undergo constant enhancements and modifications, in a “pair-authoring” manner, utilising our respective weblogs as the platform. We hope to get input from anyone following the document, our goal being to produce an interesting resource for the TDD community, and the testing-oriented community in general.

2 comments:

Jon Skeet said...

One random comment before I think much more about it. This is a sort of "pair-blogging" exercise - so was this also a test-driven blog entry? I know this sounds silly, but as I'm currently finding it hard to write an article about reference types, I'm wondering whether it's a good idea.

Of course, the tests couldn't really be automated, but they could be at least documented, as a way of making it clear what should be achieved.

Hmm.

Matt Savage said...

Suggestion for entry-

Tests for things like builders, adapters, O/R mapping, equality, etc. that check the fields available on the class at the time of writing the test but won't break when new fields appear that aren't covered.

See the ClassCoverageChecker we used on the last project.