Thursday, October 19, 2006

Buried Intent. Bad for campers, bad for TDD

Remember that your test code is not production code. Its purpose is to prove that your production code works and to provide live documentation. The qualities that make good test code are not necessarily the same as those that make good production code. For example, it is important that a test case states its expectations clearly, inline. This is especially true when working on an XP project, as these tests are your primary means of documenting the expected behaviour of your production classes. If another developer cannot read your tests easily and determine what the production code is meant to do, trouble will ensue.

Consider the following test case:


public void testValidateOffPeakUserConnectionRejectedDuringPeakTimes() {
    ClientConnectionRequest connectRequest = new ConnectionRequest("bob", "12345");
    setMonitorExpectations("bob", "Some account", "12345", ON_PEAK, true, true);

    boolean requestResult = connectionMonitor.validate(connectRequest);
    assertEquals(false, requestResult);
}

Superficially, this test seems to be OK. The test is short and sweet, and we are checking that our monitor rejects the request. But what exactly is going on? Well, to work this out we have to dig down into the setMonitorExpectations method, where more information is revealed:


private void setMonitorExpectations(
        String clientName,
        String accountName,
        String clientId,
        boolean introductory,
        boolean unlimited,
        boolean offPeakOnly) {

    UserAccount account = new UserAccount(12345L);
    account.setAccountName(accountName);
    account.setClientName(clientName);

    buildAccountType(introductory, account);

    repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
    repositoryMock.expects(once()).method("isUnlimited").will(returnValue(unlimited));

    if (offPeakOnly) {
        account.setAccountRestrictions(AccountUsage.RESTRICTED);
        repositoryMock.expects(once()).method("getAllowedConnectPeriods")
            .with(eq(12345L), eq(clientId)).will(returnValue(offPeakOnly));
    }
}

Oh, so we are setting a bunch of attributes on the account, conditionally flagging the account as restricted, and then adding a further expectation. But we still don't have the full picture, as there is yet another call out to buildAccountType:


private void buildAccountType(boolean introductory, UserAccount account) {
    if (introductory) {
        account.setAccountType(AccountType.INTRODUCTORY);
    }
}

Now we have a number of problems.
  • Although the test code works and is compact, it is failing to fulfil the role of documenting the production code clearly. In order to determine what my test method is trying to prove I have to dig down a number of levels.
  • I suspect that the helper method is actually too rich for this test. There is conditional logic in there to change the expected test behaviour based on the test situation. This may be acceptable in production code, but conditional logic has no place in test code; each situation wants its own straight-line test (see the sketch after this list).
  • The fact that the interactions with other objects are buried in the helper functions is also hiding potential issues in the production code, in this case the repeated calls to the repository. By inlining the expectations it would be more obvious that the classes have a chatty relationship, and this could be harnessed by changing the test expectations and driving the production code to become more efficient.
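As a rule of thumb, when a helper needs a boolean to decide which expectations to set, each branch probably wants to be its own test. For instance, the accepted, unrestricted path would get its own straight-line case along these lines (a sketch only; the test name and the expected outcome are my assumptions, since the original test only exercises the rejection path):

public void testUnrestrictedAccountIsAcceptedDuringPeakTimes() {
    // No restriction is set, so no getAllowedConnectPeriods expectation is needed
    UserAccount account = new UserAccount(12345L);
    account.setAccountName("Some account");
    account.setClientName("bob");

    repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
    repositoryMock.expects(once()).method("isUnlimited").will(returnValue(true));

    assertEquals(true, connectionMonitor.validate(new ConnectionRequest("bob", "12345")));
}

Every attribute and expectation is now visible in the test body, and there is not a branch in sight.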
This is a natural trap which we can fall into when we are constructing unit tests. Our IDEs allow us to extract methods so easily these days that we often find ourselves extracting "common code" from our tests because we believe that this is helping to keep the code base compact and efficient. In fact, this naive refactoring can cause real problems. Now that our test code is less clear to a casual reader, the overhead of maintaining the test suite rises.

Ways Out of This Situation?
In this situation the simplest route out is to inline all the buried methods and step back. Here we initially end up with:


public void testValidateOffPeakUserConnectionRejectedDuringPeakTimes() {
    ClientConnectionRequest connectRequest = new ConnectionRequest("bob", "12345");

    UserAccount account = new UserAccount(12345L);
    account.setAccountName("Some account");
    account.setClientName("bob");

    if (ON_PEAK) {
        account.setAccountType(AccountType.INTRODUCTORY);
    }

    repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
    repositoryMock.expects(once()).method("isUnlimited").will(returnValue(true));

    if (true) {
        account.setAccountRestrictions(AccountUsage.RESTRICTED);
        repositoryMock.expects(once()).method("getAllowedConnectPeriods")
            .with(eq(12345L), eq("12345")).will(returnValue(true));
    }

    boolean requestResult = connectionMonitor.validate(connectRequest);
    assertEquals(false, requestResult);
}


This is clearer than what we had before, and when the method is rearranged and the account creation logic is extracted into a helper method, we end up with:


public void testValidateOffPeakUserConnectionRejectedDuringPeakTimes() {
    UserAccount account = buildAccount(
        12345L, "Some account name", "bob client", INTRODUCTORY, RESTRICTED);
    ClientConnectionRequest connectRequest =
        new ConnectionRequest(account.getClientName(), account.getId());

    repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
    repositoryMock.expects(once()).method("isUnlimited").will(returnValue(true));
    repositoryMock.expects(once()).method("getAllowedConnectPeriods")
        .with(eq(12345L), eq("12345")).will(returnValue(true));

    boolean requestResult = connectionMonitor.validate(connectRequest);
    assertEquals(false, requestResult);
}
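For completeness, buildAccount could be as simple as the following; a branch-free sketch, with the signature inferred from the call above:

private UserAccount buildAccount(long id, String accountName, String clientName,
                                 AccountType type, AccountUsage restrictions) {
    // Pure state construction: no conditionals, and no mock expectations hidden inside
    UserAccount account = new UserAccount(id);
    account.setAccountName(accountName);
    account.setClientName(clientName);
    account.setAccountType(type);
    account.setAccountRestrictions(restrictions);
    return account;
}

Unlike setMonitorExpectations, this helper only assembles data; all the interactions stay visible in the test itself, which is what makes it safe to extract.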


Note that this refactored test is really not much larger than the original. The difference is that the test expectations are now clearly visible. As a result, the developer is in a much better position to improve the code. Here the mock object interactions show that we have an overly chatty relationship with another object. This could be a sign that our production code is suffering from the 'feature envy' anti-pattern, which in this example could lead to a series of small calls to the database and become a performance bottleneck. By exposing the relationship explicitly in the test case, we can now choose what to do about the situation:
  • Live with the problem, if this is not an important feature.
  • Change the expectations to drive improvements in the production code. It would now be easy to change the test case to forbid the repeated look-up calls to the repository and drive the production code down a more performant path (see the sketch after this list).
  • Break up the method under test.
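To illustrate the second option, the change can be as small as tightening the cardinalities on the mock. A sketch in the same jMock style (the test name is my invention, and it assumes we want the monitor to read everything it needs from the account it has already fetched):

public void testValidateQueriesTheRepositoryOnlyOnce() {
    UserAccount account = buildAccount(
        12345L, "Some account name", "bob client", INTRODUCTORY, RESTRICTED);

    // Exactly one lookup is allowed...
    repositoryMock.expects(once()).method("find").will(returnValue(account));
    // ...and any further chat with the repository fails the test outright
    repositoryMock.expects(never()).method("isUnlimited");
    repositoryMock.expects(never()).method("getAllowedConnectPeriods");

    assertEquals(false, connectionMonitor.validate(
        new ConnectionRequest(account.getClientName(), account.getId())));
}

Making this test pass forces the production code to hold on to the account, or to carry the unlimited flag and connect periods on the account object itself, cutting the round trips to the repository.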