Thursday, October 19, 2006

Buried Intent. Bad for campers, bad for TDD

Remember that your test code is not production code. Its purpose is to prove that your production code works and to provide living documentation. The qualities that make good test code and good production code are not necessarily the same. For example, it is important that a test method demonstrates its expectations clearly and inline. This is especially true when working on an XP project, as these tests are your primary means of documenting the expected behaviour of your production classes. If another developer cannot read your tests easily and determine what was intended of the production code, trouble will ensue.

Consider the following test case:


public void testValidateOffPeakUserConnectionRejectedDuringPeakTimes() {
    ClientConnectionRequest connectRequest = new ConnectionRequest("bob", "12345");
    setMonitorExpectations("bob", "Some account", "12345", ON_PEAK, true, true);

    boolean requestResult = connectionMonitor.validate(connectRequest);
    assertEquals(false, requestResult);
}

Superficially, this test seems to be OK. It is short and sweet, and we are checking that our monitor rejects the request. But what exactly is going on? To work this out we have to dig down into the setMonitorExpectations method, where more information is revealed:


private void setMonitorExpectations(
        String clientName,
        String accountName,
        String clientId,
        boolean introductory,
        boolean unlimited,
        boolean offPeakOnly) {

    UserAccount account = new UserAccount(12345L);
    account.setAccountName(accountName);
    account.setClientName(clientName);

    buildAccountType(introductory, account);

    repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
    repositoryMock.expects(once()).method("isUnlimited").will(returnValue(unlimited));

    if (offPeakOnly) {
        account.setAccountRestrictions(AccountUsage.RESTRICTED);
        repositoryMock.expects(once()).method("getAllowedConnectPeriods")
            .with(eq(12345L), eq(clientId)).will(returnValue(offPeakOnly));
    }
}

Oh, so we are setting up a number of attributes on the account, conditionally flagging the account as restricted, and then adding an expectation. But we still don't have the full picture, as there is yet another call out to buildAccountType.


private void buildAccountType(boolean introductory, UserAccount account) {
    if (introductory) {
        account.setAccountType(AccountType.INTRODUCTORY);
    }
}

Now we have a number of problems:
  • Although the test code works and is compact, it fails to fulfil the role of documenting the production code clearly. In order to determine what my test method is trying to prove, I have to dig down through a number of levels.
  • The helper method is too rich for this test: it contains conditional logic that changes the expected test behaviour depending on the test situation. That may be acceptable in production code, but conditional logic has no place in test code.
  • Burying the interactions with other objects in helper methods also hides potential issues in the production code; in this case, the repeated calls to the repository. Inlining the expectations would make it more obvious that the classes have a chatty relationship, and we could then change the test expectations to drive the production code to become more efficient.
This is a natural trap to fall into when constructing unit tests. Our IDEs allow us to extract methods so easily these days that we often find ourselves extracting "common code" from our tests in the belief that this keeps the code base compact and efficient. In fact, this naive refactoring can cause real problems: now that our test code is less clear to a casual reader, the overhead of maintaining the test suite can rise.

Ways Out of This Situation?
The simplest route out of this situation is to inline all the buried methods and step back. In this case we initially end up with:


public void testValidateOffPeakUserConnectionRejectedDuringPeakTimes() {
    ClientConnectionRequest connectRequest = new ConnectionRequest("bob", "12345");

    UserAccount account = new UserAccount(12345L);
    account.setAccountName("Some account");
    account.setClientName("bob");

    if (ON_PEAK) {
        account.setAccountType(AccountType.INTRODUCTORY);
    }

    repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
    repositoryMock.expects(once()).method("isUnlimited").will(returnValue(true));

    if (true) {
        account.setAccountRestrictions(AccountUsage.RESTRICTED);
        repositoryMock.expects(once()).method("getAllowedConnectPeriods")
            .with(eq(12345L), eq("12345")).will(returnValue(true));
    }

    boolean requestResult = connectionMonitor.validate(connectRequest);
    assertEquals(false, requestResult);
}


This is clearer than what we had before, and when the method is rearranged and the account creation logic is extracted into a helper method, we end up with:


public void testValidateOffPeakUserConnectionRejectedDuringPeakTimes() {
    UserAccount account = buildAccount(
        12345L, "Some account", "bob", INTRODUCTORY, RESTRICTED);
    ClientConnectionRequest connectRequest =
        new ConnectionRequest(account.getClientName(), account.getId());

    repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
    repositoryMock.expects(once()).method("isUnlimited").will(returnValue(true));
    repositoryMock.expects(once()).method("getAllowedConnectPeriods")
        .with(eq(12345L), eq("12345")).will(returnValue(true));

    boolean requestResult = connectionMonitor.validate(connectRequest);
    assertEquals(false, requestResult);
}
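
A minimal sketch of the buildAccount helper, assuming the UserAccount setters used earlier and static imports for AccountType.INTRODUCTORY and AccountUsage.RESTRICTED, might be:

private UserAccount buildAccount(
        long accountId,
        String accountName,
        String clientName,
        AccountType accountType,
        AccountUsage accountRestrictions) {
    // Plain state set-up only: no conditionals and no mock expectations
    UserAccount account = new UserAccount(accountId);
    account.setAccountName(accountName);
    account.setClientName(clientName);
    account.setAccountType(accountType);
    account.setAccountRestrictions(accountRestrictions);
    return account;
}

Unlike setMonitorExpectations, this helper performs only straight-line state set-up; the mock expectations stay inline in the test, where they document the interactions.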


Note that this refactored test is really not much larger than the original. The difference is that the test expectations are now clearly visible, so the developer is in a far better position to improve the code. Here the mock object interactions are showing that we have an overly chatty relationship with another object. This could be a sign that we are suffering from the 'feature envy' anti-pattern in our production code, which in this example could lead us to make many small calls to the database and could become a performance bottleneck. By exposing the relationship explicitly in the test case, we can now choose what to do about the situation:
  • Live with the problem; this is not an important feature.
  • Change the expectations to drive improvements in the production code. It would now be easy to change the test case to forbid the lookup calls to the repository and drive the production code down a more performant path (see the sketch after this list).
  • Break up the method under test.
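
As a sketch of the second option, assuming the same jMock 1.x style used above, the expectations could be tightened so that the repository may only be consulted once and the chatty follow-up lookups are forbidden outright:

// Sketch: allow a single find, then forbid the extra lookups so that the
// production code is forced to ask the account object for this information.
repositoryMock.expects(once()).method("find").will(returnValue(account));
repositoryMock.expects(never()).method("isUnlimited");
repositoryMock.expects(never()).method("getAllowedConnectPeriods");

The test would then fail until validate is reworked to make a single repository call and obtain the usage rules from the account itself.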

2 comments:

Jeff Santini said...

Great thought and demonstrated procedure. But I still have a complaint about the test.

All this mock stuff confuses the test.

repositoryMock.expects(atLeastOnce()).method("find").will(returnValue(account));
repositoryMock.expects(once()).method("isUnlimited").will(returnValue(true));
repositoryMock.expects(once()).method("getAllowedConnectPeriods")

This test does not care about interactions as stated in the title. Therefore the mock lines do not contribute to understanding the test. A strategy that does not inject long train-wreck lines into a sweet simple test would improve the readability greatly.

If testing interactions were the intent of the test, these lines would be appropriate. Since interactions are not mentioned in the title, the lines pertaining to mocks are not appropriate.

Stoofer said...

Hi Jeffrey,

I agree - that is exactly what I was trying to get at with

"Here the mock object interactions are showing that we have an overly chatty relationship with another object"

I think that the problem here, though, is not that the test is bad; rather, the production code is at fault. It is wrong that the code requires such a degree of interaction with the repository. So instead of acknowledging this and deciding to put a stop to it, the developer buried the problem in nested methods.

So - step one in refactoring this problem is to inline and rearrange the problem so that you can stare your ugly duckling square in the face.

I would have expected the enlightened developer to carry on from where I left off:

(2) reduce the mock lines to:
repositoryMock.expects(once())....

(3) Fix the production code - by moving more behaviour into the account object and out of the repository.
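
(Purely as a hypothetical sketch of (3), with getClientId, isConnectionAllowed and currentPeriod invented for illustration:)

public boolean validate(ClientConnectionRequest request) {
    // One repository call; the account answers its own usage questions
    UserAccount account = repository.find(request.getClientId());
    return account.isConnectionAllowed(currentPeriod());
}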

The issue I have is that developers are often tempted to pretty up their test code at the expense of sorting out the production code. Ironically, this drives our design away from the good code we were trying to get to. I would rather see an ugly test first, and then try to make it cleaner through improving the production code.

However - mock interactions are not the only way that this situation manifests. There could be a bunch of equally buried interactions with the test object, or manipulations of object state. It just seems to be more prevalent with mock interactions.