Some think Agile is all seat-of-your-pants. That’s not true at all. Agile requires knowing where you are right now and what you need to do in the near future; it just doesn’t put a lot of faith in guesses about where you’ll be eight months from now.
The various forms of Agile each contain a set of practices to follow and others to stay away from, each with its own strengths and weaknesses. Successful implementation of Agile requires selecting practices that complement and support each other, covering all required needs without duplication (which is where a good deal of the efficiency comes from). It’s bad enough to pick a set of practices that don’t support each other, but it’s much worse to choose the right practices and then implement them poorly, because that creates a false sense of security.
One practice that is particularly susceptible to this is TDD/unit testing/acceptance tests. Agile developers can code faster and with more confidence because they know that if their code doesn’t work, or if it works but breaks something else, or if it works as the developer intended but not as the customer wants, it will become evident in short order. When this assumption proves faulty, it’s a double hit on productivity: the bugs are harder to find because they were created longer ago, and the developer loses momentum on their current development to go back and fix their previous work. There are several ways this can fail:
- Missing tests: Developers, in general, don’t like writing tests. Even when following the “letter of the law”, edge cases get left out.
- Insufficient negative testing: Positive testing is making sure your component outputs the right results when you give it good input. Negative tests make sure your component handles bad input. If you pass it a null pointer, does it handle it gracefully, or will fire come out of your keyboard? Does it check for “divide by zero”? SQL injection? Someone typing in “mañana”? Someone accessing your website using Safari?
- Misguided tests: If the developer didn’t understand the requirements when they wrote the code, they sure aren’t going to understand them when they write the unit tests. This is why I feel strongly about having someone else write the unit tests, or at least the integration/acceptance tests.
- Expected failures: I have seen several build systems that spew out all sorts of ominous portents of doom with every build. But the more of your output you cheerfully ignore, the more likely some legitimate error output will be missed. You are expecting your build system to lie to you, so you become desensitised to truly wrong output.
- Dropped errors: Some build systems require one tool to call another tool to do the actual work, and those tools don’t always communicate failures reliably. A common example is developing Java in Eclipse, using Ant to do the actual build, which in turn calls javac/java to do the actual compilation and execution. I’ve seen many circumstances where Ant reports “Build Successful” at the end even though some subprocess clearly emitted error messages.
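To make the negative-testing point concrete, here is a minimal sketch in Python. The `safe_divide` function and its expected failure modes are hypothetical, invented for illustration; the point is that the tests deliberately feed the component bad input (zero divisor, non-numeric junk) and assert that it fails gracefully rather than letting fire come out of your keyboard.

```python
def safe_divide(numerator, denominator):
    """Divide two numbers, treating bad input as a clear error, not a crash."""
    if denominator == 0:
        raise ValueError("denominator must be nonzero")
    return numerator / denominator

# Positive test: good input yields the right result.
assert safe_divide(10, 4) == 2.5

# Negative test: divide by zero is rejected gracefully.
try:
    safe_divide(1, 0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for divide by zero")

# Negative test: non-numeric input ("mañana") fails with a clear error.
try:
    safe_divide("mañana", 2)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for non-numeric input")
```

Notice that the negative tests outnumber the positive one; for most real components the ratio is even more lopsided, which is exactly why they get skipped.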
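The fix for dropped errors is to distrust the wrapper’s summary and check each subprocess’s exit status yourself. A rough sketch of that idea in Python (the `run_step` helper is hypothetical, not any particular build tool’s API):

```python
import subprocess
import sys

def run_step(cmd):
    """Run one build step; fail loudly on a nonzero exit code instead of
    letting a wrapper print 'Build Successful' over the wreckage."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the subprocess's own error output before aborting.
        sys.stderr.write(result.stderr)
        raise RuntimeError(
            f"step failed with exit code {result.returncode}: {' '.join(cmd)}"
        )
    return result.stdout

# A step that succeeds passes its output through.
print(run_step([sys.executable, "-c", "print('compiled ok')"]))
```

Ant’s equivalent knob is making sure tasks like `exec` are configured to fail the build on a nonzero return code rather than merely logging it.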
Agilistas don’t like long-range estimates because they don’t like lying to themselves. If your build system (or the people that interpret its output) is lying to you, it’s just as bad. Be true to your tool.