Agile Requires Accurate Tests

Some think Agile is all seat-of-your-pants.  That’s not true at all.  Agile requires knowing where you are right now and what you need to do in the near future; it just doesn’t put a lot of faith in guesses about where you’ll be 8 months from now.

The various forms of Agile all contain a set of practices to follow and others to stay away from, each with their strengths and weaknesses.  Successful implementation of Agile requires selecting practices that complement and support each other, covering all of your needs without duplication (which is where a good deal of the efficiency comes from).  It’s bad enough if you pick a set of practices that don’t support each other, but it’s much worse to choose the right practices and then implement them poorly.  That’s because it creates a false sense of security.

One practice that is particularly susceptible to this is TDD/unit testing/acceptance tests.  Agile developers can code faster and with more confidence because they know that if their code doesn’t work, or if it works but breaks something else, or if it works as the developer intended but not as the customer wants, it will become evident in short order.  When this assumption proves faulty, it’s a double hit on productivity: the bugs are harder to find because they were created longer ago, and the developer loses momentum on their current development in order to go back and fix their previous work.  There are several ways this can fail:

  1. Missing tests: Developers, in general, don’t like writing tests.  Even when they follow the “letter of the law”, edge cases get left out.
  2. Insufficient negative testing: Positive testing makes sure your component outputs the right results when you give it good input; negative testing makes sure it handles bad input.  If you pass it a null pointer, does it handle it gracefully, or will fire come out of your keyboard?  Does it check for “divide by zero”?  SQL injection?  Someone typing in “mañana”?  Someone accessing your website using Safari?
  3. Misguided tests: If the developer didn’t understand the requirements when they wrote the code, they sure aren’t going to understand them when they write the unit tests.  This is why I believe strongly in having someone else write the unit tests, or at least the integration/acceptance tests.
  4. Expected failures: I have seen several build systems that spew out all sorts of ominous portents of doom with every build.  But the more of your output you cheerfully ignore, the more likely some legitimate error output will be missed.  You are expecting your build system to lie to you, so you become desensitised to truly wrong output.
  5. Dropped errors: Some build systems require one tool to call another to do the actual work, and those components don’t always communicate with each other reliably.  A common example is developing Java in Eclipse, using Ant to do the build, which in turn calls javac/java for the actual compilation and execution.  I’ve seen many circumstances where Ant reports “Build Successful” at the end even though some subprocess clearly emitted error messages.
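To make the negative-testing point concrete, here’s a minimal sketch in Java.  The class and method names are hypothetical, invented for illustration: a little parser exercised with one positive case and several of the negative cases listed above (null pointer, garbage text, out-of-range values).

```java
// A sketch of positive vs. negative testing.  All names here are
// hypothetical, invented purely for illustration.
public class NegativeTestSketch {

    // Parses a percentage like "42", returning a value in [0, 100].
    // Defensive against the bad inputs a negative test should cover.
    static int parsePercentage(String s) {
        if (s == null) {                          // null pointer: fail loudly, no keyboard fires
            throw new IllegalArgumentException("input is null");
        }
        final int value;
        try {
            value = Integer.parseInt(s.trim());   // "mañana" lands here, not in a crash
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("not a number: " + s);
        }
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("out of range: " + value);
        }
        return value;
    }

    // A negative test passes when bad input is rejected gracefully.
    static boolean rejects(String s) {
        try {
            parsePercentage(s);
            return false;                         // bad input was silently accepted
        } catch (IllegalArgumentException expected) {
            return true;                          // handled gracefully
        }
    }

    public static void main(String[] args) {
        // Positive test: good input, right answer.
        System.out.println("positive: " + (parsePercentage("42") == 42));
        // Negative tests: bad input, graceful rejection.
        System.out.println("null:     " + rejects(null));
        System.out.println("text:     " + rejects("mañana"));
        System.out.println("range:    " + rejects("150"));
    }
}
```

The positive test alone would pass even if every `throw` above were deleted; only the negative tests notice the difference.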

Agilistas don’t like long-range estimates because they don’t like lying to themselves.  If your build system (or the people that interpret its output) is lying to you, it’s just as bad.  Be true to your tool.
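The “dropped errors” failure mode is easy to reproduce outside of Ant, too.  Here’s a hypothetical sketch (the class and method names are mine) of a build wrapper that launches a subprocess but throws away its exit code, using the standard Unix `false` command as a stand-in for a compiler that emitted errors:

```java
import java.io.IOException;

// Hypothetical illustration of the "dropped errors" failure mode:
// a build wrapper that runs a subprocess but ignores its exit code.
public class DroppedErrors {

    // The careless wrapper: waits for the subprocess, then discards
    // the exit code and always reports "Build Successful".
    static boolean carelessBuild(String... command)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).inheritIO().start();
        p.waitFor();          // waits, but the result is thrown away
        return true;          // lies to the caller
    }

    // The careful wrapper: success only if the subprocess agrees.
    static boolean carefulBuild(String... command)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).inheritIO().start();
        return p.waitFor() == 0;
    }

    public static void main(String[] args) throws Exception {
        // "false" is a standard Unix utility that always exits nonzero,
        // standing in here for a javac run that reported errors.
        System.out.println("careless: " + carelessBuild("false"));
        System.out.println("careful:  " + carefulBuild("false"));
    }
}
```

The careless wrapper reports success on a failing subprocess; the careful one propagates the failure.  Every layer in the chain (IDE, build tool, compiler) has to do the careful version, or errors get dropped somewhere in the middle.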


2 comments

  1. Since I’ve been writing fiction, I’m using something I call “Agile Storytelling,” a set of writing principles and practices that draw on my experience with Agile Software Development.

    I’ve found the same dichotomy between writers as I’ve seen in SD, but without the prejudice toward the extremes. Some writers plan everything out ahead of time, but then when they discover that they need to change something in the middle of the process, they can’t (unless they throw out their carefully laid plans). At the other end are the true seat-of-the-pantsers, putting pen to paper without knowing what’s going to come out, and then throwing away and rewriting most of their work during the revision process. Agile (both in writing and in software) lies somewhere in between, accepting that change will occur, and honing the process to deal with change in the quickest, cheapest manner possible.

    WRT testing, a couple other situations I’ve run into, off the top of my head: unmaintained tests (e.g., missing tests, insufficient testing, and expected failures due to a module that was modified without corresponding tests written/changed FIRST, especially common in teams that take a “when I get a round tuit” attitude towards testing); mismatched testing (e.g., trying to test multiple modules with a unit test—usually should be an acceptance test, which can lead to poor module design and poorly factored code).

    -TimK

    P.S. Since I can’t do automated testing (yet) in writing prose, I’ve been experimenting with checklists that I can apply at each stage of the writing process—the next best thing to automated testing… seems I recall software testers doing something similar back in the day…

    1. In real Agile, you’re running your tests all the time, so there’s little opportunity for a test to not match the code for very long. However, there is opportunity for tests to become insufficient, as more functionality is added.
