
TDD Quick tip: “What test should I choose next?”

When test-driving your code, you shift between red (write a failing test), green (write the code needed to make the test pass) & refactor (clean up). But how do you kickstart this loop? What heuristics can you use to guide selection of the next test you write? This article summarizes the most common heuristics and highlights the one I use predominantly:

Pick the test that forces you to add one (and only one) new facet.

Introduction

This week I was participating in a coderetreat with my fellow crafters when an interesting question popped up during the closing circle: “When test-driving your code, how do you decide what test to write next?“. This led to an interesting discussion and triggered some personal reflection on how I make that decision myself. In this post we’ll go over some common heuristics you might encounter that answer this question and we’ll end with the three-step process I personally use.

Want to go directly to the video? You can find it here.

These are “red bar” heuristics you can use during the “write a failing test” TDD phase.

Prerequisite: maintaining a test list

You can only select from a list of tests if you actually have a list of tests somewhere. For a long time I kept lists like these in my head, but I’ve learned to appreciate offloading them to a piece of paper or a virtual checklist (which you’ll see me do in the accompanying screencast). One less thing to worry about!

The idea of a test list comes from Kent Beck:

 What should you test? Before you begin [a programming session], write a list of all the tests you know you will have to write. […] As you make the tests run, the implementation will imply new tests. Write the new tests down on the list. Likewise with refactorings.

Kent Beck, Test Driven Development By Example

Some observations:

  • Your test list does not have to be exhaustive from the start. I usually start by listing all the interesting cases along with some edge cases and failure scenarios.
  • When conjuring up test cases I like to think about the possible inputs in terms of equivalence classes. Which test cases are truly distinct? Which cases are similar and instances of the same underlying rule? I only list test cases that belong to truly different equivalence classes.
  • When I see the need for a new test case while I’m working on something else, I don’t just drop everything I’m doing and pick it up. I just add it to the list and finish what I’m doing. If I think I might need to refactor something but am currently doing something else, these end up on the list as well. This way I won’t forget, but I won’t get distracted either.
  • When everything on the test list is checked off, we know we’re done. Neat!
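To make this concrete, here is what such a test list might look like for a hypothetical Roman numeral kata (the kata and the test names are my own illustration, not taken from the screencast). I like keeping the list as it.todo placeholders in a Vitest/Jest test file, so the checklist lives right next to the code and checking a box simply means turning a todo into a real test:

    import { describe, it } from "vitest";

    // The test list as executable placeholders: every it.todo is an unchecked box.
    // Checking a box = replacing it.todo with a real, passing test.
    describe("toRoman", () => {
      it.todo("converts 1 to I");                // kernel / starter test
      it.todo("converts 2 to II");               // repetition
      it.todo("converts 4 to IV");               // subtraction rule
      it.todo("converts 1987 to MCMLXXXVII");    // combining the rules
      it.todo("rejects 0 and negative numbers"); // thorn: failure mode
      it.todo("rejects numbers above 3999");     // thorn: upper boundary
    });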

Now that we have an actual list of tests, let’s take a look at how you would choose the next one to tackle. There are multiple flavors of this advice, but they all boil down to the same essence. Let’s dig into each heuristic separately first:

  0. Kernel tests
  1. Gold and thorns
  2. Transformation Priority Premise
  3. One step tests

#0 Kernel tests

Number 0? Why yes, because you can only use this heuristic for the first test you choose.

Kernel tests [JB Rainsberger] (also called starter tests [Beck]) are an answer to “which test should I pick first?”.

If I want to make some fresh popcorn, I’ll pour a bit of oil in a pot and throw in just one kernel. When that kernel pops I know that the oil is hot enough to start making popcorn! A kernel test has a similar purpose: an easy (often degenerate) case that takes little to no implementation effort. Why start with something like this? It gives me time to think about the API and usability of the thing I’m designing. Where in the codebase will the code end up? Will it take a string as input or a custom type? What will the return type look like? Will this functionality run synchronously or asynchronously?

I think I picked up the popcorn metaphor from JB Rainsberger, but could not find any references online.
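Sticking with the hypothetical Roman numeral kata from the test list above, a kernel test could look like the sketch below. The assertion itself is trivial; its value is that writing it forces the API decisions: where the code lives, a plain number in, a plain string out, and a synchronous call.

    import { describe, expect, it } from "vitest";

    // A minimal implementation, just enough to make the kernel pop.
    const toRoman = (_n: number): string => "I";

    // Kernel test: a near-trivial case whose only job is to "heat up the oil",
    // i.e. to settle the shape of the API before the interesting tests arrive.
    describe("toRoman", () => {
      it("converts 1 to I", () => {
        // Decisions locked in here: a plain number in, a plain string out,
        // and a synchronous call. No custom types, no promises.
        expect(toRoman(1)).toBe("I");
      });
    });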

Once we have the API thought out using our kernel test, we can move on to the more interesting iterative test selection heuristics.

#1 Gold and thorns

Another heuristic I commonly use is one I borrowed from Thorns around the gold by Robert C. Martin. This heuristic splits your test list into two categories: gold and thorns. It states that there are two different aspects of behavior: the happy path scenarios (the gold in the metaphor) and the failure modes (the thorns). The author proposes to get rid of all the thorns before reaching for the gold.

I personally get rid of some thorns to get a better feel for the resulting API, but focus on the gold after that. Cleaning up all the remaining thorns is something I usually do at the end. Do I want to throw ValidationExceptions when things go wrong? Or do I want to bundle all validation errors in a single Result<TOk, TError[]> instead? Do I need to call out to an async database to get some necessary data first? Getting rid of a few thorns gives me a clearer picture on these fronts, and it’s easier to iterate on your best guess of the final API from the start than after writing tens of tests that couple to it.
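As a sketch of what clearing one early thorn can settle, here is the Roman numeral example again, this time with a hand-rolled Result type (my own illustration; you might equally well throw an exception or reach for a library type). This single failing test commits the API to returning a Result, which every later gold test will have to unwrap as well:

    import { describe, expect, it } from "vitest";

    // A hand-rolled Result type, for illustration only.
    type Result<TOk, TError> =
      | { ok: true; value: TOk }
      | { ok: false; errors: TError };

    // Just enough implementation to clear this one thorn;
    // the happy path is still to be test-driven.
    const toRoman = (n: number): Result<string, string[]> =>
      n < 1 || n > 3999
        ? { ok: false, errors: ["number must be between 1 and 3999"] }
        : { ok: true, value: "I" };

    describe("toRoman (thorns first)", () => {
      it("rejects 0, since Roman numerals have no zero", () => {
        expect(toRoman(0)).toEqual({
          ok: false,
          errors: ["number must be between 1 and 3999"],
        });
      });
    });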

#2 Transformation Priority Premise

The transformation priority premise (TPP) states that “every code change is either a refactoring or one of a finite number of transformations” and moreover, that “there exists a priority between these transformations”. The premise is that by using this priority as a heuristic to pick and implement the next test or to pick between alternative implementation options, you won’t “take a wrong turn” and end up in a situation where the next test requires you to completely rewrite the algorithm from scratch because you painted yourself into a corner.
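To give a flavor of what a transformation is, here is the Roman numeral example once more (my own illustration, not Martin’s canonical list of transformations). Each new test nudges the implementation into a slightly more general shape, and TPP suggests preferring the test that can be made to pass with the simplest, highest-priority transformation available:

    // Test: toRoman(1) === "I"
    // Transformation: no code -> constant
    // const toRoman = (n: number): string => "I";

    // Test: toRoman(2) === "II"
    // Transformation: constant -> conditional (a more specific, lower-priority step)
    // const toRoman = (n: number): string => (n === 2 ? "II" : "I");

    // Test: toRoman(3) === "III"
    // Transformation: conditional -> iteration, generalizing the repetition rule
    const toRoman = (n: number): string => "I".repeat(n);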

TPP is referenced quite a lot in modern TDD/XP literature and I encourage you to read up on it and practice using it. I do use it myself when reasoning about code but I find it tricky to use as a test selection heuristic. It’s just too much mental gymnastics to go over the test list each time and see which one could be implemented by using the “highest priority transformation”. There’s a great write-up on how you can use TPP this way on the 8th light blog.

What TPP does provide is a prioritized and explicit list that can be used if you’re new to this gig. A great addition to your toolbox if you’re just starting your TDD journey and could use some shu-level guidance!

#3 One step tests

This is another gem from [Beck], the “one step tests” heuristic:

Which test should you pick next from the list? Pick a test that will teach you something and that you are confident you can implement. […] When I look at a Test List, I think, “That’s obvious, that’s obvious, I have no idea, obvious, what was I thinking about with that one, ah, this one I can do.” That last test is the test I implement next. It didn’t strike me as obvious, but I’m also confident I can make it work.

Kent Beck, Test Driven Development By Example

Again there are some subtle points to discover when working with this heuristic. It has both a “lower bound” and an “upper bound” on the step size:

  • Only write a test that will influence the design. If a test will not force any change to the code under test, it might not be worth writing.
  • From all tests that will force a code change, pick the one that introduces the smallest new “facet” (see the sketch after this list).
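Applied to the hypothetical Roman numeral kata from earlier, the two bounds might play out as follows (again my own illustration). Assume the implementation already handles repetition for I and X:

    import { describe, expect, it } from "vitest";

    // Current state: repetition works for I and X, subtraction does not yet.
    const toRoman = (n: number): string =>
      n >= 10 ? "X".repeat(Math.floor(n / 10)) + toRoman(n % 10) : "I".repeat(n);

    describe("picking a one step test", () => {
      // Below the lower bound: this already passes, so it teaches us nothing new.
      it("converts 23 to XXIII", () => expect(toRoman(23)).toBe("XXIII"));

      // Above the upper bound: forces subtraction, the remaining symbols and the
      // upper boundary all at once. Too big a leap for a single red/green cycle.
      it.skip("converts 3999 to MMMCMXCIX", () => expect(toRoman(3999)).toBe("MMMCMXCIX"));

      // One step: fails today and introduces exactly one new facet, the subtraction rule.
      it.skip("converts 4 to IV", () => expect(toRoman(4)).toBe("IV"));
    });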

Lots of other people use this heuristic. In my research I encountered these two great explanations:

A worked example

All that theory might sound fine and dandy, but what does it mean in practice? You can watch a screencast of me using these heuristics on an example right here.

Further reading

Here are some other heuristics for picking the next test I’ve come across over the years:

  • Zero-One-Many by Bill Wake
  • ZOMBIES (zero, one, many, boundary behaviors, interfaces/interactions, exceptions, simple scenarios) by James Grenning

Conclusion

Thinking about these heuristics has led me to arrive at the following conclusion on how to “pick the next test” when test-driving code.

Both TPP and one step tests express the same idea, be it framed as “select the test that requires the highest-priority transformation” or “select the test that forces you to take one and only one step ahead”. Gold and thorns splits the test list into two distinct categories you can focus on in turn. TPP provides an explicit but very cognitively intensive approach, while “one step tests” is more of a gut-feeling approach.

So in closing, the full heuristic I use when selecting the next test:

Start with a kernel. Get rid of some thorns but focus on the gold.

Pick the test that forces you to add one (and only one) new facet.

Now it’s your turn! How do you pick “the next test”? Do you use a different heuristic? Let me know in the comments below or on @jovaneyck.

Happy coding!
