How the most interesting IT debate is revealing our values as software developers

TDD is dead. Is TDD dead? A question that seems to divide our profession.
On the one side: developers who write their tests first and let them drive their code. They prefer the mockist approach to testing. Code should be tested in isolation, under lab-like conditions. Clean Code is their book. Practices and principles guide their thinking. An application should not be bound to frameworks and should have a hexagonal architecture. The GOOS book showed how it can be done.
On the other side: developers who focus on readability and clarity. They use their experience and gut feeling to drive their decisions. Because of past experiences they test their code the classical way. They are pragmatic. Practices and principles are used when they improve the understanding of the code. Code is there to be refactored. Just like a gardener trims bushes and a writer edits his prose, they work with their code.

What are your values?

What does this debate have to do with you?

Ask yourself:
What if you could write a proof of your program for just 5 or 10 times the cost of the implementation? It would prove that your code works correctly under all possible circumstances. Would you do it?

Or would you rather improve the existing architecture, design or clarity of your code, so that you remove technical debt and are better positioned for future changes?

Or would you write new features and improve your application for the people using it?

What are your values?

History

At the beginning of my developer life in the late 80s/early 90s, I remember the industry being focused on one goal: code reuse. Modules, components, libraries and frameworks were introduced. Then patterns came. All of that worked towards one side of the equation: low coupling.
High cohesion was neglected in pursuit of a noble goal. But what happened? The imbalance produced layer after layer, indirection after indirection, over-separation and over-abstraction. You had to deal with dependency injection (containers), configuration, class hierarchies, interfaces, event buses, callbacks, … just to understand a hello world.
Today we have more computing power and are solving more and more complex problems. We think in higher abstractions. Many more people benefit from our skills and our work.
On the user-facing side, design focuses on simplicity and usability. Even complex relationships can be made understandable and manageable. A wise man once said: design is about intent.
The same goes for code: code is about intent. Intent should be the measure of the quality of our code. Not testability, not coupling: intent. If the code (and this includes its comments) reveals its intent, you can fix bugs in it, improve it, change it, refactor it. Tests are your safety net, ensuring you do not break that intent.
You might say: but this is what TDD is all about! But I think we got it backwards. The code and its intention-revealing nature is more important than the tests. The tests support. But tests should never replace or even harm the clarity of the code.
The quality of the code is important. But most important are the people using your application.
My goal is to delight the people who use my software, and my way there is writing intention-revealing software. I am not there yet, and I am learning every day, but I take step after step.

What are your values?

Should I test this?

Writing software is hard; writing correct software is even harder. So everything that helps you write better or more correct software should be used to your advantage. But does every test help? And does every piece of code need to be tested automatically? How do I decide what to test, and how?
Given a typical web CRUD application, take a look at the following piece of functionality:
We have a model class Element which has a Type type:

class Element {
  ...
  Type type
  ...
}

The view contains a select tag which lets you choose a type:

...
<g:select name="filterByTypeId" from="${types}" value="${filterByType?.id}">
...

And finally, in the controller, we filter the list of shown elements by the selected type:

...
Type filterByType = Type.get(params['filterByTypeId'])
return [elements: filterByType ? Element.findAllByType(filterByType) : Element.list(), types: Type.list(), filterByType: filterByType]
...

Now ask yourself: would you write an automated test for this? A functional/acceptance test, or some unit/integration tests? Would you really test this automatically, or just by hand? And how do you decide?

Dogma

According to TDD you should test everything; no code exists without a test (written first). If you really live by TDD, the choice is already made: you test this code. But is this pragmatic? Efficient? Productive? And what about the aspects you forgot to test? The order of the types, for example. Perhaps the user wanted them listed lexicographically, by priority, or numbered. And what if this part changes and your test is so tightly coupled that you need to change it, too? There are some TDD enthusiasts out there, but if you are more pragmatic, there are other criteria to help you decide.

Cost

Look at the code in question and ask: how much effort is it to create the test(s)? And to run them? If the feedback cycle is too long, you lose track of it. I need a test for the controller; this is the easy part. Then I need to test that the view passes the correct parameter and accepts and shows the correct list.
I could also write an acceptance test, but this seems like a big gun for a small bird. In our case, how easy or costly it is to write tests for our filter depends heavily on the framework. What do you have to mock or simulate? You also have to take the hidden costs into account: how much does it cost to maintain this test? When the requirement changes? When there are more filter criteria? Or when an element can have more than one type?

Value

Another question you can ask: what is the value for the customer? How much does he need this to work? What is the cost of an error? What happens when the code in question does not work? The value for the customer is not determined by the functionality alone. Software can be seen as giving your users capabilities, enabling them. A capability is implemented by two things: the implementation (your functionality) and the affordance (the UI). The value is determined by both parts, so you can hardly judge the value of the functionality on its own. What if you need to change the UI (in our case, the select tag) to increase the value? How does this affect your tests? Does the user reach his goal if the functionality part is broken? What if the code is correct but slow? Or the UI isn't visible on your user's screen?

Personal / Team profile

You could decide what to test, and whether to test at all, by looking at your past: your personal or team mistakes, typical problems and bugs, habits you have. You could test more when the (business or technical) domain or the underlying technology is new to you. You could write only a few tests when you know the area you work in, but more when it is unknown and you need to explore it. You could write more tests if you work in a dynamic language and fewer in a static one. Or vice versa.

Area / Type of code

You could write a test for every bug you find, to prevent regressions. You could write tests only for algorithms or data structures, for certain core parts, for interaction with other systems, or only for (public) interfaces. The area or type of code can help you decide whether to test or not.

Visibility

You could also look at how easy it is to spot a bug when you invoke the code manually. Do you or your user see the bug immediately? Or is it hidden? In our case you should easily see when the list is not filtered, or filtered by the wrong criteria. But what if it is just a rounding error, or an error where cause and effect are separated by time or location?

Conclusion

Do you have or use additional criteria? How do you decide? I have to admit that I didn't and wouldn't test the above code, because I can easily spot problems in it and try out by hand whether it works (visibility). If the code grows more complex and I cannot easily see a problem (again: visibility), or the value (or cost of an error) for the customer is high, I would write one.

Testing C++ code with OpenCV dependencies

The story:

Pushing for more quality and stability, we integrate googletest into our existing projects or extend their test coverage. One such case was the creation of tests to document and verify a bugfix. They called a single function and checked the fields of the returned cv::Scalar.

TEST(ScalarTest, SingleValue) {
  ...
  cv::Scalar actual = target.compute();
  ASSERT_DOUBLE_EQ(90, actual[0]);
  ASSERT_DOUBLE_EQ(0, actual[1]);
  ASSERT_DOUBLE_EQ(0, actual[2]);
  ASSERT_DOUBLE_EQ(0, actual[3]);
}

Because this was the first test using OpenCV, the CMakeLists.txt also had to be modified:

target_link_libraries(
  ...
  ${OpenCV_LIBS}
  ...
)

Unfortunately, the test didn't run through: it ended with either a core dump or a segmentation fault. Analysis of the called function showed that it used no pointers and that all variables were referenced while still in scope. What did gdb say about the segmentation fault?

(gdb) bt
#0  0x00007ffff426bd25 in raise () from /lib64/libc.so.6
#1  0x00007ffff426d1a8 in abort () from /lib64/libc.so.6
#2  0x00007ffff42a9fbb in __libc_message () from /lib64/libc.so.6
#3  0x00007ffff42afb56 in malloc_printerr () from /lib64/libc.so.6
#4  0x00007ffff54d5135 in void std::_Destroy_aux<false>::__destroy<testing::internal::String*>(testing::internal::String*, testing::internal::String*) () from /usr/lib64/libopencv_ts.so.2.4
#5  0x00007ffff54d5168 in std::vector<testing::internal::String, std::allocator<testing::internal::String> >::~vector() ()
from /usr/lib64/libopencv_ts.so.2.4
#6  0x00007ffff426ec4f in __cxa_finalize () from /lib64/libc.so.6
#7  0x00007ffff54a6a33 in ?? () from /usr/lib64/libopencv_ts.so.2.4
#8  0x00007fffffffe110 in ?? ()
#9  0x00007ffff7de9ddf in _dl_fini () from /lib64/ld-linux-x86-64.so.2
Backtrace stopped: frame did not save the PC

Apparently my test had problems at the very end, during object destruction. So I eliminated one statement after another until the problem vanished or no statements were left. The result:

#include "gtest/gtest.h"

TEST(DemoTest, FailsBadly) {
  ASSERT_EQ(1, 0);
}

And it still crashed! So the code under test wasn't the culprit. Another change introduced previously was the addition of the OpenCV libs to the linker call. An incompatibility between OpenCV and googletest? A quick search turned up posts from users experiencing the same problems, eventually leading to entries in OpenCV's bug tracker: http://code.opencv.org/issues/1608 and http://code.opencv.org/issues/3225. The opencv_ts library, which appeared in the stack trace, exports symbols that conflict with the googletest version we link against. Since we didn't need the opencv_ts library, the solution was to clean up our linker dependencies:

Before:

find_package(OpenCV)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab ../gtest-1.7.0/libgtest.a -lpthread -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab

After:


find_package(OpenCV REQUIRED core highgui)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_highgui -lopencv_core ../gtest-1.7.0/libgtest.a -lpthread

Lessons learned:

Know what you really want to depend on and name it explicitly. Ignorance of, or blind trust in, a build tool's black magic is a recipe for blog posts.

Integrating googletest in CMake-based projects and Jenkins

In my – admittedly limited – perception, unit testing in C++ projects does not seem as widespread as in Java or in dynamic languages like Ruby or Python. Therefore I would like to show how easy it can be to integrate unit testing into a CMake-based project and a continuous integration (CI) server. I will briefly cover why we picked googletest, how to add unit tests to the build process, and how to publish the results.

Why we chose googletest

There is a plethora of unit testing frameworks for C++, making it difficult to choose the right one for your needs. Here are our reasons for googletest:

  • Easy publishing of results thanks to JUnit-compatible XML output. Many other frameworks need either a Jenkins plugin or an XSLT script to make that work.
  • Moderate compiler requirements and cross-platform support. This rules out xUnit++ and, to a certain degree, boost.test, because they need quite modern compilers.
  • Easy to use and integrate. Since our projects use CMake as a build system, googletest really shines here. CppUnit fails because of its verbose syntax and manual test registration.
  • No external dependencies. It is recommended to put googletest into your source tree and build it together with your project. This kind of self-containment is really what we love. With many of the other frameworks it is not as easy, with CxxTest even requiring a Perl interpreter.

Integrating googletest into a CMake project

  1. Putting googletest into your source tree
  2. Adding googletest to your toplevel CMakeLists.txt to build it as part of your project:
    add_subdirectory(gtest-1.7.0)
  3. Adding the directory with your (future) tests to your toplevel CMakeLists.txt:
    add_subdirectory(test)
  4. Creating a CMakeLists.txt for the test executables:
    include_directories(${gtest_SOURCE_DIR}/include)
    set(test_sources
    # files containing the actual tests
    )
    add_executable(sample_tests ${test_sources})
    target_link_libraries(sample_tests gtest_main)
    
  5. Implementing the actual tests, for example:
    #include "gtest/gtest.h"
    
    TEST(SampleTest, AssertionTrue) {
        ASSERT_EQ(1, 1);
    }
    

Integrating test execution and result publishing in Jenkins

  1. Additional build step with shell execution containing something like:
    cd build_dir && test/sample_tests --gtest_output="xml:testresults.xml"
  2. Activate “Publish JUnit test results” post-build action.

Conclusion

The setup of a unit testing environment for a C++ project is easier than many developers think. Using CMake, googletest and Jenkins makes it very similar to unit testing in Java projects.

From ugly to pretty – Three steps is all it takes

I have been giving lectures on software engineering for over a decade now. One major topic is testing, specifically unit tests. Other cornerstones are refactoring and code readability. So whenever I have the chance to challenge my students in cross-topic aspects of software development, it's almost always a source of insight for them and especially for me. But one golden moment holds a special place in my memory. This is the (rather elaborate, sorry) story of that moment.

During a lecture about unit tests with JUnit, my students had the task of developing tests for a bank account class. That's about as boring as testing can be – the account was related to a customer and had a current balance. The customer can withdraw money, but only some customers may overdraw their account. To spice things up a bit, we also added the mock object framework EasyMock to the mix. While I would recommend other mock frameworks for production usage, the learning curve of EasyMock is just about right for a first-time exposure in a "sheep dip" fashion.

Our first test dealt with withdrawing money from an empty account that can be overdrawn:

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(true);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(new Euro(30), cash);
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);
}

The second test made sure that this withdrawal behaviour only works for customers with sufficient credit standing. We decided to pay out nothing (0 Euro) if the customer tries to withdraw more money than his account currently holds:

@Test
public void cannotTakeUpCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(false);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(Euro.ZERO, cash);
  assertEquals(Euro.ZERO, account.balance());
  EasyMock.verify(customer);
}

As you can tell, a lot of copy and paste went into the creation of this test. Just look at the name of the local variable "required" – it's misleading now. Right up to this point, my main topic had been the usage of the mock framework, not perfect code. So I explained the five stages of normalized mock-based unit tests (initialize, train mocks, execute tested code, assert results, verify mocks) and then changed the topic by expressing my displeasure about the duplication and the inferior readability of the code (it even tries to trick you with the "required" variable!). Now it was up to my students to improve our situation (this trick only works a few times per course before they preventively become even pickier than me). A student accepted the challenge and gave advice:

First step: Extract Method refactoring

The obvious first step was to extract the duplication into its own method and adjust the calls by their parameters. This is an easy refactoring that will almost always improve the situation. Let's see where it got us. Here is the extracted method:

protected void performWithdrawalTestWith(
    boolean customerCanOverdraw,
    Euro amountOfWithdrawal,
    Euro expectedCash,
    Euro expectedBalance) {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(customerCanOverdraw);
  EasyMock.replay(customer);
  Account account = new Account(customer);

  Euro cash = account.withdraw(amountOfWithdrawal);

  assertEquals(expectedCash, cash);
  assertEquals(expectedBalance, account.balance());
  EasyMock.verify(customer);
}

And the two tests, now really concise:

@Test
public void canWithdrawOnCredit() {
  performWithdrawalTestWith(
      true,
      new Euro(30),
      new Euro(30),
      new Euro(-30));
}

 

@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(
      false,
      new Euro(30),
      Euro.ZERO,
      Euro.ZERO);
}

Well, that did resolve the duplication indeed. But the test methods now lacked any readability. They looked as if somebody had extracted all the semantics out of the code. We were unhappy, but decided to treat the current code as an intermediate step towards the second refactoring:

Second step: Introduce Explaining Variable refactoring

In the second step, the task was to re-introduce the semantics into the test methods. All parameters were nameless, so that was our angle of attack. By introducing local variables, we gave the parameters meaning again:

@Test
public void canWithdrawOnCredit() {
  boolean canOverdraw = true;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = new Euro(30);
  Euro resultingBalance = new Euro(-30);

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean canOverdraw = false;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = Euro.ZERO;
  Euro resultingBalance = Euro.ZERO;

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

That brought the meaning back to the test methods, but didn't improve readability. The code wasn't intentionally cryptic any more, but still far from intuitively understandable – and that's what really readable code should be. If even novices can read your code fluently and grasp the main concepts on the first pass, you've created expert code. I challenged the student to transform the code further, without any idea how to carry on myself. My student hesitated, but came up with the decisive refactoring within seconds:

Third step: Rename Variable refactoring

The third step doesn't change the structure of the code, but its approachability. Instead of naming the local variables after their usage in the extracted method, we name them after their purpose in the test method. A first-time reader won't know about the extracted method (and preferably shouldn't need to), so it's not in the reader's best interest to foreshadow its details. Instead, we concentrate on telling the reader a coherent story:

@Test
public void canWithdrawOnCredit() {
  boolean aCustomerThatCanOverdraw = true;
  Euro heWithdraws30Euro = new Euro(30);
  Euro receivesTheFullAmount = new Euro(30);
  Euro andIsNow30EuroInTheRed = new Euro(-30);

  performWithdrawalTestWith(
      aCustomerThatCanOverdraw,
      heWithdraws30Euro,
      receivesTheFullAmount,
      andIsNow30EuroInTheRed);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean aCustomerThatCannotOverdraw = false;
  Euro heTriesToWithdraw30Euro = new Euro(30);
  Euro butReceivesNothing = Euro.ZERO;
  Euro andStillHasABalanceOfZero = Euro.ZERO;

  performWithdrawalTestWith(
      aCustomerThatCannotOverdraw,
      heTriesToWithdraw30Euro,
      butReceivesNothing,
      andStillHasABalanceOfZero);
}

If the reader is able to ignore some crude verbalization and special characters, he can read the test out loud and instantly grasp its meaning. The first lines of every test method are a bit confusing, but necessary given Java’s lack of named parameters.

The result might remind you a lot of Behavior Driven Development notation and that’s probably not by chance. In a few minutes during that programming exercise, my students taught themselves to think in scenarios or stories when approaching unit tests. I couldn’t have taught it any better – instead, I got enlightened by this exercise, too.

How to use partial mocks in real life

Partial mocks are an advanced feature of modern mocking libraries like mockito. A partial mock retains the original code of a class, stubbing only the methods you specify. If you build your system largely from scratch, you will most likely not need them. But sometimes there is no easy way around them, for example when working with dependencies that were not designed for testability. Let us look at an example:

/**
 * Evil dependency we cannot change
 */
public final class CarvedInStone {

    public CarvedInStone() {
        // may do unwanted things
    }

    public int thisHasSideEffects(int i) {
        return 31337;
    }

    // many more methods
}

public class ClassUnderTest {

    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = new CarvedInStone().thisHasSideEffects(42);
        // more interesting code
        return intermediateResult * 1337;
    }
}

We want to test the computeSomethingInteresting() method of our ClassUnderTest. Unfortunately we cannot replace CarvedInStone, because it is final and does not implement an interface containing the methods of interest. But with a small refactoring and a partial mock we can still test almost the complete class:

public class ClassUnderTest {
    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = intermediateResultsFromCarvedInStone(42);
        // more interesting code
        return intermediateResult * 1337;
    }

    protected int intermediateResultsFromCarvedInStone(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

We refactored the dependency into a protected method that we can stub out with a partial mock, so the class can be tested like this:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import org.junit.Test;

public class ClassUnderTestTest {
    @Test
    public void interestingComputation() throws Exception {
        ClassUnderTest cut = spy(new ClassUnderTest());
        doReturn(1234).when(cut).intermediateResultsFromCarvedInStone(42);
        assertEquals(1649858, cut.computeSomethingInteresting());
    }
}

Caveat: Do not use the usual when-thenReturn-style:

when(cut.intermediateResultsFromCarvedInStone(42)).thenReturn(1234);

with partial mocks because the real method will get called once!

So the only untested code left is a simple delegation. Measures like this refactoring and partial mocking generally serve as a first step, not the destination.

Where to go from here

To go the whole way, we would encapsulate all unmockable dependencies in wrapper objects providing the functionality we need, and inject them into our ClassUnderTest. Then we can replace our wrapper(s) easily using regular mocking.
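
A hypothetical sketch of this direction (the interface and wrapper names are mine, not from the original code):

// Hypothetical seam: a small interface plus a thin wrapper around the
// unchangeable dependency. The interface can be mocked with plain mockito.
public interface SideEffectComputation {
    int compute(int input);
}

public class CarvedInStoneWrapper implements SideEffectComputation {
    @Override
    public int compute(final int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

public class ClassUnderTest {
    private final SideEffectComputation computation;

    public ClassUnderTest(final SideEffectComputation computation) {
        this.computation = computation;
    }

    public int computeSomethingInteresting() {
        // some interesting stuff
        final int intermediateResult = computation.compute(42);
        // more interesting code
        return intermediateResult * 1337;
    }
}

In the test, a regular mock(SideEffectComputation.class) then replaces the wrapper; no partial mocking is needed any more.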

Doing all this can mean a lot of work and/or risk depending on the situation, so the depicted process serves as a low-risk intermediate step for getting as much important code under test as possible.

Note that the wrappers themselves remain largely untestable, just like our protected delegating method.

Object Calisthenics: Change the way you think

Some time ago I spoke with my colleague about sharpening our skills and training the brain to come up with new solutions. He proposed a two-hour session at the weekend, implementing a small game using object calisthenics.

Rules

The rules are described in The ThoughtWorks Anthology book. Here is the list for quick reference.

  1. Use only one level of indentation per method.
  2. Don’t use the else keyword.
  3. Wrap all primitives and strings.
  4. Use only one dot per line.
  5. Don’t abbreviate.
  6. Keep all entities small.
  7. Don't use any classes with more than two instance variables.
  8. Use first-class collections.
  9. Don’t use any getters/setters/properties.

Most of the rules seemed simple enough. Rules 2 and 5 are standard in the Softwareschneiderei; 1, 4, 6 and 8 are stricter versions of common sense; rule 3 is tedious object wrapping. The rules I was anxious about were 7 and 9. To increase the learning effect, I added an extra rule to the list, one that is critical in real-life programming:

  10.  Write tests for your code.

It doesn't matter whether you write the tests first, after the code, or even test-driven. Only then is the code "value added".

Experiences

The game was minesweeper. It contains a nice mix of algorithms, data structures and UI. I concentrated my efforts on the algorithmic part. My first step was to analyse and create the needed data structures.

  • The smallest unit is the cell.
  • A cell can be either hidden or revealed, have a mine or be empty.
  • The game field contains such cells in rows and columns.
  • The position of a cell in a field is defined by its coordinate that contains the x and y position.

To associate anything with coordinates, the coordinates had to be comparable to each other. Rule 9 forbids the exposure of internal state, so the Coordinate class got its own equals() and hashCode(). Only the creator of a coordinate knew the number of dimensions and the values of the positions. Even the tests had no access to the inner state and tested only those two methods.
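
A minimal sketch of what such a class might look like, assuming two dimensions (hypothetical; a strict reading of rule 3 would also wrap the int positions in their own types):

// Hypothetical sketch: a Coordinate exposing no internal state (rule 9),
// offering only equality and hashing.
public final class Coordinate {
    private final int x;
    private final int y;

    public Coordinate(final int x, final int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(final Object other) {
        if (!(other instanceof Coordinate)) {
            return false;
        }
        final Coordinate that = (Coordinate) other;
        return this.x == that.x && this.y == that.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}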

Since the revealed-flag concept and the mine-flag concept had similar properties, I decided not to track cells but to track their flags. Through this architectural decision, I had a field with two flag containers: one for revealed cells and one for cells with mines. An additional benefit was that it was enough to put only the coordinate into the container to mark a cell as a mine.
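
A hypothetical sketch of such a flag container as a first-class collection (rule 8); the name is mine, not from the original code:

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a first-class collection (rule 8) that marks
// coordinates without exposing the underlying set (rule 9).
public class FlagContainer {
    private final Set<Coordinate> markedCoordinates = new HashSet<>();

    public void mark(final Coordinate coordinate) {
        markedCoordinates.add(coordinate);
    }

    public boolean isMarkedAt(final Coordinate coordinate) {
        return markedCoordinates.contains(coordinate);
    }
}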

The next step was to link the parts together and add some behaviour: setting a mine, revealing a cell, and obtaining the number of mines. Setting a mine and marking a cell as revealed are simple tasks with the containers. Testing that the revealed cell contained a mine was trickier. To achieve that, the reveal method got an additional parameter: a closure with a hasMine parameter.

public void reveal(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCells.mark(coordinate);
    visit(coordinate, revealedCellsVisitor);
}

private void visit(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCellsVisitor.visit(coordinate, hasMineAt(coordinate));
}

@Test
public void containsMines() {
    final CellContainer target = new CellContainer();
    target.placeMineAt(someCoordinate());

    final List<Coordinate> mineCells = new ArrayList<Coordinate>();
    target.reveal(someCoordinate(), (coordinate, hasMine) -> {
        if (hasMine.equals(new HasMine(true))) {
            mineCells.add(coordinate);
        }
    });

    assertThat(mineCells, hasSize(1));
    assertThat(mineCells, contains(someCoordinate()));
}

The next game rule consumed the rest of the session: calculating the number of mines in the neighbourhood. The main obstacle was computing the coordinates of the neighbours: an offset has to be added to a position inside a coordinate without exposing its internal structure. In the end I resorted to using more closures.
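
One hypothetical way to do this (my sketch, not the original code): the Coordinate enumerates its own neighbours and hands each one to a visitor closure, so the positions never leave the class. Split into tiny methods, it even keeps rule 1 (one level of indentation) and rule 2 (no else):

// Hypothetical visitor interface for neighbouring coordinates.
public interface CoordinateVisitor {
    void visit(Coordinate neighbour);
}

// Hypothetical methods inside Coordinate: neighbours are computed
// internally and passed out through the visitor, keeping x and y private.
public void visitNeighbours(final CoordinateVisitor visitor) {
    visitRow(visitor, -1);
    visitRow(visitor, 0);
    visitRow(visitor, 1);
}

private void visitRow(final CoordinateVisitor visitor, final int offsetY) {
    visitUnlessSelf(visitor, -1, offsetY);
    visitUnlessSelf(visitor, 0, offsetY);
    visitUnlessSelf(visitor, 1, offsetY);
}

private void visitUnlessSelf(final CoordinateVisitor visitor, final int offsetX, final int offsetY) {
    if (offsetX != 0 || offsetY != 0) {
        visitor.visit(new Coordinate(x + offsetX, y + offsetY));
    }
}

Counting the mines in the neighbourhood then amounts to visiting all neighbours and asking the mine container about each of them.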

Conclusion

To achieve my goal I had to reverse the order in which I normally develop business logic. Rule 9 seems to favour a top-down approach: the interfaces of the domain objects are almost completely dominated by the way their containers use them.

Most of the time in this two-hour session was spent staring at the screen and thinking about how to write readable code and readable tests without exposing internal details of the objects. Time well spent.