Should I test this?

March 24, 2014

Writing software is hard, writing correct software is even harder. So everything that helps you write better or more correct software should be used to your advantage. But does every test help? And does every piece of code need to be tested automatically? How do I decide what to test and how?
Given a typical web CRUD application, take a look at the following piece of functionality:
We have a model class Element which has a Type type:

class Element {
  ...
  Type type
  ...
}

The view contains a select tag which lets you choose a type:

...
<g:select name="filterByTypeId" from="${types}" optionKey="id" value="${filterByType?.id}">
...

And finally in the controller we filter the list of shown elements via the selected type:

...
Type filterByType = Type.get(params['filterByTypeId'])
return [elements: filterByType ? Element.findAllByType(filterByType) : Element.list(), types: Type.list(), filterByType: filterByType]
...

Now ask yourself: would you write an automatic test for this? A functional / acceptance test or some unit / integration tests? Would you really test this automatically or just by hand? And how do you decide?

Dogma

According to TDD you should test everything: no code exists without a test (first). If you really live by TDD the choice is already made: you test this code. But is this pragmatic? Efficient? Productive? And what about the aspects you forgot to test? The order of the types, for example. The user wanted them listed lexicographically, by priority, or numbered. And what if this part changes and your test is so tightly coupled that you need to change it, too? There are some TDD enthusiasts out there, but if you are more pragmatic, other criteria can help you decide.

Cost

Look at the code in question and ask: how much effort is it to create the test(s)? And to run them? If the feedback cycle is too long, you lose track of it. I need a test for the controller, which is the easy part. Then I need to test that the view passes the correct parameter and accepts and shows the correct list.
I could also write an acceptance test, but this seems like a big gun for a small bird. In our case it heavily depends on the framework how easy, difficult or costly it is to write tests for our filter. What do you have to mock or simulate? You also have to take the hidden costs into account: how much does it cost to maintain this test? When the requirement changes? When there are more filter criteria? Or when an element can have more than one type?

Value

Another question you can ask is: what is the value for the customer? How much does he need it to work? What is the cost of an error? What happens when the code in question does not work? The value for the customer is not determined by the functionality alone. Software can be seen as giving your users capabilities, enabling them. A capability is implemented by two things: the implementation (your functionality) and the affordance (the UI). The value is determined by both parts, so you can hardly judge the value of the functionality on its own. What if you need to change the UI (in our case the select tag) to increase the value? How does this affect your tests? Does the user reach his goal if the functionality part is broken? What if the code is correct but slow? Or the UI isn’t visible on your user’s screen?

Personal / Team profile

You could decide what and whether to test by looking at your past: your personal or team mistakes, typical problems and bugs you have produced, habits you have. You could test more when the (business or technical) domain or the underlying technology is new to you. You could write only a few tests when you know the area you work in, but more when it is unknown and you need to explore it. You might write more tests if you work in a dynamic language and fewer in a static language. Or vice versa.

Area / Type of code

You can write tests for every bug you find to prevent regressions. You could write tests only for algorithms or data structures, for certain core parts, or for interaction with other systems. Or only for (public) interfaces. The area or type of code can help you decide whether to test or not.

Visibility

You could also take a look at how easy it is to spot a bug when you invoke the code manually. Do you or your user see the bug immediately? Is it hidden? In our case you should easily see when the list is not filtered or is filtered by the wrong criteria. But what if it is just a rounding error, or an error where cause and effect are separated by time or location?

Conclusion

Do you have or use additional criteria? How do you decide? I have to admit that I didn’t and wouldn’t test the above code, because I can easily spot problems in it and can try out by hand whether it works (visibility). If the code grows more complex and I cannot easily see the problem (again: visibility), or if the value (or the cost of an error) for the customer is high, I would write one.


Testing C++ code with OpenCV dependencies

February 3, 2014

The story:

Pushing for more quality and stability, we integrate googletest into our existing projects or extend their test coverage. One such case was the creation of tests to document and verify a bugfix. They called a single function and checked the fields of the returned cv::Scalar.

TEST(ScalarTest, SingleValue) {
  ...
  cv::Scalar actual = target.compute();
  ASSERT_DOUBLE_EQ(90, actual[0]);
  ASSERT_DOUBLE_EQ(0, actual[1]);
  ASSERT_DOUBLE_EQ(0, actual[2]);
  ASSERT_DOUBLE_EQ(0, actual[3]);
}

Because this was the first test using OpenCV, the CMakeLists.txt also had to be modified:

target_link_libraries(
  ...
  ${OpenCV_LIBS}
  ...
)

Unfortunately, the test didn’t run through: it ended either with a core dump or a segmentation fault. Analysis of the called function showed that it used no pointers and referenced all variables while they were still in scope. What did gdb say about the segmentation fault?

(gdb) bt
#0  0x00007ffff426bd25 in raise () from /lib64/libc.so.6
#1  0x00007ffff426d1a8 in abort () from /lib64/libc.so.6
#2  0x00007ffff42a9fbb in __libc_message () from /lib64/libc.so.6
#3  0x00007ffff42afb56 in malloc_printerr () from /lib64/libc.so.6
#4  0x00007ffff54d5135 in void std::_Destroy_aux<false>::__destroy<testing::internal::String*>(testing::internal::String*, testing::internal::String*) () from /usr/lib64/libopencv_ts.so.2.4
#5  0x00007ffff54d5168 in std::vector<testing::internal::String, std::allocator<testing::internal::String> >::~vector() ()
from /usr/lib64/libopencv_ts.so.2.4
#6  0x00007ffff426ec4f in __cxa_finalize () from /lib64/libc.so.6
#7  0x00007ffff54a6a33 in ?? () from /usr/lib64/libopencv_ts.so.2.4
#8  0x00007fffffffe110 in ?? ()
#9  0x00007ffff7de9ddf in _dl_fini () from /lib64/ld-linux-x86-64.so.2
Backtrace stopped: frame did not save the PC

Apparently the test ran into problems at its very end, during object destruction. So I started to eliminate every statement until the problem vanished or no statements were left. The result:

#include "gtest/gtest.h"

TEST(DemoTest, FailsBadly) {
  ASSERT_EQ(1, 0);
}

And it still crashed! So the code under test wasn’t the culprit. Another change introduced previously was the addition of the OpenCV libs to the linker call. An incompatibility between OpenCV and googletest? A quick search turned up posts from users experiencing the same problems, eventually leading to the entries in OpenCV’s bug tracker: http://code.opencv.org/issues/1608 and http://code.opencv.org/issues/3225. The opencv_ts library, which appeared in the stack trace, exports symbols that conflict with the googletest version we link against. Since we didn’t need the opencv_ts library, the solution was to clean up our linker dependencies:

Before:

find_package(OpenCV)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab ../gtest-1.7.0/libgtest.a -lpthread -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab

After:


find_package(OpenCV REQUIRED core highgui)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_highgui -lopencv_core ../gtest-1.7.0/libgtest.a -lpthread

Lessons learned:

Know what you really want to depend on and explicitly name it. Ignorance or trust in build tools’ black magic is a recipe for blog posts.


Integrating googletest in CMake-based projects and Jenkins

January 27, 2014

In my – admittedly limited – perception, unit testing in C++ projects does not seem to be as widespread as in Java or in dynamic languages like Ruby or Python. Therefore I would like to show how easy it can be to integrate unit testing into a CMake-based project and a continuous integration (CI) server. I will briefly cover why we picked googletest, adding unit testing to the build process and publishing the results.

Why we chose googletest

There is a plethora of unit testing frameworks for C++, making it difficult to choose the right one for your needs. Here are our reasons for googletest:

  • Easy publishing of results because of JUnit-compatible XML output. Many other frameworks need either a Jenkins plugin or an XSLT script to make that work.
  • Moderate compiler requirements and cross-platform support. This rules out xUnit++ and, to a certain degree, boost.test because they need quite modern compilers.
  • Easy to use and integrate. Since our projects use CMake as a build system, googletest really shines here. CppUnit fails because of its verbose syntax and manual test registration.
  • No external dependencies. It is recommended to put googletest into your source tree and build it together with your project. This kind of self-containment is really what we love. Many of the other frameworks are not as easy to handle, CxxTest even requiring a Perl interpreter.

Integrating googletest into CMake project

  1. Putting googletest into your source tree
  2. Adding googletest to your toplevel CMakeLists.txt to build it as part of your project:
    add_subdirectory(gtest-1.7.0)
  3. Adding the directory with your (future) tests to your toplevel CMakeLists.txt:
    add_subdirectory(test)
  4. Creating a CMakeLists.txt for the test executables:
    include_directories(${gtest_SOURCE_DIR}/include)
    set(test_sources
    # files containing the actual tests
    )
    add_executable(sample_tests ${test_sources})
    target_link_libraries(sample_tests gtest_main)
    
  5. Implementing the actual tests like so (see the examples):
    #include "gtest/gtest.h"
    
    TEST(SampleTest, AssertionTrue) {
        ASSERT_EQ(1, 1);
    }
    

Integrating test execution and result publishing in Jenkins

  1. Additional build step with shell execution containing something like:
    cd build_dir && test/sample_tests --gtest_output="xml:testresults.xml"
  2. Activate “Publish JUnit test results” post-build action.

Conclusion

The setup of a unit testing environment for a C++ project is easier than many developers think. Using CMake, googletest and Jenkins makes it very similar to unit testing in Java projects.


From ugly to pretty – Three steps is all it takes

December 30, 2013

I have been giving lectures on software engineering for over a decade now. One major topic is testing, specifically unit tests. Other cornerstones are refactoring and code readability. So whenever I have the chance to challenge my students with cross-topic aspects of software development, it’s almost always a source of insight for them and especially for me. But one golden moment holds a special place in my memory. This is the (rather elaborate, sorry) story of that moment.

During a lecture about unit tests with JUnit, my students had the task of developing tests for a bank account class. That’s about as boring as testing can be – the account was related to a customer and had a current balance. The customer can withdraw money, but only some customers can overdraw their account. To spice things up a bit, we also added the mock object framework EasyMock to the mix. While I would recommend other mock frameworks for production usage, the learning curve of EasyMock is just about right for first-time exposure in a “sheep dip” fashion.

Our first test dealt with drawing money from an empty account that can be overdrawn:

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(true);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(new Euro(30), cash);
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);
}

The second test made sure that this withdrawal behaviour only works for customers with sufficient credit standing. We decided to pay out nothing (0 Euro) if the customer tries to withdraw more money than his account currently holds:

@Test
public void cannotTakeUpCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(false);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(Euro.ZERO, cash);
  assertEquals(Euro.ZERO, account.balance());
  EasyMock.verify(customer);
}

As you can tell, a lot of copy and paste went into the creation of this test. Just look at the name of the local variable “required” – it’s misleading now. Right up to this point, my main topic had been the usage of the mock framework, not perfect code. So I explained the five stages of normalized mock-based unit tests (initialize, train mocks, execute tested code, assert results, verify mocks) and then changed the topic by expressing my displeasure about the duplication and the inferior readability of the code (it even tries to trick you with the “required” variable!). Now it was up to my students to improve our situation (this trick only works a few times per course before they preventively become even pickier than me). A student accepted the challenge and gave advice:

First step: Extract Method refactoring

The obvious first step was to extract the duplication into its own method and adjust the calls via parameters. This is an easy refactoring that will almost always improve the situation. Let’s see where it got us. Here is the extracted method:

protected void performWithdrawalTestWith(
    boolean customerCanOverdraw,
    Euro amountOfWithdrawal,
    Euro expectedCash,
    Euro expectedBalance) {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(customerCanOverdraw);
  EasyMock.replay(customer);
  Account account = new Account(customer);

  Euro cash = account.withdraw(amountOfWithdrawal);

  assertEquals(expectedCash, cash);
  assertEquals(expectedBalance, account.balance());
  EasyMock.verify(customer);
}

And the two tests, now really concise:

@Test
public void canWithdrawOnCredit() {
  performWithdrawalTestWith(
      true,
      new Euro(30),
      new Euro(30),
      new Euro(-30));
}

 

@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(
      false,
      new Euro(30),
      Euro.ZERO,
      Euro.ZERO);
}

Well, that did resolve the duplication indeed. But the test methods now lacked any readability. They looked as if somebody had extracted all the semantics out of the code. We were unhappy, but decided to treat the current code as an intermediate step towards the second refactoring:

Second step: Introduce Explaining Variable refactoring

In the second step, the task was to re-introduce the semantics back into the test methods. All parameters were nameless, so that was our angle of attack. By introducing local variables, we gave the parameters meaning again:

@Test
public void canWithdrawOnCredit() {
  boolean canOverdraw = true;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = new Euro(30);
  Euro resultingBalance = new Euro(-30);

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean canOverdraw = false;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = Euro.ZERO;
  Euro resultingBalance = Euro.ZERO;

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

That brought the meaning back to the test methods, but didn’t improve readability. The code wasn’t intentionally cryptic any more, but still far from being intuitively understandable – and that’s what really readable code should be. If even novices can read your code fluently and grasp the main concepts in the first pass, you’ve created expert code. I challenged the student to further transform the code, without having any idea how to carry on myself. My student hesitated, but came up with the decisive refactoring within seconds:

Third step: Rename Variable refactoring

The third step doesn’t change the structure of the code, but its approachability. Instead of naming the local variables after their usage in the extracted method, we name them after their purpose in the test method. A first-time reader won’t know about the extracted method (and preferably shouldn’t need to), so it’s not in the reader’s best interest to foreshadow its details. Instead, we concentrate on telling the reader a coherent story:

@Test
public void canWithdrawOnCredit() {
  boolean aCustomerThatCanOverdraw = true;
  Euro heWithdraws30Euro = new Euro(30);
  Euro receivesTheFullAmount = new Euro(30);
  Euro andIsNow30EuroInTheRed = new Euro(-30);

  performWithdrawalTestWith(
      aCustomerThatCanOverdraw,
      heWithdraws30Euro,
      receivesTheFullAmount,
      andIsNow30EuroInTheRed);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean aCustomerThatCannotOverdraw = false;
  Euro heTriesToWithdraw30Euro = new Euro(30);
  Euro butReceivesNothing = Euro.ZERO;
  Euro andStillHasABalanceOfZero = Euro.ZERO;

  performWithdrawalTestWith(
      aCustomerThatCannotOverdraw,
      heTriesToWithdraw30Euro,
      butReceivesNothing,
      andStillHasABalanceOfZero);
}

If the reader is able to ignore some crude verbalization and special characters, he can read the test out loud and instantly grasp its meaning. The first lines of every test method are a bit confusing, but necessary given Java’s lack of named parameters.

The result might remind you a lot of Behavior Driven Development notation and that’s probably not by chance. In a few minutes during that programming exercise, my students taught themselves to think in scenarios or stories when approaching unit tests. I couldn’t have taught it any better – instead, I got enlightened by this exercise, too.


How to use partial mocks in real life

December 23, 2013

Partial mocks are an advanced feature of modern mocking libraries like Mockito. A partial mock retains the original code of a class, stubbing only the methods you specify. If you build your system largely from scratch, you will most likely not need them. But sometimes there is no easy way around them when working with dependencies that were not designed for testability. Let us look at an example:

/**
 * Evil dependency we cannot change
 */
public final class CarvedInStone {

    public CarvedInStone() {
        // may do unwanted things
    }

    public int thisHasSideEffects(int i) {
        return 31337;
    }

    // many more methods
}

public class ClassUnderTest {

    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = new CarvedInStone().thisHasSideEffects(42);
        // more interesting code
        return intermediateResult * 1337;
    }
}

We want to test the computeSomethingInteresting() method of our ClassUnderTest. Unfortunately we cannot replace CarvedInStone, because it is final and does not implement an interface containing the methods of interest. With a small refactoring and a partial mock we can still test almost the complete class:

public class ClassUnderTest {
    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = intermediateResultsFromCarvedInStone(42);
        // more interesting code
        return intermediateResult * 1337;
    }

    protected int intermediateResultsFromCarvedInStone(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

We refactored the dependency into a protected method that we can stub out with a partial mock. The test then looks like this:

public class ClassUnderTestTest {
    @Test
    public void interestingComputation() throws Exception {
        ClassUnderTest cut = spy(new ClassUnderTest());
        doReturn(1234).when(cut).intermediateResultsFromCarvedInStone(42);
        assertEquals(1649858, cut.computeSomethingInteresting());
    }
}

Caveat: Do not use the usual when-thenReturn-style:

when(cut.intermediateResultsFromCarvedInStone(42)).thenReturn(1234);

with partial mocks because the real method will get called once!

So the only untested code is a simple delegation. Measures like this refactoring and partial mocking generally serve as a first step, not the destination.

Where to go from here

To go the whole way, we would encapsulate all unmockable dependencies into wrapper objects providing just the functionality we need, and inject those wrappers into our ClassUnderTest. Then we can replace the wrapper(s) easily using regular mocking, as sketched below.
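
A minimal sketch of how that end state could look, assuming we only need this one computation (the StoneCarving interface and wrapper names are made up for illustration):

public interface StoneCarving {
    int compute(int input);
}

// Thin wrapper around the unmockable dependency.
// Like the protected delegating method, it stays largely untested.
public class CarvedInStoneWrapper implements StoneCarving {
    @Override
    public int compute(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

public class ClassUnderTest {
    private final StoneCarving stone;

    public ClassUnderTest(StoneCarving stone) {
        this.stone = stone;
    }

    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = stone.compute(42);
        // more interesting code
        return intermediateResult * 1337;
    }
}

In a test, a plain mock(StoneCarving.class) injected through the constructor now replaces the dependency; no spy is needed any more.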

Doing all this can be a lot of work and/or risk depending on the situation, so the depicted process serves as a low-risk intermediate step for getting as much important code under test as possible.

Note that the wrappers themselves stay largely untestable, like our protected delegating method.


Object Calisthenics: Change the way you think

December 3, 2013

Some time ago I spoke with my colleague about skill sharpening and training the brain to come up with new solutions. He proposed a two-hour session at the weekend, implementing a small game using object calisthenics.

Rules

The rules are described in the book The ThoughtWorks Anthology. Here is the list for quick reference.

  1. Use only one level of indentation per method.
  2. Don’t use the else keyword.
  3. Wrap all primitives and strings.
  4. Use only one dot per line.
  5. Don’t abbreviate.
  6. Keep all entities small.
  7. Don’use any classes with more than two instance variables.
  8. Use first-class collections.
  9. Don’t use any getters/setters/properties.

Most of the rules seemed simple enough. Rules 2 and 5 are standard at Softwareschneiderei; 1, 4, 6 and 8 are stricter versions of common sense; 3 is tedious object wrapping. The rules I was anxious about were 7 and 9. To increase the learning effect, I added an extra rule to the list, one that is critical in real-life programming:

  10. Write tests for your code.

It doesn’t matter whether you test first, test after or even test-driven. Only then is the code “value added”.

Experiences

The game was Minesweeper. It contains a nice mix of algorithms, data structures and UI. I concentrated my efforts on the algorithmic part. My first step was to analyse and create the needed data structures.

  • The smallest unit is the cell.
  • A cell can be either hidden or revealed, have a mine or be empty.
  • The game field contains such cells in rows and columns.
  • The position of a cell in a field is defined by its coordinate that contains the x and y position.

To associate anything with coordinates, the coordinates had to be comparable to each other. Rule 9 forbids the exposure of internal state, so the Coordinate class got its own equals() and hashCode(). Only the creator of a coordinate had knowledge about the number of dimensions and the values of the positions. Even the tests had no access to the inner state and tested only those two methods.
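
Under these rules, a Coordinate could look like the following sketch (my reconstruction for illustration, not the original code; a strict reading of rule 3 would wrap the positions, too):

public final class Coordinate {
    // rule 9: private state without getters, visible only through equals()/hashCode()
    private final int x;
    private final int y;

    public Coordinate(final int x, final int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(final Object other) {
        if (!(other instanceof Coordinate)) {
            return false;
        }
        final Coordinate that = (Coordinate) other;
        return x == that.x && y == that.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}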

Since the revealed-flag concept and the mine-flag concept had similar properties, I decided not to track cells but to track their flags. Through this architectural decision, I had a field with two flag containers, one for revealed cells and one for cells with mines. An additional benefit was that it was enough to put only the coordinate into the container to mark a cell as a mine.
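
Such a flag container is a first-class collection in the sense of rule 8. A sketch of the idea (again my reconstruction, with hypothetical names):

import java.util.HashSet;
import java.util.Set;

// Wraps the collection and exposes only the domain operations needed.
public class FlagContainer {
    private final Set<Coordinate> flaggedCells = new HashSet<Coordinate>();

    public void mark(final Coordinate coordinate) {
        flaggedCells.add(coordinate);
    }

    public boolean isMarkedAt(final Coordinate coordinate) {
        return flaggedCells.contains(coordinate);
    }
}

With two such containers, one for revealed cells and one for mines, placing a mine is just a mark() call with the coordinate.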

The next step was to link the parts together and add some behaviour: setting a mine, then revealing a cell, and also obtaining the number of mines. Setting a mine and marking the cell as revealed are simple tasks with the containers. Testing that the revealed cell contained the mine was trickier. To achieve that, the reveal method got an additional parameter, a closure with a hasMine parameter.

public void reveal(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCells.mark(coordinate);
    visit(coordinate, revealedCellsVisitor);
}

private void visit(final Coordinate coordinate, final CellContainerVisitor revealedCellsVisitor) {
    revealedCellsVisitor.visit(coordinate, hasMineAt(coordinate));
}

@Test
public void containsMines() {
    final CellContainer target = new CellContainer();
    target.placeMineAt(someCoordinate());

    final List<Coordinate> mineCells = new ArrayList<Coordinate>();
    target.reveal(someCoordinate(), (coordinate, hasMine) -> {
        if (hasMine.equals(new HasMine(true))) {
           mineCells.add(coordinate);
        }
    });

    assertThat(mineCells, hasSize(1));
    assertThat(mineCells, contains(someCoordinate()));
}

The next game rule consumed the rest of the session: calculating the number of mines in the neighborhood. The main obstacle was computing the coordinates of the neighbours. To do this, it is necessary to add an offset to a position in a coordinate without exposing its internal structure. In the end I resorted to using more closures.
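
The closure-based way out could look roughly like this (a hypothetical sketch, not the original code): the coordinate applies the offsets itself and hands each neighbour to a visitor, so the x and y values never leave the object.

// Hypothetical visitor interface, analogous to the CellContainerVisitor above.
interface CoordinateVisitor {
    void visit(Coordinate neighbour);
}

// Inside Coordinate - internals stay hidden (rule 9) and each method
// keeps one level of indentation (rule 1).
public void visitNeighbours(final CoordinateVisitor visitor) {
    for (final Coordinate neighbour : neighbours()) {
        visitor.visit(neighbour);
    }
}

private List<Coordinate> neighbours() {
    return Arrays.asList(
            new Coordinate(x - 1, y - 1), new Coordinate(x, y - 1), new Coordinate(x + 1, y - 1),
            new Coordinate(x - 1, y),                               new Coordinate(x + 1, y),
            new Coordinate(x - 1, y + 1), new Coordinate(x, y + 1), new Coordinate(x + 1, y + 1));
}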

Conclusion

To achieve my goal I had to reverse the order in which I normally develop business logic. Rule 9 seems to favour a top-down approach: the interfaces of domain objects are almost completely dominated by the way their containers use them.

Most of the time in this two-hour session was spent staring at the screen and thinking about how to write readable code and readable tests without exposing the internal details of the objects. Time well spent.


Guide to better Unit Tests: Focused Tests

November 25, 2013

Every now and then we stumble over unit tests with a lot of setup and numerous checked aspects. These tests easily become a maintenance nightmare. J.B. Rainsberger advocates getting rid of integration tests in his somewhat lengthy but very insightful talk at Agile 2009, and he gives some advice I would like to use as a guide to better unit tests. His goal is basic correctness, achieved by means of what he aptly calls focused tests. Focused tests test exactly one interesting behaviour.

The proposed way to write these focused tests is to look at three different topics for each unit under test:

  1. Interactions (Do I ask my collaborators the right questions?)
  2. Do I handle all answers correctly?
  3. Do I answer questions correctly?

Conventional unit testing emphasizes the third topic, which works fine for leaf classes that do not need collaborators. Usually, your programming world is not that simple, so you need mocking and stubbing to check all these aspects without turning your unit test into a large integration test that is slow to run and potentially difficult to maintain.
I will try to show you the approach using a simple and admittedly a bit contrived example. Hopefully, it illustrates Rainsberger’s technique well enough. Assume the IllustrationController below is our unit under test:

public class IllustrationController {
    private final PermissionService permissionService;
    private final IllustrationAction action;

    public IllustrationController(PermissionService permissionService, IllustrationAction action) {
        super();
        this.permissionService = permissionService;
        this.action = action;
    }

    /**
     * @return true, if the action was executed, false otherwise
     */
    public boolean performIfAllowed(Role r) {
        if (!permissionService.allowed(r)) {
            return false;
        }
        this.action.execute();
        return true;
    }
}

It has two collaborators: PermissionService and IllustrationAction. The first thing to check is:

Do I ask my collaborators the right questions?

In this case this is quite simple to answer, as we only have a few cases: do we pass the right role to the PermissionService? This results in tests like the one below:

@Test
public void asksForPermissionWithCorrectRole() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.User);
    // this question needs a test in PermissionService
    verify(ps, atLeastOnce()).allowed(Role.User);
    ic.performIfAllowed(Role.Admin);
    // this question needs a test in PermissionService
    verify(ps, atLeastOnce()).allowed(Role.Admin);
}

Do I handle all answers correctly?

In our example only the PermissionService provides two different answers, so we can easily test that:

@Test
public void interactsWithActionBecausePermitted() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    // there has to be a case when PermissionService returns true, so write a test for it!
    when(ps.allowed(any(Role.class))).thenReturn(true);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.Admin);

    verify(ps, atLeastOnce()).allowed(any(Role.class));
    verify(action, times(1)).execute();
}

@Test
public void noActionInteractionBecauseForbidden() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    // there has to be a case when PermissionService returns false, so write a test for it!
    when(ps.allowed(any(Role.class))).thenReturn(false);

    IllustrationController ic = new IllustrationController(ps, action);
    ic.performIfAllowed(Role.User);

    verify(ps, atLeastOnce()).allowed(any(Role.class));
    verify(action, never()).execute();
}

Note that not only return values are answers, but exceptions too. If our action may throw exceptions on execution that we can handle, we have to test that as well!
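
For illustration, such a test could look like the sketch below. The exception type is my assumption; the controller shown above does not catch anything, so the test documents that the action’s exception reaches the caller:

@Test
public void passesOnExceptionThrownByAction() {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(true);
    // train the collaborator to answer with an exception instead of a value
    doThrow(new IllegalStateException()).when(action).execute();

    IllustrationController ic = new IllustrationController(ps, action);
    try {
        ic.performIfAllowed(Role.Admin);
        fail("Expected the exception of the action to reach the caller.");
    } catch (IllegalStateException expected) {
        // expected: the controller does not swallow the exception
    }
}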

Do I answer questions correctly?

Our controller answers the question whether the operation was performed or not by returning a boolean from its performIfAllowed() method, so let’s check that:

@Test
public void handlesForbiddenExecution() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(false);

    IllustrationController ic = new IllustrationController(ps, action);
    assertFalse("Perform returned success even though it was forbidden.", ic.performIfAllowed(Role.User));
}

@Test
public void handlesSuccessfulExecution() throws Exception {
    PermissionService ps = mock(PermissionService.class);
    IllustrationAction action = mock(IllustrationAction.class);
    when(ps.allowed(any(Role.class))).thenReturn(true);

    IllustrationController ic = new IllustrationController(ps, action);
    assertTrue("Perform returned failure even though it was allowed.", ic.performIfAllowed(Role.Admin));
}

Conclusion

What we are doing here is essentially splitting different aspects of interesting behaviour into their own tests. The first two questions define the contract between our unit under test and its collaborators. For every question we ask, and therefore stub using our mocking framework, there has to be a test that verifies that this question is answered the way we expect. If we handle all the answers correctly, our interaction is deemed correct, too. And finally, if our class implements its contract correctly by answering the third question, our clients know what to expect and can rely on us.

Because each test focuses on only one aspect, it tends to be simple and should only break if that aspect changes. In many cases this kind of test can make your integration tests obsolete, like Rainsberger states. I think there are cases in modern frameworks like Grails where you do not want to mock all the framework magic, because it is too easy to make wrong assumptions about the behaviour of the framework. So imho integration tests provide some additional value there, because the behaviour of the platform stays part of the tests without being tested explicitly.


Know Your Tools: Why Mockito’s when() works

October 14, 2013

Some days ago, my colleague asked how Mockito can differentiate between a method invocation outside of an expectation and one inside. If you want to know it too, read on.

The difference

Typically, a mocking framework follows a Record/Replay/Verify model. In the first phase the expectations are recorded, in the second the mocked methods are called by the code under test, and finally the expectations are verified. Consider an example with EasyMock, straight from its documentation:

//record
mock = createMock(Collaborator.class);
mock.documentAdded("New Document");
//replay
replay(mock);
classUnderTest.addDocument("New Document", new byte[0]);
//verify
verify(mock);

Now, with Mockito the difference between the phases is not as clear as with EasyMock:

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");
//replay
System.out.println(mockedList.get(0));
//verify
verify(mockedList).get(0);

The invocation of get() is evaluated before the invocations of when() or println(), so there is no way to change the phase before the call. There is also no way to tell whether the current expectation is the last one, so the replay mode cannot be started automatically. How does it work then? All the necessary code is contained in the following classes: MockitoCore, MockHandlerImpl, OngoingStubbing and MockingProgressImpl with its wrapper ThreadSafeMockingProgress.

Record

//record
LinkedList mockedList = mock(LinkedList.class);
when(mockedList.get(0)).thenReturn("first");

In the second line, a mock is created via the mock() method. This call is delegated to MockitoCore, which initiates the creation of a proxy and the registration of MockHandlerImpl as the handler for its invocations.

The third line actually contains three steps. First, the method to stub is invoked on the mock. Because MockHandlerImpl has been registered for all method calls on this proxy, it is now called. It keeps the current invocation, adds it to the list of all recorded invocations and creates the object that collects the expectations, the “OngoingStubbing”. The instance of OngoingStubbing is stored in an instance of MockingProgressImpl. To keep this instance between the calls to the framework, a ThreadLocal member of the singleton ThreadSafeMockingProgress is used. Since no stubbed answer exists for the invocation yet, a default result is returned. The second step is the invocation of when(), which returns the instance of OngoingStubbing previously deposited by MockHandlerImpl in MockingProgressImpl. OngoingStubbing implements the method thenReturn(), which records the expected result in the third step. The result and the cached invocation are then saved together, ready to be retrieved. During this process, the invocation is “consumed” and removed from the list of recorded invocations.
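
To make this rendezvous graspable, here is a toy sketch of the mechanism, deliberately simplified and not Mockito’s real code:

// Toy model: how a call on a mock and when() find each other.
class TinyOngoingStubbing {
    private Object answer;

    void thenReturn(final Object value) {
        // pairs the previously recorded invocation with its answer
        this.answer = value;
    }

    Object answer() {
        return answer;
    }
}

final class TinyMockingProgress {
    // one slot per thread - the role ThreadSafeMockingProgress plays
    private static final ThreadLocal<TinyOngoingStubbing> LAST =
            new ThreadLocal<TinyOngoingStubbing>();

    // the invocation handler (MockHandlerImpl's role) calls this for every
    // method call on a mock and then returns a default value such as null or 0
    static void recordInvocation() {
        LAST.set(new TinyOngoingStubbing());
    }

    // when() ignores its argument - it is merely the default result of the
    // call that was just recorded - and fetches the deposited stubbing
    static <T> TinyOngoingStubbing when(final T ignoredDefaultResult) {
        return LAST.get();
    }
}

By the time when() runs, the interesting work has already happened: the call mockedList.get(0) has deposited the stubbing object in the thread-local slot, and when() merely picks it up.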

Replay

//replay
System.out.println(mockedList.get(0));

In line five, the method get() is called again. Since a result for it has been defined, MockHandlerImpl returns the retrieved result to the caller. The call is recorded and stored for further use.

Verify

//verify
verify(mockedList).get(0);

Verification also consists of multiple steps. The call to verify() marks the end of stubbing and sets the verification mode. Because the verification mode is set, MockHandlerImpl can now distinguish the following call to get() from a call in the recording phase and passes the recorded invocations on to the verification code.

Final thoughts

The developers of Mockito achieved a lot with simple constructs like singletons and shared state. The stuff behind the syntactic sugar is sometimes even considered magic. I hope that, after reading this article, you no longer believe in magic but use your knowledge to create similarly great frameworks.

Another point: since Mockito uses a ThreadLocal as storage for its state, is it possible to confuse it by using multiple threads? What do you think?



Testing Java with Grails 2.2

July 29, 2013

Let us look at the following fictive example. You want to test a static method of a Java class “BlogAction”. This method should tell you whether a user may trigger a delete action, depending on a configuration property.

The classes under test:

public class BlogAction {
    public static boolean isDeletePossible() {
        return ConfigurationOption.allowsDeletion();
    }
}
class ConfigurationOption {
    static boolean allowsDeletion() {
        // code of the Option class omitted, here it always returns false
        return Option.isEnabled('userCanDelete');
    }
}

In our test we mock the method of ConfigurationOption to return some special value and test for it:

@TestMixin([GrailsUnitTestMixin])
class BlogActionTest {
    @Test
    void postCanBeDeletedWhenOptionIsSet() {
        def option = mockFor(ConfigurationOption)
        option.demand.static.allowsDeletion(0..(Integer.MAX_VALUE-1)) { -> true }

        assertTrue(BlogAction.isDeletePossible())
    }
}

As a result, the test runner greets us with a nice message:

junit.framework.AssertionFailedError
    at junit.framework.Assert.fail(Assert.java:48)
    at junit.framework.Assert.assertTrue(Assert.java:20)
    at junit.framework.Assert.assertTrue(Assert.java:27)
    at junit.framework.Assert$assertTrue.callStatic(Unknown Source)
    ...
    at BlogActionTest.postCanBeDeletedWhenOptionIsSet(BlogActionTest.groovy:21)
    ...

Why? There is not much code to mock, so what is missing? An additional assert statement adds clarity:

assertTrue(ConfigurationOption.allowsDeletion())

The static method still returns false! The metaclass “magic” provided by mockFor() is not used by my Java class. BlogAction simply ignores it and calls the allowsDeletion() method directly. But there is a solution: we can mock the call to the “Option” class.

@Test
void postCanBeDeletedWhenOptionIsSet() {
    def option = mockFor(Option)
    option.demand.static.isEnabled(0..(Integer.MAX_VALUE-1)) { String propertyName -> true }

    assertTrue(BlogAction.isDeletePossible())
}

Lessons learned: the more that happens behind the scenes, the more important the context of execution becomes.

