The four archetypes of cloud users – part 1 of 2

In the field of accounting, there is a noticeable, strong trend towards cloud services. Everything needs to be digital, and by digital they mean online, and by online they mean in the cloud. Every expense voucher needs to be scanned and uploaded, because in many cases it can then be booked automatically. In the new era of accounting, human intervention is only needed for special cases.

I see this as a good example of how digital online services can transform the world. Every step in the process has technically been possible for the last twenty years, but only the cloud could unify the different participants enough that a streamlined end-to-end process is marketable to the masses. And in this marketing ecstasy, the stakeholders that profit the most (the accountants) often forget that their benefits are just a part of the whole picture. In order to assess the perceived and actual benefits of all stakeholders, you at least need to assign an archetype to each participant.

The four archetypes

In my opinion, there are four different archetypes of cloud users. Let’s have a look at them and then assess the risks and potentials when selling a digital online service to them. I’ll list the archetypes in the order from biggest risk to biggest potential.

Archetype 1: The tinfoil hat

A person who could be identified as a “tinfoil hat” doesn’t need to be a conspiracy weirdo or paranoid maniac. In fact, this person probably has deep and broad knowledge about technology and examines new technologies in detail. The one distinctive feature of tinfoil hats is that they take security, including IT security, very seriously. They don’t take security for granted, don’t trust asseverations and demand proof. You can’t convince a tinfoil hat by saying that the data transfer is “encrypted”; you need to specify the actual encryption algorithm. Using RC4 ciphers for the SSL protocol isn’t good enough for the tinfoil hat. You need at least proof that you understood the last sentence and took action to mitigate the problem. Even then, the tinfoil hat will hesitate to let any data out of their hands and will often choose the cumbersome way in order to stay safe. “Better safe than sorry” is their everyday motto.

Tinfoil hats always search for scenarios that could compromise their data or infrastructure. They are paranoid by default and actively invest in security. “On premises” is the only way they deploy their own services, and “on premises” is how they prefer to keep their data.

Typical signs of a tinfoil hat archetype include:

  • self-hosted applications
  • physical servers
  • lack of (open) wireless network
  • physically separated networks
  • signed and encrypted e-mails

Trying to sell a cloud service to a tinfoil hat is like trying to sell a flight to an aviophobe (somebody with a fear of flying). There is always another way to get from A to B that seems safer and more controllable. If you are selling cloud services, tinfoil hats are your worst nightmare. If you can convince a tinfoil hat, your product is probably made of fairy dust and employs lots of unicorns.

Archetype 2: The clipboard

Clipboard people are wary of new technologies, but assess them in the context of usability. They demand high security, but will compromise if the potential of the new technology far exceeds the risk. Unlike the tinfoil hat, the clipboard sees his role as an enabler, but will never stop pushing to increase the perceived or actual safety of the product. You can appease a clipboard by providing evidence of security audits from a third party. They will trust known authorities, because it means that they can always deflect blame to these authorities in case of an incident.

Clipboards run on checklists, safety protocols and recurring audits. They don’t try to avert every possibility of a security breach, but will examine each incident in detail and update their checklists. They don’t care about “on premises” or “off premises” as long as the service is reachable, safe enough and reliable. If a cloud service has a higher availability than the local counterpart, the clipboard will think about a migration.

Typical signs of a clipboard archetype include:

  • Virtual Private Networks (VPN)
  • Two-Factor Authentication
  • Token-Based Authentication
  • Strong Encryption

The clipboard will listen if you pitch your cloud service and can be enticed by the new or better capabilities. But in the very next sentence, he will ask about security and be insistent until you provide proof – first-hand or by credible third parties. You can convince a clipboard if your product is designed with safety in mind. As long as the safety is state-of-the-art, you’ll close the deal.

Outlook on the second part

In the second part of this blog entry, we will look at the remaining two archetypes, namely the “combination lock” and the “smartphone”. Stay tuned.

Did you identify with one of the archetypes? What are your most important aspects of cloud services? I would love to hear from you.


C++17: The two line visitor explained

If you have ever used an “idiomatic” C++ variant datatype like Boost.Variant or the new C++17 std::variant, you have probably wished you could build a visitor that dispatches on the type by assembling a couple of lambda expressions like this:

auto my_visitor = visitor{
  [&](int value) { /* ... */ },
  [&](std::string const& value) { /* ... */ },
};

The code in question

While reading through the code for lager, I stumbled upon a curious way to make this happen. And it is just two lines of code! Wow, that is cool. A comment indicated where the code was copied from, and I quickly found the source on the reference page for std::visit, albeit with a different name:

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

Multiple inheritance to the rescue

Lambda expressions in C++ are just syntactic sugar for callables, pretty much like a struct with an operator(). As such, you can derive from them, which is what the first line does.
It uses variadic templates and multiple inheritance to assemble the types of the lambdas into one type. Without the content in the struct body, an instantiation with our example would be roughly equivalent to this:

struct int_visitor {
  void operator()(int value)
  {/* ... */}
};

struct string_visitor {
  void operator()(std::string const& value)
  {/* ... */}
};

struct visitor : int_visitor, string_visitor {
};

Using all of it

Now this cannot yet be called, as overload resolution (by design) does not work across different types. Hence the using declarations in the struct’s body. They pull the operator() implementations into the visitor type, where overload resolution can work across all of them.
With it, our hypothetical instantiation becomes:

struct visitor : int_visitor, string_visitor {
  using int_visitor::operator();
  using string_visitor::operator();
};
Now an instance of that type can actually be called with both our types, which is what the interface for, e.g. std::visit demands.

Don’t go without a guide

The second line intrigued me. It looks a bit like a function declaration, but that is not what it is. The fact that I had to ask in the (very helpful!) C++ Slack made me realize that I had not kept up with the new features in C++17 as much as I would have liked.
This is, in fact, a class template argument deduction (CTAD) guide. It is a new feature in C++17 that allows you to deduce template arguments for a type based on constructor parameters. In a way, it supersedes the Object Generator idiom of old.
The syntax is really quite straightforward: given a list of constructor parameter types, resolve to a specific template instance based on those.
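
As a minimal, self-contained illustration of CTAD in general (unrelated to our visitor, and only using standard library types):

#include <utility>
#include <vector>

int main() {
  std::pair p{42, 3.14};   // deduced as std::pair<int, double>
  std::vector v{1, 2, 3};  // deduced as std::vector<int>
}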


The last piece of the puzzle is how the visitor gets initialized. The real advantage of using lambdas instead of writing the structs yourself is that you can capture variables from your context. Therefore, you cannot just default-initialize most lambdas – you need to transport their values, their bound context.
In our example, this uses another new C++17 feature: extended aggregate initialization. Aggregate initialization is how you initialized structs with curly brackets way back in C. Previously, it was forbidden for structs that have a base class. The C++17 extension now lifts this restriction, thus making it possible to initialize this visitor with curly brackets.
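
Putting all the pieces together (the inherited call operators, the deduction guide and the aggregate initialization), a minimal, self-contained usage with std::variant and std::visit might look like this; the variant type and the handlers are of course just an illustration:

#include <iostream>
#include <string>
#include <variant>

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

int main() {
  std::variant<int, std::string> value = std::string{"hello"};

  // aggregate initialization captures the lambdas, CTAD deduces the template arguments
  std::visit(overloaded{
    [](int i) { std::cout << "int: " << i << '\n'; },
    [](std::string const& s) { std::cout << "string: " << s << '\n'; },
  }, value);
}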

Book review: Clean Architecture by Robert C. Martin

In 2008, a book changed the way software developers around the globe talked (and hopefully acted) about their code. Robert C. Martin’s “Clean Code” was and still is a cornerstone of modern software development. The book itself is remarkably weak in its code examples, but has strong and effective messages on the level of practices and principles. Even today, ten years later, this is the one book that most of my students read and are passionate about. It’s a book that speaks reason to them, albeit with some contortion because of the high volume. Robert C. Martin has the tendency to preach at 200 percent in order to still get the half-convinced to an acceptable level.

So when a new book from him, called “Clean Architecture”, appeared on the horizon, I was thrilled. Would it be groundbreaking like “Clean Code” or a dud like “The Clean Coder” (sorry, my opinion – this is a personal review, not an academic evaluation)? I’ve read some very good books about software development (like “The Pragmatic Programmer”), fantastic books about programming (“Refactoring” and “Working Effectively With Legacy Code” come to mind) and even some mind-blowing pieces about design and emerging architecture (my first read of “Growing Object-Oriented Software, Guided by Tests” felt like a personal audience with Steve Freeman and Nat Pryce). But all these books dealt with tactics, with the immediacies of software development. Don’t get me wrong! This is the most important part and it helped me tremendously. But there are parts “above” the footwork that need to be addressed in bigger systems, too.

And there, the literature got thin or stale. Books about software architecture talked about large-scale architectures (so-called “enterprise scale” systems that span from horizon to horizon, like in “Patterns of Enterprise Application Architecture”) or had the taste of dry plywood because it was clear that the findings were from another era and would translate badly into modern software development.

“Clean Architecture” begins with a quick and focused overview of the current programming paradigms and the conclusion that there are no different “eras”. We didn’t get better at designing systems, we just changed the aroma and color of our failures. Future generations will look at our code and architectures as scornfully as we look at the ruins of the systems of our ancestors. And make no mistake – those ruins are still in production today! We cannot place our hope in another new and liberating programming paradigm because there probably won’t be one. We have to make do with what we have.

This covers the first six chapters of “Clean Architecture”. The chapters are short and on point, and I loved every line of them. It probably isn’t the most comprehensive and balanced description of structured, object-oriented and functional programming, but it provides a narrative that is intuitive and convincing – your mileage may vary, but I was hooked.

In the next five chapters, Robert C. Martin reiterates the known SOLID design principles. I rolled my eyes when I glanced at the contents because I’ve read them like a hundred times in maybe as many books. But I decided to read the chapters once more and I’m glad I did. The principles are known, but the underlying revelation is woven into the text like a good thriller. I hesitate to give away too much, because I really think this book can be spoiled – just like a good thriller. I was sold. Robert C. Martin can explain the same old SOLID to me and I still learn something and have fun.

Then comes the part about components. It feels like an intermezzo before an even better thriller, because suddenly there is math and there are formulas. It’s interesting and noteworthy, but if you followed the metrics discussion of the last fifteen years, the excitement of this part will be dampened.

But wait, there is more! Starting with page 133 of 321 (yeah, the appendix is interesting, but more in the “The Clean Coder” way of things), there is the central question: “What is Architecture?”. There it was again, the thrill that in every line there could be insights worth weeks of thought. I read this part on the train from southern to northern Germany and I often stared out of the window, following my own train of thought.

Again, no spoilers, but the way the answers are given is so refreshing and the answer itself is so simple that I’m surprised it took me this long to arrive at the same conclusion myself. Software architecture lost some of its mysticism, but gained a lot of applicability for me. I was spent (in a good way).

And then, on page 200, finally: “The Clean Architecture”. Well, I had watched all the trailers on this topic, so there wasn’t much of a surprise left, but with all the knowledge and insights from the first 200 pages, I could have “invented” the Clean Architecture by myself then and there. It’s more or less the logical next step from the prerequisites. I applaud this masterwork of storytelling, because it doesn’t overwhelm the reader with the genius of the narrator, it drives them to connect the dots themselves.

The rest of the book, like the title of part VI, is just “Details”. The central message – The Dependency Rule, this little spoiler should be allowed – is simple, convincing and deduced from the very beginning. I’ve seen the heart of software architecture and it is beautiful.

I even forgive the many typos and grammatical errors (far more than usual) and the bulky appendix for this ride. This book is definitely up there with “Clean Code”. It is accessible, has a clear message and profound effects. And it refrains from preaching most of the time. No need to turn it up to 200 percent when your message is so convincing in itself.

Conclusion: If you are interested in software development with a structure, go grab this book as soon as possible. We’ve waited long enough!

Advanced deb-packaging with CMake

CMake has become our C/C++ build tool of choice because it provides good cross-platform support and very reasonable IDE (Visual Studio, CLion, QtCreator) integration. Another very nice feature is the included packaging support using the CPack module. It allows you to create native deployable artifacts for a plethora of systems, including NSIS installers for Windows, RPM and Deb for Linux, DMG for Mac OS X and a couple more.

While all these binary generators share some CPACK variables, there are specific variables for each generator to use exclusive packaging system features or requirements.
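
Enabling packaging is mostly a matter of setting a few CPACK variables and including the CPack module. A minimal sketch (name, version and contact are placeholders) might look like this:

set(CPACK_PACKAGE_NAME "myproject")
set(CPACK_PACKAGE_VERSION "1.0.0")
set(CPACK_PACKAGE_CONTACT "maintainer@example.com")
set(CPACK_GENERATOR "DEB")
include(CPack)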

Deb-packaging features

The Debian package management system is used not only by Debian but also by Ubuntu, Raspbian and many other Linux distributions. In addition to dependency handling and versioning, packagers can use several other features, namely:

  • Specifying a section for the packaged software, e.g. Development, Games, Science etc.
  • Specifying package priorities like optional, required, important, standard
  • Specifying the relation to other packages like breaks, enhances, conflicts, replaces and so on
  • Using maintainer scripts to customize the installation and removal process like pre- and post-install, pre- and post-removal
  • Dealing with configuration files to protect end user customizations
  • Installing and linking files and much more without writing shell scripts using ${project-name}.{install | links | ...} files

All these features make the software easier to package and easier for your end users to manage.

Using deb-features with CMake

Many of the mentioned features are directly available as appropriately named CMake variables, all starting with CPACK_DEBIAN_. I would like to specifically mention the CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA variable, where you can set the maintainer scripts, and one of my favorite features: conffiles.
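
As a rough sketch, the DEB-specific part of a CMakeLists.txt could look like this (the values and file paths are purely illustrative):

set(CPACK_DEBIAN_PACKAGE_SECTION "devel")
set(CPACK_DEBIAN_PACKAGE_PRIORITY "optional")
set(CPACK_DEBIAN_PACKAGE_DEPENDS "libc6 (>= 2.19)")
# maintainer scripts and the conffiles file described below
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA
    "${CMAKE_SOURCE_DIR}/packaging/postinst;${CMAKE_SOURCE_DIR}/packaging/conffiles")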

Deb protects files under /etc from accidental overwriting by default. If you want to protect files located somewhere else, you specify them in a file called conffiles, each on a separate line:
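
For example, a conffiles file protecting a single configuration file installed outside of /etc might contain just one line (the path is purely illustrative):

/opt/myproject/myproject.conf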


If the user has made changes to these files, she will be asked what to do when updating the package:

  • keep the own version
  • use the maintainer version
  • review the situation and merge manually.

For extra security, files like myproject.conf.dpkg-dist and myproject.conf.dpkg-old are created so that no changes are lost.

Unfortunately, I did not get the linking feature working without using maintainer scripts. Nevertheless, I advise you to use CMake for your packaging work instead of packaging the native debhelper way.

It is much more natural for a CMake-based project and you can reuse much of your metadata for other target platforms. It also shields you from a lot of the gory details of debian packaging without removing too much of the power of deb-packages.

Lessons learned developing hybrid web apps (using Apache Cordova)

In the past year we started exploring a new (at least for us) terrain: hybrid web apps. We had already developed mobile web apps and native apps, but this year we took a first step into the combination of both worlds. Here are some lessons learned so far.

Just develop a web app

After all, the hybrid app is a (mobile) web app at its core. Encapsulating the native interactions helped us test in a browser and iterate much faster. A clean architecture also helps to defer decisions about the environment to the last possible moment.

Chrome remote debugging is a boon

The tools provided by Chrome for remote debugging of Android web views and browsers are really great. You can even see and control the remote UI. The app has some redraw problems while the debugger is connected, but overall it works great.

Versioning is really important

When developing web apps, the user always has the latest version. But since our app can run offline and is installed as a normal Android app, you have to deal with versions. These versions must be visible to the user, so they can tell you which version they are running.

Android app update fails silently

Sometimes updating our app only worked in parts. It seemed that the web view cached some files and didn’t update others. The problem: the updater told the user everything went smoothly. We need to investigate that further…

Cordova plugins helped to speed up

Talking to Bluetooth devices? Checked. Saving lots of data in a local SQLite database? Plugins have got you covered. Writing and reading local files? No problemo. There are some great plugins out there covering your needs without you having to go native yourself.

JavaScript isn’t as bad as you think

Working with JavaScript needs some discipline. But using a clean architecture approach and our beloved event bus to flatten and expose all handlers and callbacks makes it a breeze to work with UIs and logic.

SVG is great

Our app uses a complex visualization which can be edited, changed, moved and zoomed by the user. SVG really helps here and works great with CSS and JavaScript.

Use log files

When your app runs on a mobile device without a connection (to the internet), you need a way to get information from the device back to you. Just a console won’t cut it. You need log files to record the actions and errors the user provokes.

Accessibility is harder than you think

Modern design trends sometimes make it hard to achieve good accessibility. Common problems are low contrast, icon-only buttons, color as the sole information bearer, and touch targets that are indiscernible or too small.

These are just the first lessons we learned tackling hybrid development but we are sure there are more to come.

Implementation visibility – Part III

In the first article of this series, I presented the concept of “implementation visibility”. Every requirement can be expressed in source code on a scale of how prominent the implementation will be. There are at least five stages (or levels) on the scale:

  • level 0: Inline
    • level 0+: Inline with comment
    • level 0++: Inline with apologetic comment
  • level 1: separate method
  • level 2: separate class
    • level 2+: new type in domain model
  • level 3: separate aggregate
  • level 4: separate package or module
  • level 5: separate application or service

We examined a simple code example in both preceding articles. Levels 0, 0+ and 0++ were covered in the first article, while the second article talked about levels 1, 2 and 2+. You might want to read them first if you want to follow the progression through the ranks. In this article, we look at the example at level 3, take a short outlook on further levels and then recap the concept.

A quick reminder

Our example is a webshop that lacks brutto prices. The original code of our shopping cart renderer might have looked like this:

public class ShowShoppingCart {
  public ShoppingCartRenderModel render(Iterable<Product> inCart) {
    final ShoppingCartRenderModel result = new ShoppingCartRenderModel();
    for (Product each : inCart) {
      // ... adds a product line with the netto price for each product (body elided in the original) ...
    }
    return result;
  }
}

Visibility level 3: Domain drive all the things!

We’ve introduced a new class for our requirement in visibility level 2 and made it a domain type. This is mostly another name for the concept of Entities or Value Objects from Domain Driven Design (DDD). If you aren’t familiar with Domain Driven Design, I recommend you grab the original book or its worthy successor and read about it. It is a way to look at requirements and code that will transform the way you develop software. To give a short spoiler, DDD Entities and DDD Value Objects are named core domain concepts that form the foundation of every DDD application. They are found by learning about the problem domain your software is used in. DDD Entities have an identity of their own, while DDD Value Objects just exist to indicate a certain value. Every DDD Entity and most DDD Value Objects are part of a DDD Aggregate. To load and store DDD Aggregates, a DDD Repository is put into place. The DDD Repository encapsulates all the technical stuff that has to happen when the application wants to access a DDD Aggregate through its DDD Root Entity. Sorry for all the “DDD” prefixes, but the terms are overloaded with many different meanings in our profession and I want to be clear about what I mean when I use the terms “Repository” or “Aggregate”. Be very careful not to mistake the DDD meanings of the terms for any other meaning out there. Please read the books if you are unsure.

So, in Domain Driven Design, our BruttoPrice type is really a DDD Value Object. It represents a certain value in our currency of choice (Euro in our example), but has no life cycle of its own. Two BruttoPrices can be considered “the same” if their values are equal. This raises the question of what the DDD Root Entity of the corresponding DDD Aggregate might be. Just imagine what happens in the domain (in real life, on paper) if you calculate a brutto price from a given netto price: You determine the value added tax category of your taxable product, look up its current percentage and multiply your netto price with the percentage. The DDD Root Entity is the value added tax category, as it can be introduced and revoked by your government and therefore has a life cycle of its own. The tax percentage, the netto price and the brutto price are just DDD Value Objects in its vicinity.

To bring DDD into our code and raise the implementation visibility level, we need to introduce a lot of new types with lots of lines of code:

  • NettoPrice is a DDD Value Object representing the concept of a monetary value without taxes.
  • BruttoPrice is a DDD Value Object representing the concept of a monetary value including taxes.
  • ValueAddedTaxCategory is a DDD Root Entity standing for the concept of different VAT percentages for different product groups.
  • ValueAddedTaxPercentage might be a DDD Value Object representing the concept of a percentage being applied to a NettoPrice to get a BruttoPrice. We will omit this explicit concept and let the ValueAddedTaxCategory deal with the calculation internally.
  • ValueAddedTaxRepository is a DDD Repository providing the ability to retrieve a ValueAddedTaxCategory for a known Taxable.
  • Taxable might be a DDD Entity. For us, it will remain an abstraction to decouple our taxes from other concrete types like Product.

The most surprising new class is probably the ValueAddedTaxRepository. It lingered in our code in nearly all previous levels, but wasn’t prominent, not visible enough to be explicit. Remember lines like this?

final BigDecimal taxFactor = <gets the right tax factor from somewhere> 

Now we know where to retrieve our ValueAddedTaxCategory from! And we don’t even know that the VAT is calculated using a percentage or factor anymore. That’s a detail of the ValueAddedTaxCategory given to us by the ValueAddedTaxRepository. If one day, for example on April 1st, 2020, the VAT for bottled water is decreed to be a fixed amount per bottle, we might need to change the internals of our VAT DDD Aggregate, but the netto and brutto prices and the rest of the application won’t even notice.
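
Based on how it is used in the code below, a minimal sketch of that repository interface might look like this (the exact signature is an assumption):

public interface ValueAddedTaxRepository {
  // looks up the VAT category that currently applies to the given taxable item
  ValueAddedTaxCategory forType(Taxable taxable);
}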

We’ve given our different reasons of change different places in our code. We have separated our concerns. This separation requires a lot of work to be spelled out. Let’s look at the code of our example at implementation visibility level 3:

public class ShowShoppingCart {
  public ShoppingCartRenderModel render(Iterable<Product> inCart) {
    final ShoppingCartRenderModel result = new ShoppingCartRenderModel();
    final ValueAddedTaxRepository vatProvider = givenVatRepository();
    for (Product each : inCart) {
      final ValueAddedTaxCategory vat = vatProvider.forType(each);
      final BruttoPrice bruttoPrice = vat.applyTo(each.nettoPrice());
      // ... passes the netto and brutto price to addProductLine() (call elided in the original) ...
    }
    return result;
  }
}

There are now three lines of code responsible for calculating the brutto prices. It gets ridiculous! First we obtain the DDD Repository from somewhere. Somebody probably gave us the reference in the constructor or something. Just to remind you: The class is named ShowShoppingCart and now needs to know about a class that calls itself ValueAddedTaxRepository. Then, we obtain the corresponding ValueAddedTaxCategory for each Product or Taxable in our shopping cart. We apply this VAT to the NettoPrice of the Product/Taxable and pass the resulting BruttoPrice side by side with the NettoPrice to the addProductLine() method. Notice how we changed the signature of the method to differentiate between NettoPrice and BruttoPrice instead of using just two Euro parameters. Those domain types are now our level of abstraction. We don’t really care about Euro anymore. The prices might be expressed in mussel shells or bottle caps and we could still use our code without modification.

The ValueAddedTaxCategory we obtain from the DDD Repository isn’t a class with a concrete implementation. Instead, it is an interface:

/**
 * AN-17: Calculates the brutto price (netto price with value added tax (VAT))
 * for the given netto price.
 */
public interface ValueAddedTaxCategory {
  public BruttoPrice applyTo(NettoPrice nettoPrice);
}
Now we could nearly get rid of the comment above. It just repeats what the signature of the single method in this type says, too. We keep it for the reference to the requirement (AN-17).

Right now, the interface has only one implementation in the class PercentageValueAddedTaxCategory:

public class PercentageValueAddedTaxCategory implements ValueAddedTaxCategory {
  private final BigDecimal percentage;

  public PercentageValueAddedTaxCategory(final BigDecimal percentage) {
    this.percentage = percentage;
  }

  public BruttoPrice applyTo(NettoPrice nettoPrice) {
    final Euro value = nettoPrice.multiplyWith(this.percentage).inEuro();
    return new BruttoPrice(value);
  }
}
You might notice that the concrete code of applyTo still has knowledge about the Euro. As long as we don’t ingrain the relationship between NettoPrice and BruttoPrice in these types, somebody has to do the conversion externally – and needs to know about implementation details of these types. That’s an observation that you should at least note down in your domain crunching documents. It isn’t necessarily bad code, but a spot that will require modification once the currency changes to cola bottle caps.
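
To make that observation a bit more tangible, here is a sketch of what NettoPrice might look like. The method names follow the usage in applyTo above; everything else is an assumption:

// Sketch only: derived from the usage in applyTo(), not the actual implementation.
public final class NettoPrice {
  private final Euro value;

  public NettoPrice(Euro value) {
    this.value = value;
  }

  public NettoPrice multiplyWith(BigDecimal factor) {
    return new NettoPrice(this.value.multiplyWith(factor));
  }

  // this accessor is what leaks the Euro to callers like PercentageValueAddedTaxCategory
  public Euro inEuro() {
    return this.value;
  }
}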

This is a good moment to reconsider what we’ve done to our ShowShoppingCart class. Let’s refactor the code a bit and move the responsibility for value added taxes where it belongs: in the Product type.

public class ShowShoppingCart {
  public ShoppingCartRenderModel render(Iterable<Product> inCart) {
    final ShoppingCartRenderModel result = new ShoppingCartRenderModel();
    for (Product each : inCart) {
      // ... one line that hands the netto and brutto price of each product to the render model (elided in the original) ...
    }
    return result;
  }
}

Now we have come full circle: Our code looks like it did before the brutto prices, but with one additional line that delivers the brutto prices to the product line in the ShoppingCartRenderModel. The whole infrastructure that we’ve built is hidden behind the Product/Taxable type interface. We still use all of the domain types from above, we’ve just changed the location where we use them. The whole concept complex of different price types, value added taxes and tax categories is a top-level construct in our application now. It shows up in the domain model and in the vocabulary of our project. It isn’t a quick fix, it’s the introduction of a whole set of new ideas, and our code now reflects that.
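
For completeness, one possible shape of the Taxable abstraction after this refactoring; the accessor names are assumptions, not taken from the original article:

public interface Taxable {
  NettoPrice nettoPrice();

  // internally obtains its ValueAddedTaxCategory and applies it to the netto price
  BruttoPrice bruttoPrice();
}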

The code at implementation visibility level 3 might seem bloated and over-engineered to some. There is probably truth in this judgement. We’ve introduced far more code seams in the form of abstractions and indirections than we can utilize at the moment. We’ve prepared for an uncertain future. That might turn out to be unnecessary and would then be waste.

So let’s look at our journey as an example of what could be done. There is no need to walk all the way all the time. But you should be able to walk it in case it proves necessary.

Visibility level 4 and above: To infinity and beyond!

Remember that there are implementation visibility levels above 3! If you choose such a level, there will be even more code, more classes and types, more indirection and more abstraction. Suddenly, your new code will show up on system architecture diagrams and be deployed independently. Maybe you’ll need a dedicated server for it or scale it all the way up to its own server farm. Our example doesn’t match those criteria, so I’ll stop here and just say that visibility level 3 isn’t the end of the journey. But you probably got the idea and can continue on your own now.

Recap: Rising through the visibility levels

We’ve come a long way since level 0 in terms of implementation visibility. The code still does the same thing, it just accumulates structure (some may call it cruft) and fleshes out the relationships between concepts. In doing so, different axes of change emerge in different locations instead of being entangled in one place. Our development effort rises, but we hope for a return on investment in the future.

I’ve found it easier to elevate the implementation visibility level of some code later on than to decrease it. You might experience it the other way around. In the end, it doesn’t matter which way we choose – we have to match the importance of the requirement in the code. And as the requirements and their importance change, our code has to adjust in order to stay relevant. It isn’t the visibility level you choose now that will decide whether your code is visible enough; it is the necessary visibility level you cannot reach, for one reason or another, that will doom your code. Because it “feels bloated” and gets replaced, because it wasn’t found in time and is duplicated somewhere else, because it fused together with unrelated code and cannot be separated. Because of a plethora of reasons. By choosing and changing the implementation visibility level of your code deliberately, you at least take on the responsibility to minimize the effects of those reasons. And that will empower you even if not all your decisions turn out to be profitable.


With the end of this third part, our series about the concept of implementation visibility comes to an end. I hope you’ve enjoyed the journey and gained some insights. If you happen to identify an example where this concept could help you, I’d love to hear from you! And if you know about a book or some other source where this concept is explained, too – please comment with a link below.

Integrating catch2 with CMake and Jenkins

A few years back, we posted an article on how to get CMake, googletest and Jenkins to play nicely with each other. Since then, Phil Nash’s catch testing library has emerged as arguably the most popular library to write your C++ tests with. I’m going to show how to set up a small sample project that integrates catch2, CMake and Jenkins nicely.

Project structure

Here is the project structure we will be using in our example. It is a simple library that implements left-pad: A utility function to expand a string to a minimum length by adding a filler character to the left.

├── CMakeLists.txt
├── source
│   ├── CMakeLists.txt
│   ├── string_utils.cpp
│   └── string_utils.h
├── externals
│   └── catch2
│       └── catch.hpp
└── tests
    ├── CMakeLists.txt
    ├── main.cpp
    └── string_utils.test.cpp

As you can see, the code is organized in three subfolders: source, externals and tests. source contains your production code. In a real world scenario, you’d probably have a couple of libraries and executables in additional subfolders in this folder.

The source folder

set(TARGET_NAME string_utils)

# sketch: the original listing is abbreviated; this is one minimal way to complete it
add_library(${TARGET_NAME} string_utils.cpp string_utils.h)
target_include_directories(${TARGET_NAME} INTERFACE ${CMAKE_CURRENT_SOURCE_DIR})

install(TARGETS ${TARGET_NAME} ARCHIVE DESTINATION lib)
The library is added to the install target because that’s what we typically do with our artifacts.

I use externals as a place for libraries that go into the project’s VCS. In this case, that is just the catch2 single-header distribution.

The tests folder

I typically mirror the filename and path of the unit under test and add some extra tag, in this case the .test. You should really not need headers here. The corresponding CMakeLists.txt looks like this:



set(TARGET_NAME tests)

# sketch: the list/loop part of the original listing is abbreviated
set(UNIT_UNDER_TEST_NAMES string_utils)
set(SOURCE_FILES main.cpp)
foreach(name ${UNIT_UNDER_TEST_NAMES})
  list(APPEND SOURCE_FILES ${name}.test.cpp)
endforeach()

add_executable(${TARGET_NAME} ${SOURCE_FILES})
target_link_libraries(${TARGET_NAME} PUBLIC string_utils)
target_include_directories(${TARGET_NAME} PUBLIC ../externals/catch2/)
add_test(NAME ${TARGET_NAME} COMMAND ${TARGET_NAME} -o report.xml -r junit)

The list and the loop help me to list the tests without duplicating the .test tag everywhere. Note that there’s also a main.cpp included, which only defines catch’s main function:

#define CATCH_CONFIG_MAIN
#include <catch.hpp>

The add_test call at the bottom tells CTest (CMake’s bundled test runner) how to run catch. The “-o” switch tells catch to direct its output to a file, report.xml. The “-r” switch sets the reporter to the JUnit format. We will need both to integrate with Jenkins.

The top-level folder

The CMakeLists.txt in the top-level folder needs to call enable_testing() for our setup. Other than that, it just directs to the subfolders via add_subdirectory().
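
A minimal sketch of that top-level file (the project name is just a placeholder) could be:

cmake_minimum_required(VERSION 3.5)
project(string_utils_example)

enable_testing()

add_subdirectory(source)
add_subdirectory(tests)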


Now all that is needed is to set up Jenkins accordingly. Set up Jenkins to get your code and add a “CMake Build” build step. Hit “Add build tool invocation” and check “Use cmake” to let CMake handle the invocation of your build tool (e.g. make). You also specify the target here, which is typically “install” or “package”, via the “--target” switch.

Now you add another step that runs the tests via CTest. Add another build step, this time “CMake/CPack/CTest Execution”, and pick CTest. The one quirk with this is that it will let the build fail when CTest returns a non-zero exit code – which it does when any test fails. Usually, you want the build to become unstable and not failed if that happens. Hence, set “1-65535” in the “Ignore exit codes” input.

The final step is to let Jenkins use the report.xml that we had CTest generate, so it can produce the test result charts and tables. To do that, add the post-build action “Publish JUnit test result report” and point it to tests/report.xml.


That’s it. Now you’ve got your CI running nice catch tests. The code for this example is available on our GitHub.