Transferring commits via Git bundles

Sometimes you want to send (e.g. by e-mail) a set of new Git commits to someone else who has the same repository at an older state, without transferring the whole repository and without sharing a common remote repository.

One feature that might come to your mind is Git patches. Patches, however, don’t work when there are branches and merge commits in the commit history: git format-patch creates patches for the commits across the various branches in the order of their commit times and doesn’t create patches for merge commits at all.

Git bundles

The solution to the problem is Git bundles. A Git bundle contains a partial excerpt of a Git repository in a single file.

This is how to create a bundle, including branches, merge commits and tags:

$ git bundle create my.bundle <base commit>..HEAD --branches --tags

<base commit> must be replaced with the last commit (i.e. a commit hash or tag) that is already included in the old state of the repository.

A Git bundle can be imported into a repository via git pull:

$ git pull /path/to/my.bundle

Get it up and running

West Side-project story

I have seen quite a few projects that spent a ton of calendar/developer time and budget on building components and frameworks instead of getting something running. Running in this sense does not mean being able to show an application started from a developer’s IDE on her development notebook. By running I mean versioned artifacts deployed on some sort of staging infrastructure the client/customer has access to. Let me elaborate on that:

Walking skeleton

I really like the notion of the walking skeleton described by Steve Freeman and Nat Pryce in “Growing Object-Oriented Software, Guided by Tests”. While I do not want to emphasize TDD and/or automated end-to-end testing, I see great benefits in producing a walking skeleton that touches all important parts of a system and works with a minimal set of functionality. For me that means that all of the following elements are in place, albeit in a primitive way that can be refined along the way:

  • The code is hosted in a source code repository accessible to the project members
  • A build system is chosen and able to produce a runnable artifact
  • A continuous integration (CI) server is triggered on changes in the repository and produces the runnable artifact
  • The artifact is easily installable on the target machines and/or installed on a staging system that resembles the target system as closely as reasonable
  • If there are components, they talk to one another using minimal requests and stubbed replies that can be refined over time

I usually aim for that walking skeleton within the first few hours of a project, as soon as the base technologies and requirements allow coding to start. That may take up to several days when the system is more complex, but it should not take weeks. Connect the different parts of the system as early as possible, even if responses are minimal, hardcoded or “wrong”. It shows that the parts are able to communicate, and mismatches will become visible either somewhere along the build process or at least when running the application. Why should you choose such an approach?


  • Most people I know are better at evolving and improving existing stuff than at creating new stuff in empty space in a focused and efficient way. If you have a skeleton of the system it is much easier to talk about the interfaces and responsibilities of the different parts of the system.
  • You get to define APIs which you can evolve over time instead of specifying the complete API up front and experiencing implementation and mismatch problems much later when the components need to be integrated.
  • Similarly, it is much more difficult to package a complex system after it is finished than to package a simple minimal system and evolve all aspects – building, implementation, packaging/deployment, maintenance – when necessary.
  • You see much earlier whether things work as expected or whether there is some inherent problem in the whole design. Essentially you evolve your proof-of-concept into something with real value for your clients.
  • As soon as your system provides some value you can deploy the working stuff long before the whole project is finished.
  • It is much easier to discuss a working system than to reason about a system not yet existing.
  • You are eating your own dogfood from early on and can address pain points in development, user experience, deployment and running the application.
  • You have something to show more or less right from the beginning and progress will be visible throughout the project.
  • No “Works on my machine!” syndrome.

Potential Problems

Of course there is the challenge of continuously refactoring and extending existing stuff. Later on, data migration or migration of configuration may become additional tasks. But hey, you are skilled, agile developers who embrace the idea of changing requirements and the ability to move fast. You have to pay attention not to accumulate too much technical debt, as it will slow you down and hurt you in the long run.


Running systems providing value are what your clients often care about the most. If you can provide something like that early on, communication with your stakeholders tends to be more relaxed, as they have better opportunities to steer in the right direction and they steadily see progress. Running software should be the primary goal.

Thinking in immutability

The way I learned programming is dictated by objects and state. In my thinking, data is packed into objects which are later modified to reflect changes over time. State and modification are a central modelling technique. For me, programming and OOP in particular revolved around this common theme. Mutating objects pervade my thinking even beyond the code, into the database and even the architecture of the whole system.
Despite its advantages and ongoing efforts in the industry, I couldn’t help thinking: immutability is nice. I can use it in some cases and keep it quietly stored in the corner.
But it didn’t remain silent.
So I asked myself: How do you construct programs that build upon immutability? How do you (mostly) avoid mutable objects? How do you think in immutability?
The first step was to unlearn. No updates. No modifications. Read, create, copy. That’s about it. No more CRUD, only CR. No more SQL updates, only inserts.

Events and logs

To illustrate, I use a simple example: creating, moving, translating and deleting a point. In the traditional OO way it looks like this:

Point p = new Point(40, 30);
p.moveTo(10, 20);

Or, using SQL, it might look something like this (omitting primary keys and where clauses here):

insert into points (x, y) values (40, 30)
update points p set p.x = p.x + 5
update points p set p.x = 10, p.y = 20
delete from points

In our memory (or database if we use one) every line updates our point:

Point p = new Point(40, 30); // p = {x: 40, y: 30}
p.translateXBy(5); // p = {x: 45, y: 30}
p.moveTo(10, 20); // p = {x: 10, y: 20}
p.delete(); // p = ?

But what if we do not store the results of the operations but the operations themselves? The events.
Imagine your state changes as a series of events. Just imagine.

new PointCreated(40, 30);   // pointEvents = [{created: [x: 40, y: 30]}]
new PointTranslatedXBy(5);  // pointEvents = [{created: [x: 40, y: 30]}, {translated: [x: 5]}]
new PointMovedTo(10, 20);   // pointEvents = [{created: [x: 40, y: 30]}, {translated: [x: 5]}, {moved: [x: 10, y: 20]}]
new PointDeleted();         // pointEvents = [{created: [x: 40, y: 30]}, {translated: [x: 5]}, {moved: [x: 10, y: 20]}, {deleted}]

Even in the database we would just use inserts: no more updates and no more deletes. The events are stored in a log (ironically, the database itself already works this way internally with its transaction log). A log is a fully ordered, append-only queue. Once we use and store events we get some extras besides immutability: an audit trail, an undo stack, recovery, …
We could externalize the event stream into a message queue and monitor it, replay it to reproduce bugs, or distribute it. The possibilities are endless.
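
A minimal Java sketch of what such immutable events and an append-only log could look like (the class and method names are purely illustrative, not taken from any particular framework):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Immutable event types: all fields are final, there are no setters.
interface PointEvent {}

final class PointCreated implements PointEvent {
    final int x, y;
    PointCreated(int x, int y) { this.x = x; this.y = y; }
}

final class PointTranslatedXBy implements PointEvent {
    final int dx;
    PointTranslatedXBy(int dx) { this.dx = dx; }
}

final class PointMovedTo implements PointEvent {
    final int x, y;
    PointMovedTo(int x, int y) { this.x = x; this.y = y; }
}

final class PointDeleted implements PointEvent {}

// The log: fully ordered and append-only. Nothing is ever updated or removed.
class EventLog {
    private final List<PointEvent> events = new ArrayList<>();

    void append(PointEvent event) {
        events.add(event);
    }

    List<PointEvent> all() {
        return Collections.unmodifiableList(events);
    }
}

Appending the four events from above to such a log gives exactly the history shown in the comments; nothing is ever overwritten.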

But. That’s all nice and fine. I have one more question: what’s the current state? A user should see the current state, and so should other parts of the system (not to mention that I – coming from a mutable-state kind of thinking – would also feel better seeing it).

So what’s the current state?

All events applied in order.

OK. But isn’t it expensive to do this all the time?


Here another concept from databases helps us: materialized views. We can easily translate in our minds between the new immutable, event-driven way and the old in-place update way. It is just the same data in different representations (if we are only interested in the current state). If we store the current state as a materialized view (or cache) next to the event log, we can have both.
Every part of the program which needs the current state gets an immutable copy of it. If this part needs to know when something changes, it can observe the events and act accordingly. This way mutability is pushed to the borders, to the parts where the current state is shown (like the UI layer).
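
Building on the event classes sketched above, the “materialized view” is then nothing more than a fold over the log that produces an immutable snapshot of the current state (again, the names are illustrative):

import java.util.List;
import java.util.Optional;

// An immutable snapshot of the current state, rebuilt from the events.
final class PointState {
    final int x, y;
    PointState(int x, int y) { this.x = x; this.y = y; }
}

class CurrentPointView {

    // Applies all events in order; an empty Optional means "no point" (not yet created, or deleted).
    static Optional<PointState> replay(List<PointEvent> events) {
        Optional<PointState> state = Optional.empty();
        for (PointEvent event : events) {
            if (event instanceof PointCreated) {
                PointCreated created = (PointCreated) event;
                state = Optional.of(new PointState(created.x, created.y));
            } else if (event instanceof PointTranslatedXBy && state.isPresent()) {
                PointTranslatedXBy translated = (PointTranslatedXBy) event;
                state = Optional.of(new PointState(state.get().x + translated.dx, state.get().y));
            } else if (event instanceof PointMovedTo) {
                PointMovedTo moved = (PointMovedTo) event;
                state = Optional.of(new PointState(moved.x, moved.y));
            } else if (event instanceof PointDeleted) {
                state = Optional.empty();
            }
        }
        return state;
    }
}

To keep this cheap, the snapshot can be cached and updated incrementally whenever a new event is appended, instead of replaying the whole log every time.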

My motto: Make it visible

Nearly ten years ago, I read the wonderful book “Behind Closed Doors: Secrets of Great Management” by Johanna Rothman and Esther Derby. They shared a lot of valuable insights and tips for my management career, but more importantly, gave a name to a trend I had been pursuing for much longer. In their book, they introduce the central aspect of the “Big Visible Chart”, a whiteboard that contains all the important work. This term combined several lines of thought that had lingered in my head at the time without me being able to fully express them. Let me reiterate some of them:

  • Extreme Feedback Devices (XFD) were a new concept back in the day. The aspect of physical interaction with a purely virtual software project thrilled me. Given a sensible choice of feedback device, it represents the project state in an intuitive manner.
  • Scrum and Kanban boards became popular around the same time. I always rationally regarded them as a poor man’s issue tracker, but the ability to really move things around instead of just clicking had something to it.
  • My father always mentioned his Project Cockpit that he used in his company to maintain an overview of all upcoming and current projects. This cockpit is essentially a Scrum board at project granularity. We use our own variation with great success.
  • A lot of small everyday aspects required my attention much too often. Things like whether the dishwasher in a shared apartment contains dirty or clean dishes always needed careful examination.

It was about time to weave all these motivations into one overarching motto that could guide my progress. The “Big Visible Chart” was the first step towards this motto, but not the last. A big chart is really just a big information radiator and totally unsuited for the dishwasher use case. The motto needed to contain even more than “put all information on a central whiteboard”. I wasn’t able to word my motto until Bret Victor came along and gave his talk “Inventing On Principle” (if you don’t know it, go and watch it now, I’ll be waiting). He talks about the personal mission statement that you should find and arrange your actions around. That was the magical moment when everything fell into place for me. I knew my motto all along, but couldn’t spell it out. And then, it was clear: “Make it visible”. My personal mission is to make things visible.

Let me try to give you a few examples where I applied my principle of making information visible:

  • I built a lot of Extreme Feedback Devices that range from single lamps over multi-colored displays to speech synthesis and even a little waterfall that gets switched on when things are “in a state of flux”, like being built on the CI server. All the devices are clearly perceivable and express information that would otherwise need to be actively pulled from different sources. I even wrote a book chapter about this topic and talk about it at conferences.
  • A lot of recurring tasks in my team are handled by paper tokens that get passed on when the job is done. Examples are the blog token (yes, it’s currently on my desk) for blog entries or the backup token as a reminder to bring in the remotely stored backup device and sync it. These tokens not only remind the next owner of his duty, but also act as a sign that you’ve accomplished your job, just like with task cards on the Scrum board.
  • If we need to work directly on a client server, we put on our “live server hat” so that we are reminded to be extra careful (in German, there’s the idiom “auf der Hut sein”). But the hat is also a plainly visible sign for everybody else to be a tad more silent and refrain from disturbing. Don’t talk to the hat! A lesser grade of “do not disturb” sign is a fully applied pair of headphones.
  • Of course I built my own variation of my father’s Project Cockpit. It’s a great tracking device to never forget about any project, however sparse the actual activity might be.
  • And I solved the dishwasher case: The last action when clearing out the clean dishes should be to already insert the next dishwasher tab. That way, whenever you open the dishwasher door, there are two possible states: if the tab compartment is empty, the dishes are clean (or somebody forgot to re-arm). If the compartment is still closed, you can be sure to have dirty dishes in the machine. The compartment gets re-opened during the next washing cycle.
  • An extra example might be the date of opening we write on the milk and juice cartons so you’ll know how long it has been open already.

All of these examples make information visible in place, where it would otherwise require you to collect it by sampling, measuring or asking around. Information radiators are typically big objects that do that job for you and present you the result. I’ve come to find that an information radiator can be as little as a dishwasher tab in the right spot. The important aspect is to think about a way to make the information visible without much effort.

So if you repeatedly invest effort to gather all the necessary data for a piece of information, ask yourself: how could you automate or just formalize things so that you don’t have to gather the data, but have the information right before your eyes whenever you need it? It can be as simple as a little indicator on your mailbox that gets raised by the mailman or as complicated as a multi-colored LED in your faucet indicating the water temperature. The overarching principle is always to make information visible. It’s a very powerful motto to live by.

Quantities in C++ and User Defined Literals

Some weeks ago one of my colleagues wrote about the use and implementation of physical quantities in C#. If you are writing an application in the technical or scientific domain, chances are high that you should follow his advice and use a suitable representation of physical quantities instead of plain primitive values. The good news is that you can easily port/implement quantities in modern C++ or use existing libraries like Boost.Units.

With C++11 you can go one step further and add so-called user-defined literals. This feature allows the definition of suffixes for integer, floating-point, character and string literals to produce objects of the desired (quantity) type. While there is nothing wrong with using the multiplication operator to produce quantity instances, user-defined literals provide just a little bit more syntactic sugar:

#include <cmath> // for M_PI (non-standard, but widely available)

// Your quantity classes... (a minimal sketch: Angle stores its value in degrees)
class Angle
{
public:
    constexpr explicit Angle(long double valueInDegrees) : valueInDegrees_(valueInDegrees) {}
    constexpr long double inDegrees() const { return valueInDegrees_; }
private:
    long double valueInDegrees_;
};

// the 'degrees' unit: a plain number multiplied by it yields an Angle
struct Degrees {};
constexpr Degrees degrees{};
constexpr Angle operator*(long double value, Degrees) { return Angle(value); }
constexpr Angle operator*(int factor, Angle angle) { return Angle(factor * angle.inDegrees()); }

// operators for user-defined literals
constexpr Angle operator "" _deg(long double deg) { return deg * degrees; }
constexpr Angle operator "" _deg(unsigned long long int deg) { return deg * degrees; }
constexpr Angle operator "" _rad(long double rad) { return (rad * 180 / M_PI) * degrees; }

// add more if needed

This allows you to write code like:

Angle rightAngle = 90_deg;
Angle halfCircle = 3.141_rad;
Angle fullCircle = 4 * 90_deg;

In many cases this looks a tad simpler and cleaner than using the multiplication operator in conjunction with a unit, especially in more complex formulas. There are a few things about quantities and user-defined literals in C++ I find noteworthy:

  • These literal operators are only supported for the built-in literal types. If exact calculation with better than floating-point precision is needed, raw literals (instead of the cooked ones explained above) and decimal libraries have to be used. For raw literals you have to parse the characters of the literal yourself.
  • Your own literal suffixes need to be prefixed with _ to avoid clashes with current and future standard library literals. There are, for example, some nice literals for durations in the <chrono> date and time standard library.
  • If you implement your literal operators as constexpr, they will be evaluated at compile time, meaning slightly increased compile times and zero runtime overhead.

For some more in-depth discussion of user-defined literals have a look at the blog series by Andrzej Krzemieński.


What’s your time, database?

Time is a difficult subject. Especially time zones and daylight saving time. Sounds easy? Well, take a look.
Adding layers in software development complicates the issue, and every layer has its own view of time. Let’s start with an example: we write a simple application which stores time-based data in an SQL database, e.g. Oracle. The table has a column named ‘at’. Since we don’t want to mess around with time zones, we use a column type without time zone information; in Oracle this would be DATE if we do not need fractional seconds and TIMESTAMP if we need them. In Java with plain JDBC we can extract it with a call to getTimestamp:

Date timestamp = resultSet.getTimestamp("at");

The problem is that now we have a timestamp in our local time zone. Where is it converted? Oracle itself has two time zone settings: one for the database and one for the session. We can query them with:

select DBTIMEZONE from dual;


select SESSIONTIMEZONE from dual;

First Oracle uses the time zone set in the session, then the one set for the database. The results of those queries are interesting though: some return a named time zone like ‘Europe/Berlin’, others return an offset like ‘+01:00’. Here a first subtle detail is important: a named time zone uses the offset and the daylight saving time rules of that time zone, whereas an offset setting uses only the fixed offset and no daylight saving time. So ‘+01:00’ would just add 1 hour to UTC regardless of the date.
In our example, however, changing these two settings does not change our time conversion. These time zone settings only apply to another column type: TIMESTAMP WITH (LOCAL) TIME ZONE.
Going up one layer, the JDBC API reveals an interesting tidbit:

Timestamp getTimestamp(int columnIndex) throws SQLException

Retrieves the value of the designated column in the current row of this ResultSet object as a java.sql.Timestamp object in the Java programming language.

Sounds about right, but wait, there’s another method:

Timestamp getTimestamp(int columnIndex, Calendar cal) throws SQLException

Retrieves the value of the designated column in the current row of this ResultSet object as a java.sql.Timestamp object in the Java programming language. This method uses the given calendar to construct an appropriate millisecond value for the timestamp if the underlying database does not store timezone information.

Just as in Oracle, we can use a named time zone or an offset:

Date timestamp = resultSet.getTimestamp("at", Calendar.getInstance(TimeZone.getTimeZone("GMT+1:00")));
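
For comparison, here is a small sketch that reads the same column once with the JVM’s default time zone and once with an explicit UTC calendar (the table name ‘events’ and the choice of UTC are just assumptions for this example; only the column ‘at’ is taken from above):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

class TimestampComparison {

    // Reads the 'at' column once in the JVM's default time zone and once interpreted as UTC.
    static void compare(Connection connection) throws SQLException {
        try (Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("select at from events")) {
            Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
            while (resultSet.next()) {
                Timestamp inDefaultZone = resultSet.getTimestamp("at");
                Timestamp inUtc = resultSet.getTimestamp("at", utc);
                // Same column value, interpreted with different time zone assumptions:
                System.out.println(inDefaultZone + " vs. " + inUtc);
            }
        }
    }
}

Unless the JVM’s default time zone happens to be UTC, the two Timestamp values will differ by the default zone’s offset from UTC at that date.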

This way we have control over what is extracted from the database and how the time is interpreted. The next time you work with time-based information, take a close look. And if you work with Java, use Joda-Time.

Keep your ovens clean

Let’s assume for a moment that you are a baker, producing different types of pastries in your small bakery. The production process is always the same: prepare the dough, put it in the oven, wait some time and retrieve the most delicious buns or bread. If we can abstract the real baking process to these steps, it’s the same as with software: prepare the source code, put it in the compiler, wait some time and retrieve the most delicious binary or executable. There is only one difference: the baker’s oven is a self-contained, closed system, while our compilers require a distinct system setup around them in order to produce anything edible. The oven is independent of the kitchen around it; the compiler is dependent on its environment. To finish the analogy: what would a baker say if he couldn’t bake bread in his oven unless he nurtured a certain type of yeast in his kitchen?

A most unpleasant case

While developing a platform-dependent application recently, we met a most unpleasant case of build dependency on a third-party library. It was an old dynamic link library (DLL) that required registration in the Windows registry. There was no other way than to register the DLL using the regsvr32 utility. If you didn’t do this, the build process would abort with an error stating unmet dependencies. If you ran the resulting program on a machine without the registered DLL, it would crash with a runtime error complaining about the missing registry entry. And by the way, there are two totally independent regsvr32 utilities on a 64-bit Windows system, one for 32-bit and one for 64-bit registrations. No, the name of the latter one isn’t regsvr64, that would be way too easy.

We accepted the fact that you need to prepare your system if you want to run the program, but we quarreled a lot with the nuisance that you need to alter your system just to build the software. This process of alteration is called snowflaking in the DevOps mindset, and it’s not a desired activity. We would need to alter every build machine in our continuous integration cluster that comes into contact with the project. And we would need to de-snowflake them again afterwards, because this kind of tinkering adds up to inscrutable side effects very fast.

A practicable workaround

We found a way around the abovementioned snowflaking for our build servers. It’s not a solution, it’s only a workaround, as it solves the immediate problem but produces some lesser problems on the way. Let’s look at what we did.

At first, our situation could be described with this module diagram:

(module diagram “dependency1”: our system depends directly on the problematic DLL)

We couldn’t modify the problematic DLL itself, it was a given binary. But we could wrap it in our own DLL. Wrapping less pleasant things into something you can control is a proven technique, even in baking, by the way. We now had a system layout that looked like this:

(module diagram “dependency2”: our system now calls a wrapper DLL of our own that forwards to the problematic DLL)

Nothing gained so far, except that we now have a layer outside our system that can provide the functionality of the DLL and is actually under our control. The wrapper really does nothing on its own but forward each call to the DLL. To profit from this indirection, we need to introduce another module, like this:

(module diagram “dependency3”: a stub DLL with the same interface can take the place of the forwarding wrapper during the build)

The second module provides the same interface as the first, but does nothing, not even forwarding anywhere. It’s a complete stub, just there to be uncomplicated during the build process. The goal is to build the system using this “empty” DLL and then replace it with the forwarding wrapper DLL afterwards. The only question is: how do we build the forwarding wrapper, which needs the registered problematic DLL? Here’s the workaround part of the solution: we actually had to compile it once on a snowflaked system and add the binary to the project repository. Good thing our target system’s specification is known, so we only need to do this for one platform. Because we are reasonably sure that the DLL interface will not change over time (it had every opportunity in the last ten years and didn’t use it), we can assume that the interface of our two wrapper DLLs also won’t change. So it’s not too problematic to check in a precompiled binary that needs to satisfy an interface that’s reproduced with every build cycle. Still, we need to keep an eye on the method signatures of our two wrapper DLLs. If one of them changes, the modification needs to be replicated in the other wrapper, too. It’s a classic duplication.

When we balanced the duplication in the interfaces of the two wrapper DLLs against the snowflaking of every CI and developer machine, we found our aversion to snow outweighing the other negative aspects. Your mileage may vary.


We kept our build ovens clean by introducing a wrapping layer around the problematic dependency and then using the benefits of indirection by switching to a non-problematic stub during the build cycle. The technique is very old, but still useful and powerful.