My C++ Tool Belt

I suspect that every developer has a “tool belt” that he or she uses to be productive. By that I mean a collection of tools, libraries and whatever else helps. With a few exceptions, these tool belts will probably be language specific, or at least platform specific. As my projects updated their compilers and transitioned to C++11 and beyond, my C++ tool belt changed quite a bit. Since things like threading, smart pointers and functional abstractions were added to the standard library, those are now included by default. Today I want to write about what is in my modernized C++11 tool belt.

The Standard Library

Ever since the TR1 extensions, the standard library has grown into something truly powerful and exceptional. The smart pointers, containers and algorithms are much more language extensions than “just” a library, and they play perfectly with actual language features such as lambdas, auto and initializer lists.

fmtlib

fmtlib provides placeholder-based text formatting à la Python’s str.format. There have been a few implementations of this idea over the years, but this is the first that I think might just dethrone operator<< overloading for good. It’s fast, stable, portable and has a nice API.
I begin to miss this library the moment I need to work on a project that does not have it.
The next best thing is Qt’s QString::arg mechanism, with a slightly inferior API, a less inclusive license, and a much bigger dependency.
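
A small usage sketch of fmtlib’s placeholder syntax – the header name follows recent fmtlib versions:

#include <fmt/format.h>
#include <string>

int main()
{
  // automatic numbering, just like Python's str.format
  std::string sum = fmt::format("{} + {} = {}", 1, 2, 1 + 2);

  // explicit indices allow reordering and reusing arguments
  std::string name = fmt::format("{1}. {0} {1}.", "James", "Bond");

  // prints directly to stdout
  fmt::print("{}\n{}\n", sum, name);
}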

spdlog

Logging is a powerful tool, both for software development and maintenance. Chances are you are going to need it at some point. spdlog is my favorite choice for this task. It uses fmtlib internally, which is just another plus point. It’s simple, fast and very nice to use due to the reuse of fmtlib’s formatting. I usually just include it in my projects and get fmtlib for free.
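
A minimal sketch of how that looks – the exact factory names vary slightly between spdlog versions, this follows the classic API:

#include <spdlog/spdlog.h>

int main()
{
  // creates a logger writing to stdout; the message formatting
  // is fmtlib's placeholder syntax
  auto console = spdlog::stdout_logger_mt("console");
  console->info("Loaded {} assets in {:.2f} ms", 42, 1.23);
  console->warn("Config file {} not found, using defaults", "app.cfg");
}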

optional

This one is actually part of the recent C++17, but since that is not widely available yet (meaning not many projects have adopted it), I’m going to list it explicitly. There are also a few alternative implementations, such as the one in Boost or akrzemi1’s single-header variant.
Unlike many other programming languages, C++ places a relatively high emphasis on value types. While reference types usually have a built-in “not available” state (a.k.a. nullptr, NULL, Nothing or nil), an optional can transport that intent much more clearly. For value types, however, it’s absolutely mandatory to have an optional type. Otherwise, you just end up wrapping the value in a pointer to make it optional.
Do not, however, fall into the trap of using optional for error handling. It’s not made for that, and other abstractions, such as expected, are much better suited.
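
A small sketch with C++17’s std::optional – the Settings type is invented for the example; boost::optional and akrzemi1’s variant work almost identically:

#include <iostream>
#include <optional>

struct Settings
{
  std::optional<int> timeoutSeconds; // empty means "use the default"
};

int effective_timeout(Settings const& settings)
{
  // value_or collapses the optional into a concrete value
  return settings.timeoutSeconds.value_or(30);
}

int main()
{
  Settings defaulted;
  Settings custom;
  custom.timeoutSeconds = 120;

  std::cout << effective_timeout(defaulted) << '\n'; // 30
  std::cout << effective_timeout(custom) << '\n';    // 120
}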

CMake

There is really only one choice when it comes to build tools, and that’s CMake. It has its own bunch of weaknesses, but the good far outweighs the bad. With the target_ functions, it’s actually quite nice and scales really well to bigger projects. The main downside is that it still does not play nice with some tools, most notably Visual Studio; CLion and QtCreator fare much better. Then again, CMake easily enables the use of other tools, such as clang-tidy.
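
A minimal sketch of that target_-centric style – project and file names are invented, and the find_package line assumes fmtlib provides a CMake package:

cmake_minimum_required(VERSION 3.2)
project(example CXX)

find_package(fmt REQUIRED)

add_executable(example main.cpp)

# usage requirements are attached to the target, not to global state
target_include_directories(example PRIVATE include)
target_compile_definitions(example PRIVATE EXAMPLE_VERBOSE=1)
target_link_libraries(example PRIVATE fmt::fmt)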

A word on Boost

Boost is no longer the must-have it once was. Much of its essential functionality has been incorporated into the standard library, so it is no longer a requirement for a sane C++ project. On the contrary, Boost is notoriously huge and somewhat cumbersome to integrate. Boost is not a library, it is a collection of libraries, so you can still decide whether to use it on a library-by-library basis. However, much of it is viral, and using a small part of Boost will easily drag in a few hundred other Boost headers. The libraries I tend to include most often are Boost.Utility (for boost::noncopyable) and Boost.Filesystem. The former is obviously easy to do without Boost, especially with = delete;, and the latter is part of the standard library since C++17. I hope to do the majority of my projects without Boost in the future. Boost was a catalyst for much of the C++ progress in recent years. That it is slowly becoming obsolete – either by being integrated into the standard or by its idioms no longer being needed – is just a sign of its own success.
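
For illustration, this is all it takes to replace boost::noncopyable with C++11’s = delete (the class name is invented):

class Session
{
public:
  Session() = default;

  // deleted copy operations make the type non-copyable
  Session(Session const&) = delete;
  Session& operator=(Session const&) = delete;
};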

My honorable mentions are Qt and the stb single file libraries. What are your go-to tools?

Platform independent development with .NET

We develop most of our projects as platform independent applications, usually running under Windows, Mac and Linux. There are exceptions, for example when the application has to communicate with special hardware drivers, third-party libraries or other components that are not available on all platforms. But even then we isolate these parts into interchangeable modules that can be operated either in a simulated mode or with the real thing. The simulated modes are platform independent. Developers can usually work on the code base using their favorite operating system. Of course, it has to be tested on the target platform(s) that the application will run on in the end.

Platform independent development is both a matter of technology choices and programming practices. Concerning technology, the ecosystem based on the Java VM is a proven choice for platform independent development. We have developed many projects in Java and other JVM-based languages. All of our developers are polyglots and we are able to develop software in a wide variety of programming languages.

The .NET ecosystem

Until recently the .NET platform has been known to be mainly a Microsoft Windows based ecosystem. The Mono project was started by non-Microsoft developers to provide an open source implementation of .NET for other operating systems, but it never had the same status as Microsoft’s official .NET on Windows.

However, Microsoft has recently changed course: they open-sourced their .NET implementation and are porting it to other platforms. They acquired Xamarin, the company behind the Mono project, and they are releasing developer tools such as IDEs for non-Windows platforms.

IDEs for non-Windows platforms

If you want to develop a .NET project on a platform other than Windows you now have several choices for an IDE:

• Xamarin Studio / MonoDevelop
• Visual Studio for Mac
• JetBrains Rider

I am currently using JetBrains Rider on a Mac to develop a .NET based application in C#. Since I have used other JetBrains products before, it feels very familiar. Xamarin Studio, MonoDevelop, VS for Mac and JetBrains Rider all support the solution and project file format of the original Visual Studio for Windows, which means a .NET project can be developed with any of these IDEs.

Web applications

The .NET application I am developing is based on Web technologies. The server side uses the NancyFX web framework, the client side uses React. Persistence is done with Microsoft’s Entity Framework. All the libraries I need for the project, like NancyFX, the Entity Framework, a PostgreSQL driver, JSON.NET, NLog, NUnit, etc., work on non-Windows platforms without any problems.

Conclusion

Development of .NET applications is no longer limited to the Windows platform. Microsoft is actively opening up their development platform for other operating systems.

Recap of the Schneide Dev Brunch 2017-02-12

Yesterday, on Sunday, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. This brunch was a little smaller in number of attendees, but very interesting nonetheless. As usual, the main theme was that if you bring a software-related topic along with your food, everyone has something to share. Because we were very invested in our topics, we established an agenda for the event. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list inconclusive:

Household robots

We started our brunch talk by mentioning the services our five-year-old Roombas provide for us, especially keeping the floor free of any small things. The biggest effect of having an electronic pet like a Roomba is that you learn to keep your things above ground, especially cords with expensive electronics on the other end. The continuous elimination of dust is just a positive bonus on top of your behaviour adjustment. You keep the floor tidy; the Roomba just mercilessly enforces this rule.

Today, there are many alternatives to the original Roomba, and most have really nice features and abilities. So no matter which brand you buy, you’ll get a capable floor police.

Code Review priorities

In our recent dev brunches, we talked about code review tools and code review habits. This time, we talked about code review priorities and the sorry state we are still in with current tools. We worked out that it’s nearly useless to only show the diff with a few lines of surrounding code and expect a thorough review. Even the concept of “changed files” is rather distracting in an object-oriented language. But even the current tools are only as good as the use we make of them.

The first priority of code reviews should be finding and eliminating bugs – “real” bugs that would have surfaced in production otherwise and “hypothetical” bugs that could have shown up in production. This means that code review is at its core an activity for the user of the software. Only the second priority is the understandability of the source code. If the reviewer doesn’t understand the code, chances are high that nobody will, including the author in a few weeks or months. Cleaning up the code now mitigates the problem at the lowest possible cost, because the “hurdle of understanding” hasn’t been raised yet. A code review should never work on the level of linters and should not address topics that can be checked by an automatic tool. Suggestions about refactorings should be kept to a minimum, because they may serve no purpose if the code isn’t touched again. Refactor when the code is opened for the second edit, not on the first review. Review the code on the semantic level, not the syntactic one.

And keep in mind that code reviews tend to be used for conditioning remarks (“don’t do that”, “this is ugly”, “I don’t approve”, etc.). Try to avoid conditioning and strive to provide educational comments (“if you change this to that, then you’ll profit from this benefit”, “here’s a suggestion for a better approach and here’s why it is better”, etc.). But we also discussed that at this point, the code review remarks are probably better said in a pair programming session.

Code reviews are a powerful tool for development teams, but with power comes danger. Hopefully, we get adequate software tools to help us avoid the common traps soon.

Time management

Out of interest, we talked about some principles and practices to better manage one’s time.

The first thing to be aware of are the two fundamentally different schedules of management and development. The manager’s schedule is clocked in 30-minute intervals and driven by outside demand (meaning that a manager idles when not requested), while the maker’s schedule works in 4-hour blocks of uninterrupted, deeply concentrated work. You can probably see the problems that arise when somebody in a maker cycle is interrupted multiple times as if he were in a manager cycle. The first thing you can do is to announce your maker cycles (with clear “busy right now” indicators like headphones or a “do not disturb” sign) or to announce your manager cycles, much in the manner of consultation hours. Let a potential disturber know whether he can disturb safely or if even the question causes damage.

Another important thing is to arrange your surroundings according to your schedule. Your schedule is so important that you should choose your service providers according to it. For example, if you work full time, look for hairdressers that are open on Saturdays or dentists that offer appointments in the evening. If you need to contact people for personal matters during work hours, allocate a specific timebox each day or on one day of the week and do it then. Announce this timebox to everybody who might want to contact you during work. This way, you can differentiate between people that respect your (announced!) schedule and people that don’t. Depending on your rigorousness, you can cut the people that harm your schedule out of your life.

Work only with people who value your expectations (if reasonable). If you give a task to somebody, let’s say a craftsman, and state the deadline, you need to be sure that either the deadline is met without you checking, or that the craftsman will report back in time. Don’t give tasks to people who leave you in the lurch.

It all boils down to keeping control of your calendar. Whenever you give somebody else the opportunity to “conquer” a slice of your available time at their convenience, you increase your own inconvenience.

Karlsruhe C++ User Group

The year 2017 started with a newly founded C++ user group in Karlsruhe that kicked off with great events. David, the organiser of the user group, is a regular attendee of our Dev Brunch and reported about his experiences with the boot process of the user group. He found a sponsor in the Clausmark GmbH and accompanied the monthly talks and programming events with a regulars’ table that provides a similar format to our Dev Brunch, just in the evening and not in the morning. We also talked about possible future content, and found code-centric “git guided live casts” a worthwhile format. Another format, the excellent code retreat, is a great way to learn from others, but requires a full day and not just two hours in the evening. The Game of Life kata is really fun, even when done repeatedly. Once you discover the solution in APL, you’ll want a special APL keyboard, too.

We are looking forward to hearing great talks and meeting cool people at the C++ user group Karlsruhe.

Sales knowledge

Our last topic, in bonus time (we were lenient with our scheduled time box – it’s Sunday!), was about sales and establishing sales knowledge and sales behaviour in a group of developers. We agreed that starting with Strategic Selling is a good choice, because the process/framework is compatible with established developer culture and effective in its results. The resulting shift in the perception of occurrences is immediate and powerful. Strategic Selling is a rather old sales process that shares some similarities with Solution Selling, another nerd-friendly process for complex business-to-business (B2B) sales.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei in April. We even have some topics on the agenda already (like a report about first-hand experiences with the programming language Rust and a discussion about the concept of provisioning). And as always, we are open to guests and future regulars. Just drop us a notice and we’ll invite you over next time.

C++ Coroutines on Windows with the Fiber API

Last week, I had the chance to try out coroutines as a way to cooperatively interleave long tasks with event-processing. Unlike threads, where interaction between threads can happen at any time, coroutines need to yield control explicitly, which arguably makes synchronisation a little simpler – especially in (legacy) systems that are not designed for concurrency. Of course, since coroutines do not run at the same time, you do not get the perks of concurrency either.

If you don’t know coroutines, think of them as functions that can be paused and resumed.

Unlike many other languages, C++ does not have built-in support for coroutines just yet. There are, however, several alternatives. On Windows, you can use the Fiber API to implement coroutines easily.

Here’s some example code of how that works:

auto coroutine=std::make_shared<FiberCoroutine>();
coroutine->setup([](Coroutine::Yield yield)
{
  for (int i=0; i<3; ++i)
  {
    std::cout << "Coroutine "
              << i << std::endl;
    yield();
  }
});

int stepCount = 0;
while (coroutine->step())
{
  std::cout << "Main "
            << stepCount++ << std::endl;
}

Somewhat surprisingly, at least if you have never seen coroutines before, this will produce the two outputs alternately:

Coroutine 0
Main 0
Coroutine 1
Main 1
Coroutine 2
Main 2

Interface

Since fibers are not the only way to implement coroutines and since we want to keep our client code nicely insulated from the Windows API, there’s a pure-virtual base class as an interface:

class Coroutine
{
public:
  using Yield = std::function<void()>;
  using Run = std::function<void(Yield)>;

  virtual ~Coroutine() = default;
  virtual void setup(Run f) = 0;
  virtual bool step() = 0;
};

Typically, creation of a Coroutine type allocates all the resources it needs, while setup “primes” it with an inner “Run” function that can use an implementation-specific “Yield” function to pass control back to the caller, which is whoever calls step.

Implementation

The implementation using fibers is fairly straightforward:

class FiberCoroutine
  : public Coroutine
{
public:
  FiberCoroutine()
  : mCurrent(nullptr), mRunning(false)
  {
  }

  ~FiberCoroutine()
  {
    if (mCurrent)
      DeleteFiber(mCurrent);
  }

  void setup(Run f) override
  {
    if (!mMain)
    {
      mMain = ConvertThreadToFiber(NULL);
    }
    mRunning = true;
    mFunction = std::move(f);

    if (!mCurrent)
    {
      mCurrent = CreateFiber(0,
        &FiberCoroutine::proc, this);
    }
  }

  bool step() override
  {
    SwitchToFiber(mCurrent);
    return mRunning;
  }

  void yield()
  {
    SwitchToFiber(mMain);
  }

private:
  void run()
  {
    while (true)
    {
      mFunction([this]
                { yield(); });
      mRunning = false;
      yield();
    }
  }

  static VOID WINAPI proc(LPVOID data)
  {
    reinterpret_cast<FiberCoroutine*>(data)->run();
  }

  static LPVOID mMain;
  LPVOID mCurrent;
  bool mRunning;
  Run mFunction;
};

LPVOID FiberCoroutine::mMain = nullptr;

The idea here is that the caller and the callee are both fibers: lightweight threads without concurrency. Running the coroutine switches to the callee’s fiber, i.e. the run function’s. Yielding switches back to the caller. Note that it is currently assumed that all callers are from the same thread, since each thread that participates in the switching needs to be converted to a fiber initially, even the caller. The current version only keeps the fiber for the initial thread in a single static variable. However, it should be possible to support multiple calling threads by replacing the single static fiber pointer with a map from each thread to its associated fiber.
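
Here is a sketch of that extension. The helper and the map are hypothetical and untested; a real version would also have to guard the map with a mutex (or use thread-local storage) if coroutines are created from several threads at once. GetCurrentThreadId and ConvertThreadToFiber are the underlying Win32 calls.

#include <windows.h>
#include <unordered_map>

// one "main" fiber per calling thread, keyed by thread id
static std::unordered_map<DWORD, LPVOID> mainFibers;

LPVOID mainFiberForCurrentThread()
{
  DWORD thread = GetCurrentThreadId();
  auto found = mainFibers.find(thread);
  if (found != mainFibers.end())
    return found->second;

  // first call from this thread: convert it to a fiber and remember it
  LPVOID fiber = ConvertThreadToFiber(NULL);
  mainFibers[thread] = fiber;
  return fiber;
}

Since a fiber always runs on the thread that last switched to it, yield() could look up mainFiberForCurrentThread() to find the right fiber to switch back to, instead of using the static mMain.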

Note that you cannot return from the fiberproc – that will just terminate the whole thread! Instead, just yield back to the caller and either re-use or destroy the fiber.

Assessment

Fiber-based coroutines are a nice and efficient way to model non-linear control-flow explicitly, but they do not come without downsides. For example, while this example worked flawlessly when compiled with Visual Studio, Cygwin just terminates without even an error. If you’re used to working with the Visual Studio debugger, it may surprise you that the caller is hidden completely while you’re in the run function. The run function’s stack completely replaces the caller’s stack until you call yield(). This means that you cannot find out who called step(). On the other hand, if you’re actually doing a lot of processing in the run function, this is quite nice for profiling, as the “processing” call tree seemingly has its own root in the call-tree.

I just wish the Visual Studio debugger had a way to view the states of the different fibers like it has for threads.

Alternatives

  • On Linux, you can use the ucontext API.
  • Visual Studio 2015 also has another, newer implementation.
  • Coroutines can be implemented using threads and condition-variables – see the sketch after this list.
  • There’s also Boost.Coroutine, if you need an independent implementation of the concept. From what I gather, they only use Fibers optionally, and otherwise do the required “trickery” themselves. Maybe this even keeps the caller-stack visible – it is certainly worth exploring.
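
For illustration, here is a minimal sketch of the threads-and-condition-variables variant mentioned in the list above. It is not production code: the coroutine body runs on a real thread, a “turn” flag ensures that caller and callee never run at the same time, and the sketch assumes the coroutine is always stepped to completion before destruction.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

class ThreadCoroutine
{
public:
  using Yield = std::function<void()>;
  using Run = std::function<void(Yield)>;

  void setup(Run f)
  {
    mRunning = true;
    mThread = std::thread([this, f]
    {
      waitFor(Turn::Callee);
      f([this]
      {
        passTo(Turn::Caller);
        waitFor(Turn::Callee);
      });
      mRunning = false;
      passTo(Turn::Caller);
    });
  }

  bool step()
  {
    passTo(Turn::Callee);
    waitFor(Turn::Caller);
    return mRunning;
  }

  ~ThreadCoroutine()
  {
    // only safe if the coroutine has run to completion
    if (mThread.joinable())
      mThread.join();
  }

private:
  enum class Turn { Caller, Callee };

  void waitFor(Turn turn)
  {
    std::unique_lock<std::mutex> lock(mMutex);
    mCondition.wait(lock, [this, turn] { return mTurn == turn; });
  }

  void passTo(Turn turn)
  {
    {
      std::lock_guard<std::mutex> lock(mMutex);
      mTurn = turn;
    }
    mCondition.notify_one();
  }

  std::thread mThread;
  std::mutex mMutex;
  std::condition_variable mCondition;
  Turn mTurn = Turn::Caller;
  bool mRunning = false;
};

From the caller’s perspective this behaves like the FiberCoroutine, but each coroutine costs a full thread plus two context switches per step.
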
Generating a spherified cube in C++

In my last post, I showed how to generate an icosphere, a subdivided icosahedron, without any fancy data-structures like the half-edge data-structure. Someone in the reddit discussion on my post mentioned that a spherified cube is also nice, especially since it naturally lends itself to a relatively nice UV-map.

The old algorithm

The exact same algorithm from my last post can easily be adapted to generate a spherified cube, just by starting on different data.

[image: cube]

After 3 steps of subdivision with the old algorithm, that cube will be transformed into this:

[image: split4]

Slightly adapted

If you look closely, you will see that the triangles in this mesh are a bit uneven. The vertical lines on the yellow side seem to curve around a bit. This is because, unlike in the icosahedron, the triangles in the initial box mesh are far from equilateral. The four-way split does not work very well with this.

One way to improve the situation is to use an adaptive two-way split instead:

[image: split2]

Instead of splitting all three edges, we’ll only split one. The adaptive part here is that the edge we split is always the longest that appears in the triangle, thereby avoiding very long edges.

Here’s the code for that. The only tricky part is the modulo-counting to get the indices right. The vertex_for_edge function does the same thing as last time: providing a vertex for subdivision while keeping the mesh connected in its index structure.

TriangleList
subdivide_2(ColorVertexList& vertices,
  TriangleList triangles)
{
  Lookup lookup;
  TriangleList result;

  for (auto&& each:triangles)
  {
    auto edge=longest_edge(vertices, each);
    Index mid=vertex_for_edge(lookup, vertices,
      each.vertex[edge], each.vertex[(edge+1)%3]);

    result.push_back({each.vertex[edge],
      mid, each.vertex[(edge+2)%3]});

    result.push_back({each.vertex[(edge+2)%3],
      mid, each.vertex[(edge+1)%3]});
  }

  return result;
}

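The longest_edge helper is not shown in the listing; here is a minimal sketch of what it could look like. The position member and the squared_length function are assumptions about the vertex type, not code from the original post:

// returns the index i (0..2) of the triangle's longest edge,
// i.e. the edge from vertex[i] to vertex[(i+1)%3]
int longest_edge(ColorVertexList& vertices, Triangle const& triangle)
{
  int longest = 0;
  float maximum = 0.f;
  for (int edge = 0; edge < 3; ++edge)
  {
    auto from = vertices[triangle.vertex[edge]].position;
    auto to = vertices[triangle.vertex[(edge + 1) % 3]].position;
    // the squared length is enough for comparing
    float length = squared_length(to - from);
    if (length > maximum)
    {
      maximum = length;
      longest = edge;
    }
  }
  return longest;
}
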
Now the result looks a lot more even:

[image: split2_sphere]

Note that this algorithm only doubles the triangle count per iteration, so you might want to execute it twice as often as the four-way split.
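
For completeness, a hypothetical driver in the spirit of make_icosphere from the last post – cube::vertices and cube::triangles are assumed to hold the initial box mesh, they are not shown in either post:

using IndexedMesh=std::pair<ColorVertexList, TriangleList>;

IndexedMesh make_spherified_cube(int subdivisions)
{
  ColorVertexList vertices=cube::vertices;
  TriangleList triangles=cube::triangles;

  // two two-way splits roughly correspond to one four-way split
  for (int i=0; i<subdivisions*2; ++i)
  {
    triangles=subdivide_2(vertices, triangles);
  }

  return {vertices, triangles};
}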

Alternatives

Instead of using this generic triangle-based subdivision, it is also possible to generate the six sides as subdivided patches, as suggested in this article. That approach works naturally if you want to have seams between your six sides. However, it is more specialized towards this particular geometry and will require extra “stitching” if you don’t want seams.

Code

The code for both the icosphere and the spherified cube is now on github: github.com/softwareschneiderei/meshing-samples.

Generating an Icosphere in C++

If you want to render a sphere in 3D, for example in OpenGL or DirectX, it is often a good idea to use a subdivided icosahedron. That often works better than the “UVSphere”, which means simply tessellating a sphere by longitude and latitude. The triangles in an icosphere are a lot more evenly distributed over the final sphere. Unfortunately, the easiest way to generate such a sphere, it seems, is to do it in a 3D editing program. But loading that into your application requires a 3D file format parser. That’s a lot of overhead if you really just need the sphere, so doing it programmatically is preferable.

At this point, many people will just settle for the UVSphere since it is easy to generate programmatically. Especially since generating the sphere as an indexed mesh without vertex duplicates further complicates the problem. But it is actually not much harder to generate the icosphere!
Here I’ll show some C++ code that does just that.

C++ Implementation

We start with a hard-coded indexed-mesh representation of the icosahedron:

#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// v3 is assumed to be a three-component vector type providing
// operator+ and a normalize function; Index just needs to be an
// integer type
using Index=std::uint32_t;

struct Triangle
{
  Index vertex[3];
};

using TriangleList=std::vector<Triangle>;
using VertexList=std::vector<v3>;

namespace icosahedron
{
const float X=.525731112119133606f;
const float Z=.850650808352039932f;
const float N=0.f;

static const VertexList vertices=
{
  {-X,N,Z}, {X,N,Z}, {-X,N,-Z}, {X,N,-Z},
  {N,Z,X}, {N,Z,-X}, {N,-Z,X}, {N,-Z,-X},
  {Z,X,N}, {-Z,X, N}, {Z,-X,N}, {-Z,-X, N}
};

static const TriangleList triangles=
{
  {0,4,1},{0,9,4},{9,5,4},{4,5,8},{4,8,1},
  {8,10,1},{8,3,10},{5,3,8},{5,2,3},{2,7,3},
  {7,10,3},{7,6,10},{7,11,6},{11,0,6},{0,1,6},
  {6,1,10},{9,0,11},{9,11,2},{9,2,5},{7,2,11}
};
}

[image: icosahedron]

Now we iteratively replace each triangle in this icosahedron by four new triangles:

[image: subdivision]

Each edge in the old model is subdivided, and the resulting vertex is moved onto the unit sphere by normalization. The key here is not to duplicate the newly created vertices. This is done by keeping a lookup from the edge to the new vertex it generates. Note that the orientation of the edge does not matter here, so we need to normalize the edge direction for the lookup. We do this by forcing the lower index first. Here’s the code that either creates or reuses the vertex for a single edge:

using Lookup=std::map<std::pair<Index, Index>, Index>;

Index vertex_for_edge(Lookup& lookup,
  VertexList& vertices, Index first, Index second)
{
  Lookup::key_type key(first, second);
  if (key.first>key.second)
    std::swap(key.first, key.second);

  auto inserted=lookup.insert({key, vertices.size()});
  if (inserted.second)
  {
    auto& edge0=vertices[first];
    auto& edge1=vertices[second];
    auto point=normalize(edge0+edge1);
    vertices.push_back(point);
  }

  return inserted.first->second;
}

Now you just need to do this for all the edges of all the triangles in the model from the previous iteration:

TriangleList subdivide(VertexList& vertices,
  TriangleList triangles)
{
  Lookup lookup;
  TriangleList result;

  for (auto&& each:triangles)
  {
    std::array<Index, 3> mid;
    for (int edge=0; edge<3; ++edge)
    {
      mid[edge]=vertex_for_edge(lookup, vertices,
        each.vertex[edge], each.vertex[(edge+1)%3]);
    }

    result.push_back({each.vertex[0], mid[0], mid[2]});
    result.push_back({each.vertex[1], mid[1], mid[0]});
    result.push_back({each.vertex[2], mid[2], mid[1]});
    result.push_back({mid[0], mid[1], mid[2]});
  }

  return result;
}

using IndexedMesh=std::pair<VertexList, TriangleList>;

IndexedMesh make_icosphere(int subdivisions)
{
  VertexList vertices=icosahedron::vertices;
  TriangleList triangles=icosahedron::triangles;

  for (int i=0; i<subdivisions; ++i)
  {
    triangles=subdivide(vertices, triangles);
  }

  return {vertices, triangles};
}

There you go, a custom-subdivided icosphere!

[image: icosphere]

Performance

Of course, this implementation is not the most runtime-efficient way to get the icosphere, but it is decent and very simple. Its performance depends mainly on the type of lookup used. I used a map instead of an unordered_map here for brevity, only because there’s no premade hash function for a std::pair of indices. In practice, you would almost always use a hash-map or some kind of spatial structure, such as a grid, which makes this method a lot harder to compete with – and certainly feasible for most applications!
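
Supplying the missing hash is only a few lines if you do want the unordered_map. A sketch, assuming Index is a 32-bit type (the combine step is a simple illustration, not a tuned hash):

#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>

struct EdgeHash
{
  std::size_t operator()(std::pair<Index, Index> const& edge) const
  {
    // pack both 32-bit indices into one 64-bit value and hash that
    std::uint64_t combined =
      (std::uint64_t(edge.first) << 32) | std::uint64_t(edge.second);
    return std::hash<std::uint64_t>{}(combined);
  }
};

using FastLookup=
  std::unordered_map<std::pair<Index, Index>, Index, EdgeHash>;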

The general pattern

The lookup-or-create pattern used in this code is very useful when creating indexed meshes programmatically. I’m certainly not the only one who discovered it, but I think it deserves to be more widely known. For example, I’ve used it when extracting voxel-membranes and isosurfaces from volumes. It works very well whenever you are creating your vertices from some well-defined parameters. Usually, it’s some tuple that describes the edge you are creating the vertex on. This is the case with marching cubes or marching tetrahedrons. It can, however, also be grid coordinates if you sparsely generate vertices on a grid, for example when meshing heightmaps.

Explicit types – and when to use them

Many modern programming languages offer a way to declare variables without an explicit type if the type can be inferred, either dynamically or statically. Many also allow for variables to be explicitly defined with a type. For example, Scala and C# let you omit the explicit variable type via the var keyword, but both also allow defining variables with explicit types. I’m coming from the C++ world, where auto is available for this purpose since the relatively recent C++11. However, people are still debating whether you should actually use it.

Pros

Herb Sutter popularised the almost-always-auto style. He advocates that using more type inference is good because it is roughly equivalent to programming against interfaces instead of implementations. He says that “Overcommitting to explicit types makes code less generic and more interdependent, and therefore more brittle and limited.” However, he also mentions that you might sometimes want to use explicit types.

Now what exactly is overcommitting here? When is the right time to use explicit types?

Cons

Opponents of implicit typing, many of them experienced veterans, often state that they want the actual type visible in the source code. They don’t want to rely on type inference being right. They want the code to explicitly state what’s going on.

At first, I figured that was just conservatism in the face of a new “scary” feature that they did not fully understand. After all, IDEs can usually infer the type on-the-fly and you can hover over a variable to let it show you the type.

For C++, the function signature is a natural boundary where you often insert explicit types, unless you want to commit to the compile-time and physical-dependency cost that comes with templates. Other languages, such as Groovy, do not have this trade-off and let you skip explicit types almost everywhere. After working with Groovy/Grails for a while, where the dominant style seems to be to omit types wherever possible, it dawned on me that the opponents of implicit typing have a point. Not only does the IDE often fail to show me the inferred type (even though it still works far more often than I would have anticipated), but I also found it harder to follow and modify code that did not mention explicit types. Seemingly contrary to Herb Sutter’s argument, that code felt more brittle than I would have liked.

Middle-ground

As usual, the truth seems to be somewhere in the middle. I propose the following rule for when to use explicit types:

• Explicit typing for domain types
• Implicit typing everywhere else

Code using types from the problem domain should be as specific as possible. There’s no need for it to be generic – that is actually counter-productive, as otherwise the code model would be inconsistent with the model of the problem domain. This is also the most important aspect to grok when reading the code, so it should be explicit. The type is as important as the action on it.

On the other hand, for pure-fabrication types that do not represent a concept in the domain, the action is important, while the type is merely a means to achieve this action. Typically, most of the elements from a language’s standard library fall into this category: all your containers, iterators, callables. Their types are merely implementation details: an associative container could be an array, a hash-map or a tree structure. Exchanging it rarely changes the meaning of the code in the problem domain – it just changes its performance characteristics.

Containers will occasionally contain domain types in their type. What do you do about those? I think they belong in the “everywhere else” category, but you should take extra care to name the contained type when working with it – for example when declaring the variable of the for-each loop over it, or when inserting something into it. This way, the “collection of domain type” aspect becomes clear, but the specific container implementation stays implicit – as it should.
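
A small C++ example to illustrate the rule – Invoice and its helpers are invented for the occasion:

#include <string>
#include <vector>

// a domain type: spelled out explicitly wherever it matters
struct Invoice
{
  std::string customer;
  double total;
};

std::vector<Invoice> load_open_invoices()
{
  return {{"ACME", 100.0}, {"Globex", 250.0}};
}

int main()
{
  // the container's concrete type stays implicit...
  auto invoices = load_open_invoices();

  // ...but the contained domain type is named in the for-each loop
  double sum = 0.0;
  for (Invoice const& each : invoices)
  {
    sum += each.total;
  }

  return sum > 0.0 ? 0 : 1;
}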

What do you think? Is this a useful proposition for your code?