C++ Coroutines on Windows with the Fiber API

Last week, I had the chance to try out coroutines as a way to cooperatively interleave long tasks with event-processing. Unlike threads, where interaction between threads can happen at any time, coroutines need to yield control explicitly, which arguably makes synchronisation a little simpler, especially in (legacy) systems that are not designed for concurrency. Of course, since coroutines do not run at the same time, you do not get the perks of concurrency either.

If you don’t know coroutines, think of them as functions that can be paused and resumed.

Unlike many other languages, C++ does not have built-in support for coroutines just yet. There are, however, several alternatives. On Windows, you can use the Fiber API to implement coroutines easily.

Here’s some example code of how that works:

auto coroutine=std::make_shared<FiberCoroutine>();
coroutine->setup([](Coroutine::Yield yield)
{
  for (int i=0; i<3; ++i)
  {
    std::cout << "Coroutine "
              << i << std::endl;
    yield();
  }
});

int stepCount = 0;
while (coroutine->step())
{
  std::cout << "Main "
            << stepCount++ << std::endl;
}

Somewhat surprisingly, at least if you have never seen coroutines before, this will produce the two outputs alternately:

Coroutine 0
Main 0
Coroutine 1
Main 1
Coroutine 2
Main 2

Interface

Since fibers are not the only way to implement coroutines and since we want to keep our client code nicely insulated from the Windows API, there’s a pure-virtual base class as an interface:

#include <functional>

class Coroutine
{
public:
  using Yield = std::function<void()>;
  using Run = std::function<void(Yield)>;

  virtual ~Coroutine() = default;
  virtual void setup(Run f) = 0;
  virtual bool step() = 0;
};

Typically, creating a Coroutine allocates all the resources it needs, while setup “primes” it with an inner “Run” function that can use an implementation-specific “Yield” function to pass control back to the caller, i.e. whoever calls step.

Implementation

The implementation using fibers is fairly straight-forward:

#include <windows.h>

class FiberCoroutine
  : public Coroutine
{
public:
  FiberCoroutine()
  : mCurrent(nullptr), mRunning(false)
  {
  }

  ~FiberCoroutine()
  {
    if (mCurrent)
      DeleteFiber(mCurrent);
  }

  void setup(Run f) override
  {
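    // Convert the calling thread to a fiber once, so there is
    // a fiber to switch back to when the coroutine yields.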
    if (!mMain)
    {
      mMain = ConvertThreadToFiber(NULL);
    }
    mRunning = true;
    mFunction = std::move(f);

    if (!mCurrent)
    {
      mCurrent = CreateFiber(0,
        &FiberCoroutine::proc, this);
    }
  }

  bool step() override
  {
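    // Resume the coroutine; it runs until the next yield()
    // or until the Run function finishes.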
    SwitchToFiber(mCurrent);
    return mRunning;
  }

  void yield()
  {
    SwitchToFiber(mMain);
  }

private:
  void run()
  {
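    // Never return from the fiber procedure - that would
    // terminate the whole thread. Finish by signalling
    // completion and yielding back instead.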
    while (true)
    {
      mFunction([this]
                { yield(); });
      mRunning = false;
      yield();
    }
  }

  static VOID WINAPI proc(LPVOID data)
  {
    reinterpret_cast<FiberCoroutine*>(data)->run();
  }

  static LPVOID mMain;
  LPVOID mCurrent;
  bool mRunning;
  Run mFunction;
};

LPVOID FiberCoroutine::mMain = nullptr;

The idea here is that the caller and the callee are both fibers: lightweight threads without concurrency. Running the coroutine switches to the callee’s fiber, i.e. the one executing the run function. Yielding switches back to the caller’s. Note that all callers are currently assumed to be on the same thread, since each thread that participates in the switching needs to be converted to a fiber initially, even the caller. The current version only keeps the fiber for the initial thread in a single static variable. However, it should be possible to support multiple threads by replacing the single static fiber pointer with a map that maps each thread to its associated fiber.
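
For example, a minimal sketch of that idea, assuming a compiler with C++11 thread_local support (Visual Studio 2015 and later), which sidesteps the explicit map:

class FiberCoroutine : public Coroutine
{
  /* ... everything else as above ... */
private:
  // One main fiber per participating thread instead of one per process
  static thread_local LPVOID mMain;
};

thread_local LPVOID FiberCoroutine::mMain = nullptr;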

Note that you cannot return from the fiberproc – that will just terminate the whole thread! Instead, just yield back to the caller and either re-use or destroy the fiber.

Assessment

Fiber-based coroutines are a nice and efficient way to model non-linear control-flow explicitly, but they do not come without downsides. For example, while this example worked flawlessly when compiled with Visual Studio, the Cygwin build just terminates, without even an error. If you’re used to working with the Visual Studio debugger, it may surprise you that the caller gets hidden completely while you’re in the run function. The run function’s stack completely replaces the caller’s stack until you call yield(). This means that you cannot find out who called step(). On the other hand, if you’re actually doing a lot of processing in the run function, this is quite nice for profiling, as the “processing” call-tree seemingly gets its own root in the overall call-tree.

I just wish the Visual Studio debugger had a way to view the states of the different fibers like it has for threads.

Alternatives

  • On Linux, you can use the ucontext API.
  • Visual Studio 2015 also has another, newer implementation.
  • Coroutines can be implemented using threads and condition variables, as shown in the sketch after this list.
  • There’s also Boost.Coroutine, if you need an independent implementation of the concept. From what I gather, it only uses fibers optionally, and otherwise does the required “trickery” itself. Maybe this even keeps the caller’s stack visible – it is certainly worth exploring.
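
To illustrate the thread-based alternative from the list above, here is a minimal sketch of a coroutine built from a thread and a condition variable, implementing the same Coroutine interface as before. All names are made up for this example, and it assumes the coroutine body always runs to completion before the object is destroyed:

#include <condition_variable>
#include <mutex>
#include <thread>

class ThreadCoroutine
  : public Coroutine
{
public:
  void setup(Run f) override
  {
    mRunning = true;
    mThread = std::thread([this, f]
    {
      waitForFirstStep();
      f([this] { yieldToCaller(); });
      std::unique_lock<std::mutex> lock(mMutex);
      mRunning = false;
      mCalleeTurn = false; // hand control back for good
      mSignal.notify_one();
    });
  }

  bool step() override
  {
    std::unique_lock<std::mutex> lock(mMutex);
    mCalleeTurn = true;
    mSignal.notify_one();
    // Wait until the coroutine yields or finishes
    mSignal.wait(lock, [this] { return !mCalleeTurn; });
    return mRunning;
  }

  ~ThreadCoroutine()
  {
    if (mThread.joinable())
      mThread.join();
  }

private:
  void waitForFirstStep()
  {
    std::unique_lock<std::mutex> lock(mMutex);
    mSignal.wait(lock, [this] { return mCalleeTurn; });
  }

  void yieldToCaller()
  {
    std::unique_lock<std::mutex> lock(mMutex);
    mCalleeTurn = false;
    mSignal.notify_one();
    mSignal.wait(lock, [this] { return mCalleeTurn; });
  }

  std::thread mThread;
  std::mutex mMutex;
  std::condition_variable mSignal;
  bool mCalleeTurn = false;
  bool mRunning = false;
};

Only one of the two sides ever runs at a time, so there is still no real concurrency. Unlike the fiber version, both sides keep their full call stack visible in the debugger, at the cost of two real thread context-switches per step.
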
    Generating a spherified cube in C++

    In my last post, I showed how to generate an icosphere, a subdivided icosahedron, without any fancy data-structures like the half-edge data-structure. Someone in the reddit discussion on my post mentioned that a spherified cube is also nice, especially since it naturally lends itself to a relatively nice UV-map.

    The old algorithm

    The exact same algorithm from my last post can easily be adapted to generate a spherified cube, just by starting on different data.

    [image: cube]

    After 3 steps of subdivision with the old algorithm, that cube will be transformed into this:

    [image: split4]

    Slightly adapted

    If you look closely, you will see that the triangles in this mesh are a bit uneven. The vertical lines on the yellow side seem to curve around a bit. This is because, unlike in the icosahedron, the triangles in the initial box mesh are far from equilateral. The four-way split does not work very well with this.

    One way to improve the situation is to use an adaptive two-way split instead:
    [image: split2]

    Instead of splitting all three edges, we’ll only split one. The adaptive part here is that the edge we’ll split is always the longest that appears in the triangle, therefore avoiding very long edges.

    Here’s the code for that. The only tricky part is the modulo-counting to get the indices right. The vertex_for_edge function does the same thing as last time: providing a vertex for subdivision while keeping the mesh connected in its index structure.

    TriangleList
    subdivide_2(ColorVertexList& vertices,
      TriangleList triangles)
    {
      Lookup lookup;
      TriangleList result;
    
      for (auto&& each:triangles)
      {
        auto edge=longest_edge(vertices, each);
        Index mid=vertex_for_edge(lookup, vertices,
          each.vertex[edge], each.vertex[(edge+1)%3]);
    
        result.push_back({each.vertex[edge],
          mid, each.vertex[(edge+2)%3]});
    
        result.push_back({each.vertex[(edge+2)%3],
          mid, each.vertex[(edge+1)%3]});
      }
    
      return result;
    }
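
    The longest_edge helper is not part of the listing above. Here is a minimal sketch of it, assuming the vertices behave like position vectors (the position member and the dot function are stand-ins for whatever the real vertex type offers):

    // Returns the index (0..2) of the longest edge of the triangle,
    // where edge i runs from vertex[i] to vertex[(i+1)%3].
    int longest_edge(ColorVertexList& vertices, Triangle const& triangle)
    {
      int result=0;
      float longest=0.f;
      for (int edge=0; edge<3; ++edge)
      {
        auto delta=vertices[triangle.vertex[edge]].position
          - vertices[triangle.vertex[(edge+1)%3]].position;
        // The squared length is enough for the comparison
        float squared=dot(delta, delta);
        if (squared>longest)
        {
          longest=squared;
          result=edge;
        }
      }
      return result;
    }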
    

    Now the result looks a lot more even:
    [image: split2_sphere]

    Note that this algorithm only doubles the triangle count per iteration, so you might want to execute it twice as often as the four-way split.

    Alternatives

    Instead of using this generic triangle-based subdivision, it is also possible to generate the six sides as subdivided patches, as suggested in this article. This approach works naturally if you want to have seams between your six sides. However, it is more specialized towards this particular geometry and will require extra “stitching” if you don’t want seams.

    Code

    The code for both the icosphere and the spherified cube is now on github: github.com/softwareschneiderei/meshing-samples.

    Generating an Icosphere in C++

    If you want to render a sphere in 3D, for example in OpenGL or DirectX, it is often a good idea to use a subdivided icosahedron. That often works better than the “UVSphere”, which means simply tessellating a sphere by longitude and latitude. The triangles in an icosphere are a lot more evenly distributed over the final sphere. Unfortunately, it seems the easiest way to generate such a sphere is to do it in a 3D editing program. But loading that into your application requires a 3D file format parser. That’s a lot of overhead if you really just need the sphere, so doing it programmatically is preferable.

    At this point, many people will just settle for the UVSphere since it is easy to generate programmatically. Especially since generating the sphere as an indexed mesh without vertex-duplicates further complicates the problem. But it is actually not much harder to generate the icosphere!
    Here I’ll show some C++ code that does just that.

    C++ Implementation

    We start with a hard-coded indexed-mesh representation of the icosahedron:

    struct Triangle
    {
      Index vertex[3];
    };
    
    using TriangleList=std::vector<Triangle>;
    using VertexList=std::vector<v3>;
    
    namespace icosahedron
    {
    const float X=.525731112119133606f;
    const float Z=.850650808352039932f;
    const float N=0.f;
    
    static const VertexList vertices=
    {
      {-X,N,Z}, {X,N,Z}, {-X,N,-Z}, {X,N,-Z},
      {N,Z,X}, {N,Z,-X}, {N,-Z,X}, {N,-Z,-X},
      {Z,X,N}, {-Z,X, N}, {Z,-X,N}, {-Z,-X, N}
    };
    
    static const TriangleList triangles=
    {
      {0,4,1},{0,9,4},{9,5,4},{4,5,8},{4,8,1},
      {8,10,1},{8,3,10},{5,3,8},{5,2,3},{2,7,3},
      {7,10,3},{7,6,10},{7,11,6},{11,0,6},{0,1,6},
      {6,1,10},{9,0,11},{9,11,2},{9,2,5},{7,2,11}
    };
    }
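
    The Index and v3 types are left to the surrounding code. A minimal sketch of compatible definitions, just enough for the snippets in this post, could look like this:

    #include <cmath>
    #include <cstdint>
    
    using Index=std::uint32_t;
    
    struct v3
    {
      float x, y, z;
    };
    
    inline v3 operator+(v3 lhs, v3 rhs)
    {
      return {lhs.x+rhs.x, lhs.y+rhs.y, lhs.z+rhs.z};
    }
    
    // Scale a vector onto the unit sphere
    inline v3 normalize(v3 v)
    {
      float length=std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
      return {v.x/length, v.y/length, v.z/length};
    }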
    

    [image: icosahedron]
    Now we iteratively replace each triangle in this icosahedron by four new triangles:

    [image: subdivision]

    Each edge in the old model is subdivided and the resulting vertex is moved onto the unit sphere by normalization. The key here is to not duplicate the newly created vertices. This is done by keeping a lookup from each edge to the new vertex it generates. Note that the orientation of the edge does not matter here, so we need to normalize the edge direction for the lookup. We do this by forcing the lower index first. Here’s the code that either creates or reuses the vertex for a single edge:

    #include <map>
    
    using Lookup=std::map<std::pair<Index, Index>, Index>;
    
    Index vertex_for_edge(Lookup& lookup,
      VertexList& vertices, Index first, Index second)
    {
      Lookup::key_type key(first, second);
      if (key.first>key.second)
        std::swap(key.first, key.second);
    
      auto inserted=lookup.insert({key, vertices.size()});
      if (inserted.second)
      {
        auto& edge0=vertices[first];
        auto& edge1=vertices[second];
        auto point=normalize(edge0+edge1);
        vertices.push_back(point);
      }
    
      return inserted.first->second;
    }
    

    Now you just need to do this for all the edges of all the triangles in the model from the previous iteration:

    #include <array>
    
    TriangleList subdivide(VertexList& vertices,
      TriangleList triangles)
    {
      Lookup lookup;
      TriangleList result;
    
      for (auto&& each:triangles)
      {
        std::array<Index, 3> mid;
        for (int edge=0; edge<3; ++edge)
        {
          mid[edge]=vertex_for_edge(lookup, vertices,
            each.vertex[edge], each.vertex[(edge+1)%3]);
        }
    
        result.push_back({each.vertex[0], mid[0], mid[2]});
        result.push_back({each.vertex[1], mid[1], mid[0]});
        result.push_back({each.vertex[2], mid[2], mid[1]});
        result.push_back({mid[0], mid[1], mid[2]});
      }
    
      return result;
    }
    
    using IndexedMesh=std::pair<VertexList, TriangleList>;
    
    IndexedMesh make_icosphere(int subdivisions)
    {
      VertexList vertices=icosahedron::vertices;
      TriangleList triangles=icosahedron::triangles;
    
      for (int i=0; i<subdivisions; ++i)
      {
        triangles=subdivide(vertices, triangles);
      }
    
      return{vertices, triangles};
    }
    

    There you go, a custom subdivided icosphere!
    [image: icosphere]

    Performance

    Of course, this implementation is not the most runtime-efficient way to get the icosphere. But it is decent and very simple. Its performance depends mainly on the type of lookup used. I used a map instead of an unordered_map here for brevity, only because there’s no premade hash function for a std::pair of indices. In practice, you would almost always use a hash-map or some kind of spatial structure, such as a grid, which makes this method a lot harder to beat. And it is certainly feasible for most applications!
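
    If you do want the unordered_map, a small hash functor for the index pair is all that is missing. A minimal sketch, assuming 32-bit indices (the PairHash name is made up), which can replace the map-based Lookup from above:

    #include <cstdint>
    #include <functional>
    #include <unordered_map>
    
    struct PairHash
    {
      std::size_t operator()(std::pair<Index, Index> const& pair) const
      {
        // Pack both 32-bit indices into one 64-bit value and hash that
        return std::hash<std::uint64_t>()(
          (std::uint64_t(pair.first)<<32) | std::uint64_t(pair.second));
      }
    };
    
    using Lookup=std::unordered_map<std::pair<Index, Index>, Index, PairHash>;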

    The general pattern

    The lookup-or-create pattern used in this code is very useful when creating indexed-meshes programmatically. I’m certainly not the only one who discovered it, but I think it needs to be more widely known. For example, I’ve used it when extracting voxel-membranes and isosurfaces from volumes. It works very well whenever you are creating your vertices from some well-defined parameters. Usually, it’s some tuple that describes the edge you are creating the vertex on. This is the case with marching cubes or marching tetrahedrons. It can, however, also be grid coordinates if you sparsely generate vertices on a grid, for example when meshing heightmaps.

    Explicit types – and when to use them

    Many modern programming languages offer a way to declare variables without an explicit type if the type can be inferred, either dynamically or statically. Many also allow for variables to be explicitly defined with a type. For example, Scala and C# let you omit the explicit variable type via the var keyword, but both also allow defining variables with explicit types. I’m coming from the C++ world, where “auto” is available for this purpose since the relatively recent C++11. However, people are still debating whether you should actually use it.

    Pros

    Herb Sutter popularised the almost-always-auto style. He advocates that using more type inference is good because it is roughly equivalent to programming against interfaces instead of implementations. He says that “Overcommitting to explicit types makes code less generic and more interdependent, and therefore more brittle and limited.” However, he also mentions that you might sometimes want to use explicit types.

    Now what exactly is overcommitting here? When is the right time to use explicit types?

    Cons

    Opponents to implicit typing, many of them experienced veterans, often state that they want the actual type visible in the source code. They don’t want to rely on type inference being right. They want the code to explicitly state what’s going on.

    At first, I figured that was just conservatism in the face of a new “scary” feature that they did not fully understand. After all, IDEs can usually infer the type on-the-fly and you can hover on a variable to let it show you the type.

    For C++, the function signature is a natural boundary where you often insert explicit types, unless you want to commit to the compile-time and physical-dependency cost that comes with templates. Other languages, such as Groovy, do not have this trade-off and let you skip explicit types almost everywhere. After working with Groovy/Grails for a while, where the dominant style seems to be to omit types wherever possible, it dawned on me that the opponents of implicit typing have a point. Not only does the IDE often fail to show me the inferred type (even though it still works way more often than I would have anticipated), but I also found it harder to follow and modify code that did not mention explicit types. Seemingly contrary to Herb Sutter’s argument, that code felt more brittle than I would have liked.

    Middle-ground

    As usual, the truth seems to be somewhere in the middle. I propose the following rule for when to use explicit types:

    • Explicit typing for domain-types
    • Implicit typing everywhere else

    Code using types from the problem domain should be as specific as possible. There’s no need for it to be generic – that is actually counter-productive, as the code model would otherwise be inconsistent with the model of the problem domain. This is also the most important aspect to grok when reading the code, so it should be explicit. The type is as important as the action on it.

    On the other hand, for pure-fabrication types that do not represent a concept in the domain, the action is important, while the type is merely a means to achieve this action. Typically, most of the elements from a language’s standard library fall into this category: all your containers, iterators and callables. Their types are merely implementation details: an associative container could be an array, a hash-map or a tree structure. Exchanging it rarely changes the meaning of the code in the problem domain – it just changes its performance characteristics.

    Containers will occasionally contain domain-types in their type. What do you do about those? I think they belong in the “everywhere else” category, but you should take extra care to name the contained type when working with it – for example when declaring the variable of the for-each loop on it, or when inserting something into it. This way, the “collection of domain-type” aspect becomes clear, while the specific container implementation stays implicit – as it should.
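
    To make the rule concrete, here is a small, made-up C++ example (Invoice, billing and remind are hypothetical domain names):

    // Domain type: spell it out, the type is part of the meaning
    Invoice invoice = billing.createInvoice(customer);
    
    // Pure fabrication: the exact container is an implementation detail
    auto openInvoices = billing.findOpenInvoices();
    
    // Name the contained domain type when working with the container
    for (Invoice const& each : openInvoices)
    {
      remind(each);
    }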

    What do you think? Is this a useful proposition for your code?

    Simple C++11 – Part I – Unit Structure

    C++ has long had the stigma of an overly complex and unproductive language. Lately, with the advent of C++11, things have brightened a bit, but there are still a lot of misconceptions about the language. I think this is mostly because C++ has been taught the wrong way. This series aims to show my, hopefully somewhat simpler, way of using C++11.

    Since it is typically the first thing I do when starting a new project, I will start with how I set up a new unit, i.e. a header and implementation file pair.

    Note that I will try not to focus on a specific C++11 paradigm, such as object-oriented or imperative. This structure seems to work well for all kinds of paradigms. But without much further ado, here’s the header file for my imaginary “MyUnit” unit:

    MyUnit.hpp

    #pragma once
    
    #include <vector>
    #include "MyStuff.hpp"
    
    namespace MyModule { namespace MyUnit {
    
    /** Does something only a good bar could.
    */
    std::vector<float> bar(int fooCount);
    
    /** Foo is an integral part of any program.
        Be sure to call it frequently.
    */
    void foo(MyStuff::BestType somethingGood);
    
    }}
    

    I prefer the .hpp file ending for headers. While I’m perfectly fine with .h, I think it is helpful to differentiate pure C headers from C++ headers.

    #pragma once

    I’m using #pragma once here instead of include guards. It is not an official part of the standard, but all the big compilers (Visual C++, g++ and clang) support it, making it a de-facto standard. Unlike include guards, you only have to add one line, which says exactly what you want to achieve with it. You do not have to find a unique identifier for your include guard that will most certainly break if you rename the file/unit. It’s more readable, more resilient to change and easier to set up.
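
    For comparison, this is what the classic include guard for this unit would look like, including the unique identifier that #pragma once makes unnecessary:

    #ifndef MYMODULE_MYUNIT_HPP
    #define MYMODULE_MYUNIT_HPP
    
    /* ... header contents ... */
    
    #endif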

    Namespaces

    I like to have all the contents of a unit in a single namespace. The actual structure of the namespaces – i.e. per unit, per module or something else entirely – depends on the specifics of the project, but filling more than one namespace is a guarantee for chaos. It’s usually a sign that the unit should be broken up into smaller pieces. An exception to this would be the infamous “detail” namespace, as seen in many of the Boost libraries. In that case, the namespace is not used to structure the API, but to explicitly omit things from the API that have to be visible for technical reasons.

    Documentation

    Documentation goes into the header, not into the implementation. The header describes the API, not only to the compiler, but also to humans. It is by no means an implementation detail, but part of the seam that isolates it from the rest of the code. Note that this part of the documentation concerns the API contract only, never the implementation. That part goes into the .cpp file.

    But now to the implementation file:

    MyUnit.cpp

    #include "MyUnit.hpp"
    
    #include "CoolFunctionality.hpp"
    
    using namespace MyModule;
    using namespace MyUnit;
    
    namespace {
    
    int helperFunction(float rhs)
    {
      /* ... */
    }
    
    }// namespace
    
    std::vector<float> MyUnit::bar(int fooCount)
    {
      /* ... */
    }
    
    void MyUnit::foo(MyStuff::BestType somethingGood)
    {
      /* ... */
    }
    
    

    Own #include first

    The only rule I have for includes is that the unit’s own include always comes first. This is to test whether the header is self-sufficient, i.e. that it will compile without being in the context of other headers or, even worse, code from an implementation file. Some people like to order the rest of their includes according to their “origin”, e.g. sections for system headers or library headers. I think imposing any extra order here is not needed. If anything, I prefer not to waste time sorting include directives and just append an include when I need it.

    Using namespace

    I choose using-directives of my unit’s namespaces over explicitly accessing the namespaces each time. Unlike the headers, the implementation file lives in a locally defined context. Therefore, it is not a problem to use a very specific view onto the unit. In fact, it would be a problem to be overly generic. The same argument also holds for other “local” modules that this unit is only using, as long as there are no collisions. I avoid using namespaces from external libraries to mark the library boundary (such as std, boost etc.).

    Unnamed namespace

    The unnamed namespace contains all the implementation helpers specific to this unit. It is quite common for this to contain a lot of the “meat” of an actual unit, while the unit’s visible functions merely wrap and canonize the functionality implemented here. I try to keep only one unnamed namespace in each file, to have a clear separation of what is supposed to be visible to the outside – and what is not.

    Visible implementation

    The implementation of the visible API of the module is the most obvious part of the .cpp file. For consistency reasons, the order of the functions should be the same as in the header.

    I’d advise against implementing in a file-wide open namespace. That means balancing an unnecessary pair of braces over the whole implementation file. Also, you can not only define functions and types there, but also declare them – this can lead to a function further down in the implementation file seeing a different namespace than one above it.

    Conclusion

    This concludes the first part. I’ve played with the thought of using a 3-piece setup instead, extending the header/implementation with a unit-test file, but have not gathered any sharable experience yet. This setup, however, has worked for me for a long time and with many different projects. Have you had similar – or completely different – setups that worked for you? Do tell!

    Meet my Expectations!

    A while ago I came across a particularly irritating piece of code in a somewhat harmless-looking mathematical vector class. C++’s rare feature of operator overloading makes it a good fit for multi-dimensional calculations, so vector classes are common, and I had already seen quite a few of them in my career. It looked something like this:

    template <typename T>
    class vec2
    {
    public:
      /* A few member functions.. */
      bool operator==(vec2 const& rhs) const;
    
      T x;
      T y;
    };
    

    Not many surprises here, except that maybe the operator==() should be a free-function instead. Whether the data members of the class are an array or named individually is often a point of difference between vector implementations. Both certainly have their merits. But I digress…
    What really threw me off was the implementation of the operator==(). How would you implement it? Intuitively, I would have expected pretty much this code:

    template <typename T>
    bool vec2<T>::operator==(vec2 const& rhs) const
    {
      return x==rhs.x && y==rhs.y;
    }
    

    However, what I found instead was this:

    template <typename T>
    bool vec2<T>::operator==(vec2 const& rhs) const
    {
      if (x!=rhs.x || y!=rhs.y)
      {
        return false;
      }
    
      return true;
    }
    

    What is wrong with this code?

    Think about that for a moment! Can you swiftly verify whether this boolean logic is correct? You actually need to apply De Morgan’s laws to get to the expression from the first implementation!
    This code was not technically wrong. In fact, for all its technical purposes, it was working fine. And it seems functionally identical to the first version! Still, I think it is wrong on at least two levels.

    Different relations

    Firstly, it bases its equality on the inequality of its contained type, T. I found this quite surprising, so this already violated the POLA for me. I immediately asked myself: why did the author choose to implement this based on operator!=(), and not on operator==()? After all, supplying equality for relations is common in templated C++, while inequality is inferred. In a way, this is more intuitive: inequality already has the negation in its name, while equality is something “original”! Not only that, but why base the equality on a different relation of the contained type instead of the same one? This can actually be a problem when the vector is instantiated on a type that supplies operator==(), but not operator!=() – though that would be equally surprising. It turned out that the vector was only used on built-in types, so those particular concerns were futile. At least, until it is later used with a custom type.
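
    For reference, the conventional direction is to treat equality as the “original” relation and derive the inequality from it as a free function. A minimal sketch:

    template <typename T>
    bool operator!=(vec2<T> const& lhs, vec2<T> const& rhs)
    {
      return !(lhs==rhs);
    }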

    Too many negations

    Secondly, there’s the case of immediately returning a boolean after a condition. This alone is often considered a code-smell. It could be argued that this is more readable, but I don’t want to argue in favor of pure brevity. I want to argue in favor of clarity! In this case, that construct is basically used to negate the boolean expression, further obscuring the result of the whole function.
    So basically, the function does a double negation (not un-equal) to express a positive concept (equal). And negations are a big source of errors and often lead to confusion.

    Conclusion

    You need to make the code as simple and clear as possible and avoid any surprises, especially when dealing with the relatively unconstrained context of C++ templates. In other words, you need to meet the expectations of the naive reader as well as possible!

    Building Visual C++ Projects with CMake

    In a previous post, my colleague showed how to create RPM packages with CMake. As a really versatile tool, CMake is also able to create and build Visual Studio projects on Windows. This property makes it very valuable when you want to integrate your project into a CI cycle (in our case, Jenkins).

    Prerequisites:

    To be able to compile anything, the following packages need to be installed beforehand:

    • CMake. It is helpful to put it in the PATH environment variable so that absolute paths aren’t needed.
    • Microsoft Windows SDK for Windows 7 and .NET Framework 4 (the web installer or the ISOs). The part “.NET Framework 4” is very important, since with the SDK for the .NET Framework 3.5 installed you will get the following parse error for your *.vcxproj files:

      error MSB4066: The attribute “Label” in element <ItemGroup> is unrecognized

      at the following position:

      <ItemGroup Label="ProjectConfigurations">

      Probably equally important is the bitness of the installed SDK. The x64 ISO differs only in one letter from the x86 one. Look for the X if you want 64 bit.

    • .NET Framework 4, necessary to make msbuild run

    It is possible that you encounter the following message during your SDK setup:

    A problem occurred while installing selected Windows SDK components. Installation of the “Microsoft Windows SDK for Windows 7” product has reported the following error: Please refer to Samples\Setup\HTML\ConfigDetails.htm document for further information. Please attempt to resolve the problem and then start Windows SDK setup again. If you continue to have problems with this issue, please visit the SDK team support page at http://go.microsoft.com/fwlink/?LinkId=130245. Click the View Log button to review the installation log. To exit, click Finish.

    The reason behind this wordy yet uninformative error message was the Visual C++ Redistributables installed on the system. As suggested by a Microsoft KB article, removing them all helped.

    Makefiles:

    For CMake to build anything you need to have a CMakeLists.txt file in your project. For a tutorial on how to use CMake, look at this page. Here is a simple CMakeLists.txt to get you started:

    cmake_minimum_required(VERSION 2.6)
    project(MyProject)
    set(source_files
      main.cpp
    )
    include_directories(
      ${CMAKE_CURRENT_SOURCE_DIR}
    )
    add_executable(MyProject ${source_files})

    Building:

    To build a project, a few steps are necessary. You can enter them in your CI directly or put them in a batch file (see the sketch at the end of this section).

    call "%ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /x86

    With this call, all necessary environment variables are set. Be careful on 64-bit platforms: the Jenkins slave executes this call in a 32-bit context, so “%ProgramFiles%” resolves to “Program Files (x86)”, where the SDK is not located.

    del CMakeCache.txt

    This command is not strictly necessary, but it prevents you from working with outdated generated files when you change your configuration.

    cmake -G "Visual Studio 10" .

    Generates a Visual Studio 2010 Solution. Every change to the solution and the project files will be gone when you call it, so make sure you track all necessary files in the CMakeLists.txt.

    cmake --build . --target ALL_BUILD --config Release

    The final step. It will net you the MyProject.exe binary. The target parameter is equal to the name of the project in the solution and the config parameter is one of the solution configurations.
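
    Put together, a batch file for the CI job could look like this (a sketch of exactly the steps above; adjust the SDK path and generator to your setup):

    @echo off
    call "%ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /x86
    del CMakeCache.txt
    cmake -G "Visual Studio 10" .
    cmake --build . --target ALL_BUILD --config Release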

    Final words:

    The hardest and most time-consuming part was the setup of the prerequisites. Generic, uninformative error messages are the worst thing you can do to a clueless user. But… when you are done with it, you are only two small steps away from an automatically built executable.