.NET Core for platform independent web development

Several of our projects are based on the .NET platform. Until recently, all of them used the classic .NET Framework. With a new project we had the opportunity to give .NET Core a try. The name stands for a modernized variant of the .NET Framework. It is developed by the .NET Foundation and Microsoft as a platform-independent open-source project.

Not every type of project is currently suitable for .NET Core. If you want to develop a Windows desktop application (WinForms, WPF), you still have to use the classic .NET Framework. However, for server-based applications .NET Core is a really good fit. Our application, for example, is implemented as a JSON API server with .NET Core and a React/Redux based client interface.

The Benefits

Since .NET Core is platform independent, it runs on Linux, macOS and Windows. We no longer need a Windows machine to build the project from our CI server. Microsoft provides Docker images for building and running .NET Core projects.
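
For example, you can build the project inside such a container without any local .NET installation. A sketch, assuming the SDK image available at the time of writing (image names and tags change over time):

$ docker run --rm -v "$PWD":/app -w /app microsoft/dotnet:2.0-sdk dotnet build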

ASP.NET Core applications are no longer bound to Microsoft’s IIS or IIS Express. You can host them behind Apache or Nginx servers as well.

With .NET Core you also have a vast choice of IDEs. Of course, you can use Visual Studio on Windows. But you also have the option to use JetBrains’ Rider (on any platform), Visual Studio for Mac or Visual Studio Code (Mac, Linux, Windows). If you don’t want to use an IDE for everything, .NET Core also has a nice command-line interface. For example, the following command sets up a new ASP.NET Core project with React and Redux:

$ dotnet new reactredux

To compile and run the project:

$ dotnet run

Entity Framework Core also has a feature I missed in Entity Framework for the classic .NET Framework: a pure in-memory database provider, which is very useful for testing.

The Downsides

When you browse the NuGet package list, you have to be aware that not every package is compatible with .NET Core yet, but the number of compatible packages is growing. And, as mentioned above, you can’t develop desktop GUI applications with .NET Core.


Ansible in Jenkins

Ansible is a powerful tool for automating your IT infrastructure. In contrast to Chef or Puppet, it does not need much infrastructure like a server and client (“agent”) programs on your target machines. We like to use it for keeping our servers and desktop machines up-to-date and provisioned in a defined, repeatable and self-documenting way.

Lately, Ansible has begun to replace our various custom-made – but already automated – deployment processes, which we had implemented using different tools like Ant scripts run by Jenkins jobs. The natural way of using Ansible for deployment in our current infrastructure would be running it from Jenkins with the Jenkins Ansible plugin.

Even though the plugin supports the “Global Tool Configuration” mechanism and automatic management of several Ansible installations, it did not work out of the box for us:

At first, the executable path was not set correctly. We managed to fix that, but then the next problem arose: our standard build slaves had no Jinja2 (the Python templating library Ansible uses) installed. Sure, these are problems you can easily fix if you decide to.

For us, that was too much tinkering with, and snowflaking of, our build slaves to be feasible, so we took another route that you may want to consider: running Ansible from a Docker image.

We already have a host for running Docker containers attached to Jenkins, so our current state of deployment with Ansible roughly consists of a Dockerfile and a Jenkins job to run the container.

The Dockerfile is as simple as:

FROM ubuntu:14.04

# Update the base system and install the prerequisites for adding a PPA
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y dist-upgrade && apt-get -y install software-properties-common

# Add the Ansible PPA and install Ansible
RUN DEBIAN_FRONTEND=noninteractive apt-add-repository ppa:ansible/ansible-2.4
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install ansible

# Setup work dir
WORKDIR /project/provisioning

# Copy project directory into container
COPY . /project

# Deploy the project
CMD ansible-playbook -i inventory deploy-project.yml

And the Jenkins build step to actually run the deployment looks like:

docker build -t project-deploy .
docker run project-deploy

That way we can tailor our deployment machine to conveniently run our Ansible playbooks for the specific project without modifying our normal build slave setups and adding complexity on their side. All the tinkering with the Jenkins Ansible plugin becomes unnecessary this way; we just rely on Docker and what the container provides for running Ansible.

uninitialized_tag in C++

No doubt, C++ is one of those languages you can use to squeeze out every last drop of your CPU’s processing power. On the other hand, it also allows a high amount of abstraction. However, micro-optimization seldom works well with nice abstractions.

The dilemma

One such case is the matter of default-initialization of “math types”, such as the three-dimensional vectors used in computer graphics. Do you let your default constructor zero-initialize by default, or do you leave the elements uninitialized and risk undefined behavior?

One way around this dilemma is to use tag dispatching to enable both:

struct uninitialized_tag {};

template <class T> struct v3 {
  v3(T v={}) : x{v}, y{v}, z{v} {}
  v3(uninitialized_tag) {}
  T x,y,z;
};

Now a v3 zero-initializes by default, while you can still avoid the initialization costs by calling it with:

v3<float>{uninitialized_tag{}};

Drawbacks

This approach is not without drawbacks. It’s a bit of an uphill battle to find a good test for this: you need to overwrite the values before you use them, or the compiler is free to do whatever it wants when you use them – that would be undefined behavior. But you also do not want the compiler to figure out that you are overwriting all the values, because in that case it can optimize out the zero-initialization anyway.
It does work for a few simple cases though, and you can see the zero-initialization getting removed, e.g. in the compiler explorer.
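
One way to set up such a test is to overwrite the values in a function the optimizer cannot see through. A minimal sketch, where fill is a hypothetical function defined in another translation unit that is assumed to write all three members:

// defined elsewhere, so the optimizer cannot prove the values get overwritten
void fill(v3<float>&);

float with_zero_init() {
  v3<float> v;                       // zero-initialization code is kept
  fill(v);
  return v.x;
}

float with_tag() {
  v3<float> v{uninitialized_tag{}};  // no initialization code emitted
  fill(v);
  return v.x;
}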

However, it will often not let you do what you set out to do – leave some vectors uninitialized. Consider this:

std::vector<v3<float>> v(N, uninitialized_tag{});

This does not, in fact, transport the uninitialized_tag to the v3 constructor. It first converts the tag to a v3, and then uses that value to initialize all the other N elements with the uninitialized data. That is actually a lot of copying, and creates a whole lot more code than the zero-initialization would have. You could get this to work with a container that uses the given initializer value to initialize the elements without converting first. But you’re probably better off with a mechanism like std::vector::reserve, which essentially gives you the ability to leave elements uninitialized.
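
A minimal sketch of that route, where compute is a hypothetical function producing the actual element values:

#include <cstddef>
#include <vector>

v3<float> compute(std::size_t i); // hypothetical: yields the final value for element i

void fill_vector(std::vector<v3<float>>& v, std::size_t n) {
  v.reserve(n);               // allocates memory, but constructs no elements yet
  for (std::size_t i = 0; i < n; ++i)
    v.push_back(compute(i));  // each element is constructed directly with its final value
}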

Conclusion

This is a very specialized method for very few niche cases, and you need to carefully select your infrastructure to see any gain that cannot be achieved by simpler means. Use with caution!

Integrating .NET projects with Gradle

Recently I have created Gradle build scripts for several .NET projects, both C# and VB.NET projects. Projects for the .NET platform are usually built with MSBuild, which is part of the .NET Framework distribution and is itself a full-blown build automation tool: you can define build targets, their dependencies, and execute tasks to reach the build targets. I have written about the basics of MSBuild in a previous blog post.

The .NET projects I was working on were using MSBuild targets for the various build stages as well. Not only for building and testing, but also for the release and deployment scripts. These scripts were called from our Jenkins CI with the MSBuild Jenkins Plugin.

Gradle plugins

However, I wasn’t very happy with MSBuild’s clunky Ant-like XML-based syntax, and for most of our other projects we are using Gradle nowadays. So I tried Gradle for a new .NET project. I am using the Gradle MSBuild and Gradle NUnit plugins. Of course, the MSBuild Gradle plugin calls MSBuild, so I don’t get rid of MSBuild completely, because Visual Studio’s .csproj and .vbproj project files are essentially MSBuild scripts, and I don’t want to get rid of them. So there is one Gradle task which calls MSBuild, but everything else beyond the act of compilation is automated with regular Gradle tasks, like copying files, zipping release artifacts etc.

Basic usage of the MSBuild plugin looks like this:

plugins {
  id "com.ullink.msbuild" version "2.18"
}

msbuild {
  // either a solution file
  solutionFile = 'DemoSolution.sln'
  // or a project file (.csproj or .vbproj)
  projectFile = file('src/DemoSoProject.csproj')

  targets = ['Clean', 'Rebuild']

  destinationDir = 'build/msbuild/bin'
}

The plugin offers lots of additional options, so be sure to check out the documentation on GitHub. If you want to give the MSBuild step its own task name, which is currently not directly mentioned on the GitHub page, use the task type Msbuild from the package com.ullink:

import com.ullink.Msbuild

// ...

task buildSolution(type: Msbuild, dependsOn: '...') {
  // ...
}

Since the .NET projects I’m working on use NUnit for unit testing, I’m using the NUnit Gradle plugin by the same creator as well. Again, please consult the documentation on the GitHub page for all available options. What I found necessary was setting the nunitHome option, because I don’t want the plugin to download a NUnit release from the internet, but to use the one that is included with our project. Also, if you want a task with its own name or multiple testing tasks, use the NUnit task type from the package com.ullink.gradle.nunit:

import com.ullink.gradle.nunit.NUnit

// ...

task test(type: NUnit, dependsOn: 'buildSolution') {
  nunitVersion = '3.8.0'
  nunitHome = "${project.projectDir}/packages/NUnit.ConsoleRunner.3.8.0/tools"
  testAssemblies = ["${project.projectDir}/MyProject.Tests/bin/Release/MyProject.Tests.dll"]
}
// drop the automatic dependency on the default 'msbuild' task; we build via 'buildSolution' instead
test.dependsOn.remove(msbuild)

With Gradle I am now able to share common build tasks, for example for our release process, with our other non-.NET projects, which use Gradle as well.

Analyzing Gradle projects using SonarQube without the Gradle plugin

SonarQube makes static code analysis easy for a plethora of languages and environments. In many of our newer projects we use Gradle as our build system and Jenkins as our continuous integration server. Integrating SonarQube into such a setup can be done in a couple of ways, the most straightforward being:

  • Integrating SonarQube into your Gradle build and invoking the Gradle script from Jenkins
  • Letting Jenkins invoke the Gradle build and then execute the SonarQube scanner

I chose the latter because I did not want to add further dependencies to the build process.

Configuration of the SonarQube scanner

The SonarQube scanner is configured by a properties file, called sonar-project.properties by default:

# must be unique in a given SonarQube instance
sonar.projectKey=domain:project
# this is the name and version displayed in the SonarQube UI. Was mandatory prior to SonarQube 6.1.
sonar.projectName=My cool project
sonar.projectVersion=23

sonar.sources=src/main/java
sonar.tests=src/test/java
sonar.java.binaries=build/classes/java/main
sonar.java.libraries=../lib/**/*.jar
sonar.java.test.libraries=../lib/**/*.jar
sonar.junit.reportPaths=build/test-results/test/
sonar.jacoco.reportPaths=build/jacoco/test.exec

sonar.modules=application,my_library,my_tools

# Encoding of the source code. Default is default system encoding
sonar.sourceEncoding=UTF-8
sonar.java.source=1.8

sonar.links.ci=http://${my_jenkins}/view/job/MyCoolProject
sonar.links.issue=http://${my_jira}/browse/MYPROJ

After we have done that, we can submit our project to the SonarQube scanner using the Jenkins SonarQube plugin and its “Execute SonarQube Scanner” build step.

Optional: Adding code coverage to our build

Even our Gradle-based projects aim to be self-contained. That means we usually do not use repositories like mavenCentral for our dependencies, but store them all in a lib directory alongside the project. If we want to add code coverage to such a project, we need to add the JaCoCo jars – in the version corresponding to the Gradle JaCoCo plugin – to our libs and reference them in build.gradle:

allprojects {
    apply plugin: 'java'
    apply plugin: 'jacoco'
    sourceCompatibility = 1.8

    jacocoTestReport {
        reports {
            xml.enabled true
        }
        jacocoClasspath = files('../lib/org.jacoco.core-0.7.9.jar',
            '../lib/org.jacoco.report-0.7.9.jar',
            '../lib/org.jacoco.ant-0.7.9.jar',
            '../lib/asm-all-5.2.jar'
        )
    }
}

Gotchas

Our Jenkins build job consists of two steps:

  1. Execute Gradle
  2. Submit the project to the SonarQube scanner

By default, Gradle stops execution on failure. That means later tasks like jacocoTestReport are not executed if a test fails. We need to invoke Gradle with the --continue switch to always run all of our tasks.
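
For example, assuming the Gradle wrapper and the standard task names:

$ ./gradlew --continue clean test jacocoTestReport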

Some tricks for working with SVG in JavaScript

Scalable Vector Graphics (SVG) is part of the document object model (DOM) and can thus be modified just like any other DOM node from JavaScript. But SVG has some pitfalls, like having its own coordinate system and different style attributes, which can be a headache. What follows is a non-comprehensive list of hints and tricks I found helpful while working with SVG.

Coordinate system

From screen coordinates to SVG

function screenToSVG(svg, x, y) { // svg is the svg DOM node
  var pt = svg.createSVGPoint();
  pt.x = x;
  pt.y = y;
  var cursorPt = pt.matrixTransform(svg.getScreenCTM().inverse());
  return {x: Math.floor(cursorPt.x), y: Math.floor(cursorPt.y)}
}

From SVG coordinates to screen

function svgToScreen(element) {
  var rect = element.getBoundingClientRect();
  return {x: rect.left, y: rect.top, width: rect.width, height: rect.height};
}

Zooming and panning

Getting the view box

function viewBox(svg) {
    var parts = svg.getAttribute('viewBox').split(' ');
    return {x: parseInt(parts[0], 10), y: parseInt(parts[1], 10),
            width: parseInt(parts[2], 10), height: parseInt(parts[3], 10)};
}

Zooming using the view box

function zoom(svg, initialBox, factor) {
  svg.setAttribute('viewBox', initialBox.x + ' ' + initialBox.y + ' ' + initialBox.width / factor + ' ' + initialBox.height / factor);
}

function zoomFactor(svg) {
  // assumes the height attribute carries a two-character unit suffix, e.g. '500px'
  var height = parseInt(svg.getAttribute('height').substring(0, svg.getAttribute('height').length - 2), 10);
  return 1.0 * viewBox(svg).height / height;
}

Panning (with zoom factor support)

function pan(svg, panX, panY) {
  var pos = viewBox(svg);
  var factor = zoomFactor(svg);
  svg.setAttribute('viewBox', (pos.x - factor * panX) + ' ' + (pos.y - factor * panY) + ' ' + pos.width + ' ' + pos.height);
}

Misc

Embedding HTML

function svgEmbedHTML(width, height, html) {
    // wrap the given HTML element into an SVG foreignObject
    var foreignObject = document.createElementNS('http://www.w3.org/2000/svg', 'foreignObject');
    foreignObject.setAttribute('width', '' + width);
    foreignObject.setAttribute('height', '' + height);
    var body = document.createElementNS('http://www.w3.org/1999/xhtml', 'body');
    body.style.background = 'none';
    foreignObject.appendChild(body);
    body.appendChild(html);
    return foreignObject;
}

Making an invisible rectangular click/touch area

function addTouchBackground(svgRoot) {
    // svgRect is a helper creating an SVG <rect> with the given position and size
    var rect = svgRect(0, 0, '100%', '100%');
    rect.style.fillOpacity = 0.01;
    svgRoot.appendChild(rect);
}

Using groups as layers

This one needs an explanation. The render order of the SVG children depends on their order in the DOM: the last one in the DOM is rendered last and thus shows above all others. If you want to have certain elements below or above others, I found it helpful to use SVG groups as layers and add elements to them.

function svgGroup(id) {
    var group = document.createElementNS('http://www.w3.org/2000/svg', 'g');
    if (id) {
        group.setAttribute('id', id);
    }
    return group;
}

// and later on:
document.getElementById(id).appendChild(yourElement);

Oversimplified C++ Project FAQ 2018

If you are starting a new C++ project, you’re faced with a few difficult decisions. C++ is not a ‘batteries-included’ language, so you need to pick a few technologies before you can start.
Worse yet, the answer to most of the pressing questions is often ‘it depends’, and changing one of the choices mid-project can be very expensive.
Therefore, I have compiled this list to give totally biased and oversimplified answers to the most important questions. If you want more nuanced answers, feel free to do your own research.
This is meant to be a somewhat amusing starting point.

FAQ

1. Which OS should I pick?

Linux

Rationale

Usually, not a choice you can make yourself – but if you do: dependency management is easier with a package manager, and it seems to be the most dominant OS in the C++ community. Hence you will get the best support and easiest access to technologies.

2. Which build system should I use?

CMake

Rationale

This is what everyone else is using, and those that are not are a real pain. For better or worse, the market is locked in. With target-based properties in modern CMake, it’s not even that bad.
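
For illustration, target-based properties attach compiler settings and usage requirements to a single target instead of leaking them into global state. A minimal sketch (project and file names are made up):

cmake_minimum_required(VERSION 3.8)
project(demo CXX)

add_executable(demo main.cpp)
# these properties apply to the 'demo' target only
target_compile_features(demo PRIVATE cxx_std_14)
target_include_directories(demo PRIVATE include)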

3. Which IDE should I choose?

Visual Studio 2017 on Windows, CLion everywhere else.

Rationale

CLion is getting more robust and feature-rich with every release. Native CMake support and really cool refactoring capabilities finally make it a valid contender for Visual Studio’s crown. However, the VS debugger is still the best in the game, so VS still comes out on top on Windows – though not by a huge margin.

4. Which language version should you use?

C++14

Rationale

C++17 is not quite there yet with library, tool and platform support. Also, people do not really know how to use it well yet. C++14 builds on the now well-established C++11 with a few rather important “fixes” – and support is ubiquitous.
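
Two of those “fixes” in action, as a minimal sketch: std::make_unique, which was missing from C++11, and generic lambdas:

#include <memory>
#include <vector>

int main() {
  // C++14 adds std::make_unique as the counterpart to C++11's std::make_shared
  auto numbers = std::make_unique<std::vector<int>>(3, 42);

  // generic lambdas: 'auto' parameters are new in C++14
  auto sum = [](auto a, auto b) { return a + b; };

  return sum(numbers->front(), -42); // evaluates to 0
}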

5. Which GUI toolkit should you use?

Qt

Rationale

No other toolkit comes close in maturity. Qt’s signal/slot system almost seamlessly integrates with C++11 lambdas, making the precompile step needed for SLOTs a non-issue. Barring the license costs for closed-source projects, there is really no reason not to use it.
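
For illustration, connecting a signal directly to a C++11 lambda. A minimal sketch of a Qt Widgets application:

#include <QApplication>
#include <QPushButton>

int main(int argc, char** argv) {
  QApplication app(argc, argv);
  QPushButton button("Quit");
  // no moc-generated slot required: the clicked() signal goes straight to a lambda
  QObject::connect(&button, &QPushButton::clicked,
                   [&app]() { app.quit(); });
  button.show();
  return app.exec();
}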

6. Should you use Boost?

No

Rationale

Boost is a huge and clunky dependency that will explode your build times as soon as you even touch it. And it’s ‘viral’ enough that you can distinguish a Boost project from a non-Boost project. Boost.Optional, Boost.Variant and Boost.Filesystem prepare you for a smooth transition to C++17, but there are other more lightweight alternatives available.

Closing thoughts

There you have my totally biased, but hopefully entertaining, opinion. YMMV, but I think this is a good starting point if you don’t want to experiment too much.