.NET Core for platform independent web development

Several of our projects are based on the .NET platform. Until recently all of them used the classic .NET Framework. With a new project we had the opportunity to give .NET Core a try. The name stands for a modernized variant of the .NET Framework. It is developed by the .NET Foundation and Microsoft as a platform independent open-source project.

Not every type of project is currently suitable for .NET Core. If you want to develop a Windows desktop application (WinForms, WPF) you still have to use the classic .NET Framework. However, for server-based applications .NET Core is a really good fit. Our application, for example, is implemented as a JSON API server with .NET Core and a React/Redux-based client interface.
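To give a rough idea, a JSON endpoint in such an ASP.NET Core application can be as small as the following sketch (the controller name, route and payload are made up for illustration, not taken from our project):

using Microsoft.AspNetCore.Mvc;

// Hypothetical endpoint of the JSON API consumed by the React/Redux client
[Route("api/[controller]")]
public class StatusController : Controller
{
  [HttpGet]
  public IActionResult Get()
  {
    // ASP.NET Core serializes the anonymous object to JSON
    return Json(new { status = "ok", version = "1.0" });
  }
}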

The Benefits

Since .NET Core is platform independent it runs on Linux, macOS and Windows. We no longer need a Windows machine to build the project on our CI server. Microsoft provides Docker images for building and running .NET Core projects.

ASP.NET Core applications are no longer bound to Microsoft’s IIS or IIS Express. You can host them behind Apache or Nginx servers as well.

With .NET Core you also have a vast choice of IDEs. Of course, you can use Visual Studio on Windows. But you also have the option to use JetBrains’ Rider (on any platform), Visual Studio for Mac or Visual Studio Code (Mac, Linux, Windows). If you don’t want to use an IDE for everything, .NET Core also has a nice command-line interface. For example, the following command sets up a new ASP.NET Core project with React and Redux:

$ dotnet new reactredux

To compile and run the project:

$ dotnet run

Entity Framework Core also has a feature I missed in the Entity Framework for the classic .NET Framework: a pure in-memory database provider, which is very useful for testing.
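Here is a minimal sketch of how a test against the in-memory provider might look, assuming a made-up AppDbContext with a Users set and the Microsoft.EntityFrameworkCore.InMemory package installed:

using Microsoft.EntityFrameworkCore;

// Entity and context names are made up for this example
public class User
{
  public long Id { get; set; }
  public string UserName { get; set; }
}

public class AppDbContext : DbContext
{
  public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

  public DbSet<User> Users { get; set; }
}

public class UserTests
{
  public void CanStoreUser()
  {
    // The in-memory provider replaces the real database for this test
    var options = new DbContextOptionsBuilder<AppDbContext>()
      .UseInMemoryDatabase("UserTests")
      .Options;

    using (var db = new AppDbContext(options))
    {
      db.Users.Add(new User { UserName = "admin" });
      db.SaveChanges();
    }
  }
}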

The Downsides

When you browse the NuGet packages list you have to be aware that not every package is compatible with .NET Core yet, but the list is growing. And, as mentioned above, you can’t develop desktop GUI applications with .NET Core.

Integrating .NET projects with Gradle

Recently I have created Gradle build scripts for several .NET projects, both C# and VB.NET projects. Projects for the .NET platform are usually built with MSBuild, which is part of the .NET Framework distribution and is itself a full-blown build automation tool: you can define build targets, their dependencies and execute tasks to reach the build targets. I have written about the basics of MSBuild in a previous blog post.

The .NET projects I was working on were using MSBuild targets for the various build stages as well. Not only for building and testing, but also for the release and deployment scripts. These scripts were called from our Jenkins CI with the MSBuild Jenkins Plugin.

Gradle plugins

However, I wasn’t very happy with MSBuild’s clunky Ant-like XML based syntax, and for most of our other projects we are using Gradle nowadays. So I tried Gradle for a new .NET project. I am using the Gradle MSBuild and Gradle NUnit plugins. Of course, the MSBuild Gradle plugin calls MSBuild, so I don’t get rid of MSBuild completely, because Visual Studio’s .csproj and .vbproj project files are essentially MSBuild scripts, and I don’t want to get rid of them. So there is one Gradle task that calls MSBuild, but everything else beyond the act of compilation is automated with regular Gradle tasks, like copying files, zipping release artifacts etc.

Basic usage of the MSBuild plugin looks like this:

plugins {
  id "com.ullink.msbuild" version "2.18"
}

msbuild {
  // either a solution file
  solutionFile = 'DemoSolution.sln'
  // or a project file (.csproj or .vbproj)
  projectFile = file('src/DemoSoProject.csproj')

  targets = ['Clean', 'Rebuild']

  destinationDir = 'build/msbuild/bin'
}

The plugin offers lots of additional options; be sure to check out the documentation on GitHub. If you want to give the MSBuild step its own task name, which is currently not directly mentioned on the GitHub page, use the task type Msbuild from the package com.ullink:

import com.ullink.Msbuild

// ...

task buildSolution(type: Msbuild, dependsOn: '...') {
  // ...
}

Since the .NET projects I’m working on use NUnit for unit testing, I’m using the NUnit Gradle plugin by the same creator as well. Again, please consult the documentation on the GitHub page for all available options. What I found necessary was setting the nunitHome option, because I don’t want the plugin to download a NUnit release from the internet, but to use the one that is included with our project. Also, if you want a task with its own name or multiple testing tasks, use the NUnit task type in the package com.ullink.gradle.nunit:

import com.ullink.gradle.nunit.NUnit

// ...

task test(type: NUnit, dependsOn: 'buildSolution') {
  nunitVersion = '3.8.0'
  nunitHome = "${project.projectDir}/packages/NUnit.ConsoleRunner.3.8.0/tools"
  testAssemblies = ["${project.projectDir}/MyProject.Tests/bin/Release/MyProject.Tests.dll"]
}
test.dependsOn.remove(msbuild)

With Gradle I am now able to share common build tasks, for example for our release process, with our other non-.NET projects, which use Gradle as well.

OPC-UA Performance and Bulk Reads

In a previous post on this blog I introduced some basics of OPC. Now we’ll take a look at some performance characteristics of OPC-UA. Performance depends on both the OPC server and the client used, of course. But there are general tips to improve performance.

  • To get maximum performance, use OPC without security

OPC message signing and encryption add overhead. If your use case allows using OPC without security, turn it off for maximum performance.

  • Bulk reads increase performance

Bulk reads

A bulk read call reads multiple variables at once, which reduces communication overhead between client and server.

Here’s a code example using Eclipse Milo, an open-source OPC-UA stack implementation for the Java VM.

final String endpointUrl = "opc.tcp://localhost:53530/OPCUA/SimulationServer";
final EndpointDescription[] endpoints = UaTcpStackClient.getEndpoints(endpointUrl).get();
final OpcUaClientConfigBuilder config = new OpcUaClientConfigBuilder();
config.setEndpoint(endpoints[0]);

final OpcUaClient client = new OpcUaClient(config.build());
client.connect().get();

final List<NodeId> nodeIds = IntStream.rangeClosed(1, 50).mapToObj(i -> new NodeId(5, "Counter" + i)).collect(Collectors.toList());
final List<ReadValueId> readValueIds = nodeIds.stream().map(nodeId -> new ReadValueId(nodeId, AttributeId.Value.uid(), null, null)).collect(Collectors.toList());

// Bulk read call
final ReadResponse response = client.read(0, TimestampsToReturn.Both, readValueIds).get();
final DataValue[] results = response.getResults();
if (null != results) {
	final List<Integer> values = Arrays.stream(results).map(result -> (Integer) result.getValue().getValue()).collect(Collectors.toList());
	System.out.println(values.stream().map(String::valueOf).collect(Collectors.joining(",")));
}

client.disconnect().get();

The code performs a bulk read call on 50 integer variables (“Counter1” to “Counter50”). For performance tests you can put the bulk read call in a loop and measure the times. You should, however, connect to the server over the target network, not on localhost.

With a free (though not open-source) OPC UA simulation server by Prosys and Eclipse Milo for the client I measured times of around 3.3 ms per bulk read of these 50 integer variables. I got similar results with the UA.NET stack by the OPC Foundation. Of course, you should do your own measurements with your target setup.

Also keep in mind that the preferred way to use OPC UA is not to constantly poll the values of all the variables. OPC UA allows you to monitor variables for changes and to get notified in case of a change, which is a more event-driven approach.

Using PostgreSQL with Entity Framework

The most widespread O/R (object-relational) mapper for the .NET platform is the Entity Framework. It is most often used in combination with Microsoft SQL Server as the database. But the architecture of the Entity Framework allows it to be used with other databases as well. A popular and reliable open-source SQL database is PostgreSQL. This article shows how to use a PostgreSQL database with the Entity Framework.

Installing the Data Provider

First you need an Entity Framework data provider for PostgreSQL. It is called Npgsql. You can install it via NuGet. If you use Entity Framework 6 the package is called EntityFramework6.Npgsql:

> Install-Package EntityFramework6.Npgsql

If you use Entity Framework Core for the new .NET Core platform, you have to install a different package:

> Install-Package Npgsql.EntityFrameworkCore.PostgreSQL

Configuring the Data Provider

The next step is to configure the data provider and the database connection string in the App.config file of your project, for example:

<configuration>
  <!-- ... -->

  <entityFramework>
    <providers>
      <provider invariantName="Npgsql"
         type="Npgsql.NpgsqlServices, EntityFramework6.Npgsql" />
    </providers>
  </entityFramework>

  <system.data>
    <DbProviderFactories>
      <add name="Npgsql Data Provider"
           invariant="Npgsql"
           description="Data Provider for PostgreSQL"
           type="Npgsql.NpgsqlFactory, Npgsql"
           support="FF" />
    </DbProviderFactories>
  </system.data>

  <connectionStrings>
    <add name="AppDatabaseConnectionString"
         connectionString="Server=localhost;Database=postgres"
         providerName="Npgsql" />
  </connectionStrings>

</configuration>

Possible parameters in the connection string are Server, Port, Database, User Id and Password. Here’s an example connection string using all parameters:

Server=192.168.0.42;Port=5432;Database=mydatabase;User Id=postgres;Password=topsecret

The Database Context Class

To use the configured database you create a database context class in the application code:

class AppDatabase : DbContext
{
  private readonly string schema;

  public AppDatabase(string schema)
    : base("AppDatabaseConnectionString")
  {
    this.schema = schema;
  }

  public DbSet<User> Users { get; set; }

  protected override void OnModelCreating(DbModelBuilder builder)
  {
    builder.HasDefaultSchema(this.schema);
    base.OnModelCreating(builder);
  }
}

The parameter to the base constructor call is the name of the configured connection string in App.config. In this example the method OnModelCreating is overridden to set the name of the used schema. Here the schema name is injected via the constructor. For PostgreSQL the default schema is called “public”:

using (var db = new AppDatabase("public"))
{
  var admin = db.Users.First(user => user.UserName == "admin");
  // ...
}

The Entity Framework mapping of entity names and properties is case sensitive. To make the mapping work you have to preserve the case when creating the tables by putting the table and column names in double quotes:

create table public."Users" ("Id" bigserial primary key, "UserName" text not null);
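For reference, the User entity referenced by the context above could look like this (a minimal sketch; the property names must match the quoted column names exactly):

public class User
{
  public long Id { get; set; }         // maps to the "Id" column (bigserial)
  public string UserName { get; set; } // maps to the "UserName" column
}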

With these basics you’re now set up to use PostgreSQL in combination with the Entity Framework.


Evolution of programming languages

Programming languages evolve over time. They get new language features and their standard library is extended. Sounds great, doesn’t it? We all know that not moving forward means you go backward.

But looking at the programming ecosystems we are using, I observe very different approaches.

Featuritis

Java and especially C# added more and more “me too” features release after release, turning relatively lean languages into quite complex multi-paradigm languages. They started out object-oriented and added generics, functional programming features, declarative programming (LINQ in C#) and different UI toolkits (AWT, Swing, JavaFX in Java; WinForms, WPF in C#) to the mix.

Often the new language features add their own set of quirks because they are an afterthought and not designed carefully enough.

For me, this lack of focus makes these languages less attractive than more current approaches like Kotlin or Go.

In addition, deprecation often has no effect (see Java), where 20-year-old code and style still works, which increases the burden further. While this is great from a business perspective, because your effort to maintain compatibility is low, it does not help your code base. Different styles and old ways of doing something tend to remain forever.

Revolution

In Grails (I know, it is not a programming language, but it has its own ecosystem) we see more of a revolution. The core concept as a full-stack framework stays the same, but significant components are changed quite rapidly. We have seen many changes in technology, like Jetty to Tomcat, Ivy to Maven, Selenium RC to Geb, Gant to Gradle, and the list goes on.

This causes many, sometimes subtle, changes in behaviour that are a real pain when maintaining larger applications over many years.

Framework updates are often a time-consuming hassle, but if you can afford it, your code base benefits and will eventually become cleaner.

Clean(er) evolution

I really like the evolution in C++. It was relatively slow – many will argue too slow – in the past, but it has picked up pace in the last few years. The goals are clearly stated and only features that support them make it in:

  • Make C++ a better language for systems programming and library building
  • Make C++ easier to teach and learn
  • Zero-cost abstractions
  • Better tool support

If you cannot make your feature zero-cost, its chances of getting in are slim…

C at its core did not change much at all and remained focused on its merits. The updates mostly added convenience features like line comments, additional data type definitions and multithreading support.

Honest evolution – breaking backwards compatibility

In Python we have something I would call “honest evolution”. Python 3 introduced some breaking changes with the goal of a cleaner and more consistent language. Python 2 and 3 are incompatible, so the distinction in the major version number is fair. I like this approach of moving forward, as it clearly communicates what is happening and gets rid of the sins of the past.

The downside is that many systems still come with both a Python 2 and a Python 3 interpreter and accompanying libraries. Fortunately there are some options and tools to mitigate some of the incompatibilities in your code, like the __future__ module and python-six.

At some point in the future (expected in 2020) there will only be support for Python 3. So start making the switch.

Platform independent development with .NET

We develop most of our projects as platform independent applications, usually running under Windows, Mac and Linux. There are exceptions, for example when the application has to communicate with special hardware drivers, third-party libraries or other components that are not available on all platforms. But even then we isolate these parts into interchangeable modules that can be operated either in a simulated mode or with the real thing. The simulated modes are platform independent. Developers can usually work on the code base using their favorite operating system. Of course, it has to be tested on the target platform(s) that the application will run on in the end.

Platform independent development is both a matter of technology choices and programming practices. Concerning technology, the ecosystem based on the Java VM is a proven choice for platform independent development. We have developed many projects in Java and other JVM based languages. All of our developers are polyglots and we are able to develop software with a wide variety of programming languages.

The .NET ecosystem

Until recently, the .NET platform was known mainly as a Microsoft Windows based ecosystem. The Mono project was started by non-Microsoft developers to provide an open source implementation of .NET for other operating systems, but it never had the same status as Microsoft’s official .NET on Windows.

However, Microsoft has recently changed course: they have open-sourced their .NET implementation and are porting it to other platforms. They acquired Xamarin, the company behind the Mono project, and they are releasing developer tools such as IDEs for non-Windows platforms.

IDEs for non-Windows platforms

If you want to develop a .NET project on a platform other than Windows you now have several choices for an IDE:

  • Visual Studio Code (Mac, Linux, Windows)
  • Xamarin Studio / MonoDevelop
  • Visual Studio for Mac
  • JetBrains Rider (any platform)

I am currently using JetBrains Rider on a Mac to develop a .NET based application in C#. Since I have used other JetBrains products before, it feels very familiar. Xamarin Studio, MonoDevelop, VS for Mac and JetBrains Rider all support the solution and project file format of the original Visual Studio for Windows. This means a .NET project can be developed with any of these IDEs.

Web applications

The .NET application I am developing is based on web technologies. The server side uses the NancyFX web framework, the client side uses React. Persistence is done with Microsoft’s Entity Framework. All the libraries I need for the project, like NancyFX, the Entity Framework, a PostgreSQL driver, JSON.NET, NLog and NUnit, work on non-Windows platforms without any problems.

Conclusion

Development of .NET applications is no longer limited to the Windows platform. Microsoft is actively opening up their development platform for other operating systems.

Exiling a legacy COM component

One of our long-standing Java applications has several dependencies to native libraries, which are called via the Java Native Interface (JNI). We usually avoid native library dependencies, but this application must interface with some hardware devices for which the vendors only provide access through native libraries.

From 32 bit to 64 bit

Until recently the application ran in a 32-bit Java VM and the native libraries were 32-bit DLLs as well. Then the time had come to update the application to the 64-bit world. We wanted the application to run in a 64-bit JVM, and 32-bit library code cannot run in a 64-bit process.

We were able to replace the 32-bit libraries with 64-bit libraries, all except one. This particular dependency is not just a native library, but a Windows COM component. We had developed a wrapper DLL, which connected the COM component via JNI to the Java application. While we do have the source code of the wrapper DLL, we don’t have the source code of the COM component, and the vendor does not provide a 64-bit version of the component.

[Figure: 32-bit architecture]

So we decided to exile this COM component to a separate process, which we refer to as a container process. The main application runs in a 64-bit JVM and communicates with this process via a simple HTTP API. The API calls from the application do not require very low latency.

Initially we had planned to implement this container process as a C++ program. However, after a spike it turned out that there is a quicker way to both interface with a COM component and provide a simple self-hosted HTTP service on the Windows platform: the .NET framework provides excellent support for COM interoperability. Using a COM component in .NET is as simple as adding the component as a reference to the project, importing the namespace and instantiating the COM object like a regular .NET object. The event handling of the COM component, which requires quite some boilerplate code to set up in C++, gets automatically mapped to C#/.NET event handling. The only reminder of the fact that you’re using a COM component is the number of out and ref parameters.
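As an illustration only (all namespace, type and member names below are made up, since the real component is vendor-specific), the C# side of such a COM integration roughly looks like this:

using VendorDevice;  // hypothetical interop namespace generated from the COM type library

public class DeviceGateway
{
  // The COM object is instantiated like a regular .NET object
  private readonly DeviceController device = new DeviceController();

  public void Start()
  {
    // COM connection point events are exposed as ordinary .NET events
    device.MeasurementReady += OnMeasurementReady;
    device.Initialize();
  }

  // Hypothetical event handler; out and ref parameters are typical for COM interfaces
  private void OnMeasurementReady(int status, ref object measurement)
  {
    // forward the measurement to the HTTP API layer
  }
}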

For the HTTP API we chose the Nancy framework, with which we have had good experiences in previous projects. The architecture now looks like this:

[Figure: 64-bit architecture]
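Setting up a self-hosted HTTP endpoint with Nancy takes only a few lines. The following sketch uses the Nancy 1.x route syntax together with the Nancy.Hosting.Self package; route, port and payload are example values, not our actual API:

using System;
using Nancy;
using Nancy.Hosting.Self;

// Example Nancy module exposing a single JSON route
public class DeviceModule : NancyModule
{
  public DeviceModule()
  {
    Get["/status"] = _ => Response.AsJson(new { connected = true });
  }
}

public class Program
{
  public static void Main()
  {
    // Self-hosted HTTP listener; the port is an example value
    using (var host = new NancyHost(new Uri("http://localhost:8080")))
    {
      host.Start();
      Console.WriteLine("Container process listening on http://localhost:8080");
      Console.ReadLine();
    }
  }
}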

While it is a drawback that the container process now depends on the .NET runtime, this is outweighed by the benefit for us: the C#/.NET code for interfacing with the COM component is more maintainable than the previous JNI wrapper code was, or than a native implementation of the container process would have been.