Self-contained projects in Python

An important concept for us is the notion of self-containment. For a project in development this means that you find everything you need to develop and run the software directly in the one repository you check out/clone. For practical reasons we usually leave out the IDE and the basic runtime like the Java JDK or the Python interpreter. If you have these installed, you are good to go in seconds.

What does this mean in general?

Usually this means putting all your dependencies, either in source or object form (dll, jar etc.), directly in a directory of your project repository. This mostly rules out dependency managers like Maven. Another, less obvious point is to have hardware dependencies mocked out in some way so your software runs without potentially unavailable hardware attached. The same is true for software services somewhere on the net that may be unavailable, like a payment service for example.

How to do it for Python

For Python projects this means not simply installing your dependencies using the Linux package manager, system-wide pip or other dependency management tools, but using a virtual environment. Virtual environments are isolated Python environments using an available, but defined Python interpreter on the system. They can be created with the tool virtualenv or, since Python 3.3, with the included venv module. You can install your dependencies into this environment, e.g. using pip, which itself is part of the virtualenv. Preparing a virtual env for your project can be done using a simple shell script like this:

python2.7 ~/my_project/vendor/virtualenv-15.1.0/virtualenv.py ~/my_project_env
source ~/my_project_env/bin/activate
pip install ~/my_project/vendor/setuptools_scm-1.15.0.tar.gz
pip install ~/my_project/vendor/six-1.10.0.tar.gz
...

Your dependencies, including virtualenv itself (for Python installations < 3.3), are stored in the project's source code repository. We usually call the directory vendor or similar.
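For Python 3.3 and newer a similar script can rely on the bundled venv module instead of a vendored virtualenv. A minimal sketch reusing the example paths from above:

python3 -m venv ~/my_project_env
source ~/my_project_env/bin/activate
# install only from the vendored archives, never from the network
pip install --no-index ~/my_project/vendor/setuptools_scm-1.15.0.tar.gz
pip install --no-index ~/my_project/vendor/six-1.10.0.tar.gz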

As a side note, working with such a virtual env works like a charm in the PyCharm IDE, even remotely, by selecting the Python interpreter of the virtual env. It correctly shows all installed dependencies, and all the IDE support for code completion and imports works as expected:

[Screenshot: Python interpreter settings in PyCharm]

What you get

With such a setup you gain some advantages missing in many other approaches:

  • No problems if the target machine has no internet access, which is a real issue for classical pip/maven/etc. approaches.
  • Mostly hassle-free development and deployment. No more “downloading the internet” feeling or driver/hardware installation issues for the developer. In the simplest cases a deployment is as easy as a copy/rsync.
  • Only minimal requirements for the base installation of developer, build, deployment or other target machines.
  • Perfectly reproducible builds and tests in isolation. Your continuous integration (CI) machine is just another target machine.

What it costs

There are costs to this approach, of course, but in our experience the benefits outweigh them to a great extent. Nevertheless I want to mention some downsides:

  • Less tool support for managing the dependencies, especially if you are used to Maven and friends and happen to like them. Pip can work with local archives just fine, but updating is a bit of manual work.
  • Storing (binary) dependencies in your repository increases the checkout size. Nowadays disk space and local network speeds make this mostly irrelevant, especially in combination with git. Shallow clones can further mitigate the problem (see the example after this list).
  • You may need to put in some effort for implementing mocks for your hardware or third-party software services and a mechanism for switching between simulation and the real stuff.
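For example, a shallow clone fetches only the most recent history instead of the full repository (the URL is just a placeholder):

git clone --depth 1 https://example.com/my_project.git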

Conclusion

We have been using self-containment to great success in varying environments. Usually, both developers and clients are impressed by the ease of development and/or installation using this approach, regardless of whether the project is in Java, C++, Python or something else.


Using passwords with Jenkins CI server

For many of our projects the Jenkins continuous integration (CI) server is one important cornerstone. The well-known “works on my machine” means nothing in our company. Only code in repositories and built, tested and packaged by our CI servers counts. In addition to building, testing, analyzing and packaging our projects we use CI jobs for deployment and supervision, too. In such jobs you often need some sort of credentials like username/password or public/private keys.

If you are using username/password they do not only appear in the job configuration but also in the console build logs. In most cases this is undesirable but luckily there is an easy way around it: using the Environment Injector Plugin.

In the plugin you can “inject passwords to the build as environment variables” for use in your commands and scripts.

[Screenshot: inject passwords configuration]
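A build step can then use the injected variable like any other environment variable; the variable name, credentials and URL below are made up for illustration:

# hypothetical deployment step using the injected DEPLOY_PASSWORD variable
curl --user "deploy:${DEPLOY_PASSWORD}" --upload-file my-app.war https://deploy.example.com/upload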

The nice thing about this is that the passwords are not only masked in the job configuration (like above) but also in the console logs of the builds!

[Screenshot: inject passwords console log]

An alternative that does mostly the same is the Credentials Binding Plugin.

There is a lot more to explore when it comes to authentication and credential management in Jenkins: you can define credentials at the global level, use public/private key pairs and SSH agents, connect to an LDAP directory and much more. Just do not sit back and put security-related data in plaintext into job configurations or your deployment scripts!

Introduction to Debian packaging

In former posts I wrote about packaging your software as RPM packages for a variety of use cases. The other big binary packaging system on Linux systems is DEB for Debian, Ubuntu and friends. Both serve the purpose of convenient distribution, installation and update of binary software artifacts. They define their dependencies and describe what the package provides.

How do you provide your software as DEB packages?

The master guide to Debian packaging can be found at https://www.debian.org/doc/manuals/maint-guide/. It is a very comprehensive guide spanning many pages and providing loads of information. Building a usable package of your software for your clients can be a matter of minutes if you know what to do. So I want to show you the basic steps; for refinement you will need to consult the guide or other resources specific to the software you want to package. For example there are several guides specific to packaging software written in Python. I find it hard to determine what the current and recommended way to build Debian packages for Python is, because there are differing guides from the last 10 years or so. You may of course say “Just use pip for Python” 🙂

Basic tools

The basic tools for building Debian packages are debhelper and dh-make. To check the resulting package you will need lintian in addition. You can install them together with the basic software build tools using:

  sudo apt-get install build-essential dh-make debhelper lintian

The Debian packaging system uses make and several helper scripts under the hood to build the binary packages out of source tarballs.

Your first package

First you need a tar archive of the software you want to package. With Python setuptools you could use python setup.py sdist to generate the tarball. Then you run dh_make on it to generate the metadata and the package build environment for your package. Now you have to edit the metadata files, namely control, copyright and changelog. Finally you run dpkg-buildpackage to generate the package itself. Here is an example of the necessary commands:

mkdir hello-deb-1.0
cd hello-deb-1.0
# generate the debian/ metadata and build environment from the source tarball
dh_make -f ../hello-deb-1.0.tar.gz
# edit deb metadata files
vi debian/control
vi debian/copyright
vi debian/changelog
# build the package without signing it (-us -uc)
dpkg-buildpackage -us -uc
# check the result for policy violations
lintian -i -I --show-overrides hello-deb_1.0-1_amd64.changes

The control file roughly resembles RPM's SPEC file. Package name, description, version and dependency information belong there. Note that Debian is very strict when it comes to naming of packages, so make sure you use the pattern ${name}-${version}.tar.gz for the archive and that it extracts into a corresponding directory without the extension, e.g. ${name}-${version}.
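For illustration, a minimal control file for the hello-deb example could look roughly like this; the maintainer, section and dependency values are of course just placeholder assumptions:

Source: hello-deb
Section: utils
Priority: optional
Maintainer: Jane Doe <jane@example.com>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.8

Package: hello-deb
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Short description of hello-deb
 Longer description of the hello-deb example package,
 indented by one space as required by the format.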

If everything went OK, several files have been generated in your base directory:

  • The package itself as .deb file
  • A file containing the changelog and checksums between package versions and revisions ending with .changes
  • A file with the package description ending with .dsc
  • A tarball with the original sources renamed according to Debian convention, hello-deb_1.0.orig.tar.gz (note the underscore!)

Going from here

Of course there is a lot more to the tooling and workflow when maintaining Debian packages. In future posts I will explore additional means for improving and updating your packages, like the quilt patch management tool, signing the package, symlinking, scripts for pre- and post-installation and so forth.

Modern CMake with target_link_libraries

Dependency hell?

One thing that has eluded me in the past was how to efficiently manage dependencies of different components within one CMake project. I’d use the include_directories, add_definitions and add_compile_options commands in the top-level or in mid-level CMakeLists.txt files just to get the whole thing to compile. Of course, this is all heavily order-dependent – so the build system breaks as soon as you make an ever so subtle change to the directory layout. I’ve seen projects tackle this problem in various ways – for example by defining specifically named variables for each library and using those for their clients. Other projects defined “interface” files for each library that could be included by other targets. All these homegrown solutions work, but they are rather clumsy and don’t work well when integrating libraries not written in the same convention.

target_link_libraries to the rescue!

It turns out there’s actually a pretty elegant solution built into CMake, which centers around target_link_libraries. But information on this is pretty scarce on the web. The ones that initially put me on the right track were The Ultimate Guide to Modern CMake and CMake – Introduction and best practices. Of course, it’s all in the CMake documentation, but mentioned implicitly at best.

The gist is this: Using target_link_libraries to link A to an internal target B will not only add the linker flags required to link to B, but also the definitions, include paths and other settings – even transitively – if they are configured that way.

To do this, you need to use target_include_directories and target_compile_definitions with the PUBLIC or INTERFACE keywords on your targets. There’s also the PRIVATE keyword that can be used to avoid adding the settings to all dependent targets.

A simple example

Here’s a small example of a library that uses Boost in its headers and therefore wishes to have its clients set up those include directories as well:

set(TARGET_NAME cool_lib)

add_library(${TARGET_NAME} STATIC 
  cool_feature.cpp cool_feature.hpp)

target_include_directories(${TARGET_NAME}
  INTERFACE ${CMAKE_CURRENT_SOURCE_DIR})

target_include_directories(${TARGET_NAME} SYSTEM
  PUBLIC ${Boost_INCLUDE_DIR})

Now here’s a program that wants to use that:

set(TARGET_NAME cool_tool)

add_executable(cool_tool main.cpp)

target_link_libraries(cool_tool
  PRIVATE cool_lib)

cool_tool can just #include "cool_feature.hpp" without knowing exactly where it is located in the source tree and without having to worry about setting up the Boost includes for itself! Pretty neat!

PRIVATE, PUBLIC and INTERFACE

Typically, you’d use the PRIVATE keyword for includes and definitions that are exclusively used in your implementation, i.e. your *.cpp and *.c files and internal headers. It’s good practice to favor PRIVATE to avoid “leaking” dependencies, so they won’t stack up in the dependent libraries and drive up your compile times. The INTERFACE keyword is a bit more curious: For example, with definitions, you can use it to define your .dll interface differently for compilation and usage. For include directories, one common usage is to set the own source directory with INTERFACE if you keep your headers and source files in the same folder. The PUBLIC keyword is used when definitions and includes are relevant both for the library itself and for its dependents. It pretty much is the combination of PRIVATE and INTERFACE – whenever you’re tempted to put something in both of those, put it in PUBLIC instead. It is probably the most common option.

The future!

I hope that all open-source libraries switch to this style sooner rather than later so you can easily include them in your build trees. Just don’t use the old commands that add properties for all following targets, like add_definitions, include_directories etc., and use the commands with the target_ prefix instead!

Keep your ovens clean

Let’s assume for a moment that you are a baker, producing different types of pastries in your small bakery. The production process is always the same: prepare the dough, put it in the oven, wait some time and retrieve the most delicious buns or bread. If we can abstract the real baking process to these steps, it’s the same as with software: prepare the source code, put it in the compiler, wait some time and retrieve the most delicious binary or executable. There is only one difference: The oven of the baker is a self-contained, closed system, while our compilers require a distinct system setup around them in order to produce anything edible. The oven is independent of the kitchen around it, the compiler is dependent on the environment. To finish the analogy: what would a baker say if he couldn’t bake bread in his oven unless he nurtured a certain type of yeast in his kitchen?

A most unpleasant case

While developing a platform-dependent application recently, we met a most unpleasant case of build dependency on a third-party library. It was an old dynamic link library (DLL) that requires registration in the Windows registry. There was no other way than to register the DLL using the regsvr32 utility. If you didn’t do this, the build process would abort with an error stating unmet dependencies. If you ran the resulting program on a machine without the registered DLL, it would crash with a runtime error complaining about the missing registry entry. And by the way, there are two totally independent regsvr32 utilities on a 64-bit Windows system, one for 32-bit and one for 64-bit registrations. No, the name of the latter one isn’t regsvr64, that would be way too easy.

We accepted the fact that you need to prepare your system if you want to run the program, but we quarreled a lot with the nuisance that you need to alter your system just to build the software. This process of alteration is called snowflaking in the DevOps mentality and it’s not a desired activity. We would need to alter every build machine in our continuous integration cluster that comes into contact with the project. And we would need to de-snowflake them again afterwards, because this kind of tinkering adds up to inscrutable side-effects very fast.

A practicable workaround

We found a way around the abovementioned snowflaking for our build servers. It’s not a solution, it’s only a workaround, as it solves the immediate problem but produces some lesser problems on the way. Let’s look at what we did.

At first, our situation could be described with this module diagram:

[Module diagram: dependency1]

We couldn’t modify the problematic DLL itself, it was a given binary. But we could wrap it in our own DLL. Wrapping less pleasant things into something you can control is a proven technique even in baking, by the way. We now had a system layout that looks like this:

[Module diagram: dependency2]

Nothing gained so far, just that we now have a layer outside our system that can provide the functionality of the DLL and is actually under our control. The wrapper really does nothing on its own but forward each call to the DLL. To profit from this indirection, we need to introduce another module, like this:

[Module diagram: dependency3]

The second module provides the same interface as the first, but does nothing, not even forwarding anywhere. It’s a complete stub, just there to be uncomplicated during the build process. The goal is to build the system using this “empty” DLL and then replace it with the “problematic” DLL afterwards. The only question is: how do we build the real, forwarding wrapper DLL, which still requires the registered third-party DLL at build time?

Here’s the workaround part of the solution: We actually had to compile this wrapper on a snowflaked system and add the resulting binary to the project repository. Good thing our target system’s specification is known, so we only need to do this for one platform. Because we are reasonably sure that the DLL interface will not change over time (it had every opportunity in the last ten years and didn’t use it), we can assume that the interface of our two wrapper DLLs also won’t change. So it’s not too problematic to check in a precompiled binary that needs to satisfy an interface that’s reproduced with every build cycle. Still, we need to keep an eye on the method signatures of our two wrapper DLLs. If one of them changes, the modification needs to be replicated on the other wrapper, too. It’s a classic duplication.
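On the build servers the swap itself then boils down to copying the right binary into place after the build; a minimal sketch with made-up script and file names:

# build the whole system against the harmless stub DLL
./build_all.sh
# afterwards replace the stub with the precompiled, forwarding wrapper for delivery
cp vendor/prebuilt/wrapper_real.dll dist/wrapper.dll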

When we balanced the duplication in the interfaces of the two wrapper DLLs against the snowflaking of every CI and developer machine, we found our aversion against snow outweighing the other negative aspects. Your mileage may vary.

Conclusion

We kept our build ovens clean by introducing a wrapping layer around the problematic dependency and then using the benefits of indirection by switching to a non-problematic stub during the build cycle. The technique is very old, but still useful and powerful.

Packaging kernel modules/drivers using DKMS

Hardware drivers on Linux need to fit the running kernel. When the drivers you need are not part of the distribution in use, you need to build and install them yourself. While this may be OK to do once or twice, it soon becomes tedious doing it after every kernel update.

The Dynamic Kernel Module Support (DKMS) framework may help in such a situation: The module source code is installed on the target machine and can be rebuilt and installed automatically when a new kernel is installed. While veterans may be willing to manually maintain their hardware drivers with DKMS, end users do not care about the underlying system that keeps their hardware working. They want to manage their software updates using the tools of their distribution and everything should work automagically.

I want to show you how to package a kernel driver as an RPM package hiding all of the complexities of DKMS from the user. This requires several steps:

  1. Preparing/patching the driver (aka kernel module) to include dkms.conf and follow the required conventions of DKMS
  2. Creating an RPM spec file that installs the source, pulls in the tool chain and integrates the module source with DKMS

While there is native support for RPM packaging in DKMS I found the following procedure more intuitive and flexible.

Preparing the module source

You need at least a small file called dkms.conf to describe the module source to the DKMS system. It usually looks like this:

PACKAGE_NAME="menable"
PACKAGE_VERSION=3.9.18.4.0.7
BUILT_MODULE_NAME[0]="menable"
DEST_MODULE_LOCATION[0]="/extra"
AUTOINSTALL="yes"

Also make sure that the source tarball extracts into the directory /usr/src/$PACKAGE_NAME-$PACKAGE_VERSION! If you do not like /usr/src as the location for your kernel module sources, you can configure it in /etc/dkms/framework.conf.
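With the dkms.conf in place and the source tree installed under /usr/src you can already exercise the DKMS workflow by hand before packaging anything, for example:

# register, build and install the module for the currently running kernel
dkms add -m menable -v 3.9.18.4.0.7
dkms build -m menable -v 3.9.18.4.0.7
dkms install -m menable -v 3.9.18.4.0.7
# show what DKMS currently knows about
dkms status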

Preparing the spec file

Since we are not building a binary and packaging it, but installing source code and registering, building and installing it on the target machine, the spec file looks a bit different than usual: We have no build step; instead we just install the source tree and potentially additional files like udev rules or documentation, and perform all DKMS work in the post-install and pre-uninstall scripts. All that means that we build a noarch RPM and depend on dkms, the kernel sources and a compiler.
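The preamble of such a spec file could therefore look roughly like the following sketch; the package name, summary, license and the exact dependency package names vary between distributions and are assumptions here:

%define module menable

Name:           %{module}-dkms
Version:        3.9.18.4.0.7
Release:        1
Summary:        DKMS source for the menable kernel module
License:        Proprietary
BuildArch:      noarch
Requires:       dkms, make, gcc, kernel-devel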

Preparation section

Here we unpack and patch the module source, e.g.:

Source: %{module}-%{version}.tar.bz2
Patch0: menable-dkms.patch
Patch1: menable-fix-for-kernel-3-8.patch

%prep
%setup -n %{module}-%{version} -q
%patch0 -p0
%patch1 -p1

Install section

Basically we just copy the source tree to /usr/src in our build root. In this example we have to install some additional files, too.

%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/usr/src/%{module}-%{version}/
cp -r * %{buildroot}/usr/src/%{module}-%{version}
mkdir -p %{buildroot}/etc/udev/rules.d/
install udev/10-siso.rules %{buildroot}/etc/udev/rules.d/
mkdir -p %{buildroot}/sbin/
install udev/men_path_id udev/men_uiq %{buildroot}/sbin/

Post-install section

In the post-install script of the RPM we add our module to the DKMS system, build and install it:

occurrences=$(/usr/sbin/dkms status | grep "%{module}" | grep "%{version}" | wc -l)
if [ "$occurrences" -eq 0 ];
then
    /usr/sbin/dkms add -m %{module} -v %{version}
fi
/usr/sbin/dkms build -m %{module} -v %{version}
/usr/sbin/dkms install -m %{module} -v %{version}
exit 0

Pre-uninstall section

We need to remove our module from DKMS if the user uninstalls our package to leave the system in a clean state. So we need a pre-uninstall script like this:

/usr/sbin/dkms remove -m %{module} -v %{version} --all
exit 0

Conclusion

Packaging kernel modules using DKMS and RPM is not really hard and provides huge benefits to your users. There are some little quirks like the post-install and pre-uninstall scripts, but once you get that working, you (and your users) are rewarded with a great, fully integrated experience. You can use the full spec file of the driver in the above example as a template for your own driver packages.

MSBuild Basics

MSBuild is Microsoft’s build system for Visual Studio. Visual Studio project files (*.csproj, *.vbproj) do not only describe the project structure, but are also build scripts for MSBuild. They’re executed when you click the run button in the IDE, but they can also be called via the MSBuild command line utility.

> MSBuild.exe Project.csproj

These project files / build scripts are in XML format, comparable to Ant scripts in Java land.

Edit project files

You can edit these files in any text editor, of course. But if you want to edit them within Visual Studio, you have to unload the project first:

  • Right click on the project in the Solution Explorer -> Unload Project
  • Right click on the project in the Solution Explorer -> Edit MyProject.csproj

After you’re done editing you can reload the project again via the context menu.

Targets and tasks

The concepts of MSBuild are comparable to many other build systems: a build script contains a set of named targets, and each target consists of a sequence of task calls.

A project can have one or more default targets, referenced by the DefaultTargets attribute of the Project root element:

<Project DefaultTargets="Build" ...>

Multiple targets can be separated by semicolons.

Targets are declared via Target tags containing the task calls:

  <Target Name="Clean">
    <Delete Files="xyz.tmp" />
    ...
  </Target>

MSBuild comes with a set of common tasks, such as Message, Copy, Delete, Exec, …
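For example, a small housekeeping target using two of these built-in tasks could look like this; the git command passed to Exec is just an illustration:

  <Target Name="Info">
    <Message Text="Building $(MSBuildProjectName) in configuration $(Configuration)" />
    <Exec Command="git rev-parse HEAD" />
  </Target>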

If you need more tasks you should have a look at these community-provided task collections:

  • MSBuild Extension Pack
  • MSBuild Community Tasks

Both are available as NuGet packages and can be checked into your code repository alongside the project for self-containment. For the Extension Pack you have to set the ExtensionTasksPath property correctly before importing the tasks, for example:

<PropertyGroup>
  <ExtensionTasksPath Condition="'$(ExtensionTasksPath)' == ''">$(MSBuildProjectDirectory)\packages\MSBuild.Extension.Pack.1.5.0\tools\net40\</ExtensionTasksPath>
</PropertyGroup>

<Import Project="$(ExtensionTasksPath)MSBuild.ExtensionPack.tasks" />

Properties

Properties are defined within PropertyGroup tags, containing one or many property tags. The names of these tags are the property names and the tag contents are the property values. Properties are referenced via $(PropertyName). A property definition can have an optional Condition attribute, which determines whether a property should be set or not. The condition '$(PropertyName)' == '', for example, checks if a property is not yet set.
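A small sketch of such a conditional default, using a BuildNumber property as in the examples below:

<PropertyGroup>
  <!-- only set a default if BuildNumber was not passed in from the outside -->
  <BuildNumber Condition="'$(BuildNumber)' == ''">0</BuildNumber>
</PropertyGroup>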

Here’s an example build target that uses the ZIP compression task from the Extension Pack and some properties to create a ZIP file artifact from the build results:

<Target Name="AfterBuild">
  <MSBuild.ExtensionPack.Compression.Zip TaskAction="Create" CompressPath="$(OutputPath)" ZipFileName="bin\$(ProjectName)-$(BuildNumber).zip" />
</Target>

You can also set property values from the outside via the MSBuild call:

> MSBuild.exe /t:Build /p:Configuration=Release;BuildNumber=1234 Project.csproj

  • The /t switch determines which targets to run. Multiple targets can be separated by semicolons.
  • The /p switch sets properties in the form of PropertyName=value, also separated by semicolons.

This way you can pass environment variables like $BUILD_NUMBER from your Continuous Integration system (e.g. Jenkins) to your build script:

> MSBuild.exe /t:Build /p:Configuration=Release;BuildNumber=$BUILD_NUMBER Project.csproj

Now you could use the MSBuild.ExtensionPack.Framework.AssemblyInfo task to write the $(BuildNumber) property into your AssemblyInfo file.