Ugly problems, ugly solutions?

One type of project we do is integrating devices into our customers' infrastructure. The tasks then mostly consist of writing bridging code for the hardware vendor's third-party libraries. The most fun part is when those libraries lack a capability or feature we need.

The situation

In my case I was building a device driver with the following requirements:

  • asynchronous execution of long-running tasks,
  • the ability to cancel long-running tasks,
  • the ability to report its current status at any time it is asked.

The device is accompanied by a DLL with the following interface (simplified):

  • doWork(), a blocking function that returns after a configurable amount of time that can range from milliseconds to hours.
  • abortWork(), which is supposed to cancel the process triggered by doWork() and make doWork() return earlier.

First impressions

I was able to fulfill two of the requirements pretty fast. The status is more or less a simple getter, and doWork() is called in a separate thread. Only cancelling the execution didn't work. More precisely, it didn't work as expected. Between a call to doWork() and the moment it returned, the process always used 100% of one CPU core; after that, utilization always dropped to nearly zero. Now, what happened when I called abortWork()? One of two things: either doWork() returned but the CPU utilization stayed at 100% for an indefinite amount of time, or the call was ignored completely. Especially funny was the first case, where the API seemed to work until the process ran out of cores and the system practically ground to a halt.
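For illustration, this is roughly what my first attempt looked like (a minimal sketch with hypothetical names; doWork() and abortWork() are the DLL functions from above):

#include <atomic>
#include <thread>

// the vendor DLL's interface (simplified, as above)
extern "C" void doWork();
extern "C" void abortWork();

class DeviceDriver
{
public:
    enum class Status { Idle, Working };

    void start()
    {
        status_ = Status::Working;
        worker_ = std::thread([this] {
            doWork();          // blocks between milliseconds and hours
            status_ = Status::Idle;
        });
    }

    void cancel()
    {
        abortWork();           // the call that misbehaved
        worker_.join();        // may block forever, see above
    }

    Status currentStatus() const { return status_; }

private:
    std::atomic<Status> status_{Status::Idle};
    std::thread worker_;
};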

The “Solution”

Banging my head against the desk didn't help, so my first thought was to forget abortWork() and kill the thread myself. Microsoft provides a nice function called TerminateThread for that purpose. Everyone who looks at the documentation will see that the list of side effects is quite impressive, memory leaks being the least bad of them. I couldn't guarantee that the application would work afterwards, so I decided against it. What would be the alternative? Process shutdown. When you stop the process, all blocked threads should be gone. Being too soft and merely trying to unload the DLL is a bad idea – you get a deadlock when DllMain waits for the worker thread to finish. My last resort was to let the process kill itself!

Now I was able to abort a running task, but my app was no longer available all the time. Every attempt to get the current status between the start of a shutdown and a completed restart failed. So a semi-persistent storage containing the last status of the living application was needed. To achieve this, I created a proxy application with the same interface as the real device driver; it delegated all requests to the driver and cached the status responses. That way the polling application still assumed that the last action was running until the restarted app was fully available again.

In the end the solution consisted of two device driver applications: one caching the state, the other doing the work. When a task had to be cancelled, the latter killed itself and then restarted.
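A rough sketch of that arrangement (hypothetical names; the inter-process communication is omitted): the proxy keeps answering status requests from its cache while the worker process is dying and restarting.

// hypothetical interface to the worker process (IPC omitted)
struct Status { /* ... */ };

class WorkerProcessClient
{
public:
    virtual ~WorkerProcessClient() {}
    virtual bool isAvailable() const = 0;    // false while restarting
    virtual Status queryStatus() const = 0;
};

class CachingProxy
{
public:
    explicit CachingProxy(WorkerProcessClient& worker)
      : worker_(worker) {}

    Status currentStatus()
    {
        if (worker_.isAvailable())
            lastKnownStatus_ = worker_.queryStatus();
        // while the worker restarts, report the last known status,
        // so the polling application sees the task as still running
        return lastKnownStatus_;
    }

private:
    WorkerProcessClient& worker_;
    Status lastKnownStatus_;
};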

Final thoughts

I hope there is a more elegant way to do this and I just overlooked something. It is unbelievable that a single call into a third-party library can cost you all control over your app, and that the only escape is death.

Building Visual C++ Projects with CMake

In a previous post my colleague showed how to create RPM packages with CMake. As a really versatile tool, CMake is also able to create and build Visual Studio projects on Windows. This makes it very valuable when you want to integrate your project into a CI cycle (in our case Jenkins).

Prerequisites:

To be able to compile anything, the following packages need to be installed beforehand:

  • CMake. It is helpful to put it in the PATH environment variable so that absolute paths aren't needed.
  • Microsoft Windows SDK for Windows 7 and .NET Framework 4 (the web installer or the ISOs). The part ".NET Framework 4" is very important: if the SDK for the .NET Framework 3.5 is installed instead, you will get the following parse error for your *.vcxproj files:

    error MSB4066: The attribute "Label" in element <ItemGroup> is unrecognized

    at the following position:

    <ItemGroup Label="ProjectConfigurations">

    Probably equally important is the bitness of the installed SDK. The x64 ISO differs in only one letter from the x86 one. Look for the X if you want 64 bit.

  • .NET Framework 4, which is necessary to make msbuild run.

It is possible that you encounter the following message during your SDK setup:

A problem occurred while installing selected Windows SDK components. Installation of the "Microsoft Windows SDK for Windows 7" product has reported the following error: Please refer to Samples\Setup\HTML\ConfigDetails.htm document for further information. Please attempt to resolve the problem and then start Windows SDK setup again. If you continue to have problems with this issue, please visit the SDK team support page at http://go.microsoft.com/fwlink/?LinkId=130245. Click the View Log button to review the installation log. To exit, click Finish.

The reason behind this wordy yet uninformative error message was the Visual C++ Redistributables installed on the system. As suggested by a Microsoft KB article, removing them all helped.

Makefiles:

For CMake to build anything you need to have a CMakeLists.txt file in your project. For a tutorial on how to use CMake, look at this page. Here is a simple CMakeLists.txt to get you started:

cmake_minimum_required(VERSION 2.6)
project(MyProject)

set(source_files
  main.cpp
)

include_directories(
  ${CMAKE_CURRENT_SOURCE_DIR}
)

add_executable(MyProject ${source_files})

Building:

To build a project there are a few steps necessary. You can enter them in your CI directly or put them in a batch file (a combined example follows after the individual steps).

call "%ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /x86

With this call all necessary environment variables are set. Be careful on 64-bit platforms: the Jenkins slave executes this call in a 32-bit context, so "%ProgramFiles%" resolves to "Program Files (x86)", where the SDK does not live.

del CMakeCache.txt

This command is not strictly necessary, but it prevents you from working with outdated generated files when you change your configuration.

cmake -G "Visual Studio 10" .

Generates a Visual Studio 2010 Solution. Every change to the solution and the project files will be gone when you call it, so make sure you track all necessary files in the CMakeLists.txt.

cmake --build . --target ALL_BUILD --config Release

The final step. It will net you the MyProject.exe binary. The target parameter is equal to the name of the project in the solution and the config parameter is one of the solution configurations.
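Put together, the individual steps from above can go into one batch file like this (a sketch; the SetEnv.cmd path assumes the default Windows SDK 7.1 install location, and delayed expansion is enabled up front because some SetEnv versions need it, see the Jenkins post further below):

setlocal enabledelayedexpansion

rem set up the SDK build environment
call "%ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /x86

rem start from a clean configuration
del CMakeCache.txt

rem generate and build the Visual Studio solution
cmake -G "Visual Studio 10" .
cmake --build . --target ALL_BUILD --config Release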

Final words:

The hardest and most time-consuming part was the setup of the prerequisites. Generic, uninformative error messages are the worst you can do to a clueless customer. But… when you are done with it, you are only two small steps away from an automatically built executable.

Build an RPM package using CMake

A while ago I presented a way to package projects using different build systems as RPM packages. If you are using CMake for your projects you can use CPack to build RPM packages (in addition to tarballs, NSIS installers, deb packages and so on). This is a really nice option for deploying your own projects because installation and updates can easily be done by the users with familiar package management tools like zypper, yum and yast2.

Your first CPack RPM

It is really easy to add RPM support to your existing project using CPack. Just set the mandatory CPack variables and include CPack below the variable definitions, usually as one of the last steps:

project (my_project)
cmake_minimum_required (VERSION 2.8)

set(VERSION "1.0.1")
<----snip your usual build instructions snip--->
set(CPACK_PACKAGE_VERSION ${VERSION})
set(CPACK_GENERATOR "RPM")
set(CPACK_PACKAGE_NAME "my_project")
set(CPACK_PACKAGE_RELEASE 1)
set(CPACK_PACKAGE_CONTACT "John Explainer")
set(CPACK_PACKAGE_VENDOR "My Company")
set(CPACK_PACKAGING_INSTALL_PREFIX ${CMAKE_INSTALL_PREFIX})
set(CPACK_PACKAGE_FILE_NAME "${CPACK_PACKAGE_NAME}-${CPACK_PACKAGE_VERSION}-${CPACK_PACKAGE_RELEASE}.${CMAKE_SYSTEM_PROCESSOR}")
include(CPack)

These few lines should be enough to get you going. After that you can execute a make package command and should obtain the RPM package.

Spicing up the package

RPM packages can contain much more metadata, most notably package dependencies and a version changelog. Most of this can be specified using CPACK variables. We sometimes prefer to use a SPEC file template that is filled in and used by CPack, because it then contains most of the RPM-specific stuff in a familiar format instead of polluting the CMakeLists.txt itself:

project (my_project)
<----snip your usual CMake stuff snip--->
<----snip your additional CPack variables snip--->
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/my_project.spec.in" "${CMAKE_CURRENT_BINARY_DIR}/my_project.spec" @ONLY IMMEDIATE)
set(CPACK_RPM_USER_BINARY_SPECFILE "${CMAKE_CURRENT_BINARY_DIR}/my_project.spec")
include(CPack)

The variables in the RPM SPEC file will be replaced by the values provided in the CMakeLists.txt and then be used for the RPM package. The template looks very similar to a standard SPEC file, but you can omit the usual build instructions, boiling it down to something like this:

Buildroot: @CMAKE_CURRENT_BINARY_DIR@/_CPack_Packages/Linux/RPM/@CPACK_PACKAGE_FILE_NAME@
Summary:        My very cool Project
Name:           @CPACK_PACKAGE_NAME@
Version:        @CPACK_PACKAGE_VERSION@
Release:        @CPACK_PACKAGE_RELEASE@
License:        GPL
Group:          Development/Tools/Other
Vendor:         @CPACK_PACKAGE_VENDOR@
Prefix:         @CPACK_PACKAGING_INSTALL_PREFIX@
Requires:       opencv >= 2.4

%define _rpmdir @CMAKE_CURRENT_BINARY_DIR@/_CPack_Packages/Linux/RPM
%define _rpmfilename @CPACK_PACKAGE_FILE_NAME@.rpm
%define _unpackaged_files_terminate_build 0
%define _topdir @CMAKE_CURRENT_BINARY_DIR@/_CPack_Packages/Linux/RPM

%description
Cool project solving the problems of many colleagues.

# This is a shortcut spec file for the CMake RPM generator:
# we skip the usual build and install steps because CPack
# does that for us. We only save the CPack-installed tree
# in %prep and then restore it in %install.
%prep
mv $RPM_BUILD_ROOT @CMAKE_CURRENT_BINARY_DIR@/_CPack_Packages/Linux/RPM/tmpBBroot

%install
if [ -e $RPM_BUILD_ROOT ];
then
  rm -Rf $RPM_BUILD_ROOT
fi
mv "@CMAKE_CURRENT_BINARY_DIR@/_CPack_Packages/Linux/RPM/tmpBBroot" $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
@CPACK_PACKAGING_INSTALL_PREFIX@/@LIB_INSTALL_DIR@/*
@CPACK_PACKAGING_INSTALL_PREFIX@/bin/my_project

%changelog
* Tue Jan 29 2013 John Explainer <john@mycompany.com> 1.0.1-3
- use correct maintainer address
* Tue Jan 29 2013 John Explainer <john@mycompany.com> 1.0.1-2
- fix something about the package
* Thu Jan 24 2013 John Explainer <john@mycompany.com> 1.0.1-1
- important bugfixes
* Fri Nov 16 2012 John Explainer <john@mycompany.com> 1.0.0-1
- first release

Conclusion

Integrating RPM (or other package formats) to your CMake-based build is not as hard as it seems and quite flexible. You do not need to rely on the tools provided by your OS vendor and still deliver your software in a way your users are accustomed to. This makes CPack very continuous integration (CI) friendly too!

Testing C programs using GLib

Writing programs in good old C can be quite refreshing if you use some modern utility library like GLib. It offers a comprehensive set of tools you expect from a modern programming environment like collections, logging, plugin support, thread abstractions, string and date utilities, different parsers, i18n and a lot more. One essential part, especially for agile teams, is onboard too: the unit test framework gtest.

Because of the statically compiled nature of C, testing involves a bit more work than in Java or modern scripting environments. Usually you have to perform these steps:

  1. Write a main program for running the tests. Here you initialize the framework, register the test functions and execute the tests. You may want to build different test programs for larger projects.
  2. Add the test executable to your build system so that you can compile, link and run it automatically (see the CMake sketch below).
  3. Execute the gtester test runner to generate the test results, optionally as an XML file for your continuous integration (CI) infrastructure. You may need to convert the XML output if you are using Jenkins, for example.

A basic test looks quite simple, see the code below:

#include <glib.h>
#include "computations.h"

void computationTest(void)
{
    g_assert_cmpint(1234, ==, compute(1, 1));
}

int main(int argc, char** argv)
{
    g_test_init(&argc, &argv, NULL);
    g_test_add_func("/package_name/unit", computationTest);
    return g_test_run();
}
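For step 2, a CMake fragment along the following lines can build the test executable (a sketch under assumptions: pkg-config knows glib-2.0, and the test above lives in computation_tests.c next to computations.c):

# locate GLib via pkg-config (assumed to be installed)
find_package(PkgConfig REQUIRED)
pkg_check_modules(GLIB REQUIRED glib-2.0)

include_directories(${GLIB_INCLUDE_DIRS})
link_directories(${GLIB_LIBRARY_DIRS})

# build the test binary from the test and the code under test
add_executable(computation_tests computation_tests.c computations.c)
target_link_libraries(computation_tests ${GLIB_LIBRARIES})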

To run the tests and produce the XML output you simply execute the test runner gtester like so:

gtester build_dir/computation_tests --keep-going -o=testresults.xml

GTester unfortunately produces a result file which is incompatible with Jenkins’ test result reporting. Fortunately R. Tyler Croy has put together an XSL script that you can use to convert the results using

xsltproc -o junit-testresults.xml tools/gtester.xsl testresults.xml

That way you get relatively easy-to-use unit tests working on your code and some nice CI integration for your modern C projects.

Update:

Recent versions of gtester run the test binary multiple times if there are failing tests. To get a report of all (passing and failing) tests you may want to use my modified gtester.xsl script.

Building Windows C++ Projects with CMake and Jenkins

The C++ programming environment where I feel most comfortable is GCC/Linux (lately with some clang here and there). In terms of build systems I use CMake whenever possible. This environment also makes it easy to use Jenkins as a CI server and RPM for deployment and distribution tasks.

So when presented with the task of setting up a C++ Windows project in Jenkins, I tried to do it the same way as much as possible.

The Goal:

A Jenkins job should be set up that builds a Windows C++ project on a Windows 7 build slave. For reasons that I will not get into here, compatibility with Visual Studio 8 is required.

The first step was to download and install the correct Windows SDK. This provides all that is needed to build C++ stuff under Windows.

Then, after installation of CMake, the first naive try looked like this (in an "execute Windows batch file" build step):

cmake . -DCMAKE_BUILD_TYPE=Release

This cannot work, of course, because CMake will not find the compilers and the rest of the build environment.

Problem: Build Environment

When I do CMake builds manually, i.e. not in Jenkins, I open the Visual Studio 2005 Command Prompt, which is a normal Windows command shell with all environment variables set. So I tried to do that in Jenkins, too:

call "c:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\SetEnv.Cmd" /Release /x86

cmake . -DCMAKE_BUILD_TYPE=Release

This also did not work and, even worse, produced strange (to me, at least) error messages like:

‘Cmd’ is not recognized as an internal or external command, operable program or batch file.

The system cannot find the batch label specified – Set_x86

After some digging I found the solution: a feature of Windows batch programming called delayed expansion, which has to be enabled for SetEnv.Cmd to work correctly.

Solution: SetEnv.cmd and delayed expansion

setlocal enabledelayedexpansion

call "c:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\SetEnv.Cmd" /Release /x86

cmake . -DCMAKE_BUILD_TYPE=Release

nmake

Yes! With this little trick it worked perfectly. And it feels almost like GCC/CMake under Linux: nice, short and easy.

Basic Image Processing Tasks with OpenCV

For one of our customers in the scientific domain we do a lot of integration of pieces of hardware into the existing measurement and control network. A good part of these are 2D detectors and scientific CCD cameras, which have all sorts of interfaces like Ethernet, FireWire and frame grabber cards. Our task is then to write some glue software that makes the camera available and controllable for the scientists.

One standard requirement for us is to do some basic image processing and analytics. Typically, this entails flipping the image horizontally and/or vertically, rotating the image around some multiple of 90 degrees, and calculating some statistics like the standard deviation.

The starting point there is always some image data in memory that has been acquired from the camera. Most of the time the image data is either gray values (8, or 16 bit), or RGB(A).

As we generally do not fall victim to the NIH syndrome, we use open source image processing libraries. The first one we tried was CImg, which is a header-only (!) C++ library for image processing. The header-only part is very cool and handy, since you just have to #include <CImg.h> and you are done. No further dependencies. The immediate downside, of course, is long compile times. We are talking about > 40000 lines of C++ template code!

The bigger issue we had with CImg was that for multi-channel images the memory layout is planar: R1R2R3R4…G1G2G3G4…B1B2B3B4. And since the images from the camera usually come interleaved like R1G1B1R2G2B2… we always had to do tricks to use CImg on these images correctly. These tricks eventually killed us in terms of performance, since some of these 2D detectors produce lots of megabytes of image data that have to be processed in real time.

So OpenCV. Their headline was already very promising:

OpenCV (Open Source Computer Vision) is a library of programming functions for real time computer vision.

Especially the words “real time” look good in there. But let’s see.

Image data in OpenCV is represented by instances of class cv::Mat, which is, of course, short for Matrix. From the documentation:

The class Mat represents an n-dimensional dense numerical single-channel or multi-channel array. It can be used to store real or complex-valued vectors and matrices, grayscale or color images, voxel volumes, vector fields, point clouds, tensors, histograms.

Our standard requirements stated above can then be implemented like this (gray scale, 8 bit image):

void processGrayScale8bitImage(uint16_t width, uint16_t height,
                               const double& rotationAngle,
                               uint8_t* pixelData)
{
  // create cv::Mat instance
  // pixel data is not copied!
  cv::Mat img(height, width, CV_8UC1, pixelData);

  // flip vertically
  // third parameter of cv::flip is the so-called flip-code
  // flip-code == 0 means vertical flipping
  cv::Mat verticallyFlippedImg(height, width, CV_8UC1);
  cv::flip(img, verticallyFlippedImg, 0);

  // flip horizontally
  // flip-code > 0 means horizontal flipping
  cv::Mat horizontallyFlippedImg(height, width, CV_8UC1);
  cv::flip(img, horizontallyFlippedImg, 1);

  // rotation (a bit trickier)
  // 1. calculate center point
  cv::Point2f center(img.cols/2.0F, img.rows/2.0F);
  // 2. create rotation matrix
  cv::Mat rotationMatrix =
    cv::getRotationMatrix2D(center, rotationAngle, 1.0);
  // 3. create cv::Mat that will hold the rotated image.
  // For some rotationAngles width and height are switched
  cv::Mat rotatedImg;
  if (static_cast<int>(rotationAngle / 90.0) % 2 != 0) {
    // switch width and height for rotations like 90, 270 degrees
    rotatedImg =
      cv::Mat(cv::Size(img.size().height, img.size().width),
              img.type());
  } else {
    rotatedImg =
      cv::Mat(cv::Size(img.size().width, img.size().height),
              img.type());
  }
  // 4. actual rotation
  cv::warpAffine(img, rotatedImg,
                 rotationMatrix, rotatedImg.size());

  // save the rotated image into a TIFF file
  cv::imwrite("myimage.tiff", rotatedImg);
}

The cool thing is that almost the same code can be used for our other image types, too. The only difference is the type constant for the cv::Mat constructor:


8-bit gray scale: CV_8UC1
16-bit gray scale: CV_16UC1
RGB: CV_8UC3
RGBA: CV_8UC4
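For example, an interleaved RGB frame from the camera can be wrapped without copying, just like the gray scale case above (a sketch; width, height and interleavedPixelData are assumed to come from the camera API):

// wrap an interleaved R1G1B1R2G2B2... buffer; no copy is made
cv::Mat rgbImg(height, width, CV_8UC3, interleavedPixelData);

// note: many OpenCV functions assume BGR channel order,
// so a conversion may be needed before using them
cv::Mat bgrImg;
cv::cvtColor(rgbImg, bgrImg, CV_RGB2BGR);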

Additionally, the whole thing is blazingly fast! All performance problems gone. Yay!

Getting basic statistical values is also a breeze:

void calculateStatistics(const cv::Mat& img)
{
  // minimum, maximum, sum
  double min = 0.0;
  double max = 0.0;
  cv::minMaxLoc(img, &min, &max);
  double sum = cv::sum(img)[0];

  // mean and standard deviation
  cv::Scalar cvMean;
  cv::Scalar cvStddev;
  cv::meanStdDev(img, cvMean, cvStddev);
}

All in all, the OpenCV experience was very positive, so far. They even support CMake. Highly recommended!

Use Boost’s Multi Index Container!

Sometimes, after you have used a special library or other special programming tool for a job, you forget about it because you don't have that specific use case anymore. Boost's multi_index container could fall into this category, because the need to hold data in memory and access it by different keys does not come up all the time.

Therefore, this post is intended as a reminder for C++ programmers that this pretty cool thing called boost::multi_index_container exists and that you can use it in more situations than you would think at first.

(If you’re already using it on a regular basis you may stop here, jump directly to the comments and tell us about your typical use cases.)

I remember that when I discovered boost::multi_index_container I found it quite intimidating at first sight. All those templates that are used in sometimes weird ways can trigger that feeling if you are not a template metaprogramming specialist (i.e. haven't yet read Andrei Alexandrescu's book "Modern C++ Design").

But if you look at it again after you have fought your way through the documentation and the unit test for your first example is green, it doesn't look that complicated anymore.

My latest use case for boost::multi_index_container was data objects that had to be sorted by two different date-times. (For dates and times we use boost::date_time, of course.) At first, the requirement was to store the objects sorted by one date-time. I used a std::set for that with a custom comparator. Everything was fine.

With changing requirements it became necessary to retrieve objects by another date-time, too. I started to use a second std::set with a different comparator, but then I remembered that there was some cool container somewhere in Boost for which you can define multiple indices…

After I had set it up with the two date time indices, the code also looked much cleaner because in order to update one object with a new time stamp I could just call container->replace(…) instead of fiddling around with the std::set.

Furthermore, I noticed that setting up a boost::multi_index_container with a specific key makes it much clearer what you intend with this data structure than using a std::set with a custom comparator. It is not that much more typing effort, and you can practice template metaprogramming a little bit :-)

Let’s compare the two implementations:

#include <boost/shared_ptr.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
using boost::posix_time::ptime;

// objects of this class should be stored
class MyDataClass
{
  public:
    const ptime& getUpdateTime() const;
    const ptime& getDataChangedTime() const;

  private:
    ptime _updateTimestamp;
    ptime _dataChangedTimestamp;
};
typedef boost::shared_ptr<MyDataClass> MyDataClassPtr;

Now the definition of a multi index container:

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/mem_fun.hpp>
using namespace boost::multi_index;

typedef multi_index_container
<
  MyDataClassPtr,
  indexed_by
  <
    ordered_non_unique
    <
      const_mem_fun<MyDataClass, 
        const ptime&, 
        &MyDataClass::getUpdateTime>
    >
  >
> MyDataClassContainer;

compared to std::set:

#include <set>

// we need a comparator first
struct MyDataClassComparatorByUpdateTime
{
  bool operator() (const MyDataClassPtr& lhs, 
                   const MyDataClassPtr& rhs) const
  {
    return lhs->getUpdateTime() < rhs->getUpdateTime();
  }
};
typedef std::multiset<MyDataClassPtr, 
                      MyDataClassComparatorByUpdateTime> 
   MyDataClassSetByUpdateTime;

What I like is that the typedef for the multi index container reads almost like a sentence. Besides, it is purely declarative (as long as you get away without custom key extractors), whereas with std::multiset you have to implement the comparator.
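As a small usage sketch of the replace() call mentioned earlier (it assumes the MyDataClassContainer typedef from above; the setter used to change the timestamp is hypothetical):

void updateTimestamp(MyDataClassContainer& container,
                     MyDataClassContainer::iterator pos,
                     const ptime& newUpdateTime)
{
  // copy the object, modify the copy, then let the container
  // re-sort it; replace() keeps all indices consistent
  MyDataClassPtr changed(new MyDataClass(**pos));
  changed->setUpdateTime(newUpdateTime); // hypothetical setter
  container.replace(pos, changed);
}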

In addition to being a reminder, I hope this post also serves as motivation to get to know boost::multi_index_container and to make it part of your toolbox. If you are still hesitant, start small by replacing usages of std::set/multiset.