Packaging Python projects for Debian/Ubuntu

Deploying software via the system's built-in package management tools is convenient and provides a nice user experience (UX). For Debian-based Linux distributions like Ubuntu, packaging software as .deb-packages is the way to go. So how can we prepare our Python projects for packaging as a deb-package? The good news is that Python is supported out of the box by the Debian package build system.

Alternatively, you can use the distutils extension stdeb if you do not need complete flexibility in creating the packages.

Basic Python deb-package

If you are using setuptools/distutils for your Python project, Debian packaging consists of editing the package metadata and adding --with python to the rules file. For a nice head start we can generate templates of the debian metadata files using two simple commands (dh_make is provided by the dh-make package; debhelper is needed for the build):

# create a tarball with the current project sources
python setup.py sdist
# generate the debian package metadata files 
dh_make -p ${project_name}_${version} -f dist/${project_name}-${version}.tar.gz 

You have to edit at least the control-file, the changelog and the rules-file to build the Python package. In the rules-file the make-target % is the crucial point; it should include the flag to build a Python project:

# main packaging script based on dh7 syntax
%:
	dh $@ --with python

After that you can build the package by issuing dpkg-buildpackage.
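
For a local test build you will typically skip the signing steps; dpkg-buildpackage supports this directly:

# build the package without signing the .dsc and .changes files
dpkg-buildpackage -us -uc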

The caveats

The Debian packaging system is great at complaining about non-conformant aspects of your package. It demands digital signatures, correct file and directory names including version strings, etc. Unfortunately it is not very helpful when you make packaging mistakes that result in empty, incomplete or broken packages.

Issues with setup.py

The setup.py build script has to reside on the same level as the debian-directory containing the package metadata. The packaging tools will not tell you if they cannot find the setup script. In addition, they will always run setup.py using Python 2, even if you specified --with python3 in the rules-file.

Packaging for specific python versions

If you want better control over the target Python versions for the package you should use Pybuild. A small change to the rules-file suffices, e.g. for a Python-3-only build using Pybuild:

# main packaging script based on dh7 syntax
%:
	dh $@ --with python3 --buildsystem=pybuild

For Pybuild to work it is crucial to add the needed Python interpreter(s), in addition to the mandatory build dependency dh-python, to the Build-Depends of the control-file. For a Python-3-only build it could look like this:

Build-Depends: debhelper (>=9), dh-python, python3-all
...
Depends: ${python3:Depends}

Without the dh-python build dependency Pybuild will silently do nothing, and getting the build dependencies wrong will create incomplete or broken packages. Take extra care to get this right!
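
For orientation, a minimal control-file for a Python-3-only package could look roughly like this (the package and maintainer names are made-up placeholders, not from this article):

Source: myproject
Section: python
Priority: optional
Maintainer: Jane Doe <jane@example.org>
Build-Depends: debhelper (>= 9), dh-python, python3-all
Standards-Version: 3.9.8

Package: python3-myproject
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}
Description: example packaging of myproject
 Longer description goes here.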

Conclusion

Debian packaging looks quite intimidating at first because there are so many ways to build a package. Many different tools can ease package creation, but they also add confusion. Packaging Python software is easy once you know the quirks. The Python examples from the Guide for Debian Maintainers are certainly worth a look!

A good name will shine forever

Naming things is supposedly one of the two hard things in Computer Science. Here are some tips on naming for programmers.

Getters

In the Java world, property accessors are traditionally prefixed with “get” and “set”, following the Java bean convention:

person.getFirstName()

Code becomes more pleasant to read if you omit the “get” prefix:

person.firstName()

Of course, you can do this only if you don’t use a framework that depends on the convention to recognize properties via reflection (like some OR mappers, for example).

What about setters? I rarely write setters anymore. If you design your classes as immutable types you don’t need setters. Even if your class has mutable state you probably want to control this state via methods more specific to the domain of the problem. Also, the more you apply the tell, don’t ask principle the less you will find the need for getters.
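
To illustrate, here is a small sketch of an immutable type without setters and without the “get” prefix (C++ shown here, but the idea applies to Java just as well):

#include <string>
#include <utility>

// an immutable type: state is set once in the constructor, so no setters
class Person
{
public:
  Person(std::string firstName, std::string lastName)
    : first(std::move(firstName)), last(std::move(lastName)) {}

  // accessors without the "get" prefix read naturally at the call site
  const std::string& firstName() const { return first; }
  const std::string& lastName() const { return last; }

private:
  std::string first;
  std::string last;
};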

Brevity vs. verbosity

There were times when it was common to see mass variable declarations like the following at the beginning of a function:

int i, j, k, l, m, n;
float a, b, c, u, v, x, y, z;

Fortunately, times have changed for the better, and most programmers are aware that descriptive naming is important. However, some programmers overcompensate. The length of an identifier is not a virtue in itself.

The Objective-C Cocoa framework is famous for overly long method names:

[array objectAtIndex:index]

Parts of Objective-C were inspired by Smalltalk. But in Smalltalk the same method is called at:

[array at:index]

This is a perfectly sufficient name for such common functionality in programming.

Here’s another example: If the concept of a measurement station is very prevalent in the domain of your project then it’s ok to call instances just station instead of measurementStation if it’s the only kind of station in the domain.

Yes, the IDE does auto-complete long names. However, readability of the code decreases if the reader has to scan the same long-winded names over and over again:

MeasurementStation measurementStation = new MeasurementStation();
Measurement measurement = measurementStation.startMeasurement();

Often you can find names that are more to the point than longer descriptions, e.g. acquire instead of takeOwnershipOf.

Hungarian notation and friends

The famous Hungarian notation is no longer in widespread use. However, there are variations of it that I would recommend against as well for the sake of readability. For example, bookList or bookArray can be simply books. Another variation would be conventions like myField or m_field for member variables. If you need these notations to determine the origin of a variable, then your scopes are probably too big, i.e. your methods, functions or classes are too long. Additionally, IDEs and editors for programmers can highlight these different scopes anyway. Other examples of unnecessary Hungarian-style notation are IFoo for interfaces, EFoo for enums, or the infamous FooImpl.
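
A quick before/after sketch to make the container case concrete (Book is a hypothetical type):

#include <vector>

struct Book {};

std::vector<Book> bookList; // the declared type already tells us it's a collection
std::vector<Book> books;    // reads better, and survives a change of container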

Screaming constants

There is really no need for constants and enum values to constantly SCREAM at you and other readers. This SCREAMING_CASE convention has its origin in C, where constants used to be defined as macros before the const keyword was introduced, and it later found its way into other programming language ecosystems. Names for constants and enum values are not more important than other identifiers and don’t have to be spelled differently. Try it, you will enjoy the newfound silence in your code.
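
For example, with a hypothetical retry constant (C++ shown, but the point is language-independent):

// the traditional convention:
const int MAX_CONNECTION_RETRIES = 3;

// the same constant, just as findable, without the shouting:
const int maxConnectionRetries = 3;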

Conclusion

These are some tips to improve the readability of code through better names. Some of them go against traditional conventions, so you should discuss them with your team before applying them. Consistency within an existing code base can certainly be more important. But if you have the freedom, you should definitely give them a try.

Why I’m not using C++ unnamed namespaces anymore

Well okay, actually I’m still using them, but I thought the absolute statement would make for a better headline. But I do not use them nearly as much as I used to. Almost exactly a year ago, I even described them as an integral part of my unit design. Nowadays, most units I write do not have an unnamed namespace at all.

What’s so great about unnamed namespaces?

Back when I still used them, my code would usually evolve gradually through a few different “stages of visibility”. The first of these stages was the unnamed namespace. Later stages would either be a free function or a private/public member function.

Let’s say I identify a bit of code that I could reuse, so I refactor it into a separate function. Since that bit of code is only used in that compile unit, it makes sense to put this function into an unnamed namespace that is only visible in the implementation of that unit.
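
As a minimal sketch (the file and function names are made up for illustration):

// Widget.cpp
#include <numeric>
#include <vector>

namespace
{
  // helper refactored out of the implementation below;
  // invisible outside this compile unit
  int sum(const std::vector<int>& values)
  {
    return std::accumulate(values.begin(), values.end(), 0);
  }
}

// ... the rest of the unit's implementation calls sum() freely ...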

Okay great, now we have reusability within this one compile unit, and we didn’t even have to recompile any of the unit’s clients. Also, we can just hack away on this code. It’s very local and exists solely to provide for our implementation needs. We can cobble it together without worrying that anyone else might ever have to use it.

This all feels pretty great at first. You are writing smaller functions and classes after all.

Whole class hierarchies are defined this way. Invisible to all but yourself. Protected and sheltered from the ugly world of external clients.

What’s so bad about unnamed namespaces?

However, there are two sides to this coin. Over time, one of two things usually happens:

1. The code is never needed again outside of the unit. Forgotten by all but the compiler, it exists happily in its seclusion.
2. The code is needed elsewhere.

Guess which one happens more often. The code is needed elsewhere. After all, that is usually the reason we refactored it into a function in the first place: its reusability. When this is the case, one of these scenarios usually happens:

1. People forgot about it and solve the problem again.
2. People never learned about it and solve the problem again.
3. People know about it and copy-and-paste the code to solve their problem.
4. People know about it and make the function more widely available to call it directly.

Except for the last, that’s a pretty grim outlook. The first two cases are usually the result of bad discoverability. If you haven’t worked with that code extensively, it is pretty certain that you do not even know that it exists.

The third is often a consequence of the fact that the function was not initially written for reuse. This can mean that it cannot be called from the outside because it cannot be accessed, but often there is some small dependency on the exact place where it is defined. People came to this function because they want to solve another problem, not to figure out how to make this function visible to them. Call it laziness or pragmatism, but they now have a case for just copying it. It happens and shouldn’t be incentivised.

A Bug? In my code?

Now imagine you don’t care much about such noble long-term code quality concerns as code duplication. After all, deduplication just increases coupling, right?

But you do care about satisfied customers, possibly because your job depends on it. One of your customers provides you with a crash dump and the stacktrace clearly points to your hidden and protected function. Since you’re a good developer, you decide to reproduce the crash in a unit test.

Only that does not work. The function is not accessible to your test. You first need to refactor the code to actually make it testable. That’s a terrible situation to be in.

What to do instead

There are really only two choices: either make it a public function of your unit immediately, or move it to another unit.

For functional units, it’s usually not a problem to just make them public, at least as long as the function does not access any global data.
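
Sticking with the hypothetical sum() helper from the sketch above, promoting it is just a matter of moving its declaration into the unit’s header:

// Widget.h
#include <vector>

// formerly hidden in an unnamed namespace in Widget.cpp,
// now part of the unit's public interface
int sum(const std::vector<int>& values);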

For class units, there is a decision to make, but it is simple: will using it preserve all class invariants? If so, you can move it or make it a public function. If not, you absolutely should move it to another unit. Often, this actually helps you decide what to create a new class for!

Note that private and protected functions suffer many of the same drawbacks as functions in unnamed namespaces. Sometimes, either of these options is a valid shortcut. But if you can, please, avoid them.

C++ Coroutines on Windows with the Fiber API

Last week, I had the chance to try out coroutines as a way to cooperatively interleave long tasks with event processing. Unlike threads, where interaction between threads can happen at any time, coroutines need to yield control explicitly, which arguably makes synchronisation a little simpler, especially in (legacy) systems that are not designed for concurrency. Of course, since coroutines do not run at the same time, you do not get the perks of concurrency either.

If you don’t know coroutines, think of them as functions that can be paused and resumed.

Unlike many other languages, C++ does not have built-in support for coroutines just yet. There are, however, several alternatives. On Windows, you can use the Fiber API to implement coroutines easily.

Here’s some example code of how that works:

auto coroutine = std::make_shared<FiberCoroutine>();
coroutine->setup([](Coroutine::Yield yield)
{
  for (int i = 0; i < 3; ++i)
  {
    std::cout << "Coroutine "
              << i << std::endl;
    yield();
  }
});

int stepCount = 0;
while (coroutine->step())
{
  std::cout << "Main "
            << stepCount++ << std::endl;
}

Somewhat surprisingly, at least if you have never seen coroutines before, this will produce the two outputs alternately:

Coroutine 0
Main 0
Coroutine 1
Main 1
Coroutine 2
Main 2

Interface

Since fibers are not the only way to implement coroutines, and since we want to keep our client code nicely insulated from the Windows API, there’s a pure-virtual base class as an interface:

#include <functional>

class Coroutine
{
public:
  using Yield = std::function<void()>;
  using Run = std::function<void(Yield)>;

  virtual ~Coroutine() = default;
  virtual void setup(Run f) = 0;
  virtual bool step() = 0;
};

Typically, creation of a Coroutine type allocates all the resources it needs, while setup “primes” it with an inner “Run” function that can use an implementation-specific “Yield” function to pass control back to the caller, which is whoever calls step.

Implementation

The implementation using fibers is fairly straightforward:

#include <windows.h>

class FiberCoroutine
  : public Coroutine
{
public:
  FiberCoroutine()
  : mCurrent(nullptr), mRunning(false)
  {
  }

  ~FiberCoroutine()
  {
    if (mCurrent)
      DeleteFiber(mCurrent);
  }

  void setup(Run f) override
  {
    if (!mMain)
    {
      mMain = ConvertThreadToFiber(NULL);
    }
    mRunning = true;
    mFunction = std::move(f);

    if (!mCurrent)
    {
      mCurrent = CreateFiber(0,
        &FiberCoroutine::proc, this);
    }
  }

  bool step() override
  {
    SwitchToFiber(mCurrent);
    return mRunning;
  }

  void yield()
  {
    SwitchToFiber(mMain);
  }

private:
  void run()
  {
    while (true)
    {
      mFunction([this]
                { yield(); });
      mRunning = false;
      yield();
    }
  }

  static VOID WINAPI proc(LPVOID data)
  {
    reinterpret_cast<FiberCoroutine*>(data)->run();
  }

  static LPVOID mMain;
  LPVOID mCurrent;
  bool mRunning;
  Run mFunction;
};

LPVOID FiberCoroutine::mMain = nullptr;

The idea here is that the caller and the callee are both fibers: lightweight threads without concurrency. Running the coroutine switches to the callee’s fiber, i.e. the run function’s; yielding switches back to the caller. Note that it is currently assumed that all callers are on the same thread, since each thread that participates in the switching needs to be converted to a fiber initially, even the caller. The current version only keeps the fiber for the initial thread in a single static variable. However, it should be possible to support multiple calling threads by replacing the single static fiber pointer with a map from each thread to its associated fiber.
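
A minimal sketch of that extension (my own assumption, not part of the original implementation): C++11’s thread_local effectively provides that per-thread map for free:

// inside FiberCoroutine, replacing the single static member:
static thread_local LPVOID mMain;  // one converted "main" fiber per calling thread

// and the out-of-class definition accordingly:
thread_local LPVOID FiberCoroutine::mMain = nullptr;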

Note that you cannot return from the fiber procedure – that would just terminate the whole thread! Instead, yield back to the caller and either re-use or destroy the fiber.

Assessment

Fiber-based coroutines are a nice and efficient way to model non-linear control flow explicitly, but they do not come without downsides. For example, while this example worked flawlessly when compiled with Visual Studio, under Cygwin it just terminates without even an error. If you’re used to working with the Visual Studio debugger, it may surprise you that the caller gets hidden completely while you’re in the run function. The run function’s stack completely replaces the caller’s stack until you call yield(). This means that you cannot find out who called step(). On the other hand, if you’re actually doing a lot of processing in the run function, this is quite nice for profiling, as the “processing” call tree seemingly has its own root.

I just wish the Visual Studio debugger had a way to view the states of the different fibers like it has for threads.

Alternatives

  • On Linux, you can use the ucontext API.
  • Visual Studio 2015 also has another, newer, implementation.
  • Coroutines can be implemented using threads and condition-variables.
  • There’s also Boost.Coroutine, if you need an independent implementation of the concept. From what I gather, they only use Fibers optionally, and otherwise do the required “trickery” themselves. Maybe this even keeps the caller-stack visible – it is certainly worth exploring.

ECMAScript 6 is coming

ECMAScript is the standardized specification of the scripting language that is the basis for JavaScript implementations in web browsers.

ECMAScript 5, which was released in 2009, is in widespread use today. Six years later in 2015, ECMAScript 6 was released, which introduced a lot of new features, especially additions to the syntax.

However, browser support for ES 6 needed some time to catch up. If you look at the compatibility table for ES 6 support in modern browsers you can see that the current versions of the main browsers now support ES 6. This means that soon you will be able to write ES 6 code without the necessity of an ES6-to-ES5 cross-compiler like Babel.

The two features of ES 6 that will, in my opinion, change the way JavaScript code is written are the new class syntax and the new lambda syntax with lexical scoping of ‘this’.

Class syntax

Up to now, if you wanted to define a new type with a constructor and methods on it, you would write something like this:

var MyClass = function(a, b) {
	this.a = a;
	this.b = b;
};
MyClass.prototype.methodA = function() {
	// ...
};
MyClass.prototype.methodB = function() {
	// ...
};
MyClass.prototype.methodC = function() {
	// ...
};

ES 6 introduces syntax support for this pattern with the class keyword, which makes class definitions shorter and more similar to class definitions in other object-oriented programming languages:

class MyClass {
	constructor(a, b) {
		this.a = a;
		this.b = b;
	}
	methodA() {
		// ...
	}
	methodB() {
		// ...
	}
	methodC() {
		// ...
	}
}

Arrow functions and lexical this

The biggest annoyance in JavaScript has always been the scoping rules of this. You always had to remember to rebind this if you wanted to use it inside an anonymous function:

var self = this;
this.numbers.forEach(function(n) {
    self.doSomething(n);
});

An alternative was to use the bind method on an anonymous function:

this.numbers.forEach(function(n) {
    this.doSomething(n);
}.bind(this));

ES 6 introduces a new lambda syntax via the “fat arrow”:

this.numbers.forEach((n) => this.doSomething(n));

These arrow functions not only preserve the binding of this but are also shorter, especially if the function body consists of only a single expression. In this case the curly braces and the return keyword can be omitted:

// ES 5
numbers.filter(function(n) { return isPrime(n); });
// ES 6
numbers.filter((n) => isPrime(n));

Widespread support for ECMAScript 6 is just around the corner, so it’s time to familiarize yourself with it if you haven’t done so yet. A nice overview of the new features can be found at es6-features.org.

Explicit types – and when to use them

Many modern programming languages offer a way to declare variables without an explicit type if the type can be inferred, either dynamically or statically. Many also allow for variables to be explicitly defined with a type. For example, Scala and C# let you omit the explicit variable type via the var keyword, but both also allow defining variables with explicit types. I’m coming from the C++ world, where auto has been available for this purpose since the relatively recent C++11. However, people are still debating whether you should actually use it.

Pros

Herb Sutter popularised the almost-always-auto style. He advocates that using more type inference is good because it is roughly equivalent to programming against interfaces instead of implementations. He says that “Overcommitting to explicit types makes code less generic and more interdependent, and therefore more brittle and limited.” However, he also mentions that you might sometimes want to use explicit types.

Now what exactly is overcommitting here? When is the right time to use explicit types?

Cons

Opponents of implicit typing, many of them experienced veterans, often state that they want the actual type visible in the source code. They don’t want to rely on type inference being right. They want the code to explicitly state what’s going on.

At first, I figured that was just conservatism in the face of a new “scary” feature that they did not fully understand. After all, IDEs can usually infer the type on-the-fly, and you can hover over a variable to let it show you the type.

For C++, the function signature is a natural boundary where you often insert explicit types, unless you want to commit to the compile-time and physical dependency cost that comes with templates. Other languages, such as Groovy, do not have this trade-off and let you skip explicit types almost everywhere. After working with Groovy/Grails for a while, where the dominant style seems to be to omit types wherever possible, it dawned on me that the opponents of implicit typing have a point. Not only does the IDE often fail to show me the inferred type (even though it still works way more often than I would have anticipated), but I also found it harder to follow and modify code that did not mention explicit types. Seemingly contrary to Herb Sutter’s argument, that code felt more brittle than I would have liked.

Middle-ground

As usual, the truth seems to be somewhere in the middle. I propose the following rule for when to use explicit types:

• Explicit typing for domain-types
• Implicit typing everywhere else

Code using types from the problem domain should be as specific as possible. There’s no need for it to be generic – that is actually counter-productive, as otherwise the code model would be inconsistent with the model of the problem domain. This is also the most important aspect to grok when reading code, so it should be explicit. The type is as important as the action on it.

On the other hand, for pure-fabrication types that do not represent a concept in the domain, the action is important, while the type is merely a means to achieve this action. Typically, most of the elements from a language’s standard library fall into this category. All your containers, iterators, callables. Their types are merely implementation details: an associative container could be an array, or a hash-map or a tree structure. Exchanging it rarely changes the meaning of the code in the problem domain – it just changes its performance characteristics.

Containers will occasionally contain domain-types in their type. What do you do about those? I think they belong in the “everywhere else” category, but you should take extra care to name the contained type when working with it – for example when declaring the variable of the for-each loop on it, or when inserting something into it. This way, the “collection of domain-type” aspect will become clear, but the specific container implementation will stay implicit – like it should.
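
Here is a minimal sketch of this rule in C++ (Measurement is a hypothetical domain type, not from a real code base):

#include <vector>

// hypothetical domain type: spelled out explicitly wherever it appears
struct Measurement { double value; };

double sum(const std::vector<Measurement>& measurements)
{
  // fabrication type: inferred, the accumulator is a detail
  auto total = 0.0;
  // the contained domain type is named explicitly in the for-each loop
  for (const Measurement& measurement : measurements)
    total += measurement.value;
  return total;
}

int main()
{
  // domain type: explicit
  Measurement reference{42.0};
  // container: inferred, its implementation is a detail
  auto measurements = std::vector<Measurement>{reference, Measurement{1.5}};
  return sum(measurements) > 0 ? 0 : 1;
}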

What do you think? Is this a useful proposition for your code?

TANGO device server step-by-step tutorial

Now that we have learned about TANGO in general and the architecture of device servers, it is time to get our hands dirty. Here is a step-by-step tutorial for making your software remotely accessible as TANGO devices.

We will develop a small C++ class that can provide us with the current time and date as a string, and then build a device server that makes our functionality available over TANGO to remote clients. Our plain C++ project structure looks like this:

$PROJECT_ROOT/
  CMakeLists.txt
  TimeProvider/
    CMakeLists.txt
    TimeProvider.h
    TimeProvider.cpp
    main.cpp

Here are our CMake build files:

toplevel:

project(Time)
cmake_minimum_required(VERSION 2.8)

find_package(PkgConfig)

add_subdirectory(TimeProvider)

and for the TimeProvider:

project(TimeProvider)

add_library(time TimeProvider.cpp)

add_executable(timeprovider main.cpp)
target_link_libraries(timeprovider time)

And the C++ sources for our standalone application:

TimeProvider.h:

#include <string>

class TimeProvider
{
public:
    TimeProvider() {}

    const std::string now();
};

TimeProvider.cpp:

    #include "TimeProvider.h"
    
    #include <ctime>
    
    const std::string TimeProvider::now()
    {
        time_t now = time(0);
        struct tm time;
        char timeString[100];
        time = *localtime(&now);
        strftime(timeString, sizeof(timeString), "%Y-%m-%d %X", &time);
        return timeString;
    }
    

main.cpp:

#include <iostream>
#include "TimeProvider.h"

int main()
{
    TimeProvider tp;
    std::cout << tp.now() << std::endl;
    return 0;
}

Next we create a new subdirectory “TimeDevice” and add it to our toplevel CMakeLists.txt along with the TANGO package lookup:

...
find_package(PkgConfig)
pkg_check_modules(TANGO tango>=7.2.6 REQUIRED)

add_subdirectory(TimeProvider)
add_subdirectory(TimeDevice)

In this newly created directory we now run the Pogo application with pogo TimeDevice from our TANGO installation to generate our device server skeleton:

(Screenshot: Pogo – Create Device.) Then we add the Attribute (screenshot: Pogo – Add Attribute), so the result looks like this (screenshot: Pogo – TimeDevice).

Now we need to add the generated sources to our CMake build like this:

project(TimeDevice)

set(SOURCES
    ${PROJECT_NAME}.cpp
    ${PROJECT_NAME}Class.cpp
    ${PROJECT_NAME}StateMachine.cpp
    ClassFactory.cpp
    main.cpp
)

# this is needed because of wrong generation of include statements
# you may correct them in generated code because they are in protected regions
include_directories(.)

include_directories(
    ${TimeProvider_SOURCE_DIR}
    ${TANGO_INCLUDE_DIRS}
)

add_definitions("-std=c++11")

add_executable(time_device_server ${SOURCES})
target_link_libraries(time_device_server
    time
    ${TANGO_LIBRARIES}
)

As the last step, we implement the code for the CurrentTime attribute like this:

void TimeDevice::read_CurrentTime(Tango::Attribute &attr)
{
	DEBUG_STREAM << "TimeDevice::read_CurrentTime(Tango::Attribute &attr) entering... " << endl;
	/*----- PROTECTED REGION ID(TimeDevice::read_CurrentTime) ENABLED START -----*/

    attr_CurrentTime_read = new Tango::DevString;
    TimeProvider timeProvider;
    *attr_CurrentTime_read = Tango::string_dup(timeProvider.now().c_str());
    //	Set the attribute value
    attr.set_value(attr_CurrentTime_read, 1, 0, true);

	/*----- PROTECTED REGION END -----*/	//	TimeDevice::read_CurrentTime
}

For other correct implementations of string attributes see the documentation on the TANGO website.

Now we should end up with a ready-to-run TANGO device server executable.
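
To try it out, a TANGO device server executable is started with a server instance name as its argument (time1 below is a made-up instance name; it has to be registered in the TANGO database first):

# start the device server under the instance name "time1"
./time_device_server time1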

Conclusion

If you structure your project with foresight you can integrate your drivers or services into your TANGO control system with very little effort. In the next post we will show how to add a device server to a TANGO database and use its facilities like device properties for configuration or Jive for inspection of a device.

Feel free to download the full source code of this tutorial.