Remote development with PyCharm

PyCharm is a fantastic tool for Python development. One feature I particularly like is its support for remote development. We have quite a few projects that need to interact with special hardware, and that hardware is often not attached to the computer we’re developing on.
In order to test your programs, you still need to run them on that computer, and doing this without tool support can be especially painful. You need a tool like scp or rsync to transfer your code to the target machine and then execute it via ssh. This results in painfully long and error-prone iterations.
Fortunately, PyCharm offers tool support for this in its Professional Edition. After some setup, it allows you to develop just as you would on a local machine. Here’s a small guide on how to set it up with an Ubuntu Vagrant virtual machine, connecting over ssh. It works just as nicely with remote computers.

1. Create a new deployment configuration

In Tools->Deployment->Configurations, click the small + in the top-left corner. Pick a name and choose the SFTP type.
[Screenshot: the Add Server dialog]

In the “Connection” Tab of the newly created configuration, make sure to uncheck “Visible only for this project”. Then set up your host and login information. The root path is usually a central location you have access to, like your home folder. You can use the “Autodetect” button to set this up.

For my VM, the settings look like this:
[Screenshot: the Connection tab settings]

On the “Mappings” Tab, set the deployment path for your project. This would be the specific folder of your project within the root you set on the previous page. Clicking the associated “…” button here helps, and even lets you create the target folder on the remote machine if it does not exist yet.

2. Activate the upload

Now check “Tools->Deployment->Automatic Upload”. This uploads a file whenever you change it, so you still need to do the initial upload manually via “Tools->Deployment->Upload to …”.

3. Create a project interpreter

Now the files are synced up, but the runtime environment is not on the remote machine. Go to the “Project Interpreter” page in File->Settings and click the little gear in the top-right corner. Select “Add Remote”.

[Screenshot: configuring the remote interpreter]
It should already have the deployment configuration you just created selected. Once you click OK, you’re good to go! You can run and debug your code just like on a local machine.

Have fun developing Python applications remotely.

The migration path for Java applets

Java applets have been a deprecated technology for a while, and it has become increasingly difficult to run them in modern browsers. Browser vendors are phasing out support for browser plugins based on the Netscape Plugin API (NPAPI), such as the Java plugin, which is required to run applets in the browser. Microsoft Edge and Google Chrome have already stopped supporting NPAPI plugins, and Mozilla Firefox will stop supporting them at the end of 2016. This effectively means that Java applets will be dead by the end of the year.

However, this does not necessarily mean that applet-based projects and code bases, which can’t easily be rewritten to use other technologies such as HTML5, have to become useless. Oracle recommends migrating Java applets to Java Web Start applications. This is a viable option as long as the application is not required to run embedded within a web page.

To convert an applet to a standalone Web Start application, you put the applet’s content into a frame and call the former applet’s init() and start() methods from the main() method of the main class.
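A minimal sketch of this conversion could look like the following. The applet class name MyApplet and the window size are made up for illustration; the main class name matches the JNLP example further below.

import javax.swing.JFrame;

public class WebStartMain {

    public static void main(String[] args) {
        // MyApplet is the former applet class (assumed to extend javax.swing.JApplet);
        // the name is hypothetical.
        MyApplet applet = new MyApplet();

        // Host the former applet's content in a normal top-level frame
        JFrame frame = new JFrame("Webstart Demo Project");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(applet);
        frame.setSize(800, 600);

        // Call the lifecycle methods that the browser plugin used to call
        applet.init();
        applet.start();

        frame.setVisible(true);
    }
}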

Java Web Start applications are launched via the Java Network Launching Protocol (JNLP). This involves the deployment of an XML file on the web server with a “.jnlp” file extension and content type “application/x-java-jnlp-file”. This file can be linked on a web page and looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://example.org/demo/" href="demoproject.jnlp">
 <information>
  <title>Webstart Demo Project</title>
  <vendor>Softwareschneiderei GmbH</vendor>
  <homepage>http://www.softwareschneiderei.de</homepage>
  <offline-allowed/>
 </information>
 <resources>
  <j2se version="1.7+" href="http://java.sun.com/products/autodl/j2se"/>
  <jar href="demo-project.jar" main="true"/>
  <jar href="additional.jar"/>
 </resources>
 <application-desc name="My Project" main-class="com.schneide.demo.WebStartMain">
 </application-desc>
 <security>
  <all-permissions/>
 </security>
 <update check="always"/>
</jnlp>

The JNLP file describes among other things the required dependencies such as the JAR files in the resources block, the main class and the security permissions.

The JAR files must be placed relative to the URL of the “codebase” attribute. If the application requires all permissions like in the example above, all JAR files listed in the resources block must be signed with a certificate.

deployJava.js

The JNLP file can be linked directly in a web page and the Web Start application will be launched if Java is installed on the client operating system. Additionally, Oracle provides a JavaScript API called deployJava.js, which can be used to detect whether the required version of Java is installed on the client system and redirect the user to Oracle’s Java download page if not. It can even create an all-in-one launch button with a custom icon:

<script src="https://www.java.com/js/deployJava.js"></script>
<script>
  deployJava.launchButtonPNG = 'http://example.org/demo/launch.png';
  deployJava.createWebStartLaunchButton('http://example.org/demo/demoproject.jnlp', '1.7.0');
</script>

Conclusion

The Java applet technology has now reached its “end of life”, but valuable applet-based projects can be saved with relatively little effort by converting them to Web Start applications.

Modern developer Issue #2: RPM like deployment on Windows

Deployment is a crucial step in every development project. Without shipping, no one would ever see our work (and we would get no feedback on whether our work is any good).


Often we fear deploying to production because of the effort involved and the errors we make. Questions like ‘what if we forget a step?’ or ‘what if the new version we install is buggy?’ buzz in our mind.


Deployment needs to be a non-event, a habit. For this we need to automate every step besides the first one: clicking a button to start deployment.


On Linux we have wonderful tools for this but what if you are stuck with deploying to Windows?


Fear not, brave developer! Even on Windows we can use a package manager to install new versions and roll back buggy ones. Let me introduce you to Chocolatey.


Chocolatey (or choco for short) uses the common NuGet package format. Although originally developed for the .NET platform, the format can be used for other platforms, too. In the following example we deploy a simple Java application, which we install as a service and as a scheduled task.
To set this up, we need a directory structure for the package like this:

[Figure: package directory structure with the .nuspec file at the top level and tools and archives subdirectories]

We need to create two files: one which specifies our package (my_project.nuspec) and a script which holds the deployment steps (chocolateyinstall.ps1). The specification file holds things like the package name, the package version (which can be overwritten when building the package) and some pointers to the project, source and license URLs. We can also configure which files and directories will be copied into the package: in our example we use a directory containing our archives (aptly named archives) and a directory containing the installation steps (named tools). Here is a simple example:

<?xml version="1.0" encoding="utf-8"?>
<!-- Do not remove this test for UTF-8: if “Ω” doesn’t appear as greek uppercase omega letter enclosed in quotation marks, you should use an editor that supports UTF-8, not this one. -->
<package xmlns="http://schemas.microsoft.com/packaging/2015/06/nuspec.xsd">
  <metadata>
    <id>my_project</id>
    <title>My Project (Install)</title>
    <version>0.1</version>
    <authors>Me</authors>
    <owners>Me</owners>
    <summary></summary>
    <description>Just an example</description>
    <projectUrl>http://localhost/my_project</projectUrl>
    <packageSourceUrl>http://localhost/git</packageSourceUrl>
    <tags>example</tags>
    <copyright>My company</copyright>
    <licenseUrl>http://localhost/license</licenseUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <releaseNotes></releaseNotes>
  </metadata>
  <files>
    <file src="tools\**" target="tools" />
    <file src="archives\**" target="archives" />
  </files>
</package>

This file tells choco how to build the package and what to include. For the deployment process itself we need a script file written in PowerShell.


A PowerShell primer

PowerShell is not as bad as you might think. Let’s take a look at some basic PowerShell syntax.

Variables

Variable names start with a $ sign. As in many other languages, ‘=’ is used for assignment.

$ErrorActionPreference = 'Stop'

Strings

Strings can be delimited with single (‘) or double quotes (“).

$serviceName = 'My Project'
$installDir = "c:\examples"

In double-quoted strings we can interpolate variables by using a $ directly or with curly braces.

$packageDir = "$installDir\my_project"
$packageDir = "${installDir}\my_project"

To escape double quotes inside a double-quoted string we need backticks (`):

"schtasks /end /f /tn `"${serviceName}`" "

Multiline strings (here-strings) are enclosed by @" and "@:

$cmdcontent = @"
cd /d ${packageDir}
java -jar ${packageName}.jar >> output.log 2>&1
"@

Method calls

Calling methods looks like a mixture of command-line calls and function calls, with capitalized Verb-Noun names.

Write-Host "Stopping and deleting current version of ${packageName}"
Get-Date -format yyyyddMMhhmm
Copy-Item $installFile $packageDir

Some helpful methods are:

  • Write-Host or echo: for writing to the console
  • Get-Date: getting the current time
  • Split-Path: returning the specified part of a path
  • Join-Path: concatenating a path with a specified part
  • Start-Sleep: pause n seconds
  • Start-ChocolateyProcessAsAdmin: starting an elevated command
  • Get-Service: retrieving a Windows service
  • Remove-Item: deleting a file or directory
  • Test-Path: testing for existence of a path
  • New-Item: creating a file or directory
  • Copy-Item: copying a file or directory
  • Set-Content: creating a file with the specified contents
  • Out-Null: swallowing output
  • Resolve-Path: display the path after resolving wildcards

The pipe (|) passes the output of one command on to the next; piping to Out-Null, for example, swallows the output.

Conditions

Conditions can be evaluated with if:

if ($(Get-Service "$serviceName" -ErrorAction SilentlyContinue).Status -eq "Running") {
}

-eq is used for testing equality, -ne for inequality.

Deploying with PowerShell

For installing our package we need to create the target directories and copy our archives:

$packageName = 'myproject'
$installDir = "c:\examples"
$packageDir = "$installDir\my_project"

Write-Host "Making sure $installDir is in place"
if (!(Test-Path -path $installDir)) {New-Item $installDir -Type Directory  | Out-Null}

Write-Host "Making sure $packageDir is in place"
if (!(Test-Path -path $packageDir)) {New-Item $packageDir -Type Directory  | Out-Null}

Write-Host "Installing ${packageName} to ${packageDir}"
Copy-Item $installFile $packageDir

When reinstalling we first need to delete existing versions:

$installDir = "c:\examples"
$packageDir = "$installDir\my_project"

if (Test-Path -path $packageDir) {
  Remove-Item -recurse $(Join-Path $packageDir "\*") -exclude *.conf, *-bak*, *-old*
}

Now we get to the meat: creating a Windows service.

$installDir = "c:\examples"
$packageName = 'myproject'
$serviceName = 'My Project'
$packageDir = "$installDir\my_project"
$cmdFile = "$packageDir\$packageName.cmd"

if (!(Test-Path ($cmdFile)))
{
    $cmdcontent = @"
cd /d ${packageDir}
java -jar ${packageName}.jar >> output.log 2>&1
"@
    echo "Dropping a ${packageName}.cmd file"
    Set-Content $cmdFile $cmdcontent -Encoding ASCII -Force
}

if (!(Get-Service "${serviceName}" -ErrorAction SilentlyContinue))
{
  echo "No ${serviceName} Service detected"
  echo "Installing ${serviceName} Service"
  Start-ChocolateyProcessAsAdmin "install `"${serviceName}`" ${cmdFile}" nssm
}

Start-ChocolateyProcessAsAdmin "set `"${serviceName}`" Start SERVICE_DEMAND_START" nssm

First we need to create a command (.cmd) file which starts our Java application. Installing a service that calls this command file is done via a helper called nssm (the Non-Sucking Service Manager). We set the service’s startup type to manual because we want to start and stop it periodically with the help of a scheduled task.

To enable a reinstall, we first stop an existing service.

$installDir = "c:\examples"
$serviceName = 'My Project'
$packageDir = "$installDir\my_project"

if (Test-Path -path $packageDir) {
  Write-Host $(Get-Service "$serviceName" -ErrorAction SilentlyContinue).Status

  if ($(Get-Service "$serviceName" -ErrorAction SilentlyContinue).Status -eq "Running") {
    Start-ChocolateyProcessAsAdmin "Stop-Service `"${serviceName}`""
    Start-Sleep 2
  }
}

Next we install a task with the help of the built-in schtasks command.

$packageName = 'myproject'
$serviceName = 'My Project'
$installDir = "c:\examples"
$packageDir = "$installDir\my_project"
$cmdFile = "$packageDir\$packageName.cmd"

echo "Installing ${serviceName} Task"
Start-ChocolateyProcessAsAdmin "schtasks /create /f /ru system /sc hourly /st 00:30 /tn `"${serviceName}`" /tr  `"$cmdFile`""

Stopping and deleting the task enables us to reinstall.

$packageName = 'myproject'
$serviceName = 'My Project'
$installDir = "c:\examples"
$packageDir = "$installDir\my_project"

if (Test-Path -path $packageDir) {
  Write-Host "Stopping and deleting current version of ${packageName}"
  Start-ChocolateyProcessAsAdmin "schtasks /delete /f /tn `"${serviceName}`" "
  Start-Sleep 2
  Start-ChocolateyProcessAsAdmin "schtasks /end /f /tn `"${serviceName}`" "
  Remove-Item -recurse $(Join-Path $packageDir "\*") -exclude *.conf, *-bak*, *-old*
}

tl;dr

Putting it all together looks like this:

$ErrorActionPreference = 'Stop'; # stop on all errors

$packageName = 'myproject'
$serviceName = 'My Project'
$installDir = "c:\examples"
$packageDir = "$installDir\my_project"
$cmdFile = "$packageDir\$packageName.cmd"
$currentDatetime = Get-Date -format yyyyddMMhhmm
$scriptDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$installFile = (Join-Path $scriptDir -ChildPath "..\archives\$packageName.jar") | Resolve-Path


if (Test-Path -path $packageDir) {
  Write-Host "Stopping and deleting current version of ${packageName}"
  Start-ChocolateyProcessAsAdmin "schtasks /delete /f /tn `"${serviceName}`" "
  Start-Sleep 2
  Start-ChocolateyProcessAsAdmin "schtasks /end /f /tn `"${serviceName}`" "
  Remove-Item -recurse $(Join-Path $packageDir "\*") -exclude *.conf, *-bak*, *-old*

  Write-Host $(Get-Service "$serviceName" -ErrorAction SilentlyContinue).Status

  if ($(Get-Service "$serviceName" -ErrorAction SilentlyContinue).Status -eq "Running") {
    Write-Host "Stopping and deleting current version of ${packageName}"
    Start-ChocolateyProcessAsAdmin "Stop-Service `"${serviceName}`""
    Start-Sleep 2
  }

  if ($(Get-Service "$serviceName"  -ErrorAction SilentlyContinue).Status -ne "Running") {
    Write-Host "Cleaning ${packageDir} directory"
    Remove-Item -recurse $(Join-Path $packageDir "\*") -exclude *.conf, *-bak*, *-old*
  }
}
 
Write-Host "Making sure $installDir is in place"
if (!(Test-Path -path $installDir)) {New-Item $installDir -Type Directory  | Out-Null}

Write-Host "Making sure $packageDir is in place"
if (!(Test-Path -path $packageDir)) {New-Item $packageDir -Type Directory  | Out-Null}

Write-Host "Installing ${packageName} to ${packageDir}"
Copy-Item $installFile $packageDir

if (!(Test-Path ($cmdFile)))
{
    $cmdcontent = @"
cd /d ${packageDir}
java -jar ${packageName}.jar >> output.log 2>&1
"@
    echo "Dropping a ${packageName}.cmd file"
    Set-Content $cmdFile $cmdcontent -Encoding ASCII -Force
}

if (!(Get-Service "${serviceName}" -ErrorAction SilentlyContinue))
{
  echo "No ${serviceName} Service detected"
  echo "Installing ${serviceName} Service"
  Start-ChocolateyProcessAsAdmin "install `"${serviceName}`" ${cmdFile}" nssm
}

Start-ChocolateyProcessAsAdmin "set `"${serviceName}`" Start SERVICE_DEMAND_START" nssm

echo "Installing ${serviceName} Task"
Start-ChocolateyProcessAsAdmin "schtasks /create /f /ru system /sc hourly /st 00:30 /tn `"${serviceName}`" /tr  `"$cmdFile`""

Finally

Now we just need to create the package in our build script. The package will be named my_project.version.nupkg. On our build machine we need to install choco. On the target machine we need the following tools installed: chocolatey and nssm (for service management). Now we can create the package with:

  choco pack --version=${version}

Copy it to the target machine and install the current version with:

choco install -f -y c:\\installations\\${archive.name} --version=${version}

Put these steps inside a build script, use your favourite continuous integration platform, and voilà.
Done.


A procedure to deal with big amounts of email

The problem

You probably know the problem already: a day with less than 500 emails feels like your internet connection might be lost. The amount of email you receive can accurately predict the time of day. In my case, I’ll always receive my 300th email each day right before lunch. Had I spent one minute on each message, I would have done nothing but read emails so far. And by the time I return from lunch, more emails have found their way into my inbox. My job description is not “email reader”. It actually is one of the lesser prioritized activities of my job. But I keep most answering times low and always know the content of my inbox. You’ll seldom hear “sorry, I haven’t seen your email yet” from me. How I keep the email flood in check is the topic of this blog entry. It’s my personal procedure, so nothing fancy with a big name, but you might recognize some influence from well-known approaches like “Getting Things Done” by David Allen.

The disclaimer

Disclaimer: You might entirely disagree with my approach. That’s totally acceptable. But keep in mind that it works for at least one person for a long time now, even if it doesn’t fit your style. Email processing seems to be a delicate topic, please keep your comments constructive. By the way, I’d love to hear about your approach. I’m always eager to learn and improve.

The analogy

Let me start with a common analogy: Your email inbox is like your mailbox. All letters you receive go through your mailbox. All emails you receive go through your inbox. That’s where this analogy ends and it was never useful to begin with. Your postman won’t show up every five minutes and stuff more commercial mailings, letters, postcards and post-it notes into your mailbox (raising that little flag again that indicates the presence of mail). He also won’t announce himself by ringing your doorbell (every five minutes, mind you) and proclaiming the first line of three arbitrarily chosen letters. Also, I’ve rarely seen mailboxes that contain hundreds if not thousands of letters, some read, some years old, in different states of decay. It’s a common sight for inboxes whose owners gave up on keeping up. I’ve seen high stacks of unanswered correspondence, but never in the mailbox. And this brings me to my new analogy for your email inbox: Your email inbox is like your desk. The stacks of decaying letters and magazines? Always on desks (and around them in extreme cases). The letters you answer directly? You bring them to your desk first. Your desk is usually clear of pending work documents, and this should be the case for your inbox, too.


The rules

My procedure to deal with the continuous flood of emails is based on three rules:

  • The inbox is the only queue of emails that needs attention. It is only filled with new emails (which require activity from my side) or emails that require my attention in the foreseeable future. The inbox therefore only contains pending work.
  • Email processing is done manually. I look at each email once and hopefully only once. There are no automatic filters that sort emails into different queues before I’ve seen them.
  • For every email, interaction results in an activity or decision on my side. No email gets “left there”.

Let me explain the context of the rules in a bit more detail:

My email account has lots and lots of folders to store all emails until eternity. The folders are organized in a hierarchy, but that doesn’t really matter, because every folder can hold emails. The hierarchy of folders isn’t pre-planned, it emerges from the urge to group emails together. It’s possible that I move specific emails from one folder to another because the hierarchy has changed. I will use automatic filters to process emails I’ve already read. But I will mark every email as read myself and not move unread emails around automatically. This narrows the search for new emails down to one place: the inbox. Every other folder is only for archiving, not for processing.

The sweeps

The amount of unread emails in my inbox is the amount of work I need to do to return to the “only pending work” state. Let’s say that I open my email reader and it shows 50 new messages in the inbox. Now I’ll have to process and archive these 50 emails to be in the same state as before I opened it. I usually do three separate processing steps:

  • The first sweep is to filter out any spam messages by immediate tell-tale signs. This is the only automatic filter that I’ll allow: the junk filter. To train it, I mark any remaining junk mail as spam and let the filter deal with it. I’m still not sure if the junk filter really makes it easier for me, because I need to scan through the junk regularly to “rescue” false positives (legitimate mails that were wrongfully sorted out), but the junk filter in combination with my fast spam sweep will lower the message count significantly. In our example, we now have 30 mails left.
  • The second sweep picks every email that is for information purposes only. Usually those mails are sent by software tools like issue trackers, wikis, code review tools or others. Machines don’t feel the effort of writing an email, so they’ll write a lot. Most of the time, the message content is only a few lines of text. I grab each of these mails and drag them into their corresponding folder. While I’m dragging, I read (and memorize) the content of the email. The problem with this kind of information is that it’s a lot of very small chunks of data for a lot of different contexts. In order to understand those messages, you have to switch your mental context in a matter of seconds. You can do it, but only if you aren’t interrupted by different mental states. So ignore any email that requires more than a few seconds of focused attention from you. Let them sit in your inbox along with the emails that require an answer. The only activity for mails included in your second sweep is “drag to folder & memorize”. Because machines write often, we now have 10 mails left in our example.
  • The third sweep now attends to each remaining mail individually. Here, the three-minute rule applies: if it takes less than three minutes to reply to the email right now, then do it right now and archive the mail in a suitable folder (you might even create a new folder for it). Remember: if you’ve processed an email, it leaves the inbox. If it takes more than three minutes, you need to schedule an attention slot for this particular message on your todo-list. This is the only time the email remains in the inbox, because it’s a signal of pending work. In our example, 7 emails could be answered with short replies, but 3 require deep concentration or some more text for the answer.

After the three sweeps, only emails that indicate pending work remain in the inbox. They only leave the inbox after I’ve dealt with them. I need to schedule a timebox to work on them, but after that, they’ll find themselves out of the inbox and in a folder. As soon as an email is in a folder, I forget about its existence. I need to remember the information that was in it, but not the message itself.

The effects

While dealing with each email manually sounds painfully slow at first, it becomes routine after only a short while. The three sweeps usually take less than five minutes for 50 emails, excluding the three major correspondence tasks that make their way as individual items on my work schedule. Depending on your ratio of spam to information messages to real correspondence, your results may vary.

The big advantage of dealing manually with each email is that I’ve seen each message with my own eyes. Every email that an automatic filter grabs and hides before you can see it should not have been sent in the first place – it’s simulating a pull notification scheme (you decide when to receive it by opening the folder) rather than leveraging the push notification scheme (you need to deal with it right now, not later) that emails are inherently. Things like timelines, activity streams or message boards are pull-oriented presentations of presumably the same information, perhaps that’s what you should replace your automatically hidden emails with.

You’ll have an ever-growing archive with lots of folders for different things (think of a shelf full of document files in real life), but you’ll never look into them as long as you don’t desperately search for “that one mail from 3 months ago”. You’ll also have a clean inbox, preferably in “blank slate” condition or at least containing only emails that require actions from you. So you’ll have a clear overview of your pending work (the things in your inbox) and the work already done (the things in your folders).

The epilogue

That’s when you discover that part of your work can be described as “look at each email and move it to a folder”, as if you were an official in charge of virtual paper. We have virtualized our paperwork, letters, desks, shelves and document files. But the procedures to deal with them are still the same.

Your turn now

How do you process your emails? What are your rules and habits? What are your experiences with folders vs. tags? I would love to hear from you – in a comment, not an email.

C++ Coroutines on Windows with the Fiber API

Last week, I had the chance to try out coroutines as a way to cooperatively interleave long tasks with event processing. Unlike threads, where interaction between threads can happen at any time, coroutines need to yield control explicitly, which arguably makes synchronisation a little simpler, especially in (legacy) systems that are not designed for concurrency. Of course, since coroutines do not run at the same time, you do not get the perks of concurrency either.

If you don’t know coroutines, think of them as functions that can be paused and resumed.

Unlike many other languages, C++ does not have built-in support for coroutines just yet. There are, however, several alternatives. On Windows, you can use the Fiber API to implement coroutines easily.

Here’s some example code of how that works:

auto coroutine=make_shared<FiberCoroutine>();
coroutine->setup([](Coroutine::Yield yield)
{
  for (int i=0; i<3; ++i)
  {
    cout << "Coroutine " 
         << i << std::endl;
    yield();
  }
});

int stepCount = 0;
while (coroutine->step())
{
  cout << "Main " 
       << stepCount++ << std::endl;
}

Somewhat surprisingly, at least if you have never seen coroutines before, this will produce the two outputs alternately:

Coroutine 0
Main 0
Coroutine 1
Main 1
Coroutine 2
Main 2

Interface

Since fibers are not the only way to implement coroutines, and since we want to keep our client code nicely insulated from the Windows API, there’s a pure-virtual base class as an interface:

class Coroutine
{
public:
  using Yield = std::function<void()>;
  using Run = std::function<void(Yield)>;

  virtual ~Coroutine() = default;
  virtual void setup(Run f) = 0;
  virtual bool step() = 0;
};

Typically, creation of a Coroutine type allocates all the resources it needs, while setup “primes” it with an inner “Run” function that can use an implementation-specific “Yield” function to pass control back to the caller, which is whoever calls step.

Implementation

The implementation using fibers is fairly straight-forward:

class FiberCoroutine
  : public Coroutine
{
public:
  FiberCoroutine()
  : mCurrent(nullptr), mRunning(false)
  {
  }

  ~FiberCoroutine()
  {
    if (mCurrent)
      DeleteFiber(mCurrent);
  }

  void setup(Run f) override
  {
    if (!mMain)
    {
      mMain = ConvertThreadToFiber(NULL);
    }
    mRunning = true;
    mFunction = std::move(f);

    if (!mCurrent)
    {
      mCurrent = CreateFiber(0,
        &FiberCoroutine::proc, this);
    }
  }

  bool step() override
  {
    SwitchToFiber(mCurrent);
    return mRunning;
  }

  void yield()
  {
    SwitchToFiber(mMain);
  }

private:
  void run()
  {
    while (true)
    {
      mFunction([this]
                { yield(); });
      mRunning = false;
      yield();
    }
  }

  static VOID WINAPI proc(LPVOID data)
  {
    reinterpret_cast<FiberCoroutine*>(data)->run();
  }

  static LPVOID mMain;
  LPVOID mCurrent;
  bool mRunning;
  Run mFunction;
};

LPVOID FiberCoroutine::mMain = nullptr;

The idea here is that the caller and the callee are both fibers: lightweight threads without concurrency. Running the coroutine switches to the callee’s fiber, i.e. the run function’s fiber. Yielding switches back to the caller. Note that it is currently assumed that all callers are on the same thread, since each thread that participates in the switching needs to be converted to a fiber initially, even the caller. The current version only keeps the fiber for the initial thread in a single static variable. However, it should be possible to support multiple threads by replacing the single static fiber pointer with a map that maps each thread to its associated fiber.
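Such a lookup is not implemented here, but it could be sketched roughly like this (a hypothetical helper, assuming C++11 and the Windows Fiber API):

#include <windows.h>
#include <map>
#include <mutex>

// Hypothetical helper, not part of the code above: returns the "main" fiber of
// the calling thread, converting the thread to a fiber on first use.
static LPVOID mainFiberForCurrentThread()
{
  static std::map<DWORD, LPVOID> mainFibers;
  static std::mutex mainFibersMutex;

  std::lock_guard<std::mutex> lock(mainFibersMutex);
  DWORD threadId = GetCurrentThreadId();
  auto found = mainFibers.find(threadId);
  if (found == mainFibers.end())
  {
    // First call on this thread: convert it so we can switch back to it later
    found = mainFibers.insert({threadId, ConvertThreadToFiber(NULL)}).first;
  }
  return found->second;
}

yield() would then switch to mainFiberForCurrentThread() instead of the single static mMain.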

Note that you cannot return from the fiberproc – that will just terminate the whole thread! Instead, just yield back to the caller and either re-use or destroy the fiber.

Assessment

Fiber-based coroutines are a nice and efficient way to model non-linear control flow explicitly, but they do not come without downsides. For example, while this example worked flawlessly when compiled with Visual Studio, a Cygwin build just terminates without even an error. If you’re used to working with the Visual Studio debugger, it may surprise you that the caller gets hidden completely while you’re in the run function. The run function’s stack completely replaces the caller’s stack until you call yield(). This means that you cannot find out who called step(). On the other hand, if you’re actually doing a lot of processing in the run function, this is quite nice for profiling, as the “processing” call tree seemingly has its own root in the call tree.

I just wish the Visual Studio debugger had a way to view the states of the different fibers like it has for threads.

Alternatives

  • On Linux, you can use the ucontext API.
  • Visual Studio 2015 also has another, newer, implementation.
  • Coroutines can be implemented using threads and condition-variables.
  • There’s also Boost.Coroutine, if you need an independent implementation of the concept. From what I gather, they only use Fibers optionally, and otherwise do the required “trickery” themselves. Maybe this even keeps the caller-stack visible – it is certainly worth exploring.
Using passwords with Jenkins CI server

For many of our projects the Jenkins continuous integration (CI) server is an important cornerstone. The well-known “works on my machine” means nothing in our company. Only code in repositories and built, tested and packaged by our CI servers counts. In addition to building, testing, analyzing and packaging our projects, we use CI jobs for deployment and supervision, too. In such jobs you often need some sort of credentials like username/password or public/private keys.

If you are using username/password credentials, they appear not only in the job configuration but also in the console build logs. In most cases this is undesirable, but luckily there is an easy way around it: using the Environment Injector Plugin.

In the plugin you can “inject passwords to the build as environment variables” for use in your commands and scripts.

[Screenshot: the inject-passwords configuration]

The nice thing about this is that the passwords are not only masked in the job configuration (like above) but also in the console logs of the builds!

[Screenshot: masked passwords in the console log]

Another alternative doing mostly the same is the Credentials Binding Plugin.

There is a lot more to explore when it comes to authentication and credential management in Jenkins, as you can define credentials at the global level, use public/private key pairs and ssh agents, connect to an LDAP database and much more. Just do not sit back and provide security-related stuff in plaintext in job configurations or your deployment scripts!

A zine for the modern developer

Martin Fowler once said:

Test for value of an article isn’t novelty of its ideas, but how many who could benefit who haven’t considered them, or are using them badly

So I thought that I should share some things I am learning and have learned as a software developer. Maybe one day I will write a book, but that requires a huge effort. Luckily, Julia Evans (thank you!) just wrote a post about the same dilemma. She decided to create and publish a lean version of a book: a zine.
This pushed me to do my own zine. I believe that a modern software developer is confronted with many topics besides programming, so the topics will range from software architecture through user experience (UX) to marketing.
My first zine is about CQRS, an architecture I am excited about. Enjoy (PDF download).