Database Migration Categories

October 6, 2014

Most long-running projects need to manage changes to the database schema of the system and data migrations in some way. As the system evolves, new datatypes/tables and properties/columns are added, some are removed and others are changed. Relationships between objects change as well, often in unpredictable ways, so you have to deal with all of these changes somehow. Not all changes are equal in nature, so we handle them differently!

One tool we use to manage our database is Liquibase. The units of change are called migrations and are logged in the database itself (table “databasemigrations”), so you can actually see which state the schema is in. Our experience after using such tools for several years is very positive: no manual work on the database of a production system is needed, and new installations automatically create the database schema matching the running software. There are, however, a few situations where you want to do things manually. So we identified three types of changes and defined how to handle each of them:

1. Structural changes

Structural changes modify the database schema but not the data. In some cases you have to take care of default values for NOT NULL columns. These changes are handled by database management/versioning tools. They are relevant for all instances and specific to each deployed version of the system. The changes are stored with the source code under version control. Most of the time they are needed when extending the functionality of the system and implementing new features. In SQL the typical commands are CREATE, DROP and ALTER.
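
For illustration, such a migration might contain statements like these (the invoice and legacy_import tables are made up for this sketch, and the exact syntax varies by database):

-- new feature: invoices get a status; existing rows receive the default value
ALTER TABLE invoice ADD COLUMN status VARCHAR(20) DEFAULT 'OPEN' NOT NULL;

-- a table that is no longer needed
DROP TABLE legacy_import;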

2. Data rule changes

We call changes to the way the data is stored data rule changes. Examples are changing the representation of an enum from integer to string or turning a one-to-many relation into a many-to-many relation. In such a case the schema and, more importantly, the existing data have to be changed. For these migrations you do not need explicit ids of objects in the database; you change all entries in the same way according to the new rules. The changes can be applied to every instance of the system that is updated to the new database (and software) version. Like structural changes they are executed using the database migration tool and stored under version control. The typical SQL command, after the necessary structural changes, is UPDATE with a WHERE clause and sometimes CASE WHEN statements.
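
A sketch of the enum example, assuming a hypothetical orders table whose integer state column is replaced by a string representation (again, the exact syntax varies by database):

ALTER TABLE orders ADD COLUMN state_text VARCHAR(20);

UPDATE orders SET state_text =
  CASE state
    WHEN 0 THEN 'NEW'
    WHEN 1 THEN 'SHIPPED'
    WHEN 2 THEN 'CANCELLED'
    ELSE 'UNKNOWN'
  END;

ALTER TABLE orders DROP COLUMN state;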

3. Data modifications

Sometimes you have to change individual data sets of one instance of a system. That may be because of a bug in the software or corrupted/wrong entries that cannot be fixed using the system itself, e.g. as super user. Here you fix the entries of one instance of the system manually or with an SQL script. You will usually name specific object ids of the database and perform these exact changes only on this instance. It may be necessary to perform similar tasks on other instances using different object ids. Because of this one-time and instance-specific nature of the changes we do not use a migration tool but some kind of SQL shell. Such manual changes have to be performed with extra caution and need to be thoroughly documented, e.g. in your issue tracker and wiki. If possible use a non-destructive approach and make backups of the data before executing the changes. Typical SQL statements are UPDATE or DELETE containing ids or business keys.
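
Such a fix might look like this sketch, where the table, the ids and the corrected value are specific to exactly one instance and are documented in the corresponding issue:

-- the affected rows were backed up beforehand
UPDATE project SET owner_id = 17 WHERE id IN (4711, 4712);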

Conclusion

With the categories and guidelines above, developers can easily figure out how to deal with changes to the database. They can keep the software, database schema and customer data up-to-date, nice and clean over many years, while improving and evolving the system and managing several instances that possibly run different versions of it.


Build your own performance and log monitoring solution

September 29, 2014

Tips for better performance, like using views or reducing the complexity of your algorithms, only help when you know where to improve. To find the places where speed is needed you can build extensive load and performance tests or, even better, monitor the performance of your app in production. Many solutions exist which give you varying levels of detail (and their costs vary, too). But often a simple solution is enough. We start with a typical (CRUD) web app and build a monitor and a tool to analyse response times. The goal is to see the ten worst performing queries of the last week. We want an answer to this question continuously, and maybe we want to ask additional questions, like: what were the details of the responses, e.g. user/role, URL, max/min, views or persistence? A web app built on Rails gives us a lot of the measurements we need out of the box, but how do we extract this information from the logs?
Typical log entries from an app running JRuby on Rails 4 on Tomcat look like this:

Sep 11, 2014 7:05:29 AM org.apache.catalina.core.ApplicationContext log
Information: I, [2014-09-11T07:05:29.455000 #1234]  INFO -- : [19e15e24-a023-4a33-9a60-8474b61c95fb] Started GET "/my-app/" for 127.0.0.1 at 2014-09-11 07:05:29 +0200

...

Sep 11, 2014 7:05:29 AM org.apache.catalina.core.ApplicationContext log
Information: I, [2014-09-11T07:05:29.501000 #1234]  INFO -- : [19e15e24-a023-4a33-9a60-8474b61c95fb] Completed 200 OK in 46ms (Views: 15.0ms | ActiveRecord: 0.0ms)

The key to identifying log entries belonging to the same request is the request identifier, in our case 19e15e24-a023-4a33-9a60-8474b61c95fb. To see it in the log you need to add the following line to your config/environments/production.rb:

config.log_tags = [ :uuid ]

Now we could parse the logs manually and store them in a database. That’s essentially what we do, but we use some tools from the open source community to help us. Logstash is a tool to collect, parse and store logs and events. It reads the logs via so-called inputs, parses, aggregates and filters them with the help of filters, and stores them via outputs. Since Logstash is made by Elasticsearch – the company – we use Elasticsearch – the product – as our database. Elasticsearch is a powerful search and analytics platform. Think: a REST frontend to Lucene – only much better.

So first we need a way to read in our log files. Logstash stores its config in logstash.conf and reads files with the file input:

input {
  file {
    path => "/path/to/logs/localhost.2014-09-*.log"
    # uncomment these lines if you want to reread the logs
    # start_position => "beginning"
    # sincedb_path => "/dev/null"
    codec => multiline {
      pattern => "^%{MONTH}"
      what => "previous"
      negate => true
    }
  }
}

There are some interesting things to note here. We use wildcards to match the desired input files. If we want to reread one or more of the log files, we need to tell Logstash to start from the beginning of the file and to forget that the file was already read. Logstash remembers the position and the last read time of each file in a sincedb file; to ignore that we simply specify /dev/null as the sincedb_path. Inputs (and outputs) can have codecs. Here we join all lines that do not start with a month onto the previous line. This lets us record stack traces or other multiline log entries as one event.
Add an output to stdout to the config file:

output {
  stdout {
    codec => rubydebug{}
  }
}

Start logstash with

logstash -f logstash.conf --verbose

and you should see your log entries as JSON output with the log line in the field message.
To analyse the events we need to categorise or tag them. For this we use the grep filter:

filter {
  grep {
    add_tag => ["request_started"]
    match => ["message", "Information: .* Started .*"]
    drop => false
  }
}

Grep normally drops all non-matching events, so we need to pass drop => false. This filter adds a tag to all events whose message field matches our regexp. We can add filters for matching the completed and error events accordingly:

  grep {
    add_tag => ["request_completed"]
    match => ["message", "Information: .* Completed .*"]
    drop => false
  }
  grep {
    add_tag => ["error"]
    match => ["message", "\:\:Error"]
    drop => false
  }

Now we know which event starts a request and which one completes it, but how do we extract the duration and the request id? For this Logstash has a filter named grok. It is one of the more powerful filters: it can extract information and store it in fields via regexps. Furthermore it comes with predefined expressions for common things like timestamps, IP addresses, numbers, log levels and much more. Take a look at the source to see the full list. Since these patterns can get complex, there’s a handy little tool called grok debug with which you can test your patterns.
If we want to extract the URL from the started event we could use:

grok {
    match => ["message", ".* \[%{TIMESTAMP_ISO8601:timestamp} \#%{NUMBER:}\].*%{LOGLEVEL:level} .* \[%{UUID:request_id}\] Started %{WORD:method} \"%{URIPATHPARAM:uri}\" for %{IP:} at %{GREEDYDATA:}"]
 }

For the duration of the completed event it looks like:

grok {
    match => ["message", ".* \[%{TIMESTAMP_ISO8601:timestamp} \#%{NUMBER:}\].*%{LOGLEVEL:level} .* \[%{UUID:request_id}\] Completed %{NUMBER:http_code} %{GREEDYDATA:http_code_verbose} in %{NUMBER:duration:float}ms (\((Views: %{NUMBER:duration_views:float}ms \| )?ActiveRecord: %{NUMBER:duration_active_record:float}ms\))?"]
 }

Grok patterns are written inside %{} like %{NUMBER:duration:float}, where NUMBER is the name of the pattern, duration the (optional) target field and float the data type. As of this writing grok only supports float and int as data types; everything else is stored as a string.
Storing the events in Elasticsearch is straightforward: replace your stdout output with an elasticsearch output (or add it alongside):

output {
  elasticsearch {
    protocol => "http"
    host => localhost
    index => myindex
  }
}

Looking at the events you see that the start events contain the URL and the completed events the duration. For analysis it would be easier to have both in one place, but the default filters and codecs do not support this. Fortunately it is easy to develop your own custom filter. Since Logstash is written in JRuby, all you need to do is write a Ruby class that implements register and filter:

require "logstash/filters/base"
require "logstash/namespace"

class LogStash::Filters::Transport < LogStash::Filters::Base

  # Setting the config_name here is required. This is how you
  # configure this filter from your logstash config.
  #
  # filter {
  #   transport { ... }
  # }
  config_name "transport"
  milestone 1

  def initialize(config = {})
    super
    @threadsafe = false
    @running = Hash.new
  end 

  def register
    # nothing to do
  end

  def filter(event)
    if event["tags"].include? 'request_started'
      @running[event["request_id"]] = event["uri"]
    end
    if event["tags"].include? 'request_completed'
      event["uri"] = @running.delete event["request_id"]
    end
  end
end

We name the class and config name ‘transport’ and declare it as milestone 1 (since it is a new plugin). In the filter method we remember the URL for each request and store it in the corresponding completed event. Put this into a file named transport.rb in logstash/filters and call logstash with the path to the parent of the logstash dir:

logstash --pluginpath . -f logstash.conf --verbose

All our events are now in Elasticsearch. Point your browser to http://localhost:9200/_search?pretty (or wherever your Elasticsearch is running) and it should return the first few events. You can test some queries like _search?q=tags:request_completed (to see the completed events) or _search?q=duration:[1000 TO *] (to get the events with a duration of 1000 ms and more). Now to the question we want answered: what are the worst top ten response times by URL? For this we need to group the events by URL (field uri) and calculate the average duration:

curl -XPOST 'http://localhost:9200/_search?pretty' -d '
{
  "size":0,
  "query": {
    "query_string": {
      "query": "tags:request_completed AND timestamp:[7d/d TO *]"
     }
  },
  "aggs": {
    "group_by_uri": {
      "terms": {
        "field": "uri.raw",
        "min_doc_count": 1,
        "size":10,
        "order": {
          "avg_per_uri": "desc"
        }
      },
      "aggs": {
        "avg_per_uri": {
          "avg": {"field": "duration"}
        }
      }
    }
  }
}'

Note that we use uri.raw to get the whole URL: Elasticsearch tokenizes the URL at every /, so grouping by uri would mean grouping by every part of the path. now-7d/d means 7 days ago. All groups of events are included, but if we want to limit our aggregation to groups with a minimum size we need to raise min_doc_count. Now we have an answer, but it is pretty unreadable. Why not have a web page with a list?
Since we don’t need a whole web app, we can just use Angular and the Elasticsearch JavaScript API to write a small page. It displays the top ten list, and when you click on an entry it lists all events for the corresponding URL.

<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.11/angular.min.js"></script>
    <script type="text/javascript" src="elasticsearch-js/elasticsearch.angular.js"></script>
    <script type="text/javascript" src="monitoring.js"></script>

    <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/css/bootstrap.min.css">
    <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/css/bootstrap-theme.min.css">
  </head>
  <body>
    <div ng-app="Monitoring" class="container" ng-controller="Monitoring">
      <div class="col-md-6">
        <h2>Last week's top ten slowest requests</h2>
        <table class="table">
          <thead>
            <tr>
              <th>URL</th>
              <th>Average Response Time (ms)</th>
            </tr>
          </thead>
          <tbody>
            <tr ng-repeat="request in top_slow track by $index">
              <td><a href ng-click="details_of(request.key)">{{request.key}}</a></td>
              <td>{{request.avg_per_uri.value}}</td>
            </tr>
          </tbody>
        </table>
      </div>
      <div class="col-md-6">
        <h3>Details</h3>
        <table class="table">
          <thead>
            <tr>
              <th>Logs</th>
            </tr>
          </thead>
          <tbody>
            <tr ng-repeat="line in details track by $index">
              <td>{{line._source.message}}</td>
            </tr>
          </tbody>
        </table>
      </div>
    </div>
  </body>
</html>

And the corresponding Angular app:

var module = angular.module("Monitoring", ['elasticsearch']);

module.service('client', function (esFactory) {
  return esFactory({
    host: 'localhost:9200'
  });
});

module.controller('Monitoring', ['$scope', 'client', function ($scope, client) {
var indexName = 'myindex';
client.search({
index: indexName,
body: {
  "size":0,
  "query": {
    "query_string": {
      "query": "tags:request_completed AND timestamp:[now-1d/d TO *]"
     }
  },
  "aggs": {
    "group_by_uri": {
      "terms": {
        "field": "uri.raw",
        "min_doc_count": 1,
        "size":10,
        "order": {
          "avg_per_uri": "desc"
        }
      },
      "aggs": {
        "avg_per_uri": {
          "avg": {"field": "duration"}
        }
      }
    }
  }
}
}, function(error, response) {
  $scope.top_slow = response.aggregations.group_by_uri.buckets;
});

$scope.details_of = function(url) {
client.search({
index: indexName,
body: {
  "size": 100,
  "sort": [
    { "timestamp": "asc" }
  ],
  "query": {
    "query_string": {
      "query": 'timestamp:[now-1d/d TO *] AND uri:"' + url + '"'
     }
  },
}
}, function(error, response) {
  $scope.details = response.hits.hits;
});
};
}]);

This is just a start. Now we could filter out the errors, combine logs from different sources or write visualisations with d3. At least we now see where the performance problems lie and can take further steps at the right places.


The power of analysis

September 22, 2014

Quite some years ago, I heard a story about the power of analysis that happened even deeper in the past. Its moral holds true until today, though. It’s the insight that to fully analyse a particular challenge or task, you have to think outside your own box. Let’s hear the story before we analyse it:

The problem

A small company for sensor technology usually solved customer problems like contactless distance measurement or gas mixture control. The team was informed about all the latest sensors and trained to come up with solutions even to really challenging tasks. This led to a word-of-mouth recommendation that brought in a new customer, who promptly described his problem.

The customer ran a workshop for physically handicapped people that mostly worked with wood and produced a wide variety of products that got sold on various markets. One product was a Christmas lamp in the shape of a star. It proved to be a best-seller and had a good economic ratio. At least it could have had, if only the rejects rate were lower. Assembling the lamp from little wooden laths was difficult for skilled workers and even harder for skilled handicapped workers. The main difficulty was to glue the laths in just the right angle to produce the desired star shape. The customer needed some set-up of sensors that would indicate to the worker when the angle was right. He imagined something like a cheap navigation system that would yell/display “left” and “right” until the angle was “correct”.

The solution finding

The team accepted the task and started the creative solution process that lasted several days of thinking, doodling, researching and scribbling. Then, the team gathered for a solution finding session. A multitude of ideas were presented and almost instantly rejected. From laser distance measurement over acoustic ultrasonic sensors to camera-based image evaluation, everything cool and remotely feasible was presented and rejected because nothing had an even remote chance to succeed outside of laboratory settings. Not one approach survived the applicability check. The team was devastated and returned to the creative phase, if not as reckless as the first time.

The solution

A few days later, the second solution finding session had only a few new ideas, none standing a chance. Finally, a student spoke up: “This isn’t a problem that should be solved with sensors!”. Well, this was a bold sentence in a team of sensor technologists. The student explained: “The real problem is the placement of the small laths, not the correct angle itself. Even if we build a sensor that can reliably indicate right and wrong angles, it would just tell the worker that whatever he tries, he won’t get it right. These workers don’t need supervision, they need assistance. No sensor is going to deliver that.” The team was baffled. The student went on: “I thought about a solution that will assist the worker during the assembly, but it’s nothing we will get rich with. A simple mould in the right shape, perhaps non-adhering to the glue they use for the wood, would let them produce one half of the lamp. Glue two of those halves together and everything fits. No need for batteries even.”

When the sensor technology company proposed this solution to the customer, he laughed loud and long. It was the most elegant and inexpensive solution he never thought of. It was exactly what was needed. It worked perfectly from the first prototypes onward. The Christmas lamp rejects rate dwindled to almost zero instantly. In short: perfect score. Just that the sensor technology company wouldn’t earn anything with maintenance or improvements was a minor drawback.

Good analysis

This story is my illustrative material when I have to explain what good analysis is. Let’s take a look at the bafflement of the team: They had all started their solution finding with the premise that this was a problem inside their area of expertise. Even the customer said so. Good analysis works out the real nature of a problem regardless of what anybody says about it. This includes any description given by the customer or even the wood workers in charge of the actual work. Good analysis finds a solution that fits the problem, not the field of expertise of the analyst.

Analysis is the process of thinking in terms of the problem space. In this story, an important part of the analysis was already done by the customer: most of the rejects have wrong angles, so we need to make sure the angles are correct, and we need a machine to tell us, because apparently the workers themselves can’t. The machine needs sensors, so let’s assign a sensor company to the task. This was the initial premise that nobody except the student questioned. And this was the half of the analysis that nobody bothered to repeat. You cannot really understand the problem if you begin your thinking mid-flight.

Applying good analysis

It’s easy to tell a story (even if it really happened) and derive insights from it. It’s much harder to apply these insights in your own work. The crucial step is to fully understand the actual problem that should be solved (in the story: correct placement instead of correct angle). The next step is to incorporate the value system of the customer: if I alter some key characteristics of the solution, will it still serve the customer’s actual needs? In the story: a cheap aluminium mould serves the customer even better than some expensive fancy machine. The mould can be duplicated nearly infinitely, the machine probably not. The mould is grasped instantly, the machine needs instructions. The mould keeps working long after the machine ran out of battery. The mould assists, the machine merely scolds.

If, after thoroughly working on these two steps, the solution still lies inside your field of expertise, you can proceed to design the solution. You’ve just left the analysis process to concentrate on one possible solution. That’s all right, but remember to return to the earliest steps of analysis when you get stuck. Designing a solution for a falsely analysed premise almost always leads nowhere in the long run.


Ansible: Play it again, Sam

September 15, 2014

Recently we started using Ansible for the provisioning of some of our servers. Ansible is one of many configuration management / provisioning tools that are popular right now. Puppet and Chef are probably more widely known representatives of their kind, but what attracted us to Ansible was the fact that it’s agentless: the target machines don’t need an agent installed, all you need is remote access via SSH. Well, almost. It turns out that Python is also required on the remote machines, otherwise you’ll be limited to a very basic set of functionality (the raw module). Fortunately, most Linux distributions have Python installed by default.

With Ansible you describe the desired target configuration as a sequence of tasks in a YAML file called a playbook: package installation, copying files, enabling and starting services, etc. The playbook is semi-declarative. Each step usually describes a goal, e.g. package XY should be present, and action is only taken if necessary. On the other hand it’s also very imperative: steps are executed sequentially and you can have conditionals and loops (e.g. “with_items”). You can also define handlers, which are executed once after they have been notified, for example if you want to restart the Apache web server after its configuration has changed.
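
A minimal playbook might look like the following sketch (package, group and file names are made up here, and it assumes a Debian-based target):

---
- hosts: webservers
  tasks:
    - name: ensure Apache is installed
      apt: name=apache2 state=present

    - name: install some helper packages
      apt: name={{ item }} state=present
      with_items:
        - curl
        - ntp

    - name: deploy the site configuration
      copy: src=files/mysite.conf dest=/etc/apache2/sites-available/mysite.conf
      notify: restart apache

  handlers:
    - name: restart apache
      service: name=apache2 state=restarted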

Before a playbook is applied to a remote machine Ansible will query “facts” about this machine. These facts are available as variables in the playbook. You can also define your own variables.
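
For example (an excerpt from a play, with a made-up variable), tasks can react to facts or use your own variables:

  vars:
    app_user: myapp

  tasks:
    - name: show which distribution we are running on
      debug: msg="{{ ansible_distribution }} {{ ansible_distribution_version }}"

    - name: create the application user
      user: name={{ app_user }} state=present

    - name: install ntp only on Debian-like systems
      apt: name=ntp state=present
      when: ansible_os_family == "Debian"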

A playbook is usually applied to a set of machines. Available machines are listed in a separate file, the inventory, where they can be grouped by roles. With one command you can configure or update all the machines of a specific role at once. You can also execute a “dry run”, which simulates a playbook run and tells you what changes would be applied.
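
A small inventory might look like this (host and file names are again made up):

[webservers]
web01.example.com
web02.example.com

[dbservers]
db01.example.com

Configuring one group of machines or simulating a run is then a single command each:

ansible-playbook -i hosts site.yml --limit webservers
ansible-playbook -i hosts site.yml --check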

So far our experience with Ansible has been good. The concepts are easy to grasp. YAML syntax requires getting used to, but at least it’s not XML. On the website the actual documentation is a bit hidden among promotion for their commercial products, but you can also directly visit docs.ansible.com.


Configuring your Java webapp

September 8, 2014

There are several ways to configure your Java Servlet-based webapp with values for deployment-specific things like the database connection or directories for data and logs. Let us take a look at the alternatives and their benefits and drawbacks.

web.xml

The deployment descriptor (web.xml) resides inside your WAR file. You can specify init parameters there which are available via the ServletContext.
web.xml

<context-param>
    <param-name>LogDirectory</param-name>
    <param-value>/myapp/logs</param-value>
</context-param>

Accessing the parameter in your Servlet:

String logDirectory = getServletContext().getInitParameter("LogDirectory");
// do something with it

The nice thing about this solution is the self-containment of your packaged application. The price is building a customized web.xml/WAR for each deployment instance.

Environment variables

Another possibility is to pass system properties to your servlet container at startup, e.g. using the JAVA_OPTS environment variable in the case of Apache Tomcat.
tomcat.conf

...
JAVA_OPTS="-DLogDirectory=/myapp/logs"
...

They can be easily accessed using

    System.getProperty("LogDirectory");

This is very easy to employ but has several drawbacks:

  • you have to mess with the configuration of your servlet container/host to set the variables
  • they are valid for the whole servlet container, possibly interfering with other webapps or the container itself
  • the settings are harder to find than in one file that you deliver with your webapp
  • a server restart is needed to change the values

context.xml

Using context.xml and JNDI is our preferred way of configuring our webapps. You can ship a default context.xml in the META-INF directory of your WAR and easily configure resources and beans:

<Context>
    <Environment name="LogDirectory" value="/myapp/logs" type="java.lang.String" />
    <!-- Development DB -->
    <Resource name="jdbc/devdb" auth="Container" type="javax.sql.DataSource"
               maxActive="100" maxIdle="30" maxWait="-1"
               username="sa" password="" driverClassName="org.h2.Driver"
               url="jdbc:h2:mem:devDB;mode=Oracle"/>
</Context>

A context.xml outside of your WAR has to be copied into the context configuration directory of your servlet container, e.g.:

cp context.xml /etc/tomcat7/Catalina/localhost/myapp.xml

You can then access the configuration items using JNDI:

Context ctx = (Context) new InitialContext().lookup("java:comp/env");
String logDirectory = (String) ctx.lookup("LogDirectory");
// do something

You can of course use context-params and the ServletContext to retrieve simple String parameters stored in the context.xml instead of web.xml, too.
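
For example (a sketch; the Parameter element is Tomcat-specific and reuses the same made-up LogDirectory entry as above), a parameter declared in the context.xml becomes a context init parameter that you read with getServletContext().getInitParameter("LogDirectory"):

<Context>
    <Parameter name="LogDirectory" value="/myapp/logs" override="false"/>
</Context>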

The name of the context file must match the name of the deployed application. That way we can deploy the same WAR on several target machines and configure the applications separately. The context.xml not only contains the JNDI datasources (which is very common) but also configuration parameters that may change for each target system.


Recap of the Schneide Dev Brunch 2014-08-31

September 1, 2014

Yesterday, we held another Schneide Dev Brunch, a regular brunch on a Sunday, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was well-attended this time but the weather didn’t allow for an outside session. There were lots of topics and chatter. As always, this recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you probably find this list inconclusive:

Docker – the new (hot) kid in town

Docker is the hottest topic in software commissioning this year. It’s a lightweight virtualization technology, except that you don’t obtain full virtual machines. It’s somewhere between a full virtual machine and a simple chroot (change root). And it’s still not recommended for production usage, but is already in action in this role in many organizations.
We talked about the magic of git and the UnionFS that lie beneath the surface, the ease of migration and disposal, and even the relative painlessness of running it on Windows. I can earnestly say that Docker is the technology that everyone will have had a look at before the year is over. We at the Softwareschneiderei run an internal Docker workshop in September to make sure this statement holds true for us.

Git – the genius guy with issues

The discussion changed over to Git, the distributed version control system that supports every versioning scheme you can think of but won’t help you if you entangle yourself in the tripwires of your good intentions. Especially the surrounding tooling was of interest. Our attendees had experience with SmartGit and Sourcetree, both capable of awesome dangerous stuff like partial commits and excessive branching. We discovered a lot of different work styles with Git and can agree that Git supports them all.
When we mentioned code review tools, we discovered a widespread suspicion of heavy-handed approaches like Gerrit. There seems to be an underlying motivational tendency to utilize reviews to foster a culture of command and control. On a technical level, Gerrit probably messes with your branching strategy in an unpleasant way.

Teamwork – the pathological killer

We had a long and deep discussion about teamwork, liability and conflicts. I cannot reiterate everything, but give a few pointers how the discussion went. There is a common litmus test about shared responsibility – the “hold the line” mindset. Every big problem is a problem of the whole team, not the poor guy that caused it. If your ONOZ lamp lights up and nobody cares because “they didn’t commit anything recently”, you just learned something about your team.
Conflicts are inevitable in every group of people larger than one. We talked about team dynamics and how most conflicts grow over long periods only to erupt in a sudden and painful way. We worked out that most people aren’t aware of their own behaviour and cannot act “better”, even if they were. We learned about the technique of self-distancing to gain insights about one’s own feelings and emotional drive. Two books got mentioned that may support this area: “How to Cure a Fanatic” by Amos Oz and “On Liberty” from John Stuart Mill. Just a disclaimer: the discussion was long and the books most likely don’t match the few headlines mentioned here exactly.

Code Contracts – the potential love affair

An observation by one attendee was the starting point for the next topic: (unit) tests as a means of spot-checking don’t exactly lead to the goal of full confidence in the code. The explicit declaration of invariants and the subsequent verification of those invariants seem more likely to fulfil that confidence-giving role.
Turns out, another attendee just happened to be part of a discussion on “next generation verification tools”, and invariant checking frameworks were one major topic. Especially the library Code Contracts from Microsoft showed impressive potential to really be beneficial in a day-to-day setting. Neat features like continuous verification in the IDE and automatic (smart) correction proposals make this approach really stand out. This video and this live presentation will provide more information.

While this works well in the “easy” area of VM-based languages like C#, the classical C/C++ ecosystem proves to be a tougher nut to crack. The common approach is to limit the scope of the tools to the area covered by LLVM, a widespread intermediate representation of source code.

Somehow, we came across the book titles “The Economics of Software Quality” by Capers Jones, which provides a treasure of statistical evidence about what might work in software development (or not). Another relatively new and controversial book is “Agile! The Good, the Hype and the Ugly” from Bertrand Meyer. We are looking forward to discuss them in future brunches.

Visual Studio – the merchant nobody likes but everybody visits

One attendee asked about realistic alternatives to Visual Studio for C++ development. Turns out, there aren’t many, at least not free of charge. Most editors and IDEs aren’t particularly bad, but lack the “everything already in the box” effect that Visual Studio provides for Windows-/Microsoft-only development. The main favorites were Sublime Text with clang plugin, Orwell Dev-C++ (the fork from Bloodshed C++), Eclipse CDT (if the code assist failure isn’t important), Code::Blocks and Codelite. Of course, the classics like vim or emacs (with highly personalized plugins and setup) were mentioned, too. KDevelop and XCode were non-Windows platform-based alternatives, too.

Stinky Board – the nerdy doormat

One attendee experiments with input devices that might improve the interaction with computers. The Stinky Board is a foot-controlled device with four switches that act like additional keys. In comparison to other foot switches, it’s very sturdy. The main use case of our attendee is keys that you need to keep pressed for their effect, like “sprint” or “track enemy” in computer games. In a work scenario, there are fewer of these situations. The additional buttons may serve for actions that are needed relatively infrequently, but regularly – like “run project”.

This presentation produced a lot of new suggestions, like the Bragi smart headphones, which include sensors for head gestures. Imagine shaking your head for “undo change” or nodding for “run tests” – while listening to your fanciest tunes (you might want to refrain from headbanging then). A very interesting attempt to combine mouse, keyboard and joystick is the “King’s Assembly“, a weird two-piece device that’s just too cool not to mention. We are looking forward to hearing more from it.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The high number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.


Bit-fiddling is possible in Java

August 26, 2014

We have a service interface for Modbus devices that we can use remotely from Java. Modbus supports only very basic data types like single bits and 16-bit words. Our service interface provides the contents of a 16-bit input or holding register as a Java integer.

Often one 16-bit register is not enough to solve a problem in your domain, like representing a temperature as floating point number. A common solution is to combine two 16-bit registers and interpret their contents as a floating point number.

So the question is how to combine the bits of the two int values and convert them to a float in Java. There are at least two possibilities I want to show you.

The “old-fashioned” way includes bit-shifting and bit-wise operators, which you actually can use in Java despite the major flaw regarding primitive types: there are no unsigned data types; even byte is signed!

public static float floatFrom(int mostSignificant16Bits, int leastSignificant16Bits) {
    // mask the lower word to avoid problems with sign extension
    int bits = (mostSignificant16Bits << 16) | (leastSignificant16Bits & 0xFFFF);
    return Float.intBitsToFloat(bits);
}

A seemingly more modern way is using java.nio.ByteBuffer:

public static float nioFloatFrom(int mostSignificant16Bits, int leastSignificant16Bits) {
    final ByteBuffer buf = ByteBuffer.allocate(4)
        .putShort((short) mostSignificant16Bits)
        .putShort((short) leastSignificant16Bits);
    buf.rewind(); // rewind to the beginning of the buffer, so it can be read!
    return buf.getFloat();
}

The second method seems superior when working with many values because you can fill the buffer conveniently in one phase and extract the float values by subsequent calls to getFloat().
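
A sketch of this batch usage (assuming the register values arrive as an int array, two registers per float value, most significant word first):

public static float[] floatsFrom(int[] registers16Bit) {
    // two bytes per 16-bit register
    final ByteBuffer buf = ByteBuffer.allocate(registers16Bit.length * 2);
    for (int register : registers16Bit) {
        buf.putShort((short) register);
    }
    buf.rewind(); // read from the beginning again
    float[] values = new float[registers16Bit.length / 2];
    for (int i = 0; i < values.length; i++) {
        values[i] = buf.getFloat(); // each call consumes two registers (4 bytes)
    }
    return values;
}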

Conclusion

Even if it is not one of Java’s strengths, you actually can work on the bit and byte level and perform low-level tasks. Do not let the lack of unsigned data types scare you; signedness is irrelevant for the bits in memory.

