Gigapixel images in pure Java

April 14, 2014

Not long ago, I read this nice little blog entry about the basic properties and usages of Java arrays. It’s been a long time since I last used an array in Java myself, because my programming style evolved to heavily leverage the power of collections (and Iterables in particular, the Java 5 poor man’s substitute for Java 8 Streams). But I immediately noticed that one important fact was missing from the array blog entry:

The maximum length of an array in Java is Integer.MAX_VALUE, or ((2^31)-1), aka 2,147,483,647

This is indirectly specified in the Java language specification, chapter 10.4 Array Access:

Arrays must be indexed by int values.

This little fact crossed my path when I wrote a tool in pure Java that operated on large numbers of large images, combining them into one gigantic image. The customer used the tool to create images with a size of about 100 MB that took several hours to print because the decompression tax kicked in. One day, he reported a strange bug:

[Screenshot: stack trace ending in a java.lang.NegativeArraySizeException]

“Oh, a negative array size, what a strange bug to appear in a tested application” was my first thought. Only after reading the stacktrace more carefully did it dawn on me: the array size wasn’t negative, it was just bigger than Integer.MAX_VALUE and got wrapped around into the negative numbers. And sure enough, 72350 times 44914 is a respectable 3,249,527,900 pixels, more than 1.5 times as many as an array in Java can hold. This image was right in the multi-gigapixel range where all kinds of technical obstacles appear. The maximum length of an array in Java was now my problem.
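You can reproduce the overflow with a few lines (a little demo of my own, not code from the tool):

public class ArrayLimitDemo {
    public static void main(String[] args) {
        int width = 72350;
        int height = 44914;
        int pixels = width * height;                // silently wraps to -1045439396
        System.out.println(pixels);
        System.out.println((long) width * height);  // 3249527900, the real pixel count
        int[] allPixels = new int[pixels];          // throws NegativeArraySizeException
    }
}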

Trying to stay pure

One cornerstone of the tool was being lightweight. It shouldn’t carry around unnecessary luggage and weighed around 200 kB when the bug appeared – small enough to just copy it into the data directories instead of pulling the directories into the program. But when I examined the root cause of the problem at hand, I found the frustrating truth that Java’s built-in imaging library also relies on one cornerstone: all data is stored in one array. And this array can only hold around 2G entries of data.

My approach was to “partition” the full image into smaller parts that each store only a fraction of the overall pixels. To hide this fact from the ImageIO that ultimately writes all the data into one file, my PartitionedImage implements RenderedImage and has to translate every call into a series of appropriate subcalls to the partition images. Before we look at some code, let me show you the limitations of this approach:

Greedy JPEGs, credulous PNGs

In the RenderedImage interface, there are two methods that can be used to obtain pixel data:

  • Raster getData(): Returns the image as one large tile (for tile based images this will require fetching the whole image and copying the image data over).
  • Raster getData(Rectangle rect): Computes and returns an arbitrary region of the RenderedImage.

If an image writer calls the first method, my code is screwed: there is no sane way to construct such a Raster instance without colliding with the array length limitation. Unfortunately, the JPEG writer does just that: it gets greedy and demands all the pixels at once. I found it easier to avoid the JPEG format and trade disk space for pragmatism.

The PNG writer uses the getData(Rectangle) method to obtain the pixel data. It requests the whole image line by line: the region always has the full width of the image, but is only one pixel high. So I guess my tool will write a lot of large PNG images in the future.

Our partitions should adapt to this behaviour by always retaining the full width of the original image and only allowing enough height that the number of pixels per partition doesn’t exceed Integer.MAX_VALUE.

The remaining trick is to implement an AdjustingRaster that knows the original Raster of the partition and translates the row requested by the PNG writer into the corresponding row of the partition image. The AdjustingRaster needs to know about the vertical offset. The only pitfall here is that the vertical offset has to be zero while the AdjustingRaster gets written to, and needs to be set once it switches into read mode.
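Here is a minimal sketch of the partitioning idea, assuming a 3-band RGB image. The names follow the description above, but the details are simplified and hypothetical, not the production code:

import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.util.ArrayList;
import java.util.List;

public class PartitionedImageSketch {

    /** Each partition keeps the full width, but only as many rows as one array can hold. */
    static int maxRowsPerPartition(int imageWidth, int samplesPerPixel) {
        return Integer.MAX_VALUE / (imageWidth * samplesPerPixel);
    }

    static List<BufferedImage> createPartitions(int width, int height) {
        List<BufferedImage> partitions = new ArrayList<>();
        int maxRows = maxRowsPerPartition(width, 3); // 3 samples per pixel for RGB
        for (int y = 0; y < height; y += maxRows) {
            int rows = Math.min(maxRows, height - y);
            partitions.add(new BufferedImage(width, rows, BufferedImage.TYPE_3BYTE_BGR));
        }
        return partitions;
    }

    /** Translate a one-line request from the PNG writer into the matching partition row. */
    static Raster lineOf(List<BufferedImage> partitions, int maxRows, Rectangle line) {
        int partitionIndex = line.y / maxRows;
        int rowInPartition = line.y % maxRows;
        Raster raw = partitions.get(partitionIndex)
                .getData(new Rectangle(0, rowInPartition, line.width, 1));
        // The writer expects full-image coordinates, so shift the raster back.
        return raw.createChild(0, rowInPartition, line.width, 1, 0, line.y, null);
    }
}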

Slow, but working

By composing a gigapixel image from several partitions (sometimes called tiles) you can work around the frustrating limitation of Java’s arrays (I mean, it’s 2014 and 64-bit systems are fairly widespread now. There’s no need to stick to 32-bit limits without a good reason). The result isn’t overwhelmingly fast, but I suspect that’s caused by the PNG image writer more than by our indirections. And we shouldn’t forget that it’s a lot of pixels to write, after all.

Conclusion

Sometimes when you explore bigger and bigger use cases, you hit an arbitrary limitation. And some of them are fundamental. In our case, we reached the limit of Java arrays and got stuck because the image library in Java has never heard of real gigapixel imaging and couples itself hard to the array limit. By introducing another indirection layer on top of the image library implementation and using composition to emulate a bigger image than we could actually create, we can convince unsuspecting image writers to save all those pixels for us and even manipulate the image beforehand.

What was your approach for gigapixel image processing? How did it work out in the long run? Share your story in the comments, please.


Dart and TypeScript as JavaScript alternatives

April 7, 2014

JavaScript was designed at Netscape by Brendan Eich within a couple of weeks as a simple scripting language for the web browser. It’s an interesting mixture of Self’s prototype-based object model, first-class functions inspired by LISP, a C/AWK-like syntax and a misleading name imposed by marketing.

Unfortunately, the haste in which JavaScript was designed by a single person shows in many places. Lots of features are inconsistent and violate the principle of least surprise. Just skim through the JavaScript Garden to get an idea.

Another aspect casting a poor light on JavaScript is the bad design of the browser DOM API, including incompatibilities between different browser implementations.

Douglas Crockford redeemed the reputation of JavaScript somewhat, by writing articles like “JavaScript: The World’s Most Misunderstood Programming Language“, the (relatively thin) book “JavaScript: The Good Parts” and discovering the JSON format. But even his book consists for the most part of advice on how to avoid the bad and the ugly parts.

However, JavaScript is ubiquitous. It is the world’s most widely deployed programming language, it’s the only programming language option available in all browsers on all platforms. The browser DOM API incompatibilities were ironed out by libraries like jQuery. And thanks to the JavaScript engine performance race started by Google some time ago with their V8 engine, there are now implementations available with decent performance - at least for a scripting language.

Some people even started to like JavaScript and are writing server-side code in it, for example the node.js community. People write office suites, emulators and 3D games in JavaScript. Atwood’s Law seems to be confirmed: “Any application that can be written in JavaScript, will eventually be written in JavaScript.”

Trans-compiling to JavaScript is a huge thing. There are countless transpilers of existing or new programming languages to JavaScript. One of these, CoffeeScript, is a syntactic sugar mixture of Ruby and Python on top of JavaScript semantics, and has gained some name recognition and adoption, at least in the Rails community.

But there are two other JavaScript alternatives, backed by large companies, which also happen to be browser manufacturers: Dart by Google and TypeScript by Microsoft. Both have recently reached version 1.0 (Dart even 1.2), and I will have a look at them in this blog post.

Large-scale application development and types

Scripting languages with dynamic type systems are neat and flexible for small and medium sized projects, but there is evidence that organizations with large code bases and large teams prefer at least some amount of static types. For example, Google developed the Google Web Toolkit, which compiles Java to JavaScript, and the Closure Compiler, which adds type information and checks to JavaScript via special comments, and now Dart. Facebook recently announced their Hack language, which adds a static type system to PHP, and Microsoft develops TypeScript, a static type add-on to JavaScript.

The reasoning is that additional type information can help find bugs earlier and improve tool support, e.g. auto-completion in IDEs and refactoring capabilities such as safe, project-wide renaming of identifiers. Types can also help VMs with performance optimization.

TypeScript

This weekend the release of TypeScript 1.0 was announced by Anders Hejlsberg, Microsoft’s language designer behind C#, also known as the creator of the Turbo Pascal compiler and Delphi.

TypeScript is not a completely new language. It’s a superset of JavaScript that mainly adds optional type information to the language via Pascal-like colon notation. Every JavaScript program is also a valid TypeScript program.

The TypeScript compiler tsc takes .ts files and translates them into .js files. The output code does not change a lot and is almost the same code that you would write by hand in JavaScript, but with erased type annotations. It does not add any runtime overhead.

The type system is heavily based on type inference. The compiler tries to infer as much type information as possible by following the flow of types through the code.
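A small illustration of my own (not taken from the TypeScript documentation): the parameter and return annotations are checked at compile time, the locals are inferred, and the emitted JavaScript is the same code minus the annotations.

function repeat(text: string, times: number): string {
    var result = "";                 // inferred as string
    for (var i = 0; i < times; i++) {
        result += text;
    }
    return result;
}

var greeting = repeat("hi ", 3);     // inferred as string
// repeat("hi ", "3");               // compile-time error: "3" is not a number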

TypeScript has interfaces that are very similar to interfaces in Go: A type does not have to declare which interfaces it implements. Interfaces are satisfied implicitly if a type has all the required methods and properties – in short, TypeScript has a structural type system.
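For illustration, a hypothetical example of such an implicit match:

interface Named {
    name: string;
}

class Person {
    constructor(public name: string) { }
}

var who: Named = new Person("Ada");           // accepted: Person has a 'name'
var what: Named = { name: "object literal" }; // also accepted, structurally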

Type definitions for existing APIs and libraries such as the browser DOM API, jQuery, AngularJS, Underscore.js, etc. can be added via .d.ts files. These definition files are very similar to C header files and contain the type signatures of an API’s functions. There’s a community-maintained repository of .d.ts files called DefinitelyTyped covering almost all popular JavaScript libraries.
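Such a definition file might look roughly like this sketch (a made-up excerpt, not the real DefinitelyTyped file for jQuery):

// jquery.d.ts (sketch)
interface JQuery {
    text(content: string): JQuery;
    css(property: string, value: string): JQuery;
}
declare function $(selector: string): JQuery;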

TypeScript also enhances JavaScript with functionality that is planned for ECMAScript 6, such as classes, inheritance, modules and shorthand lambda expressions. The syntax is the same as the proposed ES6 syntax, and the generated code follows the usual JavaScript patterns.
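For instance, a class with a shorthand lambda (again my own sketch) compiles down to the familiar prototype and closure patterns:

class Counter {
    private count = 0;
    increment = () => this.count++;   // the arrow function keeps 'this' bound
    current(): number { return this.count; }
}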

TypeScript is an open source project under the Apache License 2.0. The project even accepts contributions and pull requests (yes, Microsoft). Microsoft has integrated TypeScript support into Visual Studio 2013, but there is support for other IDEs and editors such as JetBrains’ IntelliJ IDEA or Sublime Text.

Dart

Dart is a JavaScript alternative developed by Google. Two of the main brains behind Dart are Lars Bak and Gilad Bracha. In the early 90s they worked in the Self VM group at Sun. Then they left Sun for LongView Technologies (Animorphic Systems), a company that developed Strongtalk, a statically typed variant of Smalltalk, and later the now-famous HotSpot VM for Java. Sun bought LongView Technologies and made HotSpot Java’s default VM. Bracha co-authored parts of the Java specification, and designed an object-oriented language in the tradition of Self and Smalltalk called Newspeak. At Google, Lars Bak was head developer of the V8 JavaScript engine team.

Unlike TypeScript, Dart is not a JavaScript superset, but a language of its own. It’s a curly-braces-and-semicolons language that aims for familiarity. The object model is very similar to Java: it has classes, inheritance, abstract classes and methods, and an @override annotation. But it also has the usual grab bag of features that “more sugar than Java but similar” languages like C#, Groovy or JetBrains’ Kotlin have:

Lambdas (via the fat arrow =>), mixins, operator overloading, properties (uniform access for getters and setters), string interpolation, multi-line strings (in triple quotes), collection literals, map access via [], default values for arguments, optional arguments.

Like TypeScript, Dart allows optional type annotations. Wrong type annotations do not stop Dart programs from executing, but they produce warnings. It has a simple notion of generics, which are optional as well.

Everything in Dart is an object and every variable is nullable. There are no visibility modifiers like public or private: identifiers starting with an underscore are private. The “truthiness” rules are simple compared to JavaScript: all values except true are false.
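To give a small taste of these features, here is a Dart 1.x snippet of my own (not from the Dart documentation), showing optional types, an initializing constructor parameter, a getter, a named parameter with a default value, string interpolation and underscore privacy:

class Account {
  num _balance;                     // leading underscore: private

  Account(this._balance);           // initializing formal parameter

  num get balance => _balance;      // getter property with fat-arrow body

  void deposit(num amount, {String note: 'no note'}) {
    _balance += amount;
    print('deposited $amount ($note), balance is now $_balance');
  }
}

main() {
  var account = new Account(100);   // the type annotations are optional
  account.deposit(23, note: 'birthday money');
}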

Dart comes with batteries included: it has a standard library offering collections, APIs for asynchronous programming (event streams, futures), a sane HTML/DOM API, removing the need for jQuery, unit testing and support for interoperating with JavaScript. A port of Angular.js to Dart exists as well and is called AngularDart.

Dart supports a CSP-like concurrency model based on isolates – independent worker threads that don’t share memory and communicate via SendPorts and ReceivePorts.

However, the Dart language is only one half of the Dart project. The other important half is the Dart VM. Dart can be compiled to JavaScript for compatibility with every browser, but it offers enhanced performance compared to JavaScript when the code is directly executed on the Dart VM.

Dart is an open source project under the BSD license. Google provides an Eclipse-based IDE for Dart called the “Dart Editor” and Dartium, a special build of the Chromium browser that includes the Dart VM.

Conclusion

TypeScript follows a less radical approach than Dart. It’s a typed superset of JavaScript and existing JavaScript projects can be converted to TypeScript simply by renaming the source files from *.js to *.ts. Type annotations can be added gradually. It would even be simple to switch back from TypeScript to JavaScript, because the generated JavaScript code is extremely close to the original source code.

Dart is a more ambitious project. It comes with a new VM and offers performance improvements. It will be interesting to see if Google is going to ship Chrome with the Dart VM one day.


Using Rails with a legacy database schema

March 10, 2014

Rails is known for its convention over configuration design paradigm. For example, database table and column names are automatically derived and generated from the model classes. This is very convenient, but when you want to build a new application upon an existing (“legacy”) database schema you have to explore the configuration side of this paradigm.

The most basic operation for dealing with a legacy schema in Rails is to explicitly set the table_name and the primary_key of model classes:

class User < ActiveRecord::Base
  self.table_name = 'benutzer'
  self.primary_key = 'benutzer_nr'
  # ...
end

Additionally you might want to define aliases for your column names, which are mapped by ActiveRecord to attributes:

class Article < ActiveRecord::Base
  # ...
  alias_attribute :author_id, :autor_nr
  alias_attribute :title, :titel
  alias_attribute :date, :datum
end

This automatically generates getter, setter and query methods for the new alias names. For associations like belongs_to, has_one, has_many you can explicitly specify the foreign_key:

class Article < ActiveRecord::Base
  # ...
  belongs_to :author, class_name: 'User', foreign_key: :autor_nr
end

Here you have to repeat the original name. You can’t use the previously defined alias attribute name. Another place where you have to live with the legacy names are SQL queries:

q = "%#{query.downcase}%"
Article.where('lower(titel) LIKE ?', q).order('datum')

While the usual attribute based finders such as find_by_* are aware of ActiveRecord aliases, Arel queries aren’t:

articles = Article.arel_table
Article.where(articles[:titel].matches("%#{query}%"))

And lastly, the YML files with test fixture data must be named after the database table, not after the model. So for the example above the fixture file name would be benutzer.yml, not users.yml.
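A minimal fixture for the User model above would then look something like this (the file name follows the convention, the data is made up):

# test/fixtures/benutzer.yml
hans:
  benutzer_nr: 1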

Conclusion

If you step outside the well-trodden path of convention, be prepared for some inconveniences.


Solution for ng-quiz #1: LetterCrush

February 22, 2014

In this solution for the first ng-quiz you will learn about Angularjs in general, using controllers and services. You will also learn about using the HTML File API. You can download the source from my GitHub repository.

The HTML for this solution is pretty straightforward (we omit the CSS here):

<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.11/angular.min.js"></script>
    <script type="text/javascript" src="lettercrush.js"></script>

    <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/css/bootstrap.min.css">
    <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.0/css/bootstrap-theme.min.css">
    <link rel="stylesheet" href="lettercrush.css">
  </head>
  <body>
    <div ng-app="LetterCrush" class="container">
      <div ng-controller="LetterCrush" class="col-md-9">
        <div class="jumbotron">
          <h1>Letter Crush</h1>
          <p>Fun with words</p>
        </div>
        <div class="col-md-7">
          <div class="form-group">
            <label for="dictionary" class="control-label">Please choose a dictionary to play with (one word per line)</label>
            <input type="file" id="dictionary" ng-model="file">
          </div>
        </div>
        <div class="col-md-5">
          <form ng-submit="testWord()">
            <div class="form-group">
              <label for="score" class="control-label">Your score</label>
              <span class="form-control-static"><big>{{score}}</big></span>
            </div>
            <div class="form-group">
              <label for="word" class="control-label">Your word</label>
              <input ng-model="word" type="text" id="word">
            </div>
          </form>
        </div>
        <div class="col-md-12">
          <table class="table">
            <tr ng-repeat="row in board.content track by $index"><td ng-repeat="cell in row track by $index">{{cell}}</td></tr>
          </table>
        </div>
    </div>
  </body>
</html>

The bits you should take a deeper look at are the attributes starting with ng (called Angularjs directives) and the expressions enclosed in double curly braces. Directives are the glue between JavaScript and the DOM. The first important ng attribute is ng-app on the container div. The value of the ng-app attribute is the name of the module used inside the JavaScript.

var module = angular.module("LetterCrush", []);

Everything inside this div is treated specially by Angularjs, so you can use references like {{score}}. These references enclose expressions which evaluate properties defined on the Angularjs $scope object. For bundling functionality you can use controllers, which are attached by setting ng-controller. On the inner div we declare a controller named LetterCrush. This controller is defined inside the JavaScript like this:

module.controller('LetterCrush', ['$scope', 'board', 'dictionary', 'util',
                  function ($scope, board, dictionary, util) {
}]);

The strings in the array are the dependencies which should be injected. Every declared dependency needs to be a parameter in the function which implements the functionality of the controller. Angularjs’ own objects are prefixed with $ as you see with $scope. All other ones are defined by us using a concept called services. These services are defined similar to controllers but with a factory.

// with no special dependencies of our own, just Angularjs' built-in $q and $log
module.factory('fileReader', function($q, $log) {
});

// with our own dependencies
module.factory('board', ['letterGenerator', 'wordFinder', function(letterGenerator, wordFinder) {
}]);

The factory returns an object. Services are singletons and are commonly used for backend access, service-like functionality or model-like objects. Services can be injected into other services, directives or controllers.

All expressions enclosed in double curly braces or used as the value of an ng-model directive take part in two-way data binding. These expressions are evaluated on the $scope object, and all changes, whether from the DOM or from JavaScript, are reflected on the other side. This means when the user enters text into the word input, $scope.word is updated with that value. Conversely, if the code updates $scope.word, the value of the input is updated accordingly.

In this solution we use two other directives: ng-submit and ng-repeat. ng-submit just calls a function when the form is submitted, and ng-repeat repeats the enclosed DOM subtree for every item in the passed array. Note the track by clause in the ng-repeat expression: normally Angularjs tracks each item by its value, but since the same letter can appear more than once we need a different tracking mechanism, so we use the array index here.

Accessing local files is only possible when the user selects them via a file input. We adapted a solution from OdeToCode for handling the details of the File API.

// FileReader service
// adapted from http://odetocode.com/blogs/scott/archive/2013/07/03/building-a-filereader-service-for-angularjs-the-service.aspx
module.factory('fileReader', function($q, $log) {
  var onLoad = function(reader, deferred, scope) {
    return function () {
      scope.$apply(function () {
        deferred.resolve(reader.result);
      });
    };
  };
  var onError = function (reader, deferred, scope) {
    return function () {
      scope.$apply(function () {
        deferred.reject(reader.result);
      });
    };
  };
  var onProgress = function(reader, scope) {
    return function (event) {
      scope.$broadcast("fileProgress", {
                        total: event.total,
                        loaded: event.loaded
                      });
    };
  };
  var getReader = function(deferred, scope) {
    var reader = new FileReader();
    reader.onload = onLoad(reader, deferred, scope);
    reader.onerror = onError(reader, deferred, scope);
    reader.onprogress = onProgress(reader, scope);
    return reader;
  };
  var readAsText = function (file, scope) {
    var deferred = $q.defer();
            
    var reader = getReader(deferred, scope);         
    reader.readAsText(file);
            
    return deferred.promise;
  };

  return {readAsText: readAsText};
});

We can then use this service to read the file. To do so, we listen for changes to the file input and then read out the content:

module.factory('dictionary', ['fileReader', 'util', function(fileReader, util) {
  var dictionary = [];
  var init = function(scope) {
    var getFile = function (evt) {
      fileReader.readAsText(evt.target.files[0], scope).then(function(result) {
        dictionary = result.split("\n");
      });
    };
    document.getElementById('dictionary').addEventListener('change', getFile, false);
  };
  var containsWord = function(w) {
    return util.containsIgnoreCase(dictionary, w);
  };
  var isEmpty = function() {
    return dictionary.length === 0;
  };
  return {init: init, containsWord: containsWord, isEmpty: isEmpty};
}]);

The Fibonacci sequence for calculating the score and the random letter generation are pretty straightforward:

module.factory('util', function($q, $log) {
  var containsIgnoreCase = function(array, e) {
    for (var i = 0; i < array.length; i++) {
      if (e.toUpperCase() === array[i].toUpperCase()) {
        return true;
      }
    }
    return false;
  };
  var fib = function(n) {
    if (n === 0) {
      return 0;
    }
    if (n === 1) {
      return 1;
    }
    return fib(n - 1) + fib(n - 2);
  };
  return {containsIgnoreCase: containsIgnoreCase, fib: fib};
});

module.factory('letterGenerator', function($q, $log) {
    var frequencies = [8.167,1.492,2.782,4.253,12.702,2.228,2.015,6.094,6.966,0.153,0.772,4.025,2.406,6.749,7.507,1.929,0.095,5.987,6.327,9.056,2.758,0.978,2.360,0.150,1.974,0.074];
    var codeA = 65;
    var newLetter = function() {
        var frequency = Math.random() * 100;
        for (var i = 0; i < frequencies.length; i++) {
            frequency -= frequencies[i];
            if (frequency <= 0) {
                return String.fromCharCode(codeA + i);
            }
        }
        return 'Z';
    };
    return {newLetter: newLetter};
});

One slightly more difficult piece is the algorithm for finding the word on the board. Here we use a depth-first search and represent the neighbours as an array instead of two loops, which improves the readability of the algorithm.

module.factory('wordFinder', function($q, $log) {
  var insideBoard = function(board, row, column) {
    return row >= 0 && column >= 0 && row < board.length && column < board[row].length;
  };
  var neighboursOf = function(cell) {
    return [
      [cell[0] - 1, cell[1] - 1], [cell[0] - 1, cell[1]], [cell[0] - 1, cell[1] + 1],
      [cell[0],     cell[1] - 1],                         [cell[0],     cell[1] + 1],
      [cell[0] + 1, cell[1] - 1], [cell[0] + 1, cell[1]], [cell[0] + 1, cell[1] + 1]
    ];
  };
  var contains = function(array, e) {
    for (var i = 0; i < array.length; i++) {
      if (e[0] === array[i][0] && e[1] === array[i][1]) {
        return true;
      }
    }
    return false;                        
  };
  var findNextLetter = function(board, word, path) {
    if (word.length === 0) {
      return path;
    }
    var position = path[path.length - 1];
    var neighbours = neighboursOf(position);
    for (var i = 0; i < neighbours.length; i++) {
      var neighbour = neighbours[i];
      if (!insideBoard(board, neighbour[0], neighbour[1])) {
        continue;
      }
      if (contains(path, neighbour)) {
        continue;
      }
      if (word.charAt(0).toUpperCase() === board[neighbour[0]][neighbour[1]]) {
        var foundPath = findNextLetter(board, word.slice(1), path.concat([neighbour]));
        if (foundPath) {
          return foundPath;
        }
      }
    }
    return null;
  };
  var find = function(board, word) {
    var foundPath;
    angular.forEach(board, function(row, i) {
      angular.forEach(row, function(column, j) {
        if (word.charAt(0).toUpperCase() === column) {
          var path = findNextLetter(board, word.slice(1), [[i, j]]);
          if (path) {
            foundPath = path;
          }
        }
      });
    });
    return foundPath;
  };

  return {find: find};
});

For the board we use an Angularjs service and encapsulate the letters as a two-dimensional array, along with the word finding algorithm, the letter generation and the logic for clearing letters and letting the remaining ones fall down.

module.factory('board', ['letterGenerator', 'wordFinder', function(letterGenerator, wordFinder) {
  var content = [];
  var init = function(boardSize) {
    for (var lineNo = 0; lineNo < boardSize; lineNo++) {
      var line = [];
      for (var count = 0; count < boardSize; count++) {
        line.push(letterGenerator.newLetter());
      }
      content.push(line);
    }
  };
  var fall = function() {
    for (var i = content.length - 1; i > 0; i--) {
      for (var j = 0; j < content[i].length; j++) {
        if (content[i][j] === '') {
          for (var k = i - 1; k >= 0; k--) {
            if (content[k][j] !== '') {
              content[i][j] = content[k][j];
              content[k][j] = '';
              break;
            }
          }
        }
      }
    }
  };
  var fillEmpty = function() {
    angular.forEach(content, function(row, i) {
      angular.forEach(row, function(column, j) {
        if (column === '') {
          content[i][j] = letterGenerator.newLetter();
        }
      });
    });
  };
  var clear = function(path) {
    angular.forEach(path, function(pos, i) {
      content[pos[0]][pos[1]] = '';
    });
    fall();
    fillEmpty();
  };
  var find = function(word) {
    return wordFinder.find(content, word);
  };
  return {content: content, init: init, clear: clear, find: find};
}]);

Finally we glue everything together inside the controller:

module.controller('LetterCrush', ['$scope', 'board', 'dictionary', 'util',
                  function ($scope, board, dictionary, util) {
    var penalty = 1;

    $scope.score = 0;
    $scope.board = board;
    board.init(5);
    
    dictionary.init($scope);
    
    $scope.testWord = function() {
      if (dictionary.isEmpty()) {
        alert('Please specify a dictionary file.');
        return;
      }
      if (!dictionary.containsWord($scope.word)) {
        $scope.score -= penalty;
        alert($scope.word + ' is no word.');
        $scope.word = '';
        return;
      }
      var found = $scope.board.find($scope.word);
      if (!found) {
        $scope.score -= penalty;
        alert($scope.word + ' is not on the board.');
        $scope.word = '';
        return;
      }
      $scope.score += $scope.calculateScore(found.length);
      $scope.word = '';
      $scope.board.clear(found);
    };
    $scope.calculateScore = function(len) {
        return util.fib(len);
    };
}]);

This concludes our first ng-quiz. We hope you had fun and learned something along the way. If you have questions or improvements please don’t hesitate to comment. In the next ng-quiz we tackle a smaller piece of functionality since this first one seemed a bit too big.


Introducing ng-quiz – a JavaScript / Angular quiz: #1 Letter Crush

February 9, 2014

Learning a new language or framework can be quite daunting. It helps me when I have small tasks which motivate me to explore my chosen field further in a fun and goal-driven way. So in the spirit of the Ruby quiz I introduce ng-quiz, a quiz for learning Angularjs. Every month I will post a small task which can be solved in a short time and helps you learn and deepen your Angularjs and JavaScript skills. It will touch different areas and explore other JavaScript libraries. You can post your solutions (as a link to a jsfiddle or GitHub repository) in a comment. Two weeks later I will update the post with the solutions.

ng-quiz #1: Letter Crush

In the first quiz you will implement a simple game named ‘Letter Crush’. In ‘Letter Crush’ a player tries to find letters forming a word in a random letter ‘soup’. The board consists of n×n square cells, each containing a random letter.

A	G	H	M	I
T	C	O	U	E
F	E	P	R	Q
K	D	S	A	N
V	L	F	T	Y

The player can enter a word. This word must be in an English dictionary and needs to be found on the board without overlapping. If both criteria are satisfied, the word is removed from the board, the letters above the empty cells fall down and the topmost empty cells are filled with new random letters.

Example

A	G	H	M	I
T	C	O	U	E
F	E	P	R	Q
K	D	S	A	N
V	L	F	T	Y

If the player enters the word ‘test’ (case insensitive), the letters are removed from the board, since it is a real word and can be found on the board.

A	G	H	M	I
 	C	O	U	E
F	 	P	R	Q
K	D	 	A	N
V	L	F	 	Y

Now the letters above fall down.

 	 	 	 	I
A	G	H	M	E
F	C	O	U	Q
K	D	P	R	N
V	L	F	A	Y

The topmost empty cells are filled again.

I	W	S	T	I
A	G	H	M	E
F	C	O	U	Q
K	D	P	R	N
V	L	F	A	Y

Now the next turn begins and the player could enter RUN, ACORN, MOURN, THORN or GO or others.

Letter Generation

To avoid filling the board with garbage letters, the new letters need to be generated according to the relative frequency of letters in the English language:

A B C D E F G
8.167 1.492 2.782 4.253 12.702 2.228 2.015
H I J K L M N
6.094 6.966 0.153 0.772 4.025 2.406 6.749
O P Q R S T U
7.507 1.929 0.095 5.987 6.327 9.056 2.758
V W X Y Z
0.978 2.360 0.150 1.974 0.074

Words

Entering a word can be done via the mouse or the keyboard. To form a valid word, the letters need to be directly or diagonally adjacent. Every letter can be used only once.

Score

The player starts with a score of 0. A wrong input reduces the score by 1. A valid word is scored by its length, according to the following Fibonacci sequence:

Length 1 2 3 4 5 6 7
Score 1 1 2 3 5 8 13

Optional fun: special fields

The above version of ‘Letter Crush’ is playable, but if the task is too small for you or you want to explore it further, you can implement some special fields.

Duplication – lower case letter

A duplication field consists of a lower case letter and doubles the score of the word. It has a frequency of 5 percent.

Wildcard – ?

A wildcard can be used as any letter and is represented as a question mark. It should be generated with a relative frequency of 0.5 percent.

Extension – +

An extension field just lengthens a word if it is adjacent to the last letter. A plus sign marks the field, which is generated with a relative frequency of 1 percent.

Bomb – *

An asterisk marks a bomb, which is produced with a relative frequency of 0.25 percent. A bomb detonates when the chosen word is adjacent to it. Every letter which is not part of the word but adjacent to the bomb is also removed.

Example:

B	M	A	S
O	R	B	*
N	B	T	B
E	U	O	L

The player chooses the word ‘bomb’ and the bomb detonates. After the removal, but before the filling, the board looks like this:


	R		
N	B		
E	U	O	L

P.S. If you are new to JavaScript and coming from a Java background, you might find our JavaScript for Java developers post helpful.


Testing C++ code with OpenCV dependencies

February 3, 2014

The story:

Pushing for more quality and stability, we integrate googletest into our existing projects or extend test coverage. One such case was the creation of tests to document and verify a bugfix. They called a single function and checked the fields of the returned cv::Scalar.

TEST(ScalarTest, SingleValue) {
  ...
  cv::Scalar actual = target.compute();
  ASSERT_DOUBLE_EQ(90, actual[0]);
  ASSERT_DOUBLE_EQ(0, actual[1]);
  ASSERT_DOUBLE_EQ(0, actual[2]);
  ASSERT_DOUBLE_EQ(0, actual[3]);
}

Because this was the first test using OpenCV, the CMakeLists.txt also had to be modified:

target_link_libraries(
  ...
  ${OpenCV_LIBS}
  ...
)

Unfortunately, the test didn’t run through: it ended either with a core dump or a segmentation fault. Analysis of the called function showed that it used no pointers and that all variables were referenced while still in scope. What did gdb say about the segmentation fault?

(gdb) bt
#0  0x00007ffff426bd25 in raise () from /lib64/libc.so.6
#1  0x00007ffff426d1a8 in abort () from /lib64/libc.so.6
#2  0x00007ffff42a9fbb in __libc_message () from /lib64/libc.so.6
#3  0x00007ffff42afb56 in malloc_printerr () from /lib64/libc.so.6
#4  0x00007ffff54d5135 in void std::_Destroy_aux<false>::__destroy<testing::internal::String*>(testing::internal::String*, testing::internal::String*) () from /usr/lib64/libopencv_ts.so.2.4
#5  0x00007ffff54d5168 in std::vector<testing::internal::String, std::allocator<testing::internal::String> >::~vector() () from /usr/lib64/libopencv_ts.so.2.4
#6  0x00007ffff426ec4f in __cxa_finalize () from /lib64/libc.so.6
#7  0x00007ffff54a6a33 in ?? () from /usr/lib64/libopencv_ts.so.2.4
#8  0x00007fffffffe110 in ?? ()
#9  0x00007ffff7de9ddf in _dl_fini () from /lib64/ld-linux-x86-64.so.2
Backtrace stopped: frame did not save the PC

Apparently my test had problems at its very end, at the time of object destruction. So I started to eliminate every statement until the problem vanished or no statements were left. The result:

#include "gtest/gtest.h"

TEST(DemoTest, FailsBadly) {
  ASSERT_EQ(1, 0);
}

And it still crashed! So the code under test wasn’t the culprit. Another change introduced previously was the addition of the OpenCV libs to the linker call. An incompatibility between OpenCV and googletest? A quick search turned up posts from users experiencing the same problems, eventually leading to entries in OpenCV’s bug tracker: http://code.opencv.org/issues/1608 and http://code.opencv.org/issues/3225. The opencv_ts library, which appeared in the stack trace, exports symbols that conflict with the googletest version we link against. Since we didn’t need the opencv_ts library, the solution was to clean up our linker dependencies:

Before:

find_package(OpenCV)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab ../gtest-1.7.0/libgtest.a -lpthread -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab

After:


find_package(OpenCV REQUIRED core highgui)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_highgui -lopencv_core ../gtest-1.7.0/libgtest.a -lpthread

Lessons learned:

Know what you really want to depend on and name it explicitly. Ignorance or blind trust in build tools’ black magic is a recipe for blog posts.


Integrating googletest in CMake-based projects and Jenkins

January 27, 2014

In my – admittedly limited – perception, unit testing in C++ projects does not seem as widespread as in Java or in dynamic languages like Ruby or Python. Therefore I would like to show how easy it can be to integrate unit testing into a CMake-based project and a continuous integration (CI) server. I will briefly cover why we picked googletest, adding unit testing to the build process and publishing the results.

Why we chose googletest

There is a plethora of unit testing frameworks for C++, making it difficult to choose the right one for your needs. Here are our reasons for googletest:

  • Easy publishing of results because of JUnit-compatible XML output. Many other frameworks need either a Jenkins plugin or an XSLT script to make that work.
  • Moderate compiler requirements and cross-platform support. This rules out xUnit++ and to a certain degree boost.test because they need quite modern compilers.
  • Easy to use and integrate. Since our projects use CMake as a build system googletest really shines here. CppUnit fails because of its verbose syntax and manual test registration.
  • No external dependencies. It is recommended to put googletest into your source tree and build it together with your project. This kind of self-containment is really what we love. With many of the other frameworks it is not as easy; CxxTest even requires a Perl interpreter.

Integrating googletest into CMake project

  1. Putting googletest into your source tree
  2. Adding googletest to your toplevel CMakeLists.txt to build it as part of your project:
    add_subdirectory(gtest-1.7.0)
  3. Adding the directory with your (future) tests to your toplevel CMakeLists.txt:
    add_subdirectory(test)
  4. Creating a CMakeLists.txt for the test executables:
    include_directories(${gtest_SOURCE_DIR}/include)
    set(test_sources
    # files containing the actual tests
    )
    add_executable(sample_tests ${test_sources})
    target_link_libraries(sample_tests gtest_main)
    
  5. Implementing the actual tests like so (see the examples):
    #include "gtest/gtest.h"
    
    TEST(SampleTest, AssertionTrue) {
        ASSERT_EQ(1, 1);
    }
    

Integrating test execution and result publishing in Jenkins

  1. Additional build step with shell execution containing something like:
    cd build_dir && test/sample_tests --gtest_output="xml:testresults.xml"
  2. Activate “Publish JUnit test results” post-build action.

Conclusion

The setup of a unit testing environment for a C++ project is easier than many developers think. Using CMake, googletest and Jenkins makes it very similar to unit testing in Java projects.


Learning JavaScript: Great libraries

January 20, 2014

After taking a look at JavaScript as a language we continue with some interesting and helpful libraries for web development.

underscore – going the functional route

What: Underscore has a lot of utility functions for helping with collections, arrays, objects, functions and even some templating.

How: Just a glimpse of the many, many functions provided (taken from the examples at underscorejs.org):

_.map([1, 2, 3], function(num){ return num * 3; });
=> [3, 6, 9]

_.map({one: 1, two: 2, three: 3}, function(num, key){ return num * 3; });
=> [3, 6, 9]

_.every([true, 1, null, 'yes'], _.identity);
=> false

_.groupBy([1.3, 2.1, 2.4], function(num){ return Math.floor(num); });
=> {1: [1.3], 2: [2.1, 2.4]}

_.escape('Curly, Larry & Moe');
=> "Curly, Larry &amp; Moe"

var compiled = _.template("hello: <%= name %>");
compiled({name: 'moe'});
=> "hello: moe"

Why: JavaScript is a functional language, but its standard library misses a lot of the functional goodies we are used to from other languages. Here underscore comes to the rescue.

when.js – lightweight concurrency

What: When.js is a lightweight implementation of the promise concurrency model.

How:

var when = require('when');
var rest = require('rest');

when.reduce(when.map(getRemoteNumberList(), times10), sum)
    .done(function(result) {
        console.log(result);
    });

function getRemoteNumberList() {
    // Get a remote array [1, 2, 3, 4, 5]
    return rest('http://example.com/numbers').then(JSON.parse);
}

function sum(x, y) { return x + y; }
function times10(x) {return x * 10; }

Why: Concurrency is hard. Callbacks can get out of hand pretty quickly, even more so when they are nested. Promises make writing and reading concurrent code simpler.

require.js – modules (re)loaded

What: Require.js uses a common module format (AMD – Asynchronous Module Definition) and helps you load your modules.

How: You include your module loading file via a data-main attribute on the script tag.

<script data-main="js/app.js" src="js/require.js"></script>

Inside your module loading file you define the modules you want to load and where they are located.

requirejs.config({
    baseUrl: 'js/lib',
    paths: {
        app: '../app'
    }
});

requirejs(['jquery', 'canvas', 'app/sub'],
function   ($,        canvas,   sub) {
    //jQuery, canvas and the app/sub module are all
    //loaded and can be used here now.
});

Why: Your scripts need to be in modules, cleanly separated from each other. If you don’t have an asset pipeline, want to load your modules asynchronously, or just want to manage your modules from your JavaScript, require.js is for you.

bower – packages managed

What: Bower lets you define your dependencies.

How: Just define a bower.json file and run bower install:

{
	"name": "My project",
	"version": "0.1.0",
	"dependencies": {
		"jquery": "1.9",
		"underscore": "latest",
		"requirejs": "latest"
	}
}

Why: Proper dependency management is a tough nut to crack, and it is even harder to make it pleasant to use (I am looking at you, maven).

grunt – the worker

What: Grunt is a task runner or build tool much like Java’s Ant.

How: Create a GruntFile and start automating:

module.exports = function(grunt) {

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    uglify: {
      options: {
        banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n'
      },
      build: {
        src: 'src/<%= pkg.name %>.js',
        dest: 'build/<%= pkg.name %>.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['uglify']);
};

Why: Repetitive tasks need to be automated. If you plan to use grunt and bower, consider using yeoman, which combines both into a workflow.

d3 – visualizations

What: d3 – data driven documents, a library for manipulating the DOM based on data using HTML, CSS and SVG.

How: There are many, many examples on d3js.org, but to get a glimpse of how d3 works, here is a small example.
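The following sketch is our own, in the style of the d3 tutorials: it binds an array of numbers to div elements and grows them into a crude bar chart.

var data = [4, 8, 15, 16, 23, 42];

d3.select('body').selectAll('div')
    .data(data)                      // bind the numbers to a (still empty) selection
  .enter().append('div')             // create one div per datum
    .style('width', function(d) { return d * 10 + 'px'; })
    .style('background-color', 'steelblue')
    .text(function(d) { return d; });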

Why: There are tons of chart or visualization libraries out there but d3 takes a more general approach. It defines a way of thinking, a way of manipulating the document based on data and data alone. It treats the DOM as part of your web application. Your visualization is a part of your document and not just a component you can forget about after creating it. This general approach allows you to create arbitrary visualizations not just charts.

last but not least: jQuery

What: jQuery is more a collection of libraries than a single one. It features AJAX, effects, events, concurrency, DOM manipulation and general utility functions for collections, functions and objects.

How: Here we can only take a very quick overview because jQuery is so big.

// DOM querying and manipulation
$('.cssclass').each(function (index, element) {
  $(element).attr('name', 'new name');
});

// AJAX calls
$.get('/blog/posts', function(data) {
  $('.posts').html(data);
});

// events
$( document ).ready(function() {
  console.log('DOM loaded');
});

// deferred objects aka promises
$.when(asyncEvent()).then(
  function( status ) {
    // done
  },
  function( status ) {
    // fail
  },
  function( status ) {
    // progress
  }
);

// utilities
var object = $.extend({}, object1, object2);

$('li').each(function(index) {
  console.log(index + ": " + $(this).text());
});

Why: jQuery is almost ubiquitous and this is not by chance. It provides a great foundation and helps in many common scenarios.

Others

There are many more libraries out there, and I would appreciate pointers to other great ones. This article can only give a short glimpse, so please take a further look at their respective homepages. A word on testing and test runners: these will get an additional blog post.


From ugly to pretty – Three steps is all it takes

December 30, 2013

I have been holding lectures in software engineering for over a decade now. One major topic is testing, specifically unit tests. Other cornerstones are refactoring and code readability. So whenever I have the chance to challenge my students with cross-topic aspects of software development, it’s almost always a source of insight for them and especially for me. But one golden moment holds a special place in my memory. This is the (rather elaborate, sorry) story of that moment.

During a lecture about unit tests with JUnit, my students had the task of developing tests for a bank account class. That’s about as boring as testing can be – the account was related to a customer and had a current balance. The customer can withdraw money, but only some customers can overdraw their account. To spice things up a bit, we also added the mock object framework EasyMock to the mix. While I would recommend other mock frameworks for production usage, the learning curve of EasyMock is just about right for first-time exposure in a “sheep dip” fashion.

Our first test dealt with drawing money from an empty account that can be overdrawn:

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(true);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(new Euro(30), cash);
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);
}

The second test made sure that this withdrawal behaviour only works for customers with sufficient credit standing. We decided to pay out nothing (0 Euro) if the customer tries to withdraw more money than his account currently holds:

@Test
public void cannotTakeUpCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(false);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(Euro.ZERO, cash);
  assertEquals(Euro.ZERO, account.balance());
  EasyMock.verify(customer);
}

As you can tell, a lot of copy and paste went into the creation of this test. Just look at the name of the local variable “required” – it’s misleading now. Right up to this point, my main topic had been the usage of the mock framework, not perfect code. So I explained the five stages of a normalized mock-based unit test (initialize, train mocks, execute tested code, assert results, verify mocks) and then changed the topic by expressing my displeasure with the duplication and the inferior readability of the code (it even tries to trick you with the “required” variable!). Now it was up to my students to improve our situation (this trick works only a few times for every course before they preventively become even pickier than me). A student accepted the challenge and gave advice:

First step: Extract Method refactoring

The obvious first step was to extract the duplication into its own method and turn the differing values into parameters. This is an easy refactoring that will almost always improve the situation. Let’s see where it got us. Here is the extracted method:

protected void performWithdrawalTestWith(
    boolean customerCanOverdraw,
    Euro amountOfWithdrawal,
    Euro expectedCash,
    Euro expectedBalance) {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(customerCanOverdraw);
  EasyMock.replay(customer);
  Account account = new Account(customer);

  Euro cash = account.withdraw(amountOfWithdrawal);

  assertEquals(expectedCash, cash);
  assertEquals(expectedBalance, account.balance());
  EasyMock.verify(customer);
}

And the two tests, now really concise:

@Test
public void canWithdrawOnCredit() {
  performWithdrawalTestWith(
      true,
      new Euro(30),
      new Euro(30),
      new Euro(-30));
}

 

@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(
      false,
      new Euro(30),
      Euro.ZERO,
      Euro.ZERO);
}

Well, that did indeed resolve the duplication. But the test methods now lacked any readability. They appeared as if somebody had extracted all the semantics out of the code. We were unhappy, but decided to treat the current code as an intermediate step towards the second refactoring:

Second step: Introduce Explaining Variable refactoring

In the second step, the task was to re-introduce the semantics into the test methods. All arguments were nameless, so that was our angle of attack. By introducing local variables, we gave the arguments meaning again:

@Test
public void canWithdrawOnCredit() {
  boolean canOverdraw = true;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = new Euro(30);
  Euro resultingBalance = new Euro(-30);

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean canOverdraw = false;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = Euro.ZERO;
  Euro resultingBalance = Euro.ZERO;

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

That brought the meaning back to the test methods, but didn’t improve readability. The code wasn’t intentionally cryptic any more, but still far from intuitively understandable – and that’s what really readable code should be. If even novices can read your code fluently and grasp the main concepts in the first pass, you’ve created expert code. I challenged the student to transform the code further, without any idea how to carry on myself. My student hesitated, but came up with the decisive refactoring within seconds:

Third step: Rename Variable refactoring

The third step doesn’t change the structure of the code, but its approachability. Instead of naming the local variables after their usage in the extracted method, we name them after their purpose in the test method. A first-time reader won’t know about the extracted method (and preferably shouldn’t need to know), so it’s not in the best interest of the reader to foreshadow its details. Instead, we concentrate on telling the reader a coherent story:

@Test
public void canWithdrawOnCredit() {
  boolean aCustomerThatCanOverdraw = true;
  Euro heWithdraws30Euro = new Euro(30);
  Euro receivesTheFullAmount = new Euro(30);
  Euro andIsNow30EuroInTheRed = new Euro(-30);

  performWithdrawalTestWith(
      aCustomerThatCanOverdraw,
      heWithdraws30Euro,
      receivesTheFullAmount,
      andIsNow30EuroInTheRed);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean aCustomerThatCannotOverdraw = false;
  Euro heTriesToWithdraw30Euro = new Euro(30);
  Euro butReceivesNothing = Euro.ZERO;
  Euro andStillHasABalanceOfZero = Euro.ZERO;

  performWithdrawalTestWith(
      aCustomerThatCannotOverdraw,
      heTriesToWithdraw30Euro,
      butReceivesNothing,
      andStillHasABalanceOfZero);
}

If the reader is able to ignore some crude verbalization and special characters, he can read the test out loud and instantly grasp its meaning. The first lines of every test method are a bit confusing, but necessary given Java’s lack of named parameters.

The result might remind you a lot of Behavior Driven Development notation and that’s probably not by chance. In a few minutes during that programming exercise, my students taught themselves to think in scenarios or stories when approaching unit tests. I couldn’t have taught it any better – instead, I got enlightened by this exercise, too.


How to use partial mocks in real life

December 23, 2013

Partial mocks are an advanced feature of modern mocking libraries like mockito. Partial mocks retain the original code of a class, only stubbing the methods you specify. If you build your system largely from scratch, you most likely will not need to use them. But sometimes there is no easy way around them, for example when working with dependencies not designed for testability. Let us look at an example:

/**
 * Evil dependency we cannot change
 */
public final class CarvedInStone {

    public CarvedInStone() {
        // may do unwanted things
    }

    public int thisHasSideEffects(int i) {
        return 31337;
    }

    // many more methods
}

public class ClassUnderTest {

    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = new CarvedInStone().thisHasSideEffects(42);
        // more interesting code
        return intermediateResult * 1337;
    }
}

We want to test the computeSomethingInteresting() method of our ClassUnderTest. Unfortunately, we cannot replace CarvedInStone, because it is final and does not implement an interface containing the methods of interest. With a small refactoring and partial mocks, we can still test almost the complete class:

public class ClassUnderTest {
    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = intermediateResultsFromCarvedInStone(42);
        // more interesting code
        return intermediateResult * 1337;
    }

    protected int intermediateResultsFromCarvedInStone(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

We refactored the dependency into a protected method that we can stub out with partial mocking, so the class can be tested like this:

public class ClassUnderTestTest {
    @Test
    public void interestingComputation() throws Exception {
        ClassUnderTest cut = spy(new ClassUnderTest());
        doReturn(1234).when(cut).intermediateResultsFromCarvedInStone(42);
        assertEquals(1649858, cut.computeSomethingInteresting());
    }
}

Caveat: Do not use the usual when-thenReturn-style:

when(cut.intermediateResultsFromCarvedInStone(42)).thenReturn(1234);

with partial mocks, because then the real method will get called once!

So the only untested code is a simple delegation. Measures like this refactoring and partial mocking generally serve as a first step, not the destination.

Where to go from here

To go the whole way, we would encapsulate all unmockable dependencies in wrapper objects providing the functionality we need and inject them into our ClassUnderTest. Then we can replace our wrapper(s) easily using regular mocking.
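A sketch of what that could look like (the name StoneWrapper is made up for illustration):

// Thin, mockable wrapper around the evil dependency
public class StoneWrapper {
    public int compute(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

public class ClassUnderTest {
    private final StoneWrapper stone;

    public ClassUnderTest(StoneWrapper stone) { // constructor injection
        this.stone = stone;
    }

    public int computeSomethingInteresting() {
        return stone.compute(42) * 1337;
    }
}

In the test, a plain mock of StoneWrapper replaces the real one, so no partial mocking is needed any more.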

Doing all this can be a lot of work and/or risk, depending on the situation, so the depicted process serves as a low-risk intermediate step for getting as much important code under test as possible.

Note that the wrappers themselves stay largely untestable like our protected delegating method.

