How the most interesting IT debate is revealing our values as software developers

May 15, 2014

TDD is dead. Is TDD dead? A question that seems to divide our profession.
On the one side: developers who write their tests first and let them drive their code. They prefer the mockist approach to testing. Code should be tested in isolation, under lab-like conditions. Clean Code is their book. Practices and principles guide their thinking. An application should not be bound to frameworks and should have a hexagonal architecture. The GOOS book showed how it can be done.
On the other side: developers who focus on readability and clarity. They use their experience and gut feeling to drive their decisions. Because of past experiences they test their code the classical way. They are pragmatic. Practices and principles are used when they improve the understanding of the code. Code is there to be refactored. Just like a gardener trims bushes and a writer edits his prose, they work with their code.

What are your values?

What does this debate have to do with you?

Ask yourself:
What if you could write a proof of your program for 5 or 10 times the cost of the implementation? It would prove that your code works correctly under all possible circumstances. Would you do it?

Or would you rather improve the existing architecture, design or clarity of your code, so that you remove technical debt and are better positioned for future changes?

Or would you write new features and improve your application for the people using it?

What are your values?

History

At the beginning of my developer life in the late 80s/early 90s, I remember that the industry was focussed on one goal: code reuse. Modules, components, libraries and frameworks were introduced. Then patterns came. All of that was working towards one side of the equation: low coupling.
High cohesion was neglected in pursuit of a noble goal. But what happened? The imbalance produced layer after layer, indirection after indirection, over-separation and over-abstraction. You had to deal with dependency injection (containers), configuration, class hierarchies, interfaces, event buses, callbacks, … just to understand a hello world.
Today we have more computing power and are solving more and more complex problems. We think in higher abstractions. Many more people benefit from our skills and our work.
On the user-facing side, design focusses on simplicity and usability. Even complex relationships can be made understandable and manageable. A wise man once said: design is about intent.
The same goes for code: code is about intent. Intent should be the measure of the quality of our code. Not testability, not coupling: intent. If the code (and this includes the code comments) reveals its intent, you can fix bugs in it, improve it, change it, refactor it. Tests are your safety net to ensure you are not breaking your intent.
You might say: but this is what TDD is all about! But I think we got it all backwards. The code and its intention-revealing nature is more important than the tests. The tests support. But tests should never replace or even harm the clarity of the code.
The quality of the code is important. But most important are the people using your application.
My goal is to delight the people who use my software, and my way there is writing intention-revealing software. I am not there yet and I am learning every day, but I take step after step.

What are your values?


A hierarchy of project needs

May 12, 2014

A few weeks ago, I traded stories with a fellow software developer when he told me this little gem: A developer programs a web shop that looks pretty and runs smoothly. But as soon as you place multiple items in the shopping cart, you'll inevitably end up with an amount of XX.999999999998 euros (or whatever currency you want). When asked why the shopping cart "computes the wrong amounts", the developer answered that the amount is correct and that's just the way floating point numbers behave. He didn't see a problem with the functionality. My immediate answer: "Wow, that's very low on the Maslow pyramid". We both understood, but since then, I have tried to come up with a Maslow-like pyramid that would explain my sentence to a larger crowd. So here is how my attempt has grown so far.

Maslow’s hierarchy of needs

Abraham Maslow was an American psychologist who studied mental health and human potential. He formulated a hierarchy of human needs that is also known as Maslow's pyramid. On a side note, he also pointed out the human tendency to over-apply known tools. His pyramid has five stages (IT people would call them layers) of human needs that begin with the very basic ones (e.g. air, water) and scale up to abstract ones like morality and creativity. If something is "low on the pyramid", it tends to be taken for granted by privileged people. Most of us never think about our air supply requirements. Everything "high on the pyramid" can be seen as "expendable" in times of crisis. Morality will be forgotten as soon as we seriously lack water.

A hierarchy of project needs

My immediate answer to the story in the introduction suggested that I think an equivalent pyramid exists for the needs of a software development project. And a quick search on the internet reveals that I'm not the only one with that idea. For example, Scott Hanselman blogged about it in 2012, and Francis Shanahan came up with an extended version in 2009. Both adaptations are reasonable and stand on their own – I don't want to invalidate or change them. Instead, I publish my attempt as an addition to the discussion, if there is any.

Here is my five-layered pyramid of project needs:

Layer 0: Executable

Let's face it: if your project doesn't compile or crashes right after being started, it isn't worth much. And just because it runs on your machine doesn't make it any more useful to others. So the most basic need of a project is to be executable on the target machine. This includes some form of correctness – if your program doesn't perform the right operations, it can run indefinitely and not provide any value. Please note that the program doesn't have to be bug-free or tested to be useful. It just has to adhere to the intended use case. In our introduction story, the web shop looks pretty and runs smoothly. It certainly is "Executable".

Layer 1: Abstraction

This is where I placed the mishap in the introduction story. Every project needs some form of abstraction or separation between the internal representation of data and functionality and the external presentation to the user. This is probably trivial to most of you, but I've seen way too much code that uses external presentations (e.g. strings from the GUI) to make important decisions, and others have, too. A key rule is: "once data is formatted, it is eternally lost and unavailable to computing / data processing". The rule for the other direction is that you should never present data without proper (human-readable) formatting. The amount of work you save by not pretty-printing (formatting is just the formal term for adding syntactic sugar to make the data edible for humans) is largely offset by the amount of work your users will have to invest to decipher the output.
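To make this concrete, here is a minimal sketch (a hypothetical example, not the web shop's actual code) of keeping the internal representation of a monetary amount separate from its presentation: the amount is stored exactly as cents, formatting happens only at the presentation boundary, and the formatted string is never fed back into any computation.

import java.math.BigDecimal;
import java.text.NumberFormat;
import java.util.Locale;

// Hypothetical example: exact internal representation, formatting only at the UI boundary.
public class CartTotal {

    private long cents; // internal representation: whole cents, no floating point involved

    public void add(long priceInCents) {
        this.cents += priceInCents;
    }

    // Presentation: format once, for humans only. Never parse this string back for calculations.
    public String formatted(Locale locale) {
        BigDecimal euros = BigDecimal.valueOf(cents, 2); // shift the value by two decimal places
        return NumberFormat.getCurrencyInstance(locale).format(euros);
    }
}

With such a separation the shopping cart would display something like "19,98 €" for a German locale while the exact value stays available internally.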

Layer 2: Architecture

You can call it design, architecture or whatever you like: any reasonably large code base needs some kind of structuring that prevents it from imploding. A whole theory of patterns was invented to keep code aerated enough to prevent it from decomposing into compost. And we all know what compost code looks and smells like. Applying architecture to your code keeps it maintainable and refactorable, and in outstanding cases even modularizable. This is the layer where most projects fail in the long run. Even if there was a design at first, it gets watered down with every modification. Good principles to counter this effect are the "no broken windows" approach and the boy scout rule.

Layer 3: Verification

There is a moment in programming when you hand your code over to the next developer. Usually, this moment is called "commit" (if you don't use version control, have a good look at Scott Hanselman's lowest pyramid layer!). Oftentimes, the next developer is future you – and you have no clue what past you thought when he wrote that crap. You can't even distinguish between features and bugs. That's why your project wants verification. It's not of utmost importance whether you verify your code with unit tests, integration tests, acceptance tests, contracts or all of them together. It's important that your code is accompanied by automated guardian angels that catch the most dangerous accidental modifications and help to point out the bugs among the features. Automated verification tells future you that whatever past you wanted to build is still intact. This layer is the life insurance for functionality, just as the architecture layer was for the code.

Layer 4: Style

Every program in the world could still do its job properly even if we eliminated everything "stylish" from its codebase. Style is the most human-centered need in the pyramid. No machine or compiler has yet developed aesthetic preferences. Scott Hanselman called this layer "bragging rights", another thing computers don't care about. This is the level where most bickering among developers takes place, but it's also the level that can most easily be ignored without sacrificing critical project needs. Or, to put it bluntly: your project most likely doesn't care half as much about style as you do.

Where to go from here?

My most important message with the hierarchy of project needs is that we often focus on the higher needs and take the lower ones too much for granted. If your code lacks in the fundamental layers, the damage is much greater in terms of project value. Stylistically displeasing code will hurt the next developer, but code lacking abstraction will hurt every user of your software, as exemplified by the story in the introduction. As we developers should be the advocates of our project's needs, we have to think more in regard to its benefit than to our personal self-actualization. But the traits required to do so properly aren't even on the original Maslow's pyramid, so it's a big challenge for any of us.


Translating strings in internationalized applications

May 5, 2014

Internationalization (“i18n”) and localization (“l10n”) of software is a complex topic with many facets. One aspect of internationalization is the translation of strings in programs into different languages.

Here’s an example of how not to do it (assuming t is a translation lookup function):

StringBuilder sb = new StringBuilder(t("User "));
sb.append(user.name());
sb.append(t(" logged in "));
sb.append(minutes);
sb.append(" ");
if (minutes == 1) {
    sb.append(t("minute"));
} else {
    sb.append(t("minutes"));
}
sb.append(t(" ago."));
return sb.toString();

Translatable strings and concatenation don’t mix well, be it via StringBuilder, the plus operator or in template files like JSPs. Different languages have different sentence structures. You can’t know in advance in which order the parts must appear in the translated text. So the most basic rule is: never construct sentences programmatically from sentence fragments if they are intended for translation.

Here’s a slightly better variant:

if (minutes == 1) {
    return t("User {0} logged in {1} minute ago.", user.name(), minutes);
}
return t("User {0} logged in {1} minutes ago.", user.name(), minutes);

I18n frameworks always offer the possibility to pass arguments to the translation lookup function. This way translators can freely choose the positions of these arguments via placeholders in the translated string.

However, not all languages have pluralization rules similar to English, where you only have to handle two cases (one and zero/many). Russian and Polish, for example, use different noun forms with different numerals greater than one. There are extensive tables listing the plural rules of many languages; the rules are classified into the categories "one", "two", "few", "many" and "other". Good i18n frameworks provide translation lookup functions where you can pass the count as an additional argument. The framework then dispatches to different translation keys, depending on the count and the target language:

user.login.minutes.one=...
user.login.minutes.two=...
user.login.minutes.many=...
user.login.minutes.other=...
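
As an illustration of this dispatch, here is a minimal sketch of a count-aware lookup. It is not the API of any particular i18n framework; the class name, the key suffixes and the English-only plural rule are assumptions for the example. Real frameworks know the plural rules of each target language.

import java.text.MessageFormat;
import java.util.ResourceBundle;

// Minimal sketch of a count-aware translation lookup (hypothetical, English plural rule only).
public class Translations {

    private final ResourceBundle bundle;

    public Translations(ResourceBundle bundle) {
        this.bundle = bundle;
    }

    public String plural(String baseKey, long count, Object... args) {
        String suffix = (count == 1) ? "one" : "other"; // English: "one" vs. everything else
        String pattern = bundle.getString(baseKey + "." + suffix);
        return MessageFormat.format(pattern, args);
    }
}

A call like plural("user.login.minutes", minutes, user.name(), minutes) would then pick the right key from the list above.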

There are other traps that you have to watch out for, e.g.

  • different punctuation marks: you can't simply assume that you can convert any translated text into a label by appending ":" to it, or that you can convert any translated text into a quotation by surrounding it with quotation marks.
  • gender rules, which can be handled similarly to the pluralization rules

Conclusion

This article gave a small glimpse into the topic of internationalization, to help avoid the most basic mistakes. Check out the documentation of your internationalization framework to see what it can offer.


Look into the past, become more agile: egoless programming

April 29, 2014

Agile software development methods like extreme programming and Scrum have been around for almost 20 years now. Many think of them as “the right way” or some kind of “holy grail”. So it is no wonder that many companies try to hop on the agile train to improve quality and efficiency of their software development.

Why is it that I hear of failures in adopting/switching to agile most of the time?

In my experience there are two main reasons why implementing agile methods does not work as expected:

  1. It is the management deciding to change the development process, not the developers. After a short while many aspects – especially of rigid methods like Scrum – do not seem to fit the organisation and are therefore changed. This leads to process implementations like ScrumBut.
  2. Many developers do not have the mindset for the interactive, open and collaborative work style required by agile methods. Too often developers try to secure their jobs by isolating themselves and their solutions. Or they are doing things the way they have done them for many years, not willing to learn, change and improve. Communication is hard, especially for programming nerds…

What can be done about it?

I would suggest trying to slowly change the culture in the company based on a concept from the 1970s: egoless programming. In essence you have to help developers become more open and collaborative by internalising the Ten Commandments of Egoless Programming:

  1. Understand and accept that you will make mistakes.
  2. You are not your code.
  3. No matter how much “karate” you know, someone else will always know more.
  4. Don’t rewrite code without consultation.
  5. Treat people who know less than you with respect, deference, and patience.
  6. The only constant in the world is change.
  7. The only true authority stems from knowledge, not from position.
  8. Fight for what you believe, but gracefully accept defeat.
  9. Don’t be “the guy in the room.”
  10. Critique code instead of people—be kind to the coder, not to the code.

There is a PDF version of the commandments with some amplifying sentences from Builder.com, published on TechRepublic in 2002. If you manage to create a development culture based on these values, you are more than half-way down the agile road and can try to implement one of the popular agile methods – or your own!


Developer power tools: Big O = big impact

April 21, 2014

Scalability and performance are often used interchangeably, but they are very different things. Performance describes the resource consumption of a program for a specific task: it looks at one part of the program, say the user activating an action, with fixed input and circumstances.
Scalability describes how the resource consumption of a program changes when additional resources are added or the input size increases. So a program can have good performance (it runs fast) but behave badly when the load increases (bad scalability). Take bubble sort, for example. Since the algorithm has low overhead, it runs fast with small arrays, but when the arrays get bigger, the runtime gets much worse. It doesn't scale.
There are many ways to improve scalability; here we look at one in particular: reducing algorithmic complexity. The time complexity of an algorithm is described by a function T(n) that captures how the running time behaves when the input size n changes. Say you have an algorithm that takes n objects and needs 4 steps per object plus an overhead of 2. This would be:

T(n) = 4n + 2

Often the exact numbers are not necessary, you just want to know the order of magnitude. This is where the big O notation comes into play. It describes the asymptotic upper bound, or simply put: the behaviour of the fastest growing part. In the above case this would be:

T(n) = 4n + 2 = O(n)

We call such an algorithm linear because the resource consumption grows linearly with the input size. Why does this matter? Let's look at an example.

Say we have a list of numbers and want to know the highest. How long does this take? A straightforward implementation would look like this:

int max = Integer.MIN_VALUE;
for (int number : list) {
  if (number > max) {
    max = number;
  }
}

So if the size of the number list is n, we need n compare operations, and our algorithm is linear. What if we have two lists of length n? We just run our algorithm twice and compare both maximums, so it would be

T(n) = 2n + 1 = O(n)

which is also linear.
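
Here is a quick sketch of the two-list case (the class and method names are made up for illustration):

import java.util.List;

class MaxFinder {
    // One pass over a list of size n: n comparisons, so O(n).
    static int max(List<Integer> numbers) {
        int max = Integer.MIN_VALUE;
        for (int number : numbers) {
            if (number > max) {
                max = number;
            }
        }
        return max;
    }

    // Two lists of size n: two passes plus one final comparison, T(n) = 2n + 1 = O(n).
    static int maxOfBoth(List<Integer> first, List<Integer> second) {
        return Math.max(max(first), max(second));
    }
}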
Why does all this matter? If our algorithm runs quadratically, i.e. O(n^2), and our input size is n = 1000, we need on the order of 1 million operations. If it is linear, it only needs on the order of 1000 operations, much less. Reducing the complexity of an algorithm can be business critical, or the difference between getting an instant result and losing a customer because he waited too long.
Time complexity isn't the only complexity you can measure; other resources like space can also be described with big O. I hope this clears up some questions, and if you have further ones, please feel free to leave a comment.


Gigapixel images in pure Java

April 14, 2014

Not long ago, I read this nice little blog entry about the basic properties and usages of Java arrays. It’s a long time since I last used an array in Java myself, because my programming style evolved to heavily leverage the power of collections (and Iterables in particular, the Java 5 poor man’s substitute for Java 8 Streams). But I immediately noticed that one important fact was missing from the array blog entry:

The maximum length of an array in Java is Integer.MAX_VALUE or (2^31)-1, aka 2,147,483,647

This is indirectly specified in the Java language specification, chapter 10.4 Array Access:

Arrays must be indexed by int values.

This little fact crossed my path when I was writing a little tool in pure Java that operated on large numbers of large images, combining them into one gigantic image. The customer used the tool to create images with a size of about 100 MB that took several hours to print because the decompression tax kicked in. One day, he reported a strange bug:

[Screenshot: stack trace showing a negative array size error]

"Oh, a negative array size, what a strange bug to appear in a tested application" was my first thought. Only after reading the stack trace more carefully did it dawn on me: the array size wasn't negative, it was just bigger than Integer.MAX_VALUE and got wrapped around into the negative numbers. And sure enough, 72350 times 44914 is a respectable 3,249,527,900 pixels, more than 1.5 times as much as an array in Java can hold. This image was right in the multi-gigapixel range where all kinds of technical obstacles appear. The maximum length of an array in Java had caught up with me.

Trying to stay pure

One cornerstone of the tool was being lightweight. It shouldn't carry around unnecessary luggage and weighed around 200 kB when the bug appeared – small enough to just copy it into the data directories instead of pulling the directories into the program. But when I examined the root cause of the problem at hand, I found the frustrating truth that Java's built-in imaging library also rests on one cornerstone: all data is stored in a single array. And this array can only hold around 2G entries of data.

My approach was to "partition" the full image into smaller parts that each store only a fraction of the overall pixels. To hide this fact from the ImageIO that ultimately writes all the data into one file, my PartitionedImage implements RenderedImage and has to translate every call into a series of appropriate sub-calls to the partition images. Before we look at some code, let me show you the limitations of this approach:

Greedy JPEGs, credulous PNGs

In the RenderedImage interface, there are two methods that can be used to obtain pixel data:

  • Raster getData(): Returns the image as one large tile (for tile based images this will require fetching the whole image and copying the image data over).
  • Raster getData(Rectangle rect): Computes and returns an arbitrary region of the RenderedImage.

If an image writer calls the first method, my code is screwed. There is no mentally sane way to construct a Raster instance without colliding with the array length limitation. Unfortunately, the JPEG writer does just that: it gets greedy and demands all the pixels at once. I found it easier to avoid the JPEG format and trade disk space for pragmatism.

The PNG writer uses the getData(Rectangle) method to obtain the pixel data. It requests the whole image line by line: the region always has the full width of the image, but is only one pixel in height. So I guess my tool will write a lot of large PNG images in the future.

Our partitions should adapt to this behaviour by always retaining the full width of the original image and only allowing enough height that the number of pixels per partition doesn't exceed Integer.MAX_VALUE.

The remaining trick is to implement an AdjustingRaster that knows the original Raster of the partition and translates the row asked for by the PNG writer into the corresponding row in the partition image. The AdjustingRaster needs to know about the vertical offset. The only pitfall here is that the vertical offset has to be zero while the AdjustingRaster is being written to, and needs to be set once it switches into read mode.
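
Here is a rough sketch of the partition geometry (class and method names are made up, not the tool's actual code): each partition keeps the full image width, its height is bounded so that its pixel count fits into one array, and a global row is translated into a partition index plus a local row – essentially the offset translation the AdjustingRaster performs.

// Hypothetical helper illustrating the partitioning scheme, not the original tool's code.
class PartitionLayout {

    private final int rowsPerPartition;

    PartitionLayout(int imageWidth) {
        // Full width per partition; bound the height so that width * height <= Integer.MAX_VALUE.
        this.rowsPerPartition = Integer.MAX_VALUE / imageWidth;
    }

    int partitionIndex(int globalRow) {
        return globalRow / rowsPerPartition;
    }

    // The row inside the partition: the translation the AdjustingRaster has to perform.
    int localRow(int globalRow) {
        return globalRow % rowsPerPartition;
    }
}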

Slow, but working

By composing a gigapixel image from several partitions (sometimes called tiles) you can circumnavigate the frustrating limitation of Java's arrays (I mean, it's 2014 and 64-bit systems are fairly prevalent now. There is no need to stick to 32-bit limits without a good reason). The result isn't overwhelmingly fast, but I suspect that's caused by the PNG image writer more than by our indirections. And we shouldn't forget that it's a lot of pixels to write, after all.

Conclusion

Sometimes when you explore bigger and bigger use cases, you hit an arbitrary limitation. And some are fundamental ones. In our case, we reached the limit of Java arrays and got stuck because the image library in Java never heard of real gigapixel imaging and coupled itself hard to the array limit. By introducing another indirection layer on top of the image library implementation and using composition to emulate a bigger image than we could actually create, we can convince unsuspecting image writers to save all those pixels for us and even manipulate the image beforehand.

What was your approach for gigapixel image processing? How did it work out in the long run? Share your story in the comments, please.


Dart and TypeScript as JavaScript alternatives

April 7, 2014

JavaScript was designed at Netscape by Brendan Eich within a couple of weeks as a simple scripting language for the web browser. It's an interesting mixture of Self's prototype-based object model, first-class functions inspired by LISP, a C/AWK-like syntax and a misleading name imposed by marketing.

Unfortunately, the haste in which JavaScript was designed by a single person shows in many places. Lots of features are inconsistent and violate the principle of least surprise. Just skim through the JavaScript Garden to get an idea.

Another aspect casting a poor light on JavaScript is the bad design of the browser DOM API, including incompatibilities between different browser implementations.

Douglas Crockford redeemed the reputation of JavaScript somewhat by writing articles like "JavaScript: The World's Most Misunderstood Programming Language", the (relatively thin) book "JavaScript: The Good Parts" and discovering the JSON format. But even his book consists for the most part of advice on how to avoid the bad and the ugly parts.

However, JavaScript is ubiquitous. It is the world’s most widely deployed programming language, it’s the only programming language option available in all browsers on all platforms. The browser DOM API incompatibilities were ironed out by libraries like jQuery. And thanks to the JavaScript engine performance race started by Google some time ago with their V8 engine, there are now implementations available with decent performance - at least for a scripting language.

Some people even started to like JavaScript and are writing server-side code in it, for example the node.js community. People write office suites, emulators and 3D games in JavaScript. Atwood’s Law seems to be confirmed: “Any application that can be written in JavaScript, will eventually be written in JavaScript.”

Trans-compiling to JavaScript is a huge thing. There are countless transpilers of existing or new programming languages to JavaScript. One of these, CoffeeScript, is a syntactic sugar mixture of Ruby and Python on top of JavaScript semantics, and has gained some name recognition and adoption, at least in the Rails community.

But there are two other JavaScript alternatives, backed by large companies, which also happen to be browser manufacturers: Dart by Google and TypeScript by Microsoft. Both have recently reached version 1.0 (Dart even 1.2), and I will have a look at them in this blog post.

Large-scale application development and types

Scripting languages with dynamic type systems are neat and flexible for small and medium sized projects, but there is evidence that organizations with large code bases and large teams prefer at least some amount of static typing. For example, Google developed the Google Web Toolkit, which compiled Java to JavaScript, then the Closure compiler, which adds type information and checks to JavaScript via special comments, and now Dart. Facebook recently announced their Hack language, which adds a static type system to PHP, and Microsoft develops TypeScript, a static type add-on to JavaScript.

The reasoning is that additional type information can help finding bugs earlier, improve tool support, e.g. auto-completion in IDEs and refactoring capabilities such as safe, project-wide renaming of identifiers. Types can also help VMs with performance optimization.

TypeScript

This weekend the release of TypeScript 1.0 was announced by Anders Hejlsberg, Microsoft's language designer behind C#, also known as the creator of the Turbo Pascal compiler and Delphi.

TypeScript is not a completely new language. It’s a superset of JavaScript that mainly adds optional type information to the language via Pascal-like colon notation. Every JavaScript program is also a valid TypeScript program.

The TypeScript compiler tsc takes .ts files and translates them into .js files. The output code does not change a lot and is almost the same code that you would write by hand in JavaScript, but with erased type annotations. It does not add any runtime overhead.

The type system is heavily based on type inference. The compiler tries to infer as much type information as possible by following the flow of types through the code.

TypeScript has interfaces that are very similar to interfaces in Go: A type does not have to declare which interfaces it implements. Interfaces are satisfied implicitly if a type has all the required methods and properties – in short, TypeScript has a structural type system.

Type definitions for existing APIs and libraries such as the browser DOM API, jQuery, AngularJS, Underscore.js, etc. can be added via .d.ts files.
These definition files are very similar to C header files and contain type signatures of the API’s functions. There’s a community maintained repository of .d.ts files called Definitely Typed for almost all popular JavaScript libraries.

TypeScript also enhances JavaScript with functionality that is planned for ECMAScript 6, such as classes, inheritance, modules and shorthand lambda expressions. The syntax is the same as the proposed ES6 syntax, and the generated code follows the usual JavaScript patterns.

TypeScript is an open source project under the Apache License 2.0. The project even accepts contributions and pull requests (yes, Microsoft). Microsoft has integrated TypeScript support into Visual Studio 2013, but there is support for other IDEs and editors such as JetBrains' IDEA or Sublime Text.

Dart

Dart is a JavaScript alternative developed by Google. Two of the main brains behind Dart are Lars Bak and Gilad Bracha. In the early 90s they worked in the Self VM group at Sun. Then they left Sun for LongView Technologies (Animorphic Systems), a company that developed Strongtalk, a statically typed variant of Smalltalk, and later the now-famous HotSpot VM for Java. Sun bought LongView Technologies and made HotSpot Java’s default VM. Bracha co-authored parts of the Java specification, and designed an object-oriented language in the tradition of Self and Smalltalk called Newspeak. At Google, Lars Bak was head developer of the V8 JavaScript engine team.

Unlike TypeScript, Dart is not a JavaScript superset, but a language of its own. It's a curly-braces-and-semicolons language that aims for familiarity. The object model is very similar to Java: it has classes, inheritance, abstract classes and methods, and an @override annotation. But it also has the usual grab bag of features that "more sugar than Java but similar" languages like C#, Groovy or JetBrains' Kotlin have:

Lambdas (via the fat arrow =>), mixins, operator overloading, properties (uniform access for getters and setters), string interpolation, multi-line strings (in triple quotes), collection literals, map access via [], default values for arguments, optional arguments.

Like TypeScript, Dart allows optional type annotations. Wrong type annotations do not stop Dart programs from executing, but they produce warnings. It has a simple notion of generics, which are optional as well.

Everything in Dart is an object and every variable can be nullable. There are no visibility modifiers like public or private: identifiers starting with an underscore are private. The “truthiness” rules are simple compared to JavaScript: all values except true are false.

Dart comes with batteries included: it has a standard library offering collections, APIs for asynchronous programming (event streams, futures), a sane HTML/DOM API, removing the need for jQuery, unit testing and support for interoperating with JavaScript. A port of Angular.js to Dart exists as well and is called AngularDart.

Dart supports a CSP-like concurrency model based on isolates – independent worker threads that don't share memory and can communicate via SendPorts and ReceivePorts.

However, the Dart language is only one half of the Dart project. The other important half is the Dart VM. Dart can be compiled to JavaScript for compatibility with every browser, but it offers enhanced performance compared to JavaScript when the code is directly executed on the Dart VM.

Dart is an open source project under BSD license. Google provides an Eclipse based IDE for Dart called the “Dart Editor” and Dartium, a special build of the Chromium browser that includes the Dart VM.

Conclusion

TypeScript follows a less radical approach than Dart. It’s a typed superset of JavaScript and existing JavaScript projects can be converted to TypeScript simply by renaming the source files from *.js to *.ts. Type annotations can be added gradually. It would even be simple to switch back from TypeScript to JavaScript, because the generated JavaScript code is extremely close to the original source code.

Dart is a more ambitious project. It comes with a new VM and offers performance improvements. It will be interesting to see if Google is going to ship Chrome with the Dart VM one day.


Centralized project documentation

March 31, 2014

Project documentation is one thing developers do not like to think about, but it is necessary for others to use the software. There are several approaches to project documentation: it is either stored in the source code repository and/or on some kind of project web page, e.g. in a wiki. It is often hard for different groups of people to find the documentation they need and to maintain it. I want to show an approach that stores and maintains the documentation in one place and integrates it into several other locations.

The project documentation (not the API documentation generated by tools like javadoc or Doxygen) should be version controlled and kept close to the source code. So a directory in the project source tree seems to be a good place. That way the developers or documenters can keep it up-to-date with the current source code version. For others it may be hard to access docs hidden somewhere in the source tree, so we need to integrate them into other tools to make them easily accessible to all the people who need them.

Documentation format

We start with markdown as the documentation format because it is easy to read and write using a normal text editor. It can be converted to HTML, PDF and other common document formats. The markdown files reside in a directory next to the source tree, named documentation for example. With pegdown there is a nice Java library that allows you to integrate markdown support into your own projects.
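
As a minimal sketch of that integration (assuming pegdown 1.x on the classpath and a file documentation/README.md; the class name is made up, so check pegdown's documentation for details), converting a markdown file to HTML takes only a few lines:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.pegdown.PegDownProcessor;

// Minimal sketch: read a markdown file from the documentation directory and render it to HTML.
public class DocRenderer {

    public static void main(String[] args) throws Exception {
        byte[] raw = Files.readAllBytes(Paths.get("documentation/README.md"));
        String markdown = new String(raw, StandardCharsets.UTF_8);
        String html = new PegDownProcessor().markdownToHtml(markdown);
        System.out.println(html);
    }
}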

Integration in your wiki

Often you want to have your project documentation available on a web page, usually a wiki. With Confluence you can directly embed markdown files from a URL into your project page using a plugin. This allows you to enrich the general project documentation in the source tree with your organisation-specific documentation. The documentation becomes more widely accessible and searchable. The link can be served by a source code browser like gitweb: http://myrepo/git/?p=MyProject.git;a=blob_plain;f=README.md;hb=HEAD and is always up-to-date.

Integration in Jenkins

Jenkins has a plugin to use markdown as its description format. Combined with the project description setter plugin, you can use a file from your workspace to display the job description. Short usage instructions or other notes and links can be maintained in the source tree and show up on the Jenkins job page.

Integration in GitHub or GitLab

Project hosting platforms like GitHub, or your own repository manager, e.g. GitLab, can also display markdown-formatted content from your source tree as the project description, yielding a basic project page more or less for free.

Conclusion

Using markdown as the basis for your project documentation is a very flexible approach. It stays usable without any tool support and can be integrated and used in various ways with a plethora of tools and converters. Especially if you plan to open source a project, it should contain useful documentation in such a widely understood format, distributed with your source code.


Should I test this?

March 24, 2014

Writing software is hard, writing correct software is even harder. So everything that helps you write better or more correct software should be used to your advantage. But does every test help? And does every piece of code need to be tested automatically? How do I decide what to test and how?
Given a typical web CRUD application, take a look at the following piece of functionality:
We have a model class Element which has a Type type:

class Element {
  ...
  Type type
  ...
}

The view contains a select tag which lets you choose a type:

...
<g:select name="filterByTypeId" from="${types}" value="${filterByType?.id}">
...

And finally in the controller we filter the list of shown elements via the selected type:

...
Type filterByType = Type.get(params['filterByTypeId'])
return [elements: filterByType ? Element.findAllByType(filterByType) : Element.list(), types: Type.list(), filterByType: filterByType]
...

Now ask yourself: would you write an automated test for this? A functional / acceptance test or some unit / integration tests? Would you really test this automatically or just by hand? And how do you decide?

Dogma

According to TDD you should test everything; no code exists without a test (written first). If you really live by TDD, the choice is already made: you test this code. But is this pragmatic? Efficient? Productive? And what about the aspects you forgot to test? The order of the types, for example. The user wanted them listed lexicographically, or by priority, or numbered. What if this part changes and your test is so tightly coupled that you need to change it, too? There are some TDD enthusiasts out there, but if you are more pragmatic, there are other criteria to help you decide.

Cost

Look at the code in question and ask: how much effort is it to create the test(s)? How much to run them? If the feedback cycle is too long, you lose track. I need a test for the controller, which is the easy part. Then I need to test that the view passes the correct parameter and accepts and shows the correct list.
I could also write an acceptance test, but this seems like a big gun for a small bird. In our case it heavily depends on the framework how easy or difficult and costly it is to write tests for our filter. What do you have to mock or simulate? You also have to take the hidden costs into account: how much does it cost to maintain this test? When the requirement changes? When there are more filter criteria? Or when an element can have more than one type?

Value

Another question you can ask: what is the value for the customer? How much does he need it to work? What is the cost of an error? What happens when the code in question does not work? The value for the customer is not only determined by the functionality it provides. Software can be seen as giving your users capabilities, as enabling them. A capability is implemented by two things: the implementation (your functionality) and the affordance (the UI). The value is determined by both parts, so you can hardly decide on the value of a functionality alone. What if you need to change the UI (in our case the select tag) to increase the value? How does this affect your tests? Does the user reach his goal if the functionality part is broken? What if the code is correct but slow? Or the UI isn't visible on your user's screen?

Personal / Team profile

You could decide what and whether to test by looking at your past: your personal or team mistakes, typical problems and bugs you made, habits you have. You could test more when the (business or technical) domain or the underlying technology is new to you. You could write only a few tests when you know the area you work in, but more when it is unknown and you need to explore it. You could write more tests if you work in a dynamic language and fewer in a static language. Or vice versa.

Area / Type of code

You can write tests for every bug you find to prevent regressions. You could write tests only for algorithms or data structures, for certain core parts, or for the interaction with other systems. Or only for (public) interfaces. The area or type of code can help you decide whether to test or not.

Visibility

You could also look at how easy it is to spot a bug when you manually invoke the code. Do you or your user see the bug immediately? Is it hidden? In our case you should easily see when the list is not filtered or is filtered by the wrong criteria. But what if it is just a rounding error, or an error where cause and effect are separated by time or location?

Conclusion

Do you have or use additional criteria? How do you decide? I have to admit that I didn't and wouldn't test the above code, because I can easily spot problems in it and try out by hand whether it works (visibility). If the code grows more complex and I cannot easily see the problem (again, visibility), or the value (or cost of an error) for the customer is high, I would write one.


How to avoid premature optimization

March 17, 2014

A common quote attributed to Donald E. Knuth of TeX fame is "premature optimization is the root of all evil". While this might sound a bit harsh, it holds a lot of truth.

Performance as an asset

If you consider software performance as an asset, you can determine its characteristics and derive your decisions about whether to work on it from them. For example, you will discover that while good performance is paramount, there is a certain threshold beyond which further optimizations are worthless from the asset's point of view. If you happen to develop a game, you only need to draw as many frames as the monitor can handle. If you process sensor data in real time, there is no need to make the pause between data packets any longer than it already is, because computers don't grow tired.
If you treat performance as an asset, you can also assign a worth to every optimization you want to make and contrast it with the cost of the work you expect to invest. This divides the possible optimizations into a group of lucrative investments and a (probably larger) group of unprofitable ones.

Simple rules

Treating performance as an asset gives you the mental tools to make profound decisions about when and what to optimize. But there are also three simple rules you can apply if you don’t want to write a business plan every time you think “if I just change this line, the code will run much smoother”.

First rule: Don’t

The first rule of performance optimization with a tendency to avoid premature optimization is to simply not care. You ask yourself whether a LinkedList is faster than an ArrayList for a given use case? The short (and ignorant) answer is: both will be fast enough. Is it better to explicitly set all references to null after usage? Why bother when the garbage collector won't slow you down anyway. Following this rule, you deliberately act dumber than you are, with the goal of delaying action.

There is a disclaimer, though. There are two different kinds of performance optimization: the first one was referenced in the examples above and deals with actual, but rather local code changes. The second and more important type of performance consideration deals with complexity theory (the one with the big O notation) and isn't measured in milliseconds, but in scalability. You don't want to be ignorant of the latter type, because an exponential or even factorial algorithm will always ruin your runtime behaviour, regardless of any optimization of the former type. You can be ignorant of "real performance tuning", but you should always be aware of the complexity category your algorithm lives in.

Second rule: Not Yet

There will be a moment when you clearly see an opportunity to improve the runtime performance of your code with just this very small (and very clever) modification. This is when you are ready to break the first rule. Now you should adhere to the second rule: If the cost is as marginal as you say and the gain is profound, go for it – but not now. Performance tuning isn’t a time limited sale that you are only offered right now or never. You can make the same change and reap the same advantages next week or next month. You doubt that you will remember the details? Write an issue or insert some code comment about it. You probably have another task on your todo list that is more important than speeding up the functionality at hand.

The goal of the first rule was to delay action, and that’s the goal of the second rule, too. You’ve probably guessed it already: you avoid premature optimization best by not optimizing at all or at least not optimizing too early. You need to be sure about the value of an optimization before you implement it. As a result of the second rule, your code will be enriched with possibilities for performance improvement. And if you actually need to improve your performance, you can orient yourself along these possibilities or find them then. You want to invest in the tuning business as late as possible, for it is highly speculative.

Third rule: Measure

If you cannot hold on to the first two rules, for example when a real performance issue is reported, you need to take action. But as you are going to invest work into performance optimization, you can just as well invest it efficiently. In most applications, a 90/10 rule is in effect, stating that 90 percent of the runtime is spent in just 10 percent of the code. If you don't know exactly where your performance bottleneck is, find it using a profiler and remember the 90/10 rule. It's neither efficient nor effective to improve the 90 percent of your code that doesn't matter in regard to performance.

If you have identified the piece of code that most likely slows your application down, you should remember the second part of the third rule: never make performance optimizations without a meaningful benchmark that you can run beforehand and afterwards. All too often, the clever performance trick you remember from long ago is actually hurting your performance now. A meaningful benchmark will tell you if you did good. To make a benchmark "meaningful", you really need to read up on benchmarking for your target platform. In Java, for example, you need to know about proper warm-up of the VM and perform enough cycles to not include one-time effects in your numbers. If you've written such a benchmark, keep it! Try to fully automate it and let it be the cornerstone of your growing performance test suite. There might come the day when this test/benchmark tells you that your formerly clever optimization is now obsolete due to internal platform changes.
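
As a minimal illustration, here is a hand-rolled benchmark skeleton in Java (a sketch only – the workload and the numbers are made up, and for serious measurements you should prefer a proper harness like JMH): it warms up the code path first and then reports the best of several timed runs.

// Hand-rolled benchmark sketch: warm up first, then measure several runs and report the best.
public class MaxBenchmark {

    static int workload() {
        int max = Integer.MIN_VALUE;
        for (int i = 0; i < 1_000_000; i++) {
            int value = (i * 31) ^ (i >>> 3); // some deterministic busywork
            if (value > max) {
                max = value;
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // Warm-up: let the JIT compile the hot path before we start timing.
        for (int i = 0; i < 20; i++) {
            workload();
        }
        long best = Long.MAX_VALUE;
        for (int run = 0; run < 10; run++) {
            long start = System.nanoTime();
            workload();
            best = Math.min(best, System.nanoTime() - start);
        }
        System.out.printf("best run: %.3f ms%n", best / 1_000_000.0);
    }
}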

Conclusion

If you follow these three simple rules, you won’t automatically write high performance software. But you will spend your valuable time fixing real performance issues instead of tinkering with your code to no effect. You definitely won’t optimize prematurely and steer clear of this “root of all evil”.

