Introducing ng-quiz – a JavaScript / Angular quiz: #1 Letter Crush

February 9, 2014

Learning a new language or framework can be quite daunting. It helps me to have small tasks which motivate me to explore my chosen field further in a fun and goal-driven way. So, in the spirit of the Ruby Quiz, I introduce ng-quiz, a quiz for learning AngularJS. Every month I post a small task which should be solvable in a short time and helps you to learn and deepen your AngularJS and JavaScript skills. It will touch different areas and explore other JavaScript libraries. You can post your solutions (as a link to a jsfiddle or github repository) as a comment. Two weeks later I will update the post with the solutions.

ng-quiz #1: Letter Crush

In the first quiz you will implement a simple game named ‘Letter Crush’. In ‘Letter Crush’ a player tries to find letters forming a word in a random letter ‘soup’. The board consists of n×n square cells, each containing a random letter.

A	G	H	M	I
T	C	O	U	E
F	E	P	R	Q
K	D	S	A	N
V	L	F	T	Y

The player can enter a word. This word must be in an English dictionary and needs to be found on the board without using any cell twice. If both criteria are satisfied, the word is removed from the board, the letters above the empty cells fall down and the topmost empty cells are filled with new random letters.

Example

A	G	H	M	I
T	C	O	U	E
F	E	P	R	Q
K	D	S	A	N
V	L	F	T	Y

If the player enters the word ‘test’ (case insensitive), the letters are removed from the board since it is a real English word.

A	G	H	M	I
 	C	O	U	E
F	 	P	R	Q
K	D	 	A	N
V	L	F	 	Y

Now the letters above fall down.

 	 	 	 	I
A	G	H	M	E
F	C	O	U	Q
K	D	P	R	N
V	L	F	A	Y

The topmost empty cells are filled again.

I	W	S	T	I
A	G	H	M	E
F	C	O	U	Q
K	D	P	R	N
V	L	F	A	Y

Now the next turn begins and the player could enter RUN, ACORN, MOURN, THORN, GO or other words.
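For illustration, the gravity and refill step could look like the following sketch (my own take, assuming the board is stored as an array of rows, removed cells are set to null, and a randomLetter() generator like the one sketched in the next section is available):

// Let remaining letters fall down column by column and refill the top
// with new random letters. `board` is an array of rows, empty cells are null.
function collapseAndRefill(board, randomLetter) {
  var size = board.length;
  for (var col = 0; col < size; col++) {
    var letters = [];
    // collect the surviving letters of this column, top to bottom
    for (var row = 0; row < size; row++) {
      if (board[row][col] !== null) {
        letters.push(board[row][col]);
      }
    }
    var missing = size - letters.length;
    // new random letters go to the top, the surviving letters move below them
    for (row = 0; row < size; row++) {
      board[row][col] = row < missing ? randomLetter() : letters[row - missing];
    }
  }
  return board;
}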

Letter Generation

To avoid filling the board with garbage letters, new letters need to be generated according to the relative frequency of letters in the English language (in percent):

A 8.167   B 1.492   C 2.782   D 4.253   E 12.702   F 2.228   G 2.015
H 6.094   I 6.966   J 0.153   K 0.772   L 4.025    M 2.406   N 6.749
O 7.507   P 1.929   Q 0.095   R 5.987   S 6.327    T 9.056   U 2.758
V 0.978   W 2.360   X 0.150   Y 1.974   Z 0.074
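One way to honor these frequencies is to draw from the cumulative distribution, as in this sketch (the hard-coded table and the function name randomLetter are my own choices):

// Relative letter frequencies in percent (see the table above).
var FREQUENCIES = {
  A: 8.167, B: 1.492, C: 2.782, D: 4.253, E: 12.702, F: 2.228, G: 2.015,
  H: 6.094, I: 6.966, J: 0.153, K: 0.772, L: 4.025,  M: 2.406, N: 6.749,
  O: 7.507, P: 1.929, Q: 0.095, R: 5.987, S: 6.327,  T: 9.056, U: 2.758,
  V: 0.978, W: 2.360, X: 0.150, Y: 1.974, Z: 0.074
};

// Draw a letter with a probability proportional to its frequency.
function randomLetter() {
  var total = 0;
  for (var letter in FREQUENCIES) { total += FREQUENCIES[letter]; }
  var pick = Math.random() * total;
  for (letter in FREQUENCIES) {
    pick -= FREQUENCIES[letter];
    if (pick <= 0) { return letter; }
  }
  return 'E'; // fallback for rounding issues, practically unreachable
}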

Words

Entering a word could be done via the mouse or the keyboard. To form a valid word, consecutive letters need to be directly or diagonally adjacent. Every cell can be used only once.
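One possible way to validate a selection, assuming the entered word arrives as a list of {row, col} cell coordinates (this representation is an assumption, not part of the task):

// Two cells are adjacent if they differ by at most one row and one column
// and are not the same cell.
function adjacent(a, b) {
  var rowDiff = Math.abs(a.row - b.row);
  var colDiff = Math.abs(a.col - b.col);
  return rowDiff <= 1 && colDiff <= 1 && (rowDiff + colDiff) > 0;
}

// A selection is valid if consecutive cells are adjacent and no cell is used twice.
function isValidPath(cells) {
  var seen = {};
  for (var i = 0; i < cells.length; i++) {
    var key = cells[i].row + ',' + cells[i].col;
    if (seen[key]) { return false; }
    seen[key] = true;
    if (i > 0 && !adjacent(cells[i - 1], cells[i])) { return false; }
  }
  return cells.length > 0;
}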

Score

The player starts with a score of 0. A wrong input reduces the score by 1. A valid word is scored according to its length using the following Fibonacci sequence:

Length 1 2 3 4 5 6 7
Score 1 1 2 3 5 8 13
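A small scoring helper based on the table above could look like this (just a sketch; the dictionary check and the -1 penalty for wrong input are left to the caller):

// Score a valid word of the given length with the Fibonacci sequence 1, 1, 2, 3, 5, ...
function scoreFor(length) {
  var previous = 1;
  var current = 1;
  for (var i = 2; i < length; i++) {
    var next = previous + current;
    previous = current;
    current = next;
  }
  return current;
}

scoreFor(1); // => 1
scoreFor(4); // => 3
scoreFor(7); // => 13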

Optional fun: special fields

The above version of ‘Letter Crush’ is already playable, but if the task is too small for you or you want to explore it further, you can implement some special fields.

Duplication – lower case letter

A duplication field consists of a lower case letter and doubles the score of the word. It has a frequency of 5 percent.

Wildcard – ?

A wild card can be used as any letter and is represented as a question mark. It should be generated with a frequency of 0.5 percent.

Extension – +

An extension field just lengthens a word if it is adjacent to the last letter. A plus sign marks this field; it is generated with a relative frequency of 1 percent.

Bomb – *

An asterisk marks a bomb; it is generated with a frequency of 0.25 percent. A bomb detonates when the chosen word is adjacent to it. Every letter which is not part of the word but adjacent to the bomb is also removed.

Example:

B	M	A	S
O	R	B	*
N	B	T	B
E	U	O	L

The player chooses the word ‘bomb’ and the bomb detonates. After the removal, but before the refill, the board looks like this:

 	 	 	 
 	R	 	 
N	B	 	 
E	U	O	L

P.S. If you are new to JavaScript and coming from a Java background, you might find our JavaScript for Java developers post helpful.


Testing C++ code with OpenCV dependencies

February 3, 2014

The story:

Pushing for more quality and stability, we integrate googletest into our existing projects or extend test coverage. One such case was the creation of tests to document and verify a bugfix. They called a single function and checked the fields of the returned cv::Scalar.

TEST(ScalarTest, SingleValue) {
  ...
  cv::Scalar actual = target.compute();
  ASSERT_DOUBLE_EQ(90, actual[0]);
  ASSERT_DOUBLE_EQ(0, actual[1]);
  ASSERT_DOUBLE_EQ(0, actual[2]);
  ASSERT_DOUBLE_EQ(0, actual[3]);
}

Because this was the first test using OpenCV, the CMakeLists.txt also had to be modified:

target_link_libraries(
  ...
  ${OpenCV_LIBS}
  ...
)

Unfortunately, the test didn’t run through: it ended either with a core dump or a segmentation fault. Analysis of the called function showed that it used no pointers and all variables were referenced while still in scope. What did gdb say about the segmentation fault?

(gdb) bt
#0  0x00007ffff426bd25 in raise () from /lib64/libc.so.6
#1  0x00007ffff426d1a8 in abort () from /lib64/libc.so.6
#2  0x00007ffff42a9fbb in __libc_message () from /lib64/libc.so.6
#3  0x00007ffff42afb56 in malloc_printerr () from /lib64/libc.so.6
#4  0x00007ffff54d5135 in void std::_Destroy_aux<false>::__destroy<testing::internal::String*>(testing::internal::String*, testing::internal::String*) () from /usr/lib64/libopencv_ts.so.2.4
#5  0x00007ffff54d5168 in std::vector<testing::internal::String, std::allocator<testing::internal::String> >::~vector() ()
from /usr/lib64/libopencv_ts.so.2.4
#6  0x00007ffff426ec4f in __cxa_finalize () from /lib64/libc.so.6
#7  0x00007ffff54a6a33 in ?? () from /usr/lib64/libopencv_ts.so.2.4
#8  0x00007fffffffe110 in ?? ()
#9  0x00007ffff7de9ddf in _dl_fini () from /lib64/ld-linux-x86-64.so.2
Backtrace stopped: frame did not save the PC

Apparently my test had problems at its very end, at the time of object destruction. So I started to eliminate statements until the problem vanished or no statements were left. The result:

#include "gtest/gtest.h"
TEST(DemoTest, FailsBadly) {
  ASSERT_EQ(1, 0);
}

And it still crashed! So the code under test wasn’t the culprit. Another change introduced previously was the addition of the OpenCV libs to the linker call. An incompatibility between OpenCV and googletest? A quick search turned up posts from users experiencing the same problems, eventually leading to the entries in OpenCV's bug tracker: http://code.opencv.org/issues/1608 and http://code.opencv.org/issues/3225. The opencv_ts library, which appeared in the stack trace, exports symbols that conflict with the googletest version we link against. Since we didn’t need the opencv_ts library, the solution was to clean up our linker dependencies:

Before:

find_package(OpenCV)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab ../gtest-1.7.0/libgtest.a -lpthread -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab

After:


find_package(OpenCV REQUIRED core highgui)

 

/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_highgui -lopencv_core ../gtest-1.7.0/libgtest.a -lpthread

Lessons learned:

Know what you really want to depend on and explicitly name it. Ignorance or trust in build tools’ black magic is a recipe for blog posts.


Integrating googletest in CMake-based projects and Jenkins

January 27, 2014

In my – admittedly limited – perception, unit testing in C++ projects does not seem as widespread as in Java or dynamic languages like Ruby or Python. Therefore I would like to show how easy it can be to integrate unit testing into a CMake-based project and a continuous integration (CI) server. I will briefly cover why we picked googletest, how to add unit testing to the build process and how to publish the results.

Why we chose googletest

There is a plethora of unit testing frameworks for C++, making it difficult to choose the right one for your needs. Here are our reasons for googletest:

  • Easy publishing of results because of JUnit-compatible XML output. Many other frameworks need either a Jenkins plugin or an XSLT script to make that work.
  • Moderate compiler requirements and cross-platform support. This rules out xUnit++ and to a certain degree boost.test because they need quite modern compilers.
  • Easy to use and integrate. Since our projects use CMake as a build system googletest really shines here. CppUnit fails because of its verbose syntax and manual test registration.
  • No external dependencies. It is recommended to put googletest into your source tree and build it together with your project. This kind of self-containment is really what we love. With many of the other frameworks it is not as easy; CxxTest even requires a Perl interpreter.

Integrating googletest into a CMake project

  1. Putting googletest into your source tree
  2. Adding googletest to your toplevel CMakeLists.txt to build it as part of your project:
    add_subdirectory(gtest-1.7.0)
  3. Adding the directory with your (future) tests to your toplevel CMakeLists.txt:
    add_subdirectory(test)
  4. Creating a CMakeLists.txt for the test executables:
    include_directories(${gtest_SOURCE_DIR}/include)
    set(test_sources
    # files containing the actual tests
    )
    add_executable(sample_tests ${test_sources})
    target_link_libraries(sample_tests gtest_main)
    
  5. Implementing the actual tests like so (@see examples):
    #include "gtest/gtest.h"
    
    TEST(SampleTest, AssertionTrue) {
        ASSERT_EQ(1, 1);
    }
    

Integrating test execution and result publishing in Jenkins

  1. Additional build step with shell execution containing something like:
    cd build_dir && test/sample_tests --gtest_output="xml:testresults.xml"
  2. Activate “Publish JUnit test results” post-build action.

Conclusion

The setup of a unit testing environment for a C++ project is easier than many developers think. Using CMake, googletest and Jenkins makes it very similar to unit testing in Java projects.


Learning JavaScript: Great libraries

January 20, 2014

After taking a look at JavaScript as a language we continue with some interesting and helpful libraries for web development.

underscore – going the functional route

What: Underscore has a lot of utility functions for helping with collections, arrays, objects, functions and even some templating.

How: Just a glimpse at the many, many functions provided (taken from the examples at underscorejs.org)

_.map([1, 2, 3], function(num){ return num * 3; });
=> [3, 6, 9]

_.map({one: 1, two: 2, three: 3}, function(num, key){ return num * 3; });
=> [3, 6, 9]

_.every([true, 1, null, 'yes'], _.identity);
=> false

_.groupBy([1.3, 2.1, 2.4], function(num){ return Math.floor(num); });
=> {1: [1.3], 2: [2.1, 2.4]}

_.escape('Curly, Larry & Moe');
=> "Curly, Larry & Moe"

var compiled = _.template("hello: <%= name %>");
compiled({name: 'moe'});
=> "hello: moe"

Why: JavaScript is a functional language but its standard libraries miss a lot of the functional goodies we are used to from other languages. Here underscore comes to the rescue.

when.js – lightweight concurrency

What: When.js is a lightweight implementation of the promise concurrency model.

How:

var when = require('when');
var rest = require('rest');

when.reduce(when.map(getRemoteNumberList(), times10), sum)
    .done(function(result) {
        console.log(result);
    });

function getRemoteNumberList() {
    // Get a remote array [1, 2, 3, 4, 5]
    return rest('http://example.com/numbers').then(JSON.parse);
}

function sum(x, y) { return x + y; }
function times10(x) {return x * 10; }

Why: Concurrency is hard. Callbacks can get out of hand pretty quickly, even more so when they are nested. Promises make writing and reading concurrent code simpler.

require.js – modules (re)loaded

What: Require.js uses a common module format (AMD – Asynchronous Module Definition) and helps you load your modules.

How: You include your module loading file as a data-main attribute on the script tag.

<script data-main="js/app.js" src="js/require.js"></script>

Inside your module loading file you define the modules you want to load and where they are located.

requirejs.config({
    baseUrl: 'js/lib',
    paths: {
        app: '../app'
    }
});

requirejs(['jquery', 'canvas', 'app/sub'],
function   ($,        canvas,   sub) {
    //jQuery, canvas and the app/sub module are all
    //loaded and can be used here now.
});

Why: Your scripts need to be in modules, cleanly separated from each other. If you don’t have an asset pipeline, want to load your modules asynchronously or just want to manage your modules from your JavaScript, require.js is for you.

bower – packages managed

What: Bower lets you define your dependencies.

How: Just define a bower.json file and run bower install

{
	"name": "My project",
	"version": "0.1.0",
	"dependencies": {
		"jquery": "1.9",
		"underscore": "latest",
		"requirejs": "latest"
	}
}

Why: Proper dependency management is a tough nut to crack, and it is even harder to make it pleasant to use (I am looking at you, maven).

grunt – the worker

What: Grunt is a task runner or build tool much like Java’s Ant.

How: Create a Gruntfile and start automating:

module.exports = function(grunt) {

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    uglify: {
      options: {
        banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n'
      },
      build: {
        src: 'src/<%= pkg.name %>.js',
        dest: 'build/<%= pkg.name %>.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['uglify']);
};

Why: Repetitive tasks need to be automated. If you plan to use grunt and bower consider using yeoman which combines both into a workflow.

d3 – visualizations

What: d3 – Data-Driven Documents, a library for manipulating the DOM based on data, using HTML, CSS and SVG.

How: There are many, many examples on d3js.org, but to get a glimpse of how d3 works, here is a small example.
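The following sketch follows the classic d3 bar chart tutorial: a small data array is bound to div elements whose width is derived from the data (the concrete numbers and styles are arbitrary).

var data = [4, 8, 15, 16, 23, 42];

d3.select('body').selectAll('div')
    .data(data)
  .enter().append('div')
    .style('width', function(d) { return d * 10 + 'px'; })
    .style('background-color', 'steelblue')
    .text(function(d) { return d; });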

Why: There are tons of chart or visualization libraries out there but d3 takes a more general approach. It defines a way of thinking, a way of manipulating the document based on data and data alone. It treats the DOM as part of your web application. Your visualization is a part of your document and not just a component you can forget about after creating it. This general approach allows you to create arbitrary visualizations not just charts.

last but not least: jQuery

What: jQuery is more a collection of libraries than a single one. It features AJAX, effects, events, concurrency, DOM manipulation and general utility functions for collections, functions and objects.

How: Here we can only take a very quick overview because jQuery is so big.

// DOM querying and manipulation
$('.cssclass').each(function (index, element) {
  $(element).attr('name', 'new name');
});

// AJAX calls
$.get('/blog/posts', function(data) {
  $('.posts').html(data);
});

// events
$( document ).ready(function() {
  console.log('DOM loaded');
});

// deferred objects aka promises
$.when(asyncEvent()).then(
  function( status ) {
    // done
  },
  function( status ) {
    // fail
  },
  function( status ) {
    // progress
  }
);

// utilities
var object = $.extend({}, object1, object2);

$('li').each(function(index) {
  console.log(index + ": " + $(this).text());
});

Why: jQuery is almost ubiquitous and this is not by chance. It provides a great foundation and helps in many common scenarios.

Others

There are many more libraries out there and I would appreciate pointers to other great ones. This article can only give a short glimpse, so please take a further look at the respective homepages. A word on testing and test runners: these will get an additional blog post.


Our recruitment process

January 13, 2014

We are a small company with a focus on delivering high quality software to our customers. Therefore every developer represents a substantial share of our organization. Every time we hire, we need to make sure that the decision in favor of or against a new employee is well-founded. So we established a recruitment process that tries to evaluate and communicate both our requirements and the capabilities of the candidate. Without going into details, this is how we will interact with you if you apply for a job.

First contact

The first contact is usually an application including a curriculum vitae sent to us by the candidate. We will read your application and look for possible matches with our required skill set. If there is a chance to work together in the future, we will answer back with an invitation for a first talk, usually done via telephone.

The first talk is mostly a getting to know each other on a communicative level. We might ask some questions about your curriculum or past jobs, but the main purpose is to establish a mutual understanding. We probably end the talk with an appointment for a first meeting.

First meeting

We really want to know who you are, not only what you can offer on a professional level. Remember that you will represent a substantial share of our company if we hire you. Both sides need to be sure that they understand what they commit to, because it always is a real commitment for us.

The first meeting will be rather short and kept on a casual level. We don’t want to build up pressure, we don’t want to judge your abilities as a developer, we want to get a first impression of you in person. And you will get a full tour of our company and get to know the whole development team, also as a first impression. If you like what you see, we will make an appointment for a second meeting that will go into the details.

Detailed meeting

The second meeting will be much longer and more stressful than the first meeting. The goal of this meeting is the examination of your professional skills. Most companies use trick questions or “how to approach this?” tasks to challenge your abilities to solve difficult problems and deduce your skills from that. We decided not to do that.

We want to see your performance in a normal work situation – as normal as it can be under the circumstances. So you will have to program a non-trivial assignment, with the help of the whole team. The assignment doesn’t contain any “tricks” or common pitfalls that you can fall into, and a lot of different solutions are possible; we are not looking for exactly one “best” answer. If you are an experienced developer, you will feel at ease with the task.

Another important skill of every developer is the quick assessment of existing code with regard to bugs, security risks and bad practices. We have prepared a piece of code riddled with all kinds of quirks and will review it with you. None of us finds them all, either.

We orient our work around a set of core values that are very congruent with the values of the Clean Code Developer Initiative. So it helps tremendously if you are well versed in the practices of a clean code developer. But we also want to know whether you can convey the ideas and principles behind the actual practices, so you will have to explain some of them to us.

These are the three parts that we want to see of your professional skills:

  • Programming
  • Analysis
  • Introspection

After this meeting, we will have a fairly detailed picture of your abilities and you will know a lot about the level of skill that we require for daily work. If we come to the conclusion that everything matches, we will invite you to the last official step of our recruitment process, the recruitment internship or probationary work.

Internship

In the previous steps of our recruitment process, it was mostly us that examined your skills. Now, after we are sure that you might complete us, it’s time that you get a chance to examine us. So we invite you to accompany us for several days in our normal work. You can team up with whoever you want and join in his (or her) development task. You can ask questions. You can just watch. You can complete your picture of us. You can make sure that you will feel comfortable when joining us.

Welcome aboard

If you’ve seen nothing that scares you during your internship, we will discuss the details of your employment, but that’s a topic for another blog post.

Inspirational source

We don’t hire very often and couldn’t sustain the process for a large number of applicants because the effort required from everyone involved is substantial. But we wanted to make sure that we don’t hire blindly and don’t torture our applicants. We compiled our process from a lot of sources, mostly blog posts around the internet and one noteworthy book by Johanna Rothman: “Hiring The Best Knowledge Workers, Techies & Nerds”. That’s exactly what we set out to do!


Prioritizing: order of tasks

January 7, 2014

This entry extends the post on micro-projects with two additional prioritizing strategies.

Strategy: Cover your ass

This is the strategy suggested by the previous post. After preparing a list of milestones and their estimates, you pick the most problematic remaining milestone and work on it. A list of tasks ordered by this strategy helps you to “fail fast”: in less than half of the estimated time you will know whether you will succeed or bust the budget – even when little is known about the concrete implementation or the estimates are off by some amount.

Strategy: Most value first

In a lot of projects, this is the strategy used by customers. All features not absolutely necessary to achieve the goal are cut or declared optional. If you look at minesweeper: you can play it without the highscore, the timer, the adjustable field size or probably even without the random component (i.e. make 99 fixed fields), but not without the mines. Once you have determined that your budget is too small, you know what the customer can live without; if you have the option to cut features, then this is probably the strategy for you.

Strategy: Most painful, when omitted

This is the strategy best applied before the pain is real. In contrast to the other two strategies, it contains hard-to-quantify criteria like:

  • Quality
  • Security
  • Performance

The cost to implement them is non-linear and not directly visible. The temptation to spend the time and money on more profitable features instead is big. They can be prioritized by:

  • probability of occurrence
  • damage in the case of occurrence
  • implementation cost
  • growth of the above factors with time

This is a lot of work for a single task – most likely you will set up project-wide guidelines and default scenarios that will be reviewed in recurring audits.


From ugly to pretty – Three steps is all it takes

December 30, 2013

I have been holding lectures in software engineering for over a decade now. One major topic is testing, specifically unit tests. Other cornerstones are refactoring and code readability. So whenever I have the chance to challenge my students with cross-topic aspects of software development, it’s almost always a source of insight for them and especially for me. But one golden moment holds a special place in my memory. This is the (rather elaborate, sorry) story of that moment.

During a lecture about unit tests with JUnit, my students had the task of developing tests for a bank account class. That’s about as boring as testing can be – the account was related to a customer and had a current balance. The customer can withdraw money, but only some customers can overdraw their account. To spice things up a bit, we also added the mock object framework EasyMock to the mix. While I would recommend other mock frameworks for production usage, the learning curve of EasyMock is just about right for first-time exposure in a “sheep dip” fashion.

Our first test dealt with drawing money from an empty account that can be overdrawn:

@Test
public void canWithdrawOnCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(true);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(new Euro(30), cash);
  assertEquals(new Euro(-30), account.balance());
  EasyMock.verify(customer);
}

The second test made sure that this withdrawal behaviour only works for customers with sufficient credit standing. We decided to pay out nothing (0 Euro) if the customer tries to withdraw more money than his account currently holds:

@Test
public void cannotTakeUpCredit() {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(false);
  EasyMock.replay(customer);
  Account account = new Account(customer);
  Euro required = new Euro(30);

  Euro cash = account.withdraw(required);

  assertEquals(Euro.ZERO, cash);
  assertEquals(Euro.ZERO, account.balance());
  EasyMock.verify(customer);
}

As you can tell, a lot of copy and paste was going on in the creation of this test. Just look at the name of the local variable “required” – it’s misleading now. Right up to this point, my main topic was the usage of the mock framework, not perfect code. So I explained the five stages of normalized mock-based unit tests (initialize, train mocks, execute tested code, assert results, verify mocks) and then changed the topic by expressing my displeasure about the duplication and the inferior readability of the code (it even tries to trick you with the “required” variable!). Now it was up to my students to improve our situation (this trick works only a few times for every course before they preventively become even pickier than me). A student accepted the challenge and gave advice:

First step: Extract Method refactoring

The obvious first step was to extract the duplication in its own method and adjust the calls by their parameters. This is an easy refactoring that will almost always improve the situation. Let’s see where it got us. Here is the extracted method:

protected void performWithdrawalTestWith(
    boolean customerCanOverdraw,
    Euro amountOfWithdrawal,
    Euro expectedCash,
    Euro expectedBalance) {
  Customer customer = EasyMock.createMock(Customer.class);
  EasyMock.expect(customer.canOverdraw()).andReturn(customerCanOverdraw);
  EasyMock.replay(customer);
  Account account = new Account(customer);

  Euro cash = account.withdraw(amountOfWithdrawal);

  assertEquals(expectedCash, cash);
  assertEquals(expectedBalance, account.balance());
  EasyMock.verify(customer);
}

And the two tests, now really concise:

@Test
public void canWithdrawOnCredit() {
  performWithdrawalTestWith(
      true,
      new Euro(30),
      new Euro(30),
      new Euro(-30));
}

 

@Test
public void cannotTakeUpCredit() {
  performWithdrawalTestWith(
      false,
      new Euro(30),
      Euro.ZERO,
      Euro.ZERO);
}

Well, that did indeed resolve the duplication. But the test methods now lacked any readability. They appeared as if somebody had extracted all the semantics out of the code. We were unhappy, but decided to treat the current code as an intermediate step towards the second refactoring:

Second step: Introduce Explaining Variable refactoring

In the second step, the task was to re-introduce the semantics back into the test methods. All parameters were nameless, so that was our angle of attack. By introducing local variables, we gave the parameters meaning again:

@Test
public void canWithdrawOnCredit() {
  boolean canOverdraw = true;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = new Euro(30);
  Euro resultingBalance = new Euro(-30);

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean canOverdraw = false;
  Euro amountOfWithdrawal = new Euro(30);
  Euro payout = Euro.ZERO;
  Euro resultingBalance = Euro.ZERO;

  performWithdrawalTestWith(
      canOverdraw,
      amountOfWithdrawal,
      payout,
      resultingBalance);
}

That brought back the meaning to the test methods, but didn’t improve readability. The code wasn’t intentionally cryptic any more, but still far from being intuitively understandable – and that’s what really readable code should be. If even novices can read your code fluently and grasp the main concepts in the first pass, you’ve created expert code. I challenged the student to further transform the code, without any idea how to carry on myself. My student hesitated, but came up with the decisive refactoring within seconds:

Third step: Rename Variable refactoring

The third step doesn’t change the structure of the code, but its approachability. Instead of naming the local variables after their usage in the extracted method, we name them after their purpose in the test method. A first-time reader won’t know about the extracted method (and preferably shouldn’t need to know), so it’s not in the best interest of the reader to foreshadow its details. Instead, we concentrate on telling the reader a coherent story:

@Test
public void canWithdrawOnCredit() {
  boolean aCustomerThatCanOverdraw = true;
  Euro heWithdraws30Euro = new Euro(30);
  Euro receivesTheFullAmount = new Euro(30);
  Euro andIsNow30EuroInTheRed = new Euro(-30);

  performWithdrawalTestWith(
      aCustomerThatCanOverdraw,
      heWithdraws30Euro,
      receivesTheFullAmount,
      andIsNow30EuroInTheRed);
}

 

@Test
public void cannotTakeUpCredit() {
  boolean aCustomerThatCannotOverdraw = false;
  Euro heTriesToWithdraw30Euro = new Euro(30);
  Euro butReceivesNothing = Euro.ZERO;
  Euro andStillHasABalanceOfZero = Euro.ZERO;

  performWithdrawalTestWith(
      aCustomerThatCannotOverdraw,
      heTriesToWithdraw30Euro,
      butReceivesNothing,
      andStillHasABalanceOfZero);
}

If the reader is able to ignore some crude verbalization and special characters, he can read the test out loud and instantly grasp its meaning. The first lines of every test method are a bit confusing, but necessary given Java’s lack of named parameters.

The result might remind you a lot of Behavior Driven Development notation and that’s probably not by chance. In a few minutes during that programming exercise, my students taught themselves to think in scenarios or stories when approaching unit tests. I couldn’t have taught it any better – instead, I got enlightened by this exercise, too.


How to use partial mocks in real life

December 23, 2013

Partial mocks are an advanced feature of modern mocking libraries like mockito. Partial mocks retain the original code of a class, only stubbing the methods you specify. If you build your system largely from scratch, you most likely will not need them. But sometimes there is no easy way around them when working with dependencies not designed for testability. Let us look at an example:

/**
 * Evil dependency we cannot change
 */
public final class CarvedInStone {

    public CarvedInStone() {
        // may do unwanted things
    }

    public int thisHasSideEffects(int i) {
        return 31337;
    }

    // many more methods
}

public class ClassUnderTest {

    public Result computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = new CarvedInStone().thisHasSideEffects(42);
        // more interesting code
        return new Result(intermediateResult * 1337);
    }
}

We want to test the computeSomethingInteresting() method of our ClassUnderTest. Unfortunately we cannot replace CarvedInStone, because it is final and does not implement an interface containing the methods of interest. With a small refactoring and partial mocks we can still test almost the complete class:

public class ClassUnderTest {
    public int computeSomethingInteresting() {
        // some interesting stuff
        int intermediateResult = intermediateResultsFromCarvedInStone(42);
        // more interesting code
        return intermediateResult * 1337;
    }

    protected int intermediateResultsFromCarvedInStone(int input) {
        return new CarvedInStone().thisHasSideEffects(input);
    }
}

We refactored our dependency into a protected method we can use to stub out with our partial mocking to be tested like this:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import org.junit.Test;

public class ClassUnderTestTest {
    @Test
    public void interestingComputation() throws Exception {
        ClassUnderTest cut = spy(new ClassUnderTest());
        doReturn(1234).when(cut).intermediateResultsFromCarvedInStone(42);
        assertEquals(1649858, cut.computeSomethingInteresting());
    }
}

Caveat: Do not use the usual when-thenReturn-style:

when(cut.intermediateResultsFromCarvedInStone(42)).thenReturn(1234);

with partial mocks because the real method will get called once!

So the only untested code is a simple delegation. Measures like this refactoring and partial mocking generally serve as a first step, not the destination.

Where to go from here

To go the whole way we would encapsulate all unmockable dependencies into wrapper objects providing the functionality we need here and inject them into our ClassUnderTest. Then we can replace our wrapper(s) easily using regular mocking.

Doing all this can be a lot of work and/or risk depending on the situation, so the depicted process serves as a low-risk intermediate step for getting as much important code under test as possible.

Note that the wrappers themselves stay largely untestable, just like our protected delegating method.


JavaScript for Java developers

December 16, 2013

Although JavaScript and Java sound and look similar, they are very different in their details and philosophies. Here I try to compare the two languages regardless of their libraries and frameworks. The goal is that you as a Java developer get an understanding of what JavaScript is and how it differs from Java. One hint: you can use jsfiddle.net to try out some of the snippets here or any other JavaScript.
Note: right now this document discusses JavaScript 1.4; if there is enough interest I will try to update it to a newer version (preferably ES5).

Primitives

Java – char, boolean, byte, short, int, long, float, double
JavaScript – none

Primitives are elements of the language which aren’t objects and therefore have no methods defined on them. JavaScript has no primitives.

Immutable types

Java – String (16bit), Character, Boolean, Byte, Short, Integer, Long, Float, Double, BigDecimal, BigInteger
JavaScript – String (16bit), Number(double, 64bit floating point), Boolean, RegExp

The next special kind of object is the immutable object, an object which represents a value and cannot be changed.
JavaScript has four value objects: String (16 bit like in Java), Number (64 bit floating point like a double in Java), Boolean (like in Java) and RegExp (similar to Java). Java differentiates the number types further and introduces a Character.
Strings in JavaScript can be written in single or double quotes and the escape character is the backslash (‘\’), just like in Java.
A regexp can be created via new RegExp or with ‘/’ like:

/a*/

Arrays

Java – special
JavaScript – normal object

Another base type in every language is the array. In Java the array is treated as a special kind of object: it has a length property and is the only object which supports the bracket ‘[]’ operator. In Java you create and access an array in the following way:

// creation
String[] empty = new String[2]; // an empty array with length 2
String[] array = new String[] {"1", "2"};

// read
empty[0]; // => null
empty[5]; // => ArrayIndexOutOfBoundsException

// write
empty[0] = "Test"; // empty is now ["Test", null]
empty[2] = "Test";  // => ArrayIndexOutOfBoundsException

JavaScript handles creation and access in a different way:

// creation
var empty = new Array(2); // an empty array with length 2
var array = ["1", "2"];

// read
empty[0]; // => undefined
empty[5]; // => undefined

// write
empty[0] = "Test"; // empty is now ["Test", undefined]
empty[2] = "Test"; // empty is now ["Test", undefined, "Test"]

The reason for these strange patterns is that an array in JavaScript is just an object with the indexes as properties: reading an undefined property returns undefined, whereas setting an undefined property creates it on the object. More on this under objects.
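A short illustration of this behaviour:

var array = ["a", "b"];
array["1"];         // => "b", index access is just property access
array.custom = "x"; // any property can be added to the array object
array.length;       // => 2, only numeric indexes affect the length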

Comments

Java – // and /**/
JavaScript – // and /**/

Both languages allow line ‘//’ and block ‘/* */’ comments, but the line comment is preferred in JavaScript because commenting out a regular expression with a block comment can lead to syntax errors:

/a*/

Commenting out this regular expression with the block comment would result in

/* /a*/ */

which is a syntax error.

Boolean Truth

Java – true: true, false: false
JavaScript – false: false, null, undefined, '', 0, NaN; true: all other values

Another stumbling block for Java developers is the handling of expressions in a boolean context. JavaScript not only treats false as false but also defines null, undefined, the empty string, 0 and NaN as falsy values. All other values evaluate to true.
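A few examples of how this plays out:

if ('') { alert('never shown'); }   // '' is falsy
if ('0') { alert('shown'); }        // a non-empty string is truthy, even '0'
Boolean(undefined); // => false
Boolean(NaN);       // => false
Boolean([]);        // => true, an empty array is an object and therefore truthy
Boolean({});        // => true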

Literals

Java – ", ', numbers, booleans
JavaScript – ", ', [], {}, /, numbers, booleans

Literals are a shorthand for constructing objects inside the language. Java only supports string, number and boolean creation with literals; everything else needs a new operator. In JavaScript you can create strings, numbers, booleans, arrays, objects and regular expressions with literals:

"A string";
'Another string';
var number = 5;
var whatif = true;
var array = [];
var object = {};
var regexp = /a*b+/;

Operators

Java – postfix (expr++ expr--), unary (++expr --expr +expr -expr ~ !), multiplicative (* / %), additive (+ -), shift (<< >> >>>), relational (< > <= >= instanceof), equality (== !=), bitwise AND (&), bitwise exclusive OR (^), bitwise inclusive OR (|), logical AND (&&), logical OR (||), ternary (?:), assignment (= += -= *= /= %= &= ^= |= <<= >>= >>>=)
JavaScript – object creation (new), function call (()), increment/decrement (++ --), unary (+expr -expr ~ !), typeof, void, delete, multiplicative (* / %), additive (+ -), shift (<< >> >>>), relational (< > <= >= in instanceof), equality (== != === !==), bitwise AND (&), bitwise exclusive OR (^), bitwise inclusive OR (|), logical AND (&&), logical OR (||), ternary (?:), assignment (= += -= *= /= %= &= ^= |= <<= >>= >>>=)

Java and JavaScript have many operators in common, and JavaScript has some additional ones. ‘void’ is an operator which returns undefined and is rarely useful. ‘delete’ removes properties from objects and hence also elements from arrays. ‘in’ tests for a property of an object but does not work on literal strings and numbers.

var string = "A string";
"length" in string // => error
var another = new String('Another string');
"length" in another // => true

The unary operators ‘+’ and ‘-’ try to convert their operands to numbers and if the conversion fails they return NaN:

+'5' // => 5
-'2' // => -2
-'a' // => NaN

Typeof returns the type of its operand as a string. Beware the difference between literal creation and creation via new for numbers and strings.

typeof undefined // => "undefined"
typeof null // => "object"
typeof true // => "boolean"
typeof 5 // => "number"
typeof new Number(5) // => "object"
typeof 'a' // => "string"
typeof new String('a') // => "object"
typeof document // => Implementation-dependent
typeof function() {} // => "function"
typeof {} // => "object"
typeof [] // => "object"

All host environment specific objects like window or the html elements in a browser have implementation dependent return values.
Note that for an array it also returns “object”; if you need to identify an array you must dig deeper.

Object.prototype.toString.call([]) // => "[object Array]"

The two pairs of equality operators (== != and === !==) behave differently. The shorter ones ‘==’ and ‘!=’ use type coercion which produces strange results and breaks transitivity:

'' == '0' // => false
0 == '' // => true
0 == '0' // => true

‘===’ and ‘!==’ work as expected: if both operands are of the same type and have the same value, the comparison is true. The same value means they are either the same object or, in the case of literal strings, numbers or booleans, have the same value regardless of length or precision.

5 === 5 // => true
5 === 5.0 // => true
'a' === "a" // => true
5 === '5' // => false
[5] === [5] // => false
new Number(5) === new Number(5) // => false
var a = new Number(5);
a === a  // => true
false === false // => true

Declaration

Java – type
JavaScript – var

Since JavaScript is a dynamically typed language, you do not specify types when declaring parameters, fields or local variables; you just use var:

var a = new Number(5);

Scope

Java – block
JavaScript – function

Scope is a common pitfall in JavaScript. Scope defines the code area in which a variable is valid and defined. Java has block scope, which means a variable is defined and valid inside the block it is declared in.

int a = 2;
int b = 1;
if (a > b) {
	int number = 5;
}
// no number defined here

JavaScript on the other hand has function scope, which can lead to some confusion for developers coming from block-scoped languages.

var f = function() {
  var a = 2;
  var b = 1;
  if (a > b) {
	var number = 5;
  }
  alert(number); // number is valid here
};
// but not here

One thing to remember is that closures hold a reference to, not a copy of, the variables from their outer scope.

for (var i = 0; i < 3; i++) {
  setTimeout(function() {
    i; // => always 3
  }, 200);
}

How can you fix this? You need to add a wrapper function and pass the values you need.

for (var i = 0; i < 3; i++) {
  (function(i) {
    setTimeout(function() {
      i; // => 0, 1, 2
    }, 200);
  })(i);
}

Statements

Java – conditional (switch, if/else), loop (while, do/while, for), branch (return, break, continue), exception (throw, try/catch/finally)
JavaScript – conditional (switch (uses ===), if/else), loop (while, do/while, for, for in (beware of protoype chain)), branch (break, continue, return), exception (throw, try/catch/finally), with

The statements which can be used in Java and JavaScript are largely the same, but since JavaScript is dynamically typed you can use them with any types. See the section about boolean truth for the statements which need an expression evaluating to false or true. Switch uses the ‘===’ operator to match the cases and has the same fall-through pitfall as Java. ‘For in’ iterates over the names of all properties of an object, including those inherited via the prototype chain. ‘With’ can be used to shorten the access to objects.

with (object) {
  a = b
}

The problem here is that you can't tell from looking at the code whether a and/or b is a property of object or a global variable. Because of this ambiguity, ‘with’ should be avoided.

Object creation

Java – new
JavaScript – new or functional creation / module pattern

In Java you just declare your class

public class Person {
  private final String name;
  
  public Person(String name) {
    this.name = name;
  }
  
  public String getName() {
    return this.name;
  }
}

and instantiate it via new.

Person john = new Person("John");

In JavaScript there is no class keyword, but you can create objects via ‘{}’ or ‘new’. Let’s take a look at the functional approach first. The so-called module pattern supports encapsulation (read: private members).

var person = function(name) {
  var private_name = name;
  return {
    get_name: function() {
      return private_name;
    }
  };
};

Now person holds a reference to a factory method and calling it will create a new person.

var john = person('John');

Another more classical and familiar way is to use ‘new’.

var Person = function(name) {
  this.name = name;
};

Person.prototype.get_name = function() {
  return this.name;
};

var john = new Person('John');

But what happens when we leave out the new?

var john = Person('John'); // bad idea!

Now this is bound to window (the global context) and a name property is defined on window but we can avoid this:

var Person = function(name) {
  if (!(this instanceof Person)) {
    return new Person(name);
  }
  this.name = name;
};

Now you can call Person with or without new and both behave the same. If you don’t want to repeat this for every class you can use the following pattern (adapted from John Resig to make it ES5 strict compatible).

// adapted from makeClass - By John Resig (MIT Licensed) - http://ejohn.org/blog/simple-class-instantiation/
var makeClass = function() {
  var internal = false;
  var create = function(args) {
    if (this instanceof create) {
      if (typeof this.init == "function") {
        this.init.apply(this, internal ? args : arguments);
      }
    } else {
      internal = true;
      return new create(arguments);
    }
  };
  return create;
};

This creates a function which can create classes. You can use it similarly to the classical pattern.

var Person = makeClass();
Person.prototype.init = function(name) {
  this.name = name;
};
Person.prototype.get_name = function() {
  return this.name;
};

var john = new Person('John');

But name is now a public member of Person. What if we want it to be private? If we take another look at the functional pattern above, we can use the same mechanism.

var Person = function(name) {
  if (!(this instanceof Person)) {
    return new Person(name);
  }
  var private_name = name;
  this.get_name = function() {
    return private_name;
  };
  this.set_name = function(new_name) {
    private_name = new_name;
  };
};

Now name is a private member of the Person class. Using makeClass you can achieve the same in the following way.

var Person = makeClass();
Person.prototype.init = function(name) {
  var private_name = name;
  this.get_name = function() {
    return private_name;
  };
};

var john = new Person('John');

Encapsulation

Java – visibility modifiers (public, package, protected, private)
JavaScript – public or private (via closures)

As we have seen in the previous section, we can have private variables and also private methods via the encapsulation of a closure. All other variables and members are public.
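For illustration, using one of the closure-based Person variants from the previous section:

var john = new Person('John');
john.get_name();   // => 'John', the public accessor works
john.private_name; // => undefined, the name lives only inside the closure
john.name;         // => undefined as well, no such property was ever created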

Accessing properties

Java – .
JavaScript – . or []

Besides the dot you can also use an object like a hash.

var a = {b: 1};
a.b = 3;
a['b'] = 5;

Accessing non existing properties

Java – prevented by the compiler
JavaScript – get returns undefined, set creates

In Java accessing a property or method of an object which does not exist is prevented by the compiler. In JavaScript the following compiles and runs fine.

var a = {};
a.b;
a.b = 5;

When you access non-existing members of an object you get undefined in return. Setting a non-existing property creates it on the object.

Invocation and this

Java – method
JavaScript – method, function, constructor, apply

JavaScript knows four kinds of invocation: method, function, constructor and apply. A function defined on an object is called a method, and calling it binds this to the object.

var john = {
  name: "John",
  get_name: function() {
    return this.name; // => this is bound to john
  }
};
john.get_name(); // => John

But there is a potential pitfall: what matters is not which function you call but how you call it! This problem can be worked around with the apply/call pattern shown below.

var john = {
  name: "John",
  get_name: function() {
    return this.name; // => this is bound to the global context
  }
};
var fn = john.get_name;
fn(); // => NOT John

A function which is not a property of an object is just a function and this is bound to the global context (in a browser the global context is the window object).

var get_name = function() {
  return this.name; // this is bound to the global context
};
get_name();

Calling a function with ‘new’ constructs a new object and binds this to it.

var Person = function(name) {
  this.name = name; // => this is bound to john
};
var john = new Person("John");
john.name; // => John

JavaScript is a functional language (some even call it Lisp in C’s clothing) and therefore functions have methods, too. ‘Apply’ and ‘call’ are both methods to call a function with ‘this’ bound explicitly.

var john = {
  name: "John"
};
var get_name = function() {
  return this.name; // this is bound to john
};
get_name.apply(john); // => John
get_name.call(john); // => John

The difference between ‘apply’ and ‘call’ is just how they take their additional parameters: ‘apply’ needs an array whereas ‘call’ takes them as individual arguments.

var john = {
  name: "John"
};
var set_name = function(name) {
  this.name = name; // this is bound to john
};
set_name.apply(john, ["Jack"]); // => Jack
set_name.call(john, "John"); // => John

Variable arguments

Java – …
JavaScript – arguments

In Java you can use variable argument lists via ‘…’. In JavaScript you do not need to declare them. All parameters of a function call are available via arguments regardless of what parameters are declared.

var sum = function() {
  var result = 0;
  for (var i = 0; i < arguments.length; i++) {
    result += arguments[i];
  }
  return result;
};
sum(1); // => 1
sum(1, 2); // => 3

Although arguments looks like an array, it isn’t one; if you need a real array of the arguments you can use slice to convert it.

var array = Array.prototype.slice.call(arguments);

Inheritance

Java – extends, implements
JavaScript – prototype chain

Java inherits types or implementations via implements or extends. JavaScript has no classes and uses another approach called the prototype chain. If you want to create a new type User which inherits from Person, you use the prototype attribute.

var Person = function(name) {
  this.name = name;
};

var User = function(username) {
  Person.call(this, username); // emulating call to super
  this.username = username;
};

User.prototype = new Person();

var john = new User('John');
john.name; // => John
john.username; // => John
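Because User.prototype is a Person, the prototype chain makes instances of User pass instanceof checks for both constructors:

john instanceof User;        // => true
john instanceof Person;      // => true, User.prototype is a Person
john.hasOwnProperty('name'); // => true, set by the emulated call to super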

If I left something out or got something wrong please leave a comment. Also if you think a topic discussed here should be explored in more depth feel free to comment.


Got issues? Treat them like micro-projects

December 9, 2013

Every professional software developer organizes his work into separable work tasks. These tasks are called issues and are often managed in an issue tracker like Bugzilla or JIRA. In bigger teams, there is a separate project role for assigning and supervising work on the issue level, namely the project manager. But below the level of a single issue, external interference would be micro-management, a state that every sane manager tries to avoid at all costs.

Underneath the radar

But what if a developer isn’t that proficient with self-management? He will struggle on a daily basis, but underneath the radar of good project management. And there is hardly any good literature that deals specifically with these short-range management habits. A good developer will naturally exhibit all traits of a good project manager and apply these traits to every aspect of his work. But to become a good developer, most people (myself included!) need to go through a phase of bad project management and learn from their mistakes (provided they are able to recognize and reflect on them).

An exhaustive framework for issue processing

This blog entry outlines a complete set of rules to handle a work task (issue) like a little project. The resulting process is meant for the novice developer who hasn’t established a successful work routine yet. It is exhaustive in the sense that it covers all the relevant aspects, and also in the sense that it contains too much management overhead to be efficient in the long run. It should serve as a starting point for adopting the habits. After a while, you will probably adjust and improve it on your own.

A set of core values

The Schneide standard issue process was designed to promote a set of core values that our developers should adhere to. The philosophy of the value set itself contains enough details to provide another blog entry, so here are the values in descending order without further discussion:

  • Reliability: Your commitments need to be trustworthy
  • Communication: You should notify openly of changes and problems
  • Efficiency: Your work needs to make progress after all

As self-evident as these three values seem to be, we often discuss problems that are directly linked to these values.

The standard issue process

The aforementioned rules consist of five process steps that need to be worked through in the given order. Let's have a look:

  1. Orientation
  2. Assessment
  3. Development
  4. Feedback
  5. Termination

Steps three and four (development and feedback) actually happen in a loop with fixed iteration time.

Step 1: Orientation phase

In this phase, you need to get accustomed to the issue at hand as quickly as possible. Read all information carefully and try to build a mental model of what’s asked of you. Try to answer the following questions:

  • Do I understand the requirements?
  • Does my mental model make sense? Can I explain why the requirements are necessary?
  • Are there aspects missing or not sufficiently specified?

The result of this phase should be the assignment of the issue to you. If you don’t feel up to the task or feel unfamiliar with the requirements (e.g. they don’t make sense in your eyes), don’t accept the issue. This is your first and last chance to bail out without breaking a commitment.

Step 2: Assessment phase

You have been assigned to work on the issue, so now you need a plan. Evaluate your mental model and research the existing code for provisions and obstacles. Try to answer the following questions:

  • Where are the risks?
  • How can I partition the work into intermediate steps?

The result of this phase should be a series of observable milestones and a personal estimate of work effort. If you can’t divide the issue and your estimate exceeds a few hours of work, you should ask for help. Communicate your milestones and estimates by writing them down in the issue tracker.

Step 3: Development phase

You have a series of milestones and their estimates. Now it’s time to dive into programming. This is the moment when most self-management effort ends, because the developer never “zooms out” again until he is done or hopelessly stuck. You need periodic breaks to assess your progress and reflect on your work so far. Try to work for an hour (set up an alarm!) and continue with the next step (you will come back here!). Try to answer the following questions:

  • What is the most risky milestone/detail?
  • How long will the milestone take?

The result of this phase should be a milestone list constantly reordered for risk. We suggest a “cover your ass” strategy for novices by tackling the riskiest aspects first. After each period of work (when your alarm goes off), you should commit to the repository and run all the tests.

Step 4: Feedback phase

After you’ve done an hour of work, it’s time to back off and reflect. You should evaluate the new information you’ve gathered. Try to answer the following questions:

  • Is my estimate still accurate?
  • Have I encountered unforeseen problems or game-changing information?
  • Which crucial details did I just discover?

The result of this phase should be an interim report to your manager and to your future self. A comment in the issue tracker is sufficient if everything is still on track. Your manager wants to know about your problems. Call him directly and tell him honestly. The documentation for your future self should be in the issue tracker, the project wiki or the source code. Imagine you have to repeat the work in a year. Write down everything you would want written down.
If your issue isn’t done yet, return to step three and begin another development iteration.

Step 5: Termination phase

Congratulations! You’ve done it. Your work is finished and your estimation probably holds true (otherwise, you would have reported problems in the feedback phases). But you aren’t done yet! Take your time to produce proper closure. Try to answer the following questions:

  • Is the documentation complete and comprehensible?
  • Have you thought about all necessary integration work like update scripts or user manual changes?

The result of this phase should be a merge to the master branch in the repository and complete documentation. When you leave this step, there should be no necessity to ever return to the task. Assume that your changes are immediately published to production. We are talking “going gold” here.

Recapitulation

That’s the whole process. Five steps with typical questions and “artifacts”. It’s a lot of overhead for a change that takes just a few minutes, but can be a lifesaver for any task that exceeds an hour (the timebox of step three). The main differences to “direct action” processes are the assessment and feedback phases. Both are mainly about self-observation and introspection, the most important ingredient of efficient learning. You might not appreciate at first what these phases reveal about yourself, but try to see it this way: The revelations set a low bar that you won’t fall short of ever again – guaranteed.

