Blog harvest, December 2009

December 3, 2009

Today’s blog harvest spans a lot of topics that I’ve found noteworthy in the last few weeks. As an added bonus, there’s a watchworthy video link at the end. I hope you enjoy reading the articles as much as I did. If you have thoughts on the articles, feel free to share them in the comments here.

So much for the article side of this harvest. Let’s have some fun by watching a video and relieving our conscience:

  • Living with 1000 Open Source Projects – It might get crowded on your disk! Nic Williams shares his secrets of mastering open source heavy lifting. The video runs a short half hour and has its funniest minute between 11:20 and 12:20. Brilliant!
  • The Bad Code Offset – Guilty of writing bad code? Well, remember the last entry of the list above? You’ve probably created a new job. If not, you can find absolution by buying some “Bad Code Offsets”. Think of it as the carbon offset of the software industry.

SSD and (One)-touch Backup solution

November 30, 2009

As explained a while ago, we (the developers) get an annual creativity budget. This time I decided to improve my notebook working experience and reliability by introducing two new items:

  1. A fast SSD replacing the conventional, relatively slow 2.5″ hard disk
  2. A one-touch backup solution which is in fact a no-touch solution

The SSD is an Intel X25-M with 160 GB and the backup solution is a Seagate Replica with 500 GB of disk space. Although there are recurring problems with the firmware and toolbox software, the Intel SSD seemed to be the best choice price/performance/reliability-wise. To be on the safe side data-wise, we paired it with the backup solution. Let me first explain the migration, which went really smoothly and was the first stress test for the backup system. The steps were the following:

  1. Back up the existing system with the Replica, which requires no user interaction after the client backup software has installed itself automatically
  2. Replace the original hard disk with the SSD
  3. Reboot the system with the recovery CD of the Replica solution and restore the backed-up system
  4. Reboot the recovered system from the SSD

The whole process went really smoothly and only took a few hours of data copying. There were no hiccups whatsoever. After booting from the SSD, my system was exactly as before, so the Replica had already proved that it really works, even in the worst case of a complete drive loss.

The performance of the whole system is noticeably better, especially at system and application startup, as you would expect.

Conclusion

The backup solution is so damn easy to use that I would recommend it to everyone running Windows who cares about the data on their system. To keep your backup up to date, just plug the external hard drive into a free USB port and continue working. There is no configuration or other hassle of the kind that often kills any effort to deploy a working backup solution. This is even more true for private users who lack the knowledge to fiddle with system details. So go for a “one-touch backup” if you do not already have a working solution in place!

A modern SSD can really improve your working experience, especially on notebooks, where hard disk performance is far worse than in a workstation environment. Even older hardware can get a new lease on life and make your work easier and more productive.


Follow-up to our Dev Brunch November 2009

November 29, 2009

Today we held our Dev Brunch meeting for November 2009. It was the last possible date for this month, but we were affected by absences nonetheless. This is the follow-up posting for this rather small gathering, summarizing the topics and providing additional information.

The Dev Brunch

If you want to know more about the meaning of the term “Dev Brunch” or how we realize it, have a look at the follow-up posting of October’s brunch. This time, no notebook was needed.

The November 2009 Dev Brunch

The topics of this session were:

  • Object Calisthenics by example – Experiences gained while programming a small project following the Object Calisthenics rules while practicing Test Driven Development, too.
  • Object Calisthenics inspected – Observations and insights gained when explaining Object Calisthenics to several teams, programmers and student courses.

As you can immediately see, the meeting was small but surprisingly consistent. We didn’t agree upon the topics beforehand, yet they were a perfect match. Everybody who missed this brunch also missed some very interesting first-hand experiences with Object Calisthenics. To ease this lack, let me summarize the content.

Object Calisthenics

You might have heard about Object Calisthenics before, on this blog or on other resources on the net. Perhaps you’ve read the original article, which is highly recommended. In short, Object Calisthenics is a set of inspiring, if not irritating, programming rules that should lead to a better programming style through exercise. Consult the links above for specifics.

Object Calisthenics by example

When applying the rules to a domain class model, some new techniques arose to compensate for the “train wreck line” programming style (rule 4), to introduce first-class collections (rule 8) and to avoid getters and setters (rule 9). These techniques included the use of the Visitor design pattern, which wasn’t the author’s first choice beforehand. Test Driven Development alone wouldn’t have led to this solution, but the solution works well for the given use case.
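
To illustrate the idea, here is a minimal, hypothetical sketch (the Order/Orders/OrderVisitor names are made up for this posting and are not taken from the brunch project): a first-class collection that avoids getters by pushing its data into a visitor.

// Hypothetical illustration, not the brunch project's code.
// Rule 8: a first-class collection wraps exactly one collection and nothing else.
// Rule 9: no getters or setters; the objects push their data into a visitor instead.
import java.util.ArrayList;
import java.util.List;

interface OrderVisitor {
    void visitOrder(String id, int amountInCents);
}

class Order {
    private final String id;
    private final int amountInCents;

    Order(String id, int amountInCents) {
        this.id = id;
        this.amountInCents = amountInCents;
    }

    // The order hands its data to the visitor; no getter is ever called.
    void accept(OrderVisitor visitor) {
        visitor.visitOrder(this.id, this.amountInCents);
    }
}

class Orders {
    private final List<Order> orders = new ArrayList<Order>();

    void add(Order order) {
        this.orders.add(order);
    }

    // Clients never see the inner list; they send a visitor in (which also keeps rule 4 happy).
    void accept(OrderVisitor visitor) {
        for (Order order : this.orders) {
            order.accept(visitor);
        }
    }
}

// Example visitor: computes a total without reaching into the collection.
class TotalAmount implements OrderVisitor {
    private int totalInCents = 0;

    public void visitOrder(String id, int amountInCents) {
        this.totalInCents += amountInCents;
    }

    int inCents() {
        return this.totalInCents;
    }
}

Whether a visitor is the right tool in every case is debatable, but the sketch shows the kind of design decision the rules push for, which plain Test Driven Development wouldn’t necessarily suggest.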

The author softened some rules for his example and found valid explanations for doing so. This might be the content of an additional blog posting that still needs to be written. It will be announced in the comments when published.

Test Driven Development and Object Calisthenics do not interfere with each other. They both aim for better code and design, but through different means. They could be regarded as complements in a programmer’s toolbox.

Object Calisthenics inspected

When teaching the nine rules, some effects occurred repeatedly. The first observation was that the rules follow a dramatic composition that orders them from “most obvious and immediate code improvement” to “hardest to achieve code improvement” and in the same order from “easiest to acknowledge” to “most controversial”. At the end of the list, the audience rioted most of the time. But if you reject the last few rules, you’ve silently agreed to the first ones, the ones with the greatest potential for immediate improvement.

Another observation is that the rules stick. Even if you reject them at first, they creep into your thinking, whispering that “it might be possible right now with this code”. They are a learning catalyst for those of us who weren’t born as programming superheroes. To speak in terms Kent Beck coined: Object Calisthenics provides some handy practices that might eventually lead to a better understanding of their underlying principles. Even beginners can follow the practices and review their code for compliance. When they fully get to know the principles (like the Law of Demeter, for example), they are already halfway there.
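
As a small, made-up example of how a practice points towards a principle: rule 4 (“one dot per line”) flags the train-wreck call sketched below, which also happens to violate the Law of Demeter. The class names are invented purely for illustration.

// Hypothetical example, names invented for illustration.
class Price { }

class Wallet {
    void subtract(Price price) { /* deduct the amount */ }
}

class Customer {
    private final Wallet wallet = new Wallet();

    // Exposing the wallet invites train wrecks like customer.wallet().subtract(price),
    // which rule 4 forbids and the Law of Demeter frowns upon.
    Wallet wallet() {
        return this.wallet;
    }

    // The rule-compliant alternative: tell the customer, don't ask for its wallet.
    void pay(Price price) {
        this.wallet.subtract(price);
    }
}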

The third observation was that most experienced programmers intuitively revealed the principles behind the rules before I could even try to explain them. Some even found very interesting associations with other principles that weren’t so obvious.

Lastly, Object Calisthenics, if performed as a group exercise, can act as team solder, bonding the group together. You can rant about code together without regrets – the rules were made elsewhere. And you can discuss different solutions without feeling pointless – fulfilling the rules is the common goal for a short time.

The Dev Brunch retrospected

This brunch was small in both attendee and topic count, which made for a very productive discussion. We’ll try to grow the insights gained today into additional blog entries. Stay tuned.


Open Source Love Day November 2009

November 25, 2009

Today, we celebrated our third Open Source Love Day (OSLD). It was somewhat hampered by a company-wide illness outbreak, but we tried to make the best of the situation.

Open Source Love Days are our way to show our appreciation of and care for the Open Source software ecosystem. Our work wouldn’t be as much fun, or simply not possible, without professional Open Source software. You can read more about our motivation and the specifics in our first OSLD blog posting.

You can participate in our OSLD by using the features we’ve built today:

  • We think git is as much an enrichment to the SCM market as Eclipse is to the IDE market, so improving the quality of Eclipse’s git plugin is the next logical step. The ability to diff the content of two revisions in EGit was committed today. As a bonus, the name of the committer now shows up correctly, too. The screenshots illustrate the idea; a small JGit sketch after this list shows what such a diff amounts to programmatically.

While developing EGit, we were already using it to pull the sources. Unfortunately, the repository URL had changed significantly since our checkout without us noticing, which got us into trouble when trying to follow the contributor guide. The command line version of git isn’t that communicative yet. But after all, this is a great way to learn about the real-world problems of using git. The EGit contributor guide itself is a fantastic way for a project to show initial appreciation for volunteer efforts. Thanks for caring, guys! If you are interested in reviewing our changes yourself, fetch the patch.

  • Another part of today’s work went into the KDevelop project. We tried to fix some of the outstanding little features or bugs on the KDevelop 4 list, but ended up spending our day fixing our development machines instead. Our Ubuntu Linux installation (8.10) was way too old to get useful results, and KDE needs to be up to date to develop KDevelop. Aside from our own sluggishness in keeping our virtual machines on the bleeding edge, the checkout experience of KDevelop was rather slick. What bothers us a bit is the ominous entanglement between KDevelop and KDE: it seems you can’t have one without the other and need to master both to get anything done.
  • As a third part, we wanted to contribute to the TANGO project (not the useful icon collection, but the useful control system). They recently migrated their main repository from CVS to SVN, but the migration still seems unfinished. At the very least, the migration effort lacks public documentation for the occasional contributor. That’s a real showstopper, because you never get beyond the very first step: setting up a working project. We won’t give up and will email the project leads about this, but it didn’t fit into this OSLD.
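
For readers wondering what “diffing two revisions” boils down to programmatically, here is a small sketch using the JGit library that EGit builds on. This is not our actual patch; the repository path is a placeholder and the API shown is JGit’s fluent diff command, used purely for illustration.

// Hypothetical sketch, not our EGit patch: list the changes between two revisions with JGit.
import java.io.File;
import java.util.List;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.diff.DiffEntry;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.ObjectReader;
import org.eclipse.jgit.treewalk.CanonicalTreeParser;

public class DiffTwoRevisions {
    public static void main(String[] args) throws Exception {
        Git git = Git.open(new File("/path/to/repository")); // placeholder path
        ObjectReader reader = git.getRepository().newObjectReader();

        // Resolve the tree objects of the two revisions to compare.
        ObjectId oldTreeId = git.getRepository().resolve("HEAD~1^{tree}");
        ObjectId newTreeId = git.getRepository().resolve("HEAD^{tree}");

        CanonicalTreeParser oldTree = new CanonicalTreeParser();
        oldTree.reset(reader, oldTreeId);
        CanonicalTreeParser newTree = new CanonicalTreeParser();
        newTree.reset(reader, newTreeId);

        // Ask JGit for the list of changes between the two trees and print them.
        List<DiffEntry> changes = git.diff().setOldTree(oldTree).setNewTree(newTree).call();
        for (DiffEntry change : changes) {
            System.out.println(change.getChangeType() + ": " + change.getNewPath());
        }
    }
}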

What were our lessons learnt today?

  • Merely having a way to view or download the source code doesn’t make an Open Source project. The key to success is the ability of complete strangers to hop in and perform useful work. Having terse but accurate documentation helps a lot. The EGit contributor guide is a good example of a single document that makes the difference. If you own an open source project and want to attract occasional contributors (like us), write such a document and watch us (and others) drop you a patch. That said, we have come to believe that the person who writes technical documentation for the developers holds one of the most important roles on a project. Perhaps we will join some projects in the future to fill that role.

To sum it up, this OSLD was limited from the beginning by developer availability. With documentation lacking, we nearly ground to a halt. We look forward to our next OSLD in December.


We are software tailors

November 23, 2009

Our company is called Softwareschneiderei (which is German for software tailoring). The name describes our intention to write bespoke software, software that fits people perfectly. Over time, additional metaphors from the tailor’s world came along: seams and tucks, which describe places in software systems where cuts can be made and testing can be done. Tailoring is a craft, so there is an apprenticeship model and pride in our work.

This describes the mentoring and bespoke software development we do. But besides that, we do a lot of bug fixing, improvement of existing software written by others, and evaluation of other people’s code. Thanks to a piece by Jason Fried (thanks, Jason!), those other parts fit perfectly into our vision as software tailors: we iron and press (fix bugs, improve the code), we trim and cut (remove bottlenecks and unwanted functionality, or extend the software to use other systems), and we measure (analyze, inspect and evaluate systems).


Speed up your buildbox, Part III: Memory

November 16, 2009

In the first and second part of our effort to speed up our buildbox, we replaced the hard disk with a RAM disk and swapped in a bigger CPU. This brought the build time down from 03:30 minutes to 02:00 minutes.

Boosting the memory

When we began the journey, we wanted to get below the 02:00 minutes threshold. The last component that directly impacts the performance of our box was the memory. We started out with 4 GB of DDR2-800 modules. To get a feeling for the effects, we upgraded to 4 GB of DDR2-1066 first and then added another 4 GB, resulting in 8 GB of RAM. We expected the performance gain to be small but noticeable; the RAM disk, for example, is directly affected by memory speed.

As much, but faster

The first upgrade brought the first surprise: upgrading from DDR2-800 to DDR2-1066 modules didn’t change anything. It’s not that the mainboard or CPU doesn’t support the faster RAM; the slower RAM simply seems to have been fast enough already, regardless of the data bus clock rate. Our build process still took 02:00 minutes, reproducibly and without exception.

Filling all the banks

The mainboard can take up to 16 GB of RAM, but our budget only allowed us to buy 8 GB of DDR2-1066 RAM. We installed it and ran the same 32 bit Ubuntu Linux as before. The build process took 02:00 minutes, which was expected by now.

Changing to 64bit

We swapped the boot hard disk, installed a 64 bit Ubuntu Linux and ran the build again. Still 02:00 minutes. The switch to 64 bit wasn’t a big deal for Java, but some of the included native libraries complained about the change. Recompiling them solved the issue.

Finally reaching the target

As a last measure, we increased the maximum heap size of the build JVM to the biggest value it would accept. This was -Xmx2600m, a surplus of 600 MB over the original setting. It sped up the build process by five seconds; the build now took 01:55 minutes.

Conclusion and perspective

We’ve reached our anticipated target of a build time under two minutes. We exceeded our original budget of 500 EUR, but some of the parts we bought ended up being used elsewhere rather than in the build box. The two parts that made the whole difference were the CPU and some more memory to spend on the RAM disk.

If you want to speed up your single build box, aim for the CPU/RAM combo and try to install a RAM disk to perform all the work on.

This leads me to the perspective for the next part of the series: even if you have plugged in the most expensive CPU and enormous amounts of RAM to speed up your buildbox, you still aren’t done. You should invest some time to look into distributed builds. Hudson, our continuous integration server, provides nearly instant “build slave” support. With this feature, you can set up a whole build farm to further increase your build throughput.

Stay tuned for “Part IV: Beyond the box”


CMake Builder Plugin Reloaded

November 9, 2009

A few months ago I set out to build my first Hudson plugin. It was an interesting, sometimes difficult journey that came to a good end with the CMake Builder Plugin, a build tool that can be used to build CMake projects with Hudson. The feature set of this first version was somewhat limited since I applied the scratch-my-own-itch approach – which at the time meant support only for GNU Make under Linux.

As expected, it wasn’t long until feature requests and enhancement suggestions came up in the comments of my corresponding blog post. So in order to make the plugin more widely usable, I used our second Open Source Love Day to add some nice little features.

Update: I used our latest OSLD to make the plugin behave properly in master/slave setups. Check it out!

Let’s take a walk through the configuration of version 1.0:

Path to cmake executable

1. As in the first version, you have to set the path to the cmake executable if it’s not already on the current PATH.

2. The build configuration starts as in the first version with Source Directory, Build Directory and Install Directory.

CMake Builder Configuration Page

3. The Build Type can now be selected more conveniently by a combo box.

4. If Clean Build is checked, the Build Dir gets deleted on every build.

Advanced Configuration Page

5. The advanced configuration part starts with the Makefile Generator parameter, which can be used to utilize the corresponding cmake feature.

6. The next two parameters, Make Command and Install Command, can be used if a make tool other than GNU Make should be used.

7. The Preload Script parameter can be used to point to a suitable cmake pre-load script file. It is added to the cmake call as the argument of the -C switch.

8. Other CMake Arguments can be used to set arbitrary additional cmake parameters.

The cmake call will then be built like this:

/path/to/cmake \
   -C </path/to/preload/script/if/given> \
   -G <Makefile Generator> \
   -DCMAKE_INSTALL_PREFIX=<Install Dir> \
   -DCMAKE_BUILD_TYPE=<Build Type> \
   <Other CMake Args> \
   <Source Dir>

After that, the given Make and Install Commands are used to build and install the project.

With all these new configuration elements, the CMake Builder Plugin should now be applicable in nearly every project context. If it is still not usable in your particular setting, please let me know. Needless to say, feedback of any kind is always appreciated.

