Software development is code organization

The biggest problem in developing and maintaining software is understanding code. Software developers should be well trained in crafting code that can be understood. To make sense of the mess, we need to organize it.

In 2000 Edsger Dijkstra wrote about our problems organizing and designing software:

I would therefore like to posit that computing’s central challenge, “How not to make a mess of it”, has not been met. On the contrary, most of our systems are much more complicated than can be considered healthy, and are too messy and chaotic to be used in comfort and confidence.

Our code bases have grown so big and complicated that we cannot comprehend them all at once. Back in the days of UNIX, technical constraints led to smaller code. But the computer is not the limiting factor anymore. We are. Our minds cannot comprehend what we create. Brian Kernighan wrote:

Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?

Writing code that we (or other developers) can understand is crucial. But why do we fail?

Divide and lose

Usually the first argument when tackling code is to decouple it. Make it clean. Use clean code principles like DRY, SOLID, KISS, YAGNI and whatever other acronyms you know. These really do help to decouple. But they miss the point. They are the how, not the why.
Take a look at your codebase: where are the classes that constitute a subdomain or a specific feature? In which project or module do they live?
Normally you cannot tell. We only know how to divide code by technical aspects. But features and changes usually originate in the domain, not in the technology.
How can we understand our creations when we cannot understand their structure, their architecture? How can we understand something we do not see?
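
To make the contrast concrete, here is a small sketch of layer-based versus feature-based organization in Java. All package and class names are invented for illustration; this is one possible shape, not a prescription.

```java
// A sketch of the contrast (all package and class names invented).
//
// Package-by-layer: the "billing" feature is scattered across technical silos,
// so no single place in the codebase represents the subdomain:
//
//   com.example.shop.controllers.InvoiceController
//   com.example.shop.services.InvoiceService
//   com.example.shop.repositories.InvoiceRepository
//
// Package-by-feature: the subdomain is visible in the structure itself.
package com.example.shop.billing;

public class InvoiceService {
    // lives next to InvoiceController and InvoiceRepository in the same
    // package; a change request phrased in billing terms has one obvious home
}
```

In the feature-based form, a change request phrased in domain terms has one obvious home in the code.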

But it does work

The next argument is not much better. Our code might work now. But what if a bug is found or a new feature is about to be implemented? Do you understand the code and its structure, even weeks, months or years later? Working code is good, but you can only reliably change code that you understand.

KISS

Write simple code. Write simple and small methods. Write cohesive classes. The dream of components. But the whole is more than the sum of its parts. You can write simple classes, but the communication and threading issues between them can be very complex, even if the interfaces are sound and simple. Understanding a simple class in isolation can be easy. But understanding a system of simple classes can be difficult and complex. Things are complex. Domains are complex. We cannot ignore that.
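
A tiny invented example of how complexity hides between simple parts: each of these methods is trivial on its own, yet two threads transferring money in opposite directions can deadlock.

```java
// Invented example: each class and method is trivial in isolation, yet the
// combination can deadlock. The complexity lives between the parts.
class Account {
    private long balance;

    synchronized void transferTo(Account other, long amount) {
        balance -= amount;
        other.deposit(amount); // acquires other's lock while still holding ours
    }

    synchronized void deposit(long amount) {
        balance += amount;
    }
}
// Thread 1 runs a.transferTo(b, 10) while thread 2 runs b.transferTo(a, 10):
// each thread holds one lock and waits forever for the other. No amount of
// per-class simplicity makes this visible in either class alone.
```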

Code as an interface

When writing code we have to take the reader and the domain into account. Treat code as an interface: an interface to the system and the domain. It is an opinionated view of the world. The computer does not care about the code we use, just like the printer that prints our favorite book does not care about its words. But the reader does.
This is not just nice thinking; understanding code is key to successfully crafting and changing software.

Assumptions – how to find, track and eliminate them

Assumptions can kill a project. Like a house built on sand, we don’t know when or where it will collapse.
The problem with assumptions is that they disguise themselves as truths. We believe them. They become the project’s reality. Just like the Matrix.
Assumptions are shortcuts: guesses at reality. We cannot fully grasp reality, so we assume. But we can find evidence for our decisions. For this we need to uncover the assumptions, assess their risk and gather evidence. But how do we know what we assume?

Find assumptions

Watch your language

‘I think’, ‘in my opinion’, ‘should be’, ‘roughly’, ‘circa’ are all clues for assumptions. Decisions need to be based on evidence. When we use vague language or personal opinions to describe our project, we need to pause. Beneath this lurk insecurity and assumptions.
Another red flag is metaphors. Metaphors might be great to present, to paint a picture in our heads or to describe a vision. But in decision making they are too abstract to be useful. We may use them to describe our strategy, but when we need to design and implement, we need borders that constrain our decisions. Metaphors usually cover only some aspects of the project and vice versa. There’s a mismatch. We need concrete language without ambiguity.

Be dumb

We know so much that we think others have the same experience, education, viewpoint, familiarity, proficiency and imprinting. We know so little that we think the reverse is also true. We transfer. We assume. Dare to ask dumb questions. Adopt a beginner’s mind. Challenge traditions and common beliefs.
We take age-old decisions for granted. They were made by people smarter than us, so they must be right. Don’t do this. Question them. Even the obvious ones.
In his book ‘Hidden in Plain Sight’, Jan Chipchase enters a typical cafe where people sit and talk, drink coffee and type on their laptops. The question he poses: should the coffee shop owner sell diapers, so that everybody can continue what they are doing without the need to go to the bathroom? This question challenges our cultural and imprinted beliefs. And this is good.

Be curious

Ask: why? We need to get to the root of the problem. Dig deeper. Often an assumption lies beneath layers of reasoning and thoughtful decisions. A chain is only as strong as its weakest link: if we started with an assumption, all reasoning built on it is assumed as well. Children often ask why and don’t stop even when we think everything has been said and is logical. So when we find the root, we need to continue to ask: is this really the root? Why is it the way it is?
Another question we need to ask repeatedly is: what if? What if our target audience changes? What if we pursue the opposite of our project’s goals? What if the technology changes?

Change perspectives

We see what we want to see. Seeing is an active process. We can stretch our thinking only so far. To stretch it even further we need to change roles. For just a few hours, do the work your users do. Feel their pains. Their highs and lows.
Or adopt the role of the browser. Good interfaces are conversations. Play out a dialog with your user. Be the browser.
Only by embracing the constraints of other perspectives can we force ourselves to stretch. In this way we find the things we assume because of our own view of the world.

Track them

After we have collected the assumptions, we need to track them in order to later prove or disprove them. For this a simple spreadsheet or table is sufficient. This learning plan consists of five columns (taken from Leah Buley’s The UX Team of One); a filled-in example follows the list:

  • the assumption: what we believe is true
  • the certainty: a 3 or 5 point scale showing how sure we are that we are right
  • notes: additional notes of why we think the assumption is right or wrong
  • the evidence: results which we collected to support this assumption
  • the research: things we can do to collect further evidence
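
For illustration, a filled-in entry might look like this (an invented example, not taken from Buley’s book):

  • the assumption: ‘our users mainly access the dashboard from tablets’
  • the certainty: 2 out of 5
  • notes: based on two support calls, no hard data yet
  • the evidence: none collected so far
  • the research: add device analytics; ask about devices in the next five user interviews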

Eliminate them

Now that we know what we assume and how certain we are that we are right, we can start to collect further information to support or disprove our claims. In short: we research. Research can take many different forms, but all of them exist to gain further insights. Some basic forms we use to bring light into the darkness of uncertainty are:

  • Stakeholder interviews
  • (Contextual) user interviews
  • Heuristic evaluation
  • Prototyping
  • Market research

Other methods we don’t use (yet) include:

  • A/B tests (paired with analytics)
  • User tests

The point behind all these methods is to build a chain of reasoning. Everything in our software needs a reason to exist. The users and the stakeholders are the primary sources of insight. But our experience, human psychology and common patterns or conventions also help us decide which way to go.
Not only the method of collecting is important, but also how the results are documented. We should present the essential information in a way that makes it easy to grasp at a glance from the respective documents. On the other hand, we should keep this pragmatic and not go overboard. Our goal is to gain insight, not to build a proof of the system.

Recap of the Schneide Dev Brunch 2015-06-14

A week ago, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. So if you bring a software-related topic along with your food, everyone has something to share. The brunch was well attended this time, with enough stuff to talk about. We probably digressed a little bit more than usual. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list incomplete:

Distributed Secret Sharing

Ever thought about an online safe deposit box? A place where you can store a secret (like a password or a certificate) and let authorized others access it too, while nobody else can retrieve it, even if the storage system containing the secret itself is compromised? Two participants of the Dev Brunch have developed DUSE, a secure secret sharing application that provides exactly what the name suggests: you can store your secrets without any cleartext transmission and have others read them, too. The application uses Shamir’s Secret Sharing algorithm to implement the secret distribution.
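
For the curious, here is a minimal sketch of the textbook scheme (illustrative only, not DUSE’s actual code): the secret becomes the constant term of a random polynomial over a prime field, any k of the n shares recover it by Lagrange interpolation, and fewer than k shares reveal nothing about it.

```java
import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of Shamir's Secret Sharing over a prime field.
public class Shamir {
    // Public field modulus, shared by all parties; the secret must be < P.
    static final BigInteger P = BigInteger.probablePrime(256, new SecureRandom());

    // Split: evaluate a random polynomial of degree k-1 whose constant term
    // is the secret at the points x = 1..n.
    static Map<Integer, BigInteger> split(BigInteger secret, int n, int k) {
        SecureRandom rnd = new SecureRandom();
        BigInteger[] coeff = new BigInteger[k];
        coeff[0] = secret; // f(0) = secret
        for (int i = 1; i < k; i++) {
            coeff[i] = new BigInteger(P.bitLength() - 1, rnd);
        }
        Map<Integer, BigInteger> shares = new HashMap<>();
        for (int x = 1; x <= n; x++) {
            BigInteger y = BigInteger.ZERO;
            for (int i = k - 1; i >= 0; i--) { // Horner's scheme, mod P
                y = y.multiply(BigInteger.valueOf(x)).add(coeff[i]).mod(P);
            }
            shares.put(x, y); // a share is the pair (x, f(x))
        }
        return shares;
    }

    // Join: Lagrange interpolation at x = 0 recovers the constant term,
    // i.e. the secret, from any k shares.
    static BigInteger join(Map<Integer, BigInteger> shares) {
        BigInteger secret = BigInteger.ZERO;
        for (Map.Entry<Integer, BigInteger> e : shares.entrySet()) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (Integer other : shares.keySet()) {
                if (other.equals(e.getKey())) continue;
                num = num.multiply(BigInteger.valueOf(-other)).mod(P);
                den = den.multiply(BigInteger.valueOf(e.getKey() - other)).mod(P);
            }
            secret = secret.add(e.getValue().multiply(num).multiply(den.modInverse(P))).mod(P);
        }
        return secret;
    }
}
```

With split(secret, 5, 3), any three of the five resulting shares passed to join() reproduce the secret; two shares alone are statistically independent of it.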

The application itself is in a pre-production stage, but already really interesting and quite powerful. We discussed different security aspects and security levels intensively and concluded that DUSE is probably worth a thorough security audit.

Television series

A main topic of this brunch, even though it doesn’t relate that strongly to software engineering, was good television series. Some mentioned series were “The Wire”, a rather non-dramatic police investigation series with unparalleled realism; the way too short science-fiction and western crossover “Firefly”, which first featured the CGI-zoom shot; and “True Detective”, a mystery-crime mini-series with similar realism attempts as The Wire. “Hannibal” was mentioned as a good psychological crime series with a drift into the darker regions of humankind (even darker than True Detective). And finally, while not as good, “Silicon Valley”, a series about the start-up mentality that predominates, well, Silicon Valley. Every participant who has been to Silicon Valley confirms that all series characters are stereotypes that can also be met in reality. The key phrase is “to make the world a better place”.

Review of the cooperative study at DHBW

A nice coincidence of this Dev Brunch was that most participants studied at the Baden-Württemberg Cooperative State University (DHBW) in Karlsruhe, so we could exchange a lot of insights and opinions about this particular form of education. The DHBW is a unique mix of on-the-job training and academic lectures. Some participants earned their bachelor’s degree at the DHBW and continued to study for their master’s degree at the Karlsruhe Institute of Technology (KIT). They reported unanimously that the whole structure of studies changes a lot at the university. And, as could be expected from a master’s program, the level of intellectual requirement rises from advanced infotainment to thorough examination of knowledge. The school of thought that permeates all lectures also changes from “works in practice” to “needs to work according to theory, too”. We concluded that both forms of study are worth attending, even if for different audiences.

Job Interview

A short report of a successful job interview at an up-and-coming German start-up revealed that they use the Data Munging kata to test their applicants. It’s good to see that interviews for developer positions are more and more based on actual work exhibits and less on talk and self-marketing skills. The usage of well-known katas is both beneficial and disadvantageous: the applicant and the company benefit from realistic tasks, while the number of katas is limited, so applicants can still optimize their performance for the interview.

Following personalities

We talked a bit about the notion of “following” distinguished personalities in different fields like politics or economics. To “follow” doesn’t mean buying every product or idolizing the person, but to inspect and analyze his or her behaviour, strategies and long-term goals. By doing this, you can learn from their success or failure and decide for yourself if you want to mimic the strategy or aim for similar goals. You just need to be sure to get the whole story. Oftentimes, a big personality maintains a public persona that might not exhibit all the traits necessary to fully understand why things played out as they did. The German business magazine Brand Eins or the English-language Offscreen magazine are good starting points to hear about inspiring personalities.

How much money is enough?

Another non-technical question that got quite a discussion going was the simple question “when did/do you know how much money is enough?”. I cannot repeat all the answers here, but there were a lot of thoughtful aspects. My personal definition is that you have enough money as soon as it gets replaced as the scarcest resource. The resource “time” (as in lifetime) is a common candidate to be the replacement. As soon as you think in time rather than in money, you have enough money for your personal needs.

Basic mechanics of company steering

In the end, we talked about start-ups and business management. When asked, I explained the basic mechanics of my attempt to steer the Softwareschneiderei. I won’t go into much detail here, but instruments like the “estimated time of death” (ETOD) are very important and give everyone in the company a quick feeling for where we stand. As a company in the project business, we also have to consider the difference between “gliding flight projects” (with continuous reward) and “saw tooth projects” (with delayed reward). This requires fundamentally different steering procedures than those of a product manufacturer. The key characteristic of my steering instruments is that they are directly understandable and highly visible.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a note and we’ll invite you over next time.

Declare war on your software

If we believe Robert Greene, life is dominated by fierce war. He refers not only to obvious events such as World War II or the Gulf Wars, but also to politics, jobs and even the daily interactions with your significant other.

The book

Leaving aside whether or not his notion corresponds to reality, it is indeed possible to apply many of the strategies traditionally employed in warfare to other fields, including software development. In his book The 33 Strategies of War, Robert Greene explains his extended conception of the term war, which is not restricted to military conflicts, and describes various methods that may be utilized not only to win a battle, but also to gain advantage in everyday life. His advice is backed by detailed historic examples originating from famous military leaders like Sun Tzu, influential politicians like Franklin D. Roosevelt and even successful movie directors like Alfred Hitchcock.

Examples

While it is clear that Greene’s methods are applicable to diplomacy and politics, their application in the field of software development may seem slightly odd. Hence, I will give two specific examples from the book to explain my view.

The Grand Strategy

Alexander the Great became king of Macedon at the young age of twenty, and one of his first actions was to propose a crusade against Persia, the Greeks’ nemesis. He was warned that the Persian navy was strong in the Mediterranean Sea and that he should strengthen the Greek navy so as to attack the Persians both by land and by sea. Nevertheless, he boldly set off with an army of 35,000 Greeks and marched straight into Asia Minor – and in the first encounter, he inflicted a devastating defeat on the Persians.

Now, his advisors were delighted and urged him to head into the heart of Persia. However, instead of delivering the finishing blow, he turned south, conquering some cities here and there, leading his army through Phoenicia into Egypt – and by taking Persia’s major ports, he deprived them of the use of their fleet. Furthermore, the Egyptians hated the Persians and welcomed Alexander, so that he was free to use their wealth of grain to feed his army.

Still, he did not move against the Persian king, Darius, but started to engage in politics. By building on the Persian government system, changing merely its unpopular characteristics, he was able to stabilize the captured regions and to consolidate his power. It was not until 331 B.C., two years after the start of his campaign, that he finally marched on the main Persian force.

While Alexander might have been able to defeat Darius right from the start, this success would probably not have lasted long. Without taking the time to bring the conquered regions under control, his empire could easily have collapsed. Besides, time worked in his favor: cut off from the Egyptian wealth and the subdued cities, the Persian realm faltered.

One of Greene’s strongest points is the notion of the Grand Strategy: If you engage in a battle which does not serve a major purpose, its outcome is meaningless. Like Alexander, whose actions were all targeted on establishing a Macedonian empire, it is crucial to focus on the big picture.

It is easy to see that these guidelines are useful not only in warfare, but in any kind of project work – including software projects. While one has to tackle the main tasks at some point, it is important to approach them in a reasoned manner, not rashly. If an action is not directed towards the aim of the project, one will be distracted and endanger its execution by wasting resources.

The Samurai Musashi

Miyamoto Musashi, a renowned warrior and duellist, lived in Japan during the late 16th and early 17th century. Once, he was challenged by Matashichiro, another samurai, whose father and brother had already been killed by Musashi. In spite of his friends’ warnings that it might be a trap, he decided to face his enemy; however, he did prepare himself.

For his previous duels, he had arrived excessively late, making his opponents lose their temper and, hence, control over the fight. This time, instead, he appeared at the scene hours before the agreed time, hid behind some bushes and waited. And indeed, Matashichiro arrived with a small troop to ambush Musashi – but using the element of surprise, Musashi defeated them all.

Some time later, another warrior caught Musashi’s interest. Shishido Baiken fought with a kusarigama, a chain-sickle, and had been undefeated so far. The chain-sickle seemed to be superior to swords: the chain offered greater range and could bind an enemy’s weapon, whereupon the sickle would deal the finishing blow. But even Baiken was thrown off his guard; Musashi showed up armed with a shortsword along with the traditional katana – and this allowed him to counter the kusarigama.

A further remarkable opponent of Musashi was the samurai Sasaki Ganryu, who wielded a nodachi, a sword longer than the usual katana. Again, Musashi changed his tactics: he faced Ganryu with an oar he had turned into a weapon. Exploiting the unmatched range of the oar, he could easily win the fight.

The characteristic that most distinguished Musashi from his adversaries was not his skill, but that he excelled at adapting his actions to his surroundings. Even though he was an outstanding swordsman, he did not hesitate to follow different paths if necessary. Education and training facilitate success, but one has to keep an open mind toward change.

Relating this to software development does not mean that we have to start afresh every time we begin a new project. Nevertheless, it is dangerous to cling to outdated technologies and procedures; sometimes it may be helpful to regard a situation like a child does, without any assumptions. In this manner, it is possible to keep learning along the way.

Summary

Greene’s book is a very interesting read and, even though in my view one should take its content with a pinch of salt, a nice opportunity to broaden one’s horizon. The book contains far more than I addressed in this article, and I think most of its findings are indeed, in one way or another, applicable to everyday life.

VB.NET for Java Developers – Updated Cheat Sheet

The BASIC programming language (originally invented at Dartmouth College in 1964) and Microsoft share a long history together. Microsoft basically started their business with the licensing of their BASIC interpreter (Altair BASIC), initially developed by Paul Allen and Bill Gates. Various dialects of Microsoft’s BASIC implementation were installed in the ROMs of many home computers like the Apple II (Applesoft BASIC) or the Commodore 64 (Commodore BASIC) during the 1970s and 1980s. A whole generation of programmers discovered their interest in computer programming through BASIC before moving on to greater knowledge.

BASIC was also shipped with Microsoft’s successful disk operating system (MS-DOS) for the IBM PC and compatibles. Early versions were BASICA and GW-BASIC. Original BASIC code was based on line numbers and typically contained lots of GOTO statements, resulting in what was often referred to as “spaghetti code”. Starting with MS-DOS 5.0, GW-BASIC was replaced by QBasic (a stripped-down version of Microsoft QuickBasic). It was backwards compatible with GW-BASIC and introduced structured programming. Line numbers and GOTOs were no longer necessary.

When Windows became popular, Microsoft introduced Visual Basic, which included a form designer for easy creation of GUI applications. They even released one version of Visual Basic for DOS, which allowed the creation of GUI-like textual user interfaces.

Visual Basic.NET

The current generation of Microsoft’s Basic is Visual Basic.NET. It’s the .NET based successor to Visual Basic 6.0, which is nowadays known as “Visual Basic Classic”.

Feature-wise VB.NET is mostly equivalent to C#, including full support for object-oriented programming, interfaces, generics, lambdas, operator overloading, custom value types, extension methods, LINQ and access to the full functionality of the .NET framework. The differences are mostly at the syntax level. It has almost nothing in common with the original BASIC anymore.

Updated Cheat Sheet for Java developers

A couple of years ago we published a VB.NET cheat sheet for Java developers on this blog. The cheat sheet uses Java as the reference language, because today Java is a lingua franca that is understood by most contemporary programmers. Now we present an updated version of this cheat sheet, which takes into account recent developments like Java 8:
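
To give a flavor of what such a mapping looks like, here are a few illustrative correspondences, written in the spirit of the cheat sheet (invented samples, not rows copied from the published sheet). The VB.NET counterpart is shown as a comment next to each Java construct.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Illustrative Java-to-VB.NET correspondences (invented samples).
public class CheatSheetSample {
    public static void main(String[] args) {
        int count = 42;                               // Dim count As Integer = 42
        List<String> items = Arrays.asList("a", "b"); // Dim items = {"a", "b"}  (with Option Infer On)
        for (String item : items) {                   // For Each item As String In items
            System.out.println(item);                 //     Console.WriteLine(item)
        }                                             // Next
        Function<Integer, Integer> doubled =          // Dim doubled = Function(x) x * 2
                x -> x * 2;                           // (Java 8 lambda vs. VB.NET lambda)
        System.out.println(doubled.apply(count));     // Console.WriteLine(doubled(count))
    }
}
```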

The web is for documents

The web is intended to help a person find and understand relevant information. The primary container of information is the document. Therefore web applications should be centered around a document metaphor, not an app one.

In 1990 Tim Berners-Lee and Robert Cailliau wrote a proposal for what we call the web today:

HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will.

The web is a linked information system. Bret Victor states:

Information software serves the human urge to learn. A person uses information software to construct and manipulate a model that is internal to the mind — a mental representation of information.

The web is built around information. More information than we can handle. What we need to make sense of it all is understanding. The power of technology can be used to transfer and gain understanding. Understanding needs to be a first class citizen. The applications we build must be centered around it.

One way to foster understanding is to interact, to play with information. Technology can simulate a system of information so that we can form hypotheses and ask questions. Bret Victor coined the term “explorable explanations” to describe such systems.

I believe the web is perfectly suited for building explorable explanations.

The web’s container for information is the document. A document combines different forms of media (text, images, video, …) into a whole. Fortunately for us, the web does not stop here. With scripting we can interact with and manipulate the information in order to gain further insight.

Most of the tools we need to create for understanding are already at hand. What we need is a fundamental change in focus. Right now, (a large part of) the web industry tries to play catch-up with native. Whole frameworks try to mimic native applications as if this were a virtue. Current developments want to abstract the document away as far as possible. This is not what the web was intended for. Why build an application that tries so hard to recreate a native feeling on something other than the native platform itself? Web applications should be built on the strengths of the web. We should not chase a foreign metaphor.
Right now the web seems to be torn. Torn between the print era of passive documents and the shiny new world of native applications. But the web has the capability to do so much more: to concentrate on its purpose, to fill the niche. A massive niche. Understanding is a core endeavor of mankind. To quote Stephen Anderson and Karl Fast, introducing their upcoming book From Information to Understanding:

In all areas of life, we are surrounded by understanding problems.

Doug Engelbart shares a similar vision for the purpose of the personal computer itself:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.

The web is ready. The tools are ready. But are we?


Easy maintenance, not easy production

It is said that source code gets read a hundred times more often than it gets written. My experience confirms this, and it leads to a principle of economic source code modification: the first modification is almost free; it’s the later ones that run up a bill. In order to achieve a low TCO (total cost of ownership), it’s sound advice to plan (and develop) for easy maintenance instead of easy production. This principle even has a fancy name: Keep It Simple, Stupid (KISS).

Origin of KISS

According to the English Wikipedia on the topic, the principle’s name was coined by an aircraft engineer named Kelly Johnson, who also designed the fastest jet plane of all time, the SR-71 “Blackbird”. The aircraft reached speeds of over Mach 3 and had an unmatched defensive armament: enough thrust to evade any confrontation. It would simply fly higher and/or faster than anything launched against it, like interceptor fighters or anti-air missiles. The Blackbird used construction material that was specifically invented for this plane and very expensive, so it definitely was no easy production. Sadly for this blog post, it wasn’t particularly easy to maintain, either. It usually leaked so much fuel that it had to be refueled directly after takeoff. The leaks were pragmatically designed that way and would seal themselves in flight. It’s quite an interesting plane.

Easy maintenance

Johnson once allegedly gave his engineers a bunch of tools and required that the aircraft under design be repairable with only these tools, by an average mechanic in the field under combat conditions (i.e. stressed, exhausted and with a narrow timeframe). This is a wonderful concept that I regularly apply to my projects: imagine that you are on-site with your customer, the most important functionality of your project just broke, resulting in a disastrous standstill of the whole facility, and all you have is your source code and the vi editor (or vim, if you’re non-hardcore like me). No internet, no IDE, no extensive documentation. Could you make meaningful changes to your project under these conditions? What needs to change to make your life easier in such an extreme situation? What restrictions would these changes impose on your daily work? How much effort/damage/resources would these changes cost? Is it easier to anticipate maintenance or to trust to luck that it won’t be necessary?

Easy production

A while ago, I reviewed the code of a map tool. The user viewed a map and could click on it to mark a certain location. The geo coordinates of this location would be used as input for further computations. The map was restricted to a fixed area, so the developer wrote the code in the easiest way possible: he chose well-known geographic landmarks on the map and determined their geo coordinates and pixel locations. Those were the reference points in the code that every click would be related to. Using simple mathematics (the rule of three), each point on the map could be calculated. Clever trick and totally working! The code practically wrote itself and the reference points only needed to be determined once.

Until the map was changed. The code contained some obscure constants that described the original landmarks, but the whole concept of arbitrary reference points was alien to the next developer. He was used to the classic concept of two reference points: top left and bottom right, with their respective geo coordinates and pixel locations. What seemed like a quick task turned into a stressful reconstruction of the clever trick, with the system in production.

In this example, the production (initial development) was easy, straightforward and natural, but not easily reproducible during the maintenance phase (subsequent modification). The algorithm used a clever approach, but this cleverness isn’t necessarily available “under combat conditions”.
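
For comparison, here is a minimal sketch of the classic two-reference-point interpolation the next developer expected (all names invented): the map corners are known in both pixel and geo coordinates, and every click is scaled proportionally between them, the rule of three in code form.

```java
// Sketch of the "classic" mapping: two reference points (top-left and
// bottom-right corners), each known in pixel and geo coordinates.
public class MapProjection {
    private final double leftPx, topPx, rightPx, bottomPx;     // pixel corners
    private final double leftLon, topLat, rightLon, bottomLat; // geo corners

    public MapProjection(double leftPx, double topPx, double rightPx, double bottomPx,
                         double leftLon, double topLat, double rightLon, double bottomLat) {
        this.leftPx = leftPx;     this.topPx = topPx;
        this.rightPx = rightPx;   this.bottomPx = bottomPx;
        this.leftLon = leftLon;   this.topLat = topLat;
        this.rightLon = rightLon; this.bottomLat = bottomLat;
    }

    // Longitude of a clicked x pixel: its proportional position between the corners.
    public double longitudeOf(double xPx) {
        return leftLon + (xPx - leftPx) / (rightPx - leftPx) * (rightLon - leftLon);
    }

    // Latitude of a clicked y pixel, analogous along the vertical axis.
    public double latitudeOf(double yPx) {
        return topLat + (yPx - topPx) / (bottomPx - topPx) * (bottomLat - topLat);
    }
}
```

A new map only requires updating the corner constants; no knowledge of arbitrary landmarks has to be reconstructed.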

Go the extra mile

Most machines are designed so that wearing parts can be easily replaced. Think of batteries in electronic gadgets (well, at least before the gadget’s estimated lifetime dropped below the average battery life) or light bulbs in cars (well, at least before LED headlights were considered cool). The thing is, engineers usually know the wear effects their designs have to endure. There is no “usual wear effect” on software, due to the lack of natural forces like gravitation. Everything could change, so it’s better to be prepared for all sorts of change. That’s the theory, but it’s not economically sound to develop software that is prepared for any circumstance. Pragmatic reasoning calls for compromise, like supporting changes that

  • are likely to happen (this needs to be grounded in domain knowledge and should be documented in the code)
  • are not expensive to arrange beforehand (the popular “low-hanging fruits”)
  • are expensive to implement afterwards

The last aspect might be a bit of a surprise, but it’s exactly the aspect that came into play in the example above: recreating the knowledge about the clever trick of landmark choices took more time than implementing the classic interpolation based on the corner points.

So if you see a possible (and not totally unlikely) change that will be expensive to implement once the intimate knowledge you have of the code during initial development has faded, go the extra mile and prepare for it right now. Leave a comment, implement some kind of extension mechanism or just mind the code seams. In our example, slightly complicating the initial development would have led to dramatically less clever and more accessible code.

Conclusion

You should consider writing your code for easy maintenance, even if that means additional effort during the initial implementation. Imagine that you yourself have to make the future change: stressed, overworked and under time pressure, without any recollection of your thoughts today, lacking your familiar tools while your customer waits impatiently by your side. With proper preparation, even this scenario is feasible.