Universal skills every software developer can benefit from

Disclaimer: I develop software, professionally for almost 15 years. These are some skills that have helped me (and still do), and I think they could help any software developer.

Debug

I cannot tell you how many times debugging saved me. I debug with print statements, with IDEs, with command line debuggers and with my brain. Understanding how a system works is crucial. Which parts are connected and which are not. Asking what if. And asking what happened.

Profile

If things go slow, I need to know why. Users and stakeholders expect a certain speed, and rightly so. But beware: if you optimize for one scenario, others might suffer. Profiling and optimization are a matter of priorities: which tasks should be fast and which can be slow.

Sketch

If I work with or for others, understanding each other and the models and concepts they use is essential. Sketching helps me to illustrate my view, my understanding of their view and the misunderstandings between us. Even when not communicating with others, I can communicate with myself: sketching a model of what is in my head or what I plan helps me reason about it. You don’t need to be a master artist; simple shapes like lines, rectangles, circles and arrows get you a long way.

Concepts (domain and technical)

Everybody thinks in concepts and models, whether they come from a technical domain or a user domain. In my daily work I need to understand, develop, extract and communicate concepts. Concepts come from very different places: code has concepts, domains have concepts, our profession has concepts and all kinds of people have concepts. Concepts form the basis of my communication.

Budgeting

Time is limited. Concentration is limited. Constraints in a project help me to focus. To be pragmatic. But they also push me to plan and to estimate. I need to develop a notion of how long a feature takes, how important it is, how risky.

Evaluate

In my work I constantly evaluate. From the small scale (which implementation is better?) to the large scale (which technology, which architecture should I use?). To evaluate, I need to know the goals and the criteria. My experience both helps and hinders me. I know that no evaluation can be objective. Everyone has their personal favorites (and dislikes). Some things can only be seen in hindsight. So I have to remind myself not to spend too long evaluating and to start using what I chose.

Talk

With other developers I can talk in IT lingo. With designers I need to use words from design. With users and stakeholders I speak so that they understand. My job is not only writing code. My job is to explain my job to others. If they do not understand me, it is my fault, not theirs. I do not need to bother them with every detail, but sometimes they are the only ones who can decide. I need to tell them what their options are and what the consequences of each are – in their words.

Plan and prepare

There are two kinds of people: the ones who like to prepare and the ones who like to improvise. I am in the middle. Some things can be prepared and planned. It is useful when you can move work ahead of time or when you have (more) options when you need to improvise. Don’t overplan. Remember: a plan is there to be changed.

Improvise

During my career I face new situations every now and then. I cannot plan for them, or I didn’t. When I am in a client meeting, in a demo presentation or at the production system and something does not go as planned, I need to do something – sometimes right now. It helps me to have an emergency mode. In these situations I focus on what I have (my brain and my voice) and on what I can (maybe) get: help from others, a pencil and paper, time. And sometimes I need to say: I am sorry.

Lead / own

If I work on an issue, I need to own it. If I lead a project, I need to own it. My career: I need to own it. I am the one who is responsible. That does not mean I have to perform the work myself. I also need to know when I am not the right person for the job. But I need to decide. The work, the project, the career is not a boat adrift in a giant ocean; I need to take the paddle and use it.

Collaborate

I do not work alone. I have teammates. I have clients. My goal is to work with them toward a common goal. For this I need to collaborate. To delegate. To talk and to ask. To lead and to follow.

Define goals

Goals are measurable. I can ask: did I reach that goal? And answer with yes or no. Often we define something like “I want to get better at X” as a goal. But that isn’t a goal. Think of a goal as a destination, not a direction. It is important to strike a balance between focusing too little and too much on our goals. But without even knowing what the goals are, we just wander around.

Reflect and how to get feedback

I confess: I cannot live without feedback for too long. Am I on my way to the goal? Is this code any good? Was this the right decision? Do I make progress? Does it work? Reflection and feedback are stepping stones for me. A base from which I can move to higher mountains.

Ask

Asking is hard. Asking for help is hard. Asking about things you don’t know (and maybe should) is hard. But it helps immensely in learning. Be curious. Asking in a way that the other person understands your question and you get the answers you need takes practice. So: feel free to ask :)

Be(a)ware of Laziness

Let’s assume we have a simple JavaScript “class” called Module. Each instance of the class has a name, a start() method and a stop() method to manage its lifecycle:

function Module(name) {
    this.name = name;
    console.log("Creating " + this.name);
}
Module.prototype.start = function() {
    console.log("Starting " + this.name);
};
Module.prototype.stop = function() {
    console.log("Stopping " + this.name);
};

We want to create a couple of instances with the names “a”, “b” and “c”. At the beginning of the program we want to start each module, and at the end of the program we want to stop each module. For the creation of the instances we use a map() function call on the names array:

var names = ["a", "b", "c"];
var modules = names.map(function(name) {
    return new Module(name);
});
modules.forEach(function(module) {
    module.start();
});
// do something
modules.forEach(function(module) {
    module.stop();
});

The output is as intended:

Creating a
Creating b
Creating c
Starting a
Starting b
Starting c
Stopping a
Stopping b
Stopping c

Now we want to port this code to C#. The definition of the class is straightforward:

class Module
{
    private readonly string name;

    public Module(string name)
    {
        this.name = name;
        Console.WriteLine("Creating " + name);
    }

    public void Start()
    {
        Console.WriteLine("Starting " + name);
    }

    public void Stop()
    {
        Console.WriteLine("Stopping " + name);
    }
}

The map() function is called Select() in .NET:

var names = new List<string>{"a", "b", "c"};
var modules = names.Select(
                 name => new Module(name));

foreach (var module in modules)
{
    module.Start();
}

foreach (var module in modules)
{
    module.Stop();
}

But when we run this program, we get a completely different output:

Creating a
Starting a
Creating b
Starting b
Creating c
Starting c
Creating a
Stopping a
Creating b
Stopping b
Creating c
Stopping c

Each module is created twice, and the creation calls are interleaved with the start() and stop() calls.

What has happened?

The answer is that .NET’s Select() method does lazy evaluation. It does not return a new list with the mapped elements. It returns an IEnumerable instead, which evaluates each mapping operation only when needed. This is a very useful concept. It allows for the chaining of multiple operations without creating an intermediate list each time. It also allows for operations on infinite sequences.

But in our case it’s not what we want. The stopped instances are not the same as the started instances.

How can we fix it?

By appending a .ToList() call after the .Select() call:

var modules = names.Select(
        name => new Module(name)).ToList();

Now the IEnumerable gets evaluated and collected into a list before the assignment to the modules variable.

So be aware of whether your programming language or framework uses lazy or eager evaluation for functional collection operations to avoid running into subtle bugs. Other examples of tools based on the concept of lazy evaluation are the Java stream API or the Haskell programming language. Some languages support both, for example Ruby since version 2.0:

range.collect { |x| x*x }       # eager: evaluated immediately
range.lazy.collect { |x| x*x }  # lazy: evaluated on demand
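
To see the same distinction with the Java stream API mentioned above, here is a minimal sketch (not part of the original example; the Module class simply mirrors the JavaScript version): map() on a stream is lazy, and unlike an IEnumerable a Java Stream can only be consumed once, so collecting into a list is the eager variant.

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class Module {
    private final String name;

    Module(String name) {
        this.name = name;
        System.out.println("Creating " + name);
    }

    void start() { System.out.println("Starting " + name); }

    void stop() { System.out.println("Stopping " + name); }

    public static void main(String[] args) {
        // map() alone creates no Module: it is evaluated lazily when a
        // terminal operation runs. Iterating the bare stream twice would
        // even throw an IllegalStateException, because a Stream can only
        // be consumed once.
        List<Module> modules = Stream.of("a", "b", "c")
                .map(Module::new)
                .collect(Collectors.toList()); // forces evaluation, like ToList() in C#

        modules.forEach(Module::start);
        // do something
        modules.forEach(Module::stop);
    }
}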

Drawing Graphs with Circular Vertices

Graph drawing is a common algorithmic problem with applications in many fields besides software; common examples include the analysis of social networks or the visualization of biological graphs. In this article, I describe a simple extension to a standard force-directed algorithm that makes it possible to draw graphs with circular vertices.

Force-directed algorithms, or spring embedders, place vertices by assigning forces according to the edges connecting the vertices. These algorithms are intuitive, able to yield solutions of high quality in a reasonable amount of time and can be applied to most kinds of graphs. For conciseness, most fundamentals are omitted; a basic knowledge of this kind of algorithm is presumed.

Original algorithm

We employ a standard force-directed algorithm as a basis, namely the algorithm of Fruchterman and Reingold. It assumes vertices to be point-shaped and defines two forces influencing them: an attractive force f_attr that pulls connected vertices towards each other and a repulsive force f_rep that disperses the vertices by repelling them from each other. The absolute values of the forces can be computed as follows:

  • f_attr(u, v) = distance(u, v)² / k
  • f_rep(u, v) = k² / distance(u, v)

The directions of the forces are determined from the positions of the vertices, given as two-dimensional vectors; for a pair of vertices, the directions of attraction and repulsion are opposite. The complete force affecting a vertex v is computed by adding up the repulsive forces from all other vertices and the attractive forces from all connected vertices. As shown in the following figure, k describes the distance between two connected vertices at which their attractive and repulsive forces are in equilibrium.

[Figure: the original force functions f_attr and f_rep]

The factor k is a constant and usually chosen according to the area of the drawing. If the distance between two vertices shrinks towards zero, the repulsive force grows towards infinity. Similarly, for two connected vertices, the attractive force grows with the distance between them. More information about the original algorithm can be found in the paper “Graph Drawing by Force-directed Placement”.

This approach works fine for point-shaped vertices; however, it cannot deal with two-dimensional vertices: it cannot ensure that vertices do not overlap, but only prevents their centers from touching. Next, I will present a slight modification of the algorithm in order to enable the handling of circular vertices.

Modified algorithm

Our goal is to ensure a minimum distance between all vertices so that their borders can neither touch nor overlap. Additionally, in contrast to the original algorithm, where the distance between vertices depends on a constant, the distance between vertices should be determined by their size, that is, smaller vertices may be placed closer to each other than larger vertices.

As illustrated in the next figure, one way to meet the first requirement is to adjust the force functions. If the repulsive force grows towards infinity as the distance shrinks towards a certain minimum, this minimum poses a lower bound for the distance between two vertices.

[Figure: the redefined force functions]

Furthermore, in order to fulfill the second requirement, the minimum and preferred distance functions are parameterized with the radii of the vertices as follows:

  • d_min(u, v) = r_u + r_v + b_min · (c_min · min(r_u, r_v) + c_max · max(r_u, r_v))
  • d_pref(u, v) = r_u + r_v + b_pref · (c_min · min(r_u, r_v) + c_max · max(r_u, r_v))

The addition of the radii r_u and r_v ensures that two vertices at the minimum distance never overlap. The constant b_x controls the size of the buffer between two vertices relative to their radii, as shown in the figure below. Finally, the constants c_min and c_max weigh the influence of the radii; this allows us to place a small vertex closer to a large vertex than another large vertex. The factors b_x, c_min and c_max should be positive, and c_min and c_max should add up to one. A small code sketch of these distance functions follows after the next figure.
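
As a small sketch (not part of the original article), the two distance functions could be implemented like this; the class name, parameter names and constructor are assumptions made for illustration:

class VertexDistances {
    // buffer factors for the minimum and the preferred distance
    private final double bMin;
    private final double bPref;
    // weights for the smaller and the larger radius; should add up to one
    private final double cMin;
    private final double cMax;

    VertexDistances(double bMin, double bPref, double cMin, double cMax) {
        this.bMin = bMin;
        this.bPref = bPref;
        this.cMin = cMin;
        this.cMax = cMax;
    }

    // d_min(u, v): lower bound for the distance of two vertices with radii rU and rV
    double minimumDistance(double rU, double rV) {
        return rU + rV + bMin * weightedRadii(rU, rV);
    }

    // d_pref(u, v): preferred distance of two vertices with radii rU and rV
    double preferredDistance(double rU, double rV) {
        return rU + rV + bPref * weightedRadii(rU, rV);
    }

    private double weightedRadii(double rU, double rV) {
        return cMin * Math.min(rU, rV) + cMax * Math.max(rU, rV);
    }
}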

[Figure: the preferred distance between two vertices]

On this basis the force functions can be redefined. Let d_actual(u, v) be the actual distance between two vertices u and v and let d_norm(u, v) be this distance normalized by the minimum distance:

  • d_actual(u, v) = |pos(v) – pos(u)|
  • d_norm(u, v) = d_actual(u, v) – d_min(u, v)

This results in the following force functions, which comply with the criteria specified before:

[Figure: the redefined force functions f_rep and f_attr]

Finally, this leads us to the pseudocode of the complete algorithm for laying out a graph G = (V, E):

procedure Layout(G = (V, E))
    initialize temperature t
    for i = 1 to n do
        for each v in V do
            disp(v) = 0
            for each u in V do
                if u ≠ v then
                    Δ = pos(v) - pos(u)
                    disp(v) = disp(v) + Δ/|Δ| · f_rep(u, v)
        for each (u, v) in E do
            Δ = pos(v) - pos(u)
            disp(v) = disp(v) - Δ/|Δ| · f_attr(u, v)
            disp(u) = disp(u) + Δ/|Δ| · f_attr(u, v)
        for each v in V do
            pos(v) = pos(v) + disp(v)/|disp(v)| · min(|disp(v)|, t)
        t = cool(t)
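
A rough Java translation of this pseudocode might look like the following sketch. The Vertex and Edge classes, the cooling schedule and the idea of passing the force functions in as parameters are my own assumptions for illustration; the concrete force functions from above would be supplied by the caller.

import java.util.List;
import java.util.function.ToDoubleBiFunction;

class ForceDirectedLayout {

    static class Vertex {
        double x, y;       // position
        double dx, dy;     // accumulated displacement
        double radius;     // used by the force and distance functions
    }

    static class Edge {
        Vertex u, v;
    }

    static void layout(List<Vertex> vertices, List<Edge> edges,
                       ToDoubleBiFunction<Vertex, Vertex> fRep,
                       ToDoubleBiFunction<Vertex, Vertex> fAttr,
                       int iterations, double initialTemperature) {
        double t = initialTemperature;
        for (int i = 0; i < iterations; i++) {
            // repulsive forces between all pairs of vertices
            for (Vertex v : vertices) {
                v.dx = 0;
                v.dy = 0;
                for (Vertex u : vertices) {
                    if (u == v) continue;
                    double deltaX = v.x - u.x, deltaY = v.y - u.y;
                    double length = Math.hypot(deltaX, deltaY);
                    if (length == 0) continue; // avoid division by zero
                    double force = fRep.applyAsDouble(u, v);
                    v.dx += deltaX / length * force;
                    v.dy += deltaY / length * force;
                }
            }
            // attractive forces along the edges
            for (Edge e : edges) {
                double deltaX = e.v.x - e.u.x, deltaY = e.v.y - e.u.y;
                double length = Math.hypot(deltaX, deltaY);
                if (length == 0) continue;
                double force = fAttr.applyAsDouble(e.u, e.v);
                e.v.dx -= deltaX / length * force;
                e.v.dy -= deltaY / length * force;
                e.u.dx += deltaX / length * force;
                e.u.dy += deltaY / length * force;
            }
            // move each vertex, limited by the current temperature
            for (Vertex v : vertices) {
                double length = Math.hypot(v.dx, v.dy);
                if (length == 0) continue;
                double limited = Math.min(length, t);
                v.x += v.dx / length * limited;
                v.y += v.dy / length * limited;
            }
            t = cool(t);
        }
    }

    private static double cool(double t) {
        return t * 0.95; // simple geometric cooling, an arbitrary choice
    }
}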

Applications

The drawings produced by this algorithm are not perfect; however, it is possible to employ it in a more complex context. For example, we used it to visualize the structure of software; an example can be seen in the figure below. The techniques behind this other layout algorithm are described in detail here.

[Figure: visualization of the structure of a software project]

A small example of domain analysis

One thing I’ve learned a lot about in recent years is domain analysis and domain modeling. Every once in a while, an isolated piece of code or a separable concept shows me just how much I had missed in all the years before. A few weeks ago, I came across such an example and want to share the experience and insight. It’s a story about domain exploration with a heightened degree of difficulty – another programmer had analyzed it before and written code that I was to replace. But first, let’s talk about the domain.

The domain

The project consisted of machine control software that receives commands and alters the state of a complex electronic circuitry accordingly. The circuitry consists of several digital-to-analog converters (DACs), among other parts. We will concentrate on the DACs in this story. In case you don’t know what a DAC is, let me explain. Imagine a little integrated circuit (IC), one of the black bug-like electronic parts on a circuit board. On one side, you provide it a digital number in binary representation and on the other side, you get an analog voltage that represents your number. Let’s say you drive an 8-bit DAC and give it a digital zero: the output will be zero volts. If you give the same DAC the number 255, it will output the maximum possible voltage. This maximum is given by the “reference voltage” pin and is usually tied to 5 V in traditional TTL logic circuits. If you drive a 12-bit DAC instead, a zero will still yield 0 V, while the 255 will now only yield about 0.3 V, because the maximum digital number is now 4095. So the resolution of a DAC, given in bits, is a big deal for the driver.

How exactly you have to provide that digital number, and what additional signals need to be set or cleared to really get the analog voltage, is up to the specific type of DAC. So this is the part of the behaviour that should be encapsulated inside a DAC class. The rest of the software should only be able to change the digital number, using a method on a particular DAC object. That’s our modeling task.

The original implementation

My job was not to develop the machine control software from scratch, but to re-engineer it from existing sources. The code is written in plain C by an electronics technician, and it really shows. For our DAC driver, there was a function that took one argument: an integer value that would be written to the DAC. If the client code was lazy enough not to check the bounds of the DAC, you would see all kinds of overflow effects. It worked, but only if the client code knew about the resolution of the DAC and checked the bounds. One task the machine control software needed to perform was to translate command parameters given in millivolts into the correct integer to feed into the DAC so that the desired millivolts appear at the analog output pin. This calculation, albeit not very complicated, was duplicated all over the place.


writeDAC(int value);

My original translation

One primary aspect of re-engineering work is not to assume too much and not to change too many places at once. So my first translation was a method on the DAC objects requiring the exact integer value that should be written. The method would internally check for the valid value range, because the object knows about the DAC resolution, while the client code should subsequently lose this knowledge. The original code translated nicely to this new structure and worked correctly, but I wasn’t happy with it. To provide the correct integer value, the client code still needs to know about the DAC resolution and perform the calculation from millivolts to DAC value. Even if you centralize the calculation, it is still called from everywhere.


dac.write(int value);
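
A sketch of what this first translation could have looked like; the class layout, names and the exception are my own illustration, not the original code:

class DAC {
    private final int resolutionInBits;

    DAC(int resolutionInBits) {
        this.resolutionInBits = resolutionInBits;
    }

    // the client code still has to know the resolution to compute this value
    void write(int value) {
        int maximumValue = (1 << resolutionInBits) - 1;
        if (value < 0 || value > maximumValue) {
            throw new IllegalArgumentException("value out of range for this DAC: " + value);
        }
        writeToHardwareRegister(value);
    }

    private void writeToHardwareRegister(int value) {
        // sets the data bits and control signals of the specific DAC type
    }
}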

My first revelation

When I had finally translated all the existing code, I knew that every single call to the DAC got its parameter in millivolts but needed to set the DAC integer. Now I knew that the client code never cared about DAC integers at all; it cared about millivolts. If you have such a revelation, act on it – even if just to see where it might lead you. I acted and replaced the integer parameter of the write method on the DAC object with a voltage parameter. I created the Voltage domain type and had it expose factory methods so it could easily be created from the millivolt integers in the commands that the machine control software received. Now the client code only needed to create a Voltage object and pass it to the DAC to have that voltage show up at the analog output pin. The whole calculation and checking part happened inside the DAC object, where it belongs.


dac.write(Voltage required);
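
A minimal sketch of this version; the Voltage type, the factory method name and the reference voltage handling are assumptions for illustration (in this version the DAC object still has to know the reference voltage to do the conversion):

final class Voltage {
    private final int millivolts;

    private Voltage(int millivolts) {
        this.millivolts = millivolts;
    }

    // matches the millivolt integers in the received commands
    static Voltage fromMillivolts(int millivolts) {
        return new Voltage(millivolts);
    }

    int inMillivolts() {
        return millivolts;
    }
}

class DAC {
    private final int resolutionInBits;
    private final int referenceVoltageInMillivolts;

    DAC(int resolutionInBits, int referenceVoltageInMillivolts) {
        this.resolutionInBits = resolutionInBits;
        this.referenceVoltageInMillivolts = referenceVoltageInMillivolts;
    }

    // the millivolts-to-integer calculation and the range check now live here
    void write(Voltage required) {
        int maximumValue = (1 << resolutionInBits) - 1;
        int value = required.inMillivolts() * maximumValue / referenceVoltageInMillivolts;
        writeToHardwareRegister(value); // plus the range check from before
    }

    private void writeToHardwareRegister(int value) {
        // sets the data bits and control signals of the specific DAC type
    }
}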

This version of the code was easy to read, easy to reason about and worked like a charm. It went into production and could be the end of the story.

The second insight

But the customer had other plans. He replaced parts of the original circuitry and upgraded most of the DACs along the way. Now there was only one type of DAC, but with additional amplifier functionality for some output pins (a typical DAC has several output pins that can be controlled by a pin address provided alongside the digital number). The code needed to drive DACs that were tied to a 5 V reference voltage, but some channels were amplified to double the voltage, providing an output range from 0 V to 10 V. If you want to set one of those channels to 5 V output voltage, you need to write half the maximum number to it. If the DAC has 12-bit resolution, you need to write 2047 (or 2048, depending on your rounding strategy). Writing 4095 would yield 10 V on those channels.

Because the amplification isn’t part of the DAC itself, the DAC code shouldn’t know about it. This knowledge should be placed in a wrapper layer around the DAC objects, which takes the voltage parameters from the client code and adjusts them according to the amplification of the channel. The client code would want to write 10 V and pass it to the wrapper layer, which knows about the amplification and reduces it to 5 V, passing this on to the DAC object, which transforms it into the maximum reference voltage (5 V) that subsequently gets amplified to 10 V. This sounded so weird that I decided to review my domain analysis.

It dawned on me that the DAC domain never really cared about millivolts or voltages. Sure, the output will be a specific voltage, but that voltage only reflects the ratio of the input value to the maximum value: the output voltage is the same percentage of the reference voltage as the input value is of the maximum digital number. It’s all about ratios. The DAC should always demand a percentage from the client code, not a voltage. This way, you can give it the ratio of anything and it will express that ratio as a voltage relative to the reference voltage. The DAC is defined by its core characteristics, and the wrapper layer performs the translation from required voltage to percentage. In case of amplification, it is accounted for in this translation – the DAC never needs to know.


dac.write(Percentage required);

Expressiveness of the new concept

Now we can really describe in code what actually happens: a command arrives, requiring us to set a DAC channel to 8 volts. We create the Voltage object for 8 volts and pass it on to the DAC wrapper layer. The layer knows about the 2x amplification and the reference voltage. It calculates that 8 volts corresponds to 80% of the maximum DAC value (80% of 5 V being 4 V before and 8 V after amplification) and passes this information to the DAC object. The DAC object, being the only one to know its resolution, writes 0.8 * maximum_DAC_value to the required register and everything works.

The new concept of percentages decouples the voltage information from the DAC resolution information and keeps both pieces of information where they belong. In fact, the DAC chip never really knows about the reference voltage either – it’s the circuit around it that does.
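
A sketch of this final model, using the Voltage type from the sketch above; the Percentage and CircuitBoard names, methods and constructor parameters are invented for illustration:

final class Percentage {
    private final double ratio; // between 0.0 and 1.0

    private Percentage(double ratio) {
        if (ratio < 0.0 || ratio > 1.0) {
            throw new IllegalArgumentException("not a valid ratio: " + ratio);
        }
        this.ratio = ratio;
    }

    static Percentage ofRatio(double ratio) {
        return new Percentage(ratio);
    }

    double asRatio() {
        return ratio;
    }
}

class DAC {
    private final int resolutionInBits;

    DAC(int resolutionInBits) {
        this.resolutionInBits = resolutionInBits;
    }

    // the DAC only knows its resolution and expresses the given ratio
    void write(Percentage required) {
        int maximumValue = (1 << resolutionInBits) - 1;
        int value = (int) Math.round(required.asRatio() * maximumValue);
        writeToHardwareRegister(value);
    }

    private void writeToHardwareRegister(int value) {
        // sets the data bits and control signals of the specific DAC type
    }
}

class CircuitBoard {
    private final DAC dac;
    private final double referenceVoltageInMillivolts;
    private final double amplificationFactor;

    CircuitBoard(DAC dac, double referenceVoltageInMillivolts, double amplificationFactor) {
        this.dac = dac;
        this.referenceVoltageInMillivolts = referenceVoltageInMillivolts;
        this.amplificationFactor = amplificationFactor;
    }

    // e.g. 8000 mV requested with 2x amplification and a 5 V reference -> 80 %
    void setOutput(Voltage required) {
        double ratio = required.inMillivolts()
                / (referenceVoltageInMillivolts * amplificationFactor);
        dac.write(Percentage.ofRatio(ratio));
    }
}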

Conclusion

While it is easy to see why the first version with voltages as parameters has its charms, it doesn’t model reality accurately and therefore falls short when flexibility is required. The first version ties DAC resolution and reference voltage together, when in fact the DAC chip only knows the resolution. You can operate the chip with any reference voltage within a valid range. By decoupling those pieces of information and moving the knowledge about reference voltages outside the DAC object, I modeled reality more accurately, and every requirement found its natural place. This “natural place finding” is what makes a good model useful for reasoning. In our case, the natural place for the reference voltage was outside the DAC, in the wrapper layer. Finding a real name for the wrapper layer was easy: I called it “circuit board”.

Domain analysis is all about having the right abstractions for your model. Your model is suitable for your task when everything fits and falls into place nearly automatically, when names don’t need to be invented but practically suggest themselves from the real domain. The right model (for the given task) feels good and transports a lot of domain knowledge. And domain knowledge is the most valuable knowledge for any developer.

Object slicing – breaking polymorphic objects in C++

C++ has a pitfall called “object slicing” that is alien to most programmers coming from other object-oriented languages like Java. Object slicing (fruit-ninja-style) occurs in various scenarios when an instance of a derived class is copied into a variable whose type is (one of) its base class(es), e.g.:

#include <iostream>

// we use structs for brevity
struct Base
{
  Base() {}
  virtual void doSomething()
  {
    std::cout << "All your Base are belong to us!\n";
  }
};

struct Derived : public Base
{
  Derived() : Base() {}
  virtual void doSomething() override
  {
    std::cout << "I am derived!\n";
  }
};

static void performTask(Base b)
{
  b.doSomething();
}

int main()
{
  Derived derived;
  // here all evidence that derived was used to initialise base is lost
  performTask(derived); // will print "All your Base are belong to us!"
}

Many explanations of object slicing focus on the fact that only part of the fields of a derived class will be copied on assignment, or when a polymorphic object is passed to a function by value. Usually this is fine, because most of the time only what the static type, the base class, offers is used further on. Of course you can construct scenarios where this becomes a problem.

I ran into the problem of virtual functions being sliced off of polymorphic objects, too. That can be hard to track down if you are not aware of the issue. Sadly, I do not know of any compilers that issue warnings or errors when polymorphic objects are passed or copied by value.

The fix is easy in most cases: use raw pointers, smart pointers or references to pass your polymorphic objects around. But the issue itself can be really hard to track down, so try to define conventions and coding styles that minimise the risk of sliced objects. Do not avoid using and passing values around just out of fear! Values provide many benefits in correctness and readability and may even improve performance when used with concrete classes.

Edit: Removed excess parameters in construction of derived. Thx @LorToso for the comment and the hint at ReSharper C++!

Software development is code organization

The biggest problem in developing and maintaining software is understanding code. Software developers should get good training in crafting code which can be understood. To make sense of the mess we need to organize it.

In 2000 Edsger Dijkstra wrote about our problems organizing and designing software:

I would therefore like to posit that computing’s central challenge, “How not to make a mess of it”, has not been met. On the contrary, most of our systems are much more complicated than can be considered healthy, and are too messy and chaotic to be used in comfort and confidence.

Our code bases get so big and complicated today that we cannot comprehend them all at once. Back in the days of early UNIX, technical constraints led to smaller code. But the computer is not the limiting factor anymore. We are. Our minds cannot comprehend what we create. Brian Kernighan wrote:

Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?

Writing code that we (or other developers) can understand is crucial. But why do we fail?

Divide and lose

Usually the first suggestion when tackling code is to decouple it. Make it clean. Use clean code principles like DRY, SOLID, KISS, YAGNI and whatever other acronyms you know. These really do help to decouple. But they are missing the point: they are the how, not the why.
Take a look at your codebase and tell me: where are the classes which constitute a subdomain or a specific feature? In which project or part do they live?
Normally you cannot. We only know how to divide code by technical aspects. But features and changes often come from the domain, not from the technology.
How can we understand our creations when we cannot understand their structure? Their architecture? How can we understand something we do not see?

But it does work

The next argument is not much better. Our code might work now. But what if a bug is found or a new feature is about to be implemented? Do you understand the code and its structure? Even weeks, months or years later? Working code is good, but you can only reliably change code that you understand.

KISS

Write simple code. Write simple and small methods. Write cohesive classes. The dream of components. But the whole is more than the sum of its parts. You can write simple classes, but the communication and threading issues between them can be very complex, even if the interfaces are sound and simple. Understanding a simple class in isolation can be easy. But understanding a system of simple classes can be difficult and complex. Things are complex. Domains are complex. We cannot ignore that.

Code as an interface

When writing code we have to take the reader and the domain into account. Treat code as an interface: an interface to the system and the domain. It is an opinionated view of the world. The computer does not care about the code we use, just like the printer that prints our favorite book does not care about its contents. But the reader does.
This isn’t just nice thinking; understanding code is key to successfully crafting and changing software.

Assumptions – how to find, track and eliminate them

Assumptions can kill a project. Like a house built on sand, we don’t know when or where it will collapse.
The problem with assumptions is that they disguise themselves as truths. We believe them. They are the project’s reality. Just like the Matrix.
Assumptions are shortcuts: guesses at reality. We cannot fully grasp reality, so we assume. But we can find evidence for our decisions. For this we need to uncover the assumptions, assess their risk and gather evidence. But how do we know what we assume?

Find assumptions

Watch your language

‘I think’, ‘in my opinion’, ‘should be’, ‘roughly’, ‘circa’ are all clues for assumptions. Decisions need to be based on evidence. When we use vague language or personal opinions to describe our project, we need to pause. Beneath this lurk insecurity and assumptions.
Another red flag is metaphors. Metaphors might be great to present, to paint a picture in our heads or to describe a vision. But in decision making they are too abstract and vague. We may use them to describe our strategy, but when we need to design and implement, we need borders that constrain our decisions. Metaphors usually cover only some aspects of the project and vice versa; there’s a mismatch. We need concrete language without ambiguity.

Be dumb

We know so much that we think others have the same experience, education, viewpoint, familiarity, proficiency and imprinting. And we know so little that we think the reverse is also true. We transfer. We assume. Dare to ask dumb questions. Adopt a beginner’s mind. Challenge traditions and common beliefs.
We take age-old decisions for granted. They were made by people smarter than us, so they must be right. Don’t do this. Question them. Even the obvious ones.
In the book ‘Hidden in Plain Sight’, Jan Chipchase enters a typical cafe where people sit and talk, drink coffee and type on their laptops. The question he poses: should the coffee shop owner sell diapers? So that everybody can continue what they are doing without the need to go to the bathroom. This question challenges our cultural and imprinted beliefs. And this is good.

Be curious

Ask: why? We need to get to the root of the problem. Dig deeper. Often, under layers of reasoning and thoughtful decisions lies an assumption. A chain is only as strong as its weakest link. If we started with an assumption, the reasoning built on it is also assumed. Children often ask why and don’t stop even when we think everything has been said and is logical. So when we find the root, we need to continue to ask: is this really the root? Why is it the way it is?
Another question we need to ask repeatedly is: what if? What if our target audience changes? What if we pursue the opposite of our project’s goals? What if the technology changes?

Change perspectives

We see what we want to see. Seeing is an active process. We can stretch our thinking only so far. To stretch it further we need to change roles. For just a few hours, do the work our users do. Feel their pains, their highs and lows.
Or adopt the role of the browser. Good interfaces are conversations. Play out a dialog with your user. Be the browser.
Only by embracing the constraints of other perspectives can we force ourselves to stretch. In this way we find the things we assume because of our view of the world.

Track them

After we have collected the assumptions, we need to track them so we can later prove or disprove them. For this a simple spreadsheet or table is sufficient. This learning plan consists of 5 columns (taken from Leah Buley’s The UX Team of One):

  • the assumption: what we believe is true
  • the certainty: a 3 or 5 point scale showing how sure we are that we are right
  • notes: additional notes of why we think the assumption is right or wrong
  • the evidence: results which we collected to support this assumption
  • the research: things we can do to collect further evidence

Eliminate them

Now that we know what we assume and with which certainty we think we are right, we can start to collect further information to support or disprove our claims. In short: we research. Research can take many different forms, but all of them are there to gain further insights. Some basic forms we use to bring light into the darkness of uncertainty are:

  • Stakeholder interviews
  • (Contextual) user interviews
  • Heuristic evaluation
  • Prototyping
  • Market research

Other methods we don’t use (yet) include:

  • A/B tests (paired with analytics)
  • User tests

The point behind all these methods is to build a chain of reasoning. Everything in our software needs a reason to exist. The users and the stakeholders are the primary sources of insight. But our experience, human psychology, and common patterns or conventions also help us decide which way to go.
Not only the method of collection is important, but also how the results are documented. We should present the essential information in a way that makes it easy to grasp just by looking at the respective documents. On the other hand, we should keep all of this pragmatic and not go overboard. Our goal is to gain insight, not to build a proof of the system.