Introduction to Debian packaging

In previous posts I wrote about packaging your software as RPM packages for a variety of use cases. The other big binary package format on Linux is DEB, used by Debian, Ubuntu and friends. Both serve the purpose of convenient distribution, installation and update of binary software artifacts. Packages declare their dependencies and describe what they provide.

How do you provide your software as DEB packages?

The master guide to Debian packaging can be found at https://www.debian.org/doc/manuals/maint-guide/. It is a very comprehensive guide spanning many pages and providing loads of information. Building a usable package of your software for your clients, however, can be a matter of minutes if you know what to do. So I want to show you the basic steps; for refinement you will need to consult the guide or other resources specific to the software you want to package. For example, there are several guides specific to packaging software written in Python. I find it hard to determine the currently recommended way of building Debian packages for Python, because the guides from the last 10 years or so differ. You may of course say “Just use pip for Python” 🙂

Basic tools

The basic tools for building Debian packages are debhelper and dh-make. To check the resulting package you will additionally need lintian. You can install them along with the basic software build tools using:

  sudo apt-get install build-essential dh-make debhelper lintian

The Debian packaging system uses make and several helper scripts under the hood to build the binary packages out of source tarballs.

Your first package

First you need a tar archive of the software you want to package. With Python setuptools you could use python setup.py sdist to generate the tarball. Then you run dh_make on it to generate the metadata and the package build environment for your package. Now you have to edit the metadata files, namely control, copyright and changelog. Finally you run dpkg-buildpackage to generate the package itself. Here is an example of the necessary commands:

  mkdir hello-deb-1.0
  cd hello-deb-1.0
  dh_make -f ../hello-deb-1.0.tar.gz
  # edit the deb metadata files
  vi debian/control
  vi debian/copyright
  vi debian/changelog
  dpkg-buildpackage -us -uc
  lintian -i -I --show-overrides ../hello-deb_1.0-1_amd64.changes

The control file roughly resembles RPM’s SPEC file. Package name, description, version and dependency information belong there. Note that Debian is very strict when it comes to the naming of packages, so make sure you use the pattern ${name}-${version}.tar.gz for the archive and that it extracts into a corresponding directory without the extension, i.e. ${name}-${version}.
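For illustration, here is a minimal sketch of how the two most important metadata files could look for our example. All names, versions and contact data are of course placeholders you have to adapt. A hypothetical debian/control:

  Source: hello-deb
  Section: utils
  Priority: optional
  Maintainer: Jane Doe <jane@example.com>
  Build-Depends: debhelper (>= 9)
  Standards-Version: 3.9.8

  Package: hello-deb
  Architecture: any
  Depends: ${shlibs:Depends}, ${misc:Depends}
  Description: short one-line summary of hello-deb
   The long description follows on continuation lines,
   each one indented by a single space.

And a matching debian/changelog entry – the trailer line with maintainer and date is mandatory and format-sensitive (one leading space, two spaces before the date):

  hello-deb (1.0-1) unstable; urgency=medium

    * Initial release.

   -- Jane Doe <jane@example.com>  Thu, 02 Mar 2017 10:00:00 +0100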

If everything went well, several files were generated in your base directory:

  • The package itself as .deb file
  • A file containing the latest changelog entry and checksums of the generated files, ending with .changes
  • A file with the package description ending with .dsc
  • A tarball with the original sources renamed according to debian convention hello-deb_1.0.orig.tar.gz (note the underscore!)

Going from here

Of course there is a lot more to the tooling and workflow when maintaining Debian packages. In future posts I will explore additional means for improving and updating your packages, like the quilt patch management tool, signing the package, symlinking, scripts for pre- and post-installation and so forth.

Getting better at programming without coding

Almost two decades ago, one of the programming books with the biggest impact on my thinking as a software engineer was published: The Pragmatic Programmer. Most of its tips and practices are still fundamental to my work. If you haven’t read it, give it a try.
Over the years I refined some practices and gained a renewed focus on additional topics. One of the most important topics of the original tips, and of my profession, is to care and think about my craft.
In this post I collected a list of tips and practices which helped and still help me in my daily work.

Think about production

Since I develop software to be used, thinking early about the production environment is key.

Deploy as early as possible
Deployment should be a non-event. Create an automatic deployment process to keep it that way, and deploy as early as possible to remove the risk of unpleasant surprises.

Master should always be deployable
Whether you use master or another branch, you need a branch which can always be deployed without risk.

Self containment
Package (as many as possible of) your dependencies into your deployment. Keep surprises like missing repositories or dependencies to a minimum – ideally none.

Use real data in development
Real data has characteristics, gaps and inconsistencies you cannot imagine. During development use real data to experience problems before they get into production.

No data loss
Deploying should not result in a loss of data. Your application should shut down gracefully. Deployment often deletes the application’s directory or uses a fresh one, so no files or in-memory state should be used for persistence. Applications should be stateless processes.

Rollback
If anything goes wrong or the newly deployed application has a serious bug, you need to be able to revert to the last version. Make rollback a requirement.

No user interruption
Users work with your application. Even if they do not lose data or their current work when you deploy, they do not like surprises.

Separate one-off tasks
Software should be running and available to the user. Do not delay startup with one-off admin tasks like migrations, cache warm-up or search index creation. Make your application start in seconds.

Manage your runs
Problems, performance degradation and bugs should be visible. Monitor your key metrics, log important things and detect problems in the application’s data. Make it easy to combine, search and graph your recordings.

Make it easy to reproduce
When a bug occurs or a user has a problem, you need to retrace the steps by which the system arrived at its current state. Store user actions so that they can be easily replayed.

Think about users

Software is used by people. In order to craft successful applications I have to consider what these people need.

No requirements, just jobs
Users use software to get stuff done. Features and requirements confuse solutions with problems. Understand what situation the user is in and what he needs to reach his goal. This is the job you need to support.

Work with the user
In order to help the user with your software, you need to relate to his situation. Observing, listening to, talking to and working alongside him helps you see his struggles and where software can help.

Speak their language
Users think and speak in their domain, not in the domain of software. If you want to help them, you and the user interface of your software need to speak like a user, not like a piece of software.

Value does not come from effort
The most important things your software does are not the ones which need the most effort. Your users value things which help them the most. Find these.

Think about modeling

A model is at the core of your software. Every system has a state. How you divide and manage this state is crucial to evolving and understanding your creation.

Use the language of the domain
In your core you model concepts from the user’s domain. Name them accordingly, and reasoning about them – and with the users – becomes easier.

Everything has one purpose
Divide your model by the purpose of its parts.

Separate read from write
You won’t get the model right from the start. It is easier to evolve the model if read and write operations have their own models. You can even have different read models for different use cases (see also CQRS and Turning the database inside out).

Different parts evolve at different speeds
Not all parts of a model are equal. Some stand still, some change frequently. Some are specified, others you learn about step by step. Some need to be constant, some need to be experimented with. Separating parts by their speed of change will help you deal with change.

Favor immutability
State is hard. State is needed. Isolating state helps you understand a running system. Isolating state helps you remove coupling.

Keep it small
Reasoning about a large system is complicated. Keep effects at bay and models small. Separating and isolating things gives you a chance to overview the whole system.

Think about approaches

Getting to all this is a journey.

When thinking use all three dimensions
Constraining yourself to a computer screen for thinking deprives you of one of your best thinking tools: spatial reasoning. Use whiteboards, walls, paper and more to remove the boundaries from your thoughts.

Crazy 8
Usually you think in your old ways. Getting out of your (mental) box is not easy. Crazy 8 is a method for creating 8 solution sketches (e.g. for a UI) in a very short time frame.

Suspend judgement
As a programmer you are quick to assess proposals and solutions. Don’t do that. Learn to suspend your judgement. Some good ideas are not so obvious; premature judgement will kill them.

Get out
Thinking long and hard about a problem can put blinders on you. After stating the problem, get out. Take a walk. Do not actively think or talk about the problem. This simulates the “shower effect”: getting your best ideas when you are not actively thinking about the problem.

Assume nothing
Assumptions bear risks. They can make your project fail. Start from what is certain. Choose your direction to explore and find your assumptions. Each assumption is an obstacle, a question that needs an answer. Ask your users. Design hypotheses and experiments to test them. (see From Agile to UX for a detailed approach)

Pre-mortem
Another way to find blind spots in your thinking is to frame for failure. Construct a scenario in which your project has failed. Then reason about what made it fail. Where are your biggest risks? (see How to map your fears for details)

MVA – minimum valuable action
Every step, every experiment should be as lightweight as possible. Do not craft a beautiful prototype if a sketch would suffice. Choose the most efficient method that gets you closer to your goal.

Put it into a time box
When you need to experiment, constrain it. Define a point in time by which you want to have an answer. You do not need to go the whole way to get an impression.

Three natural resources of information technology

Disclaimer: English is not my native language, so it is possible that my terminology is a little skewed in this blog entry. If you have a suggestion for better words, please let me know.

IT Currencies

Everybody working in the vast field of information technology knows about the three constraints of the project management triangle:

  • Cost
  • Scope
  • Schedule

We can translate these constraints into the three currencies of business:

  • Money
  • Effort
  • Time

You can achieve virtually anything in IT if you are willing to spend enough of these three currencies (except solving the halting problem and similarly undecidable problems).

IT Commodities

You surely also know about the three upscaling commodities of our profession:

  • processing power (think CPU)
  • memory (think RAM or HDD)
  • bandwidth (think network throughput)

If you are willing to invest more business currency, you’ll get more of these commodities. You’ll invest mostly money or time – although Moore’s Law seems to be fading, so spending time, as in waiting for the next generation of computers, is not the superior deal it used to be.

There are three downscaling commodities, too:

  • latency (think caches or parallelism)
  • physical size (think USB sticks the size of a fingernail)
  • energy consumption (think Raspberry Pis that are powered over USB)

These commodities get reduced with every new generation of computers. What once was a supercomputer is now a $30 mini-PC. I vividly remember my university announcing its latest piece of technology during my first year of study: a computer with a 1 GHz CPU, 1 GB of RAM and a 1 TB HDD. This machine was used by all students concurrently. Today, my phone provides more power and fits into my pocket.

IT natural resources

Now, with currency and commodity defined, let me introduce you to the concept of natural resources of IT. Natural resources are things that have value to the business and need to be harvested instead of just being bought. While our natural resources won’t be material ones, a material example helps to introduce the concept: you may buy a whole mountain, but the iron ore in it (a major natural resource of industry) still needs to be mined. Raw iron ore is the starting point for many processing steps, each one refining the input material and producing output material of higher value. There are countless different materials in the world, but only a handful of major natural resources. Whoever was the first to drill a hole in the ground and get crude oil back became a rich man. Keep this imagery in mind when we talk about the natural resources of IT, but please forget about the aspect of mass. Our natural resources don’t have a mass. They do have a location, though.

Data or information: You’ve already guessed it. The oil of IT is data. Data can be labeled as crude oil; information then is refined data. Just imagine you drill a hole in the ground and it spits out random facts. You could simply record the facts and build a knowledge database out of them. If you want to give your hole in the ground a name, you could name it Facebook, Twitter or something alike. Data is processed and turned into information, and information is combined and aggregated to give us even more valuable information, just like in the iron ore example before. In the old days, data was provided by human effort (aka typing). In the era of the internet of things, most data is provided by sensors all over the world. And while data still has a location (albeit an increasingly fuzzy one in the era of cloud computing), it has no mass. This means it can be copied without cost, a feature no material natural resource can offer. Data as a natural resource of IT is so widely known that it even gave IT its first name: electronic data processing.

Source code: The fabric all software is made of is another natural resource of IT. You could argue that code is just data, but I think its processing pipeline is so remarkably different that it should be discussed separately. Source code is still harvested by hand, by humans typing words into a text editor like it’s 1980. Source code is refined into running software programs, a process that has become fully automated in recent years. The software is then used to gather data, distill information out of data or, well, entertain us. Source code is a rare natural resource, because it needs to be harvested by highly skilled workers in a delicate process called programming. The number of programmers worldwide doubles every five years, but the demand for software rises even faster. All the while, we still haven’t figured out how to maintain an acceptable quality level. If source code is the equivalent of gold (rare, valuable, sought-after), it most often comes mixed with all kinds of scrap metal.

Random numbers: The raw material of anything cryptographic is random numbers. They might be seen as data, too, but again, I think their unique properties require a separate examination. Random numbers need to be truly random. The higher the randomness, the higher the quality of this natural resource. A lot of the random numbers we use (or consume) today are really just pseudorandom numbers, obtained from an ultimately deterministic generator. We rely on this second-grade material because the harvesting speed for real random numbers is pitifully slow and cannot satisfy our need for them. Imagine again that you drill a hole in the ground and it spits out random numbers. You are going to get rich, because random numbers are the crude oil of cryptography and therefore of every serious data transfer today. If you think about sources of randomness, radioactive decay or cosmic radiation are very high on the list. The RANDOM.ORG service uses atmospheric noise – as if the weather in Ireland would provide much noise: it will rain tomorrow, too. A speciality of random numbers is that they can only be used once to provide their full value. Nobody wants second-hand random numbers, because they lose their randomness the moment they are known (much like you can’t bet on last week’s sport events). So while we can still say that random numbers have no mass, they are more similar to their material counterparts in that they can’t be copied and are affected by decay over time.
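To make the difference between the two grades of material concrete, here is a small C++ sketch. Note that the standard does not even guarantee that std::random_device taps a real entropy source on every platform – on most systems it does, which is exactly the slow, harvested kind of randomness described above:

#include <iostream>
#include <random>

int main()
{
  // std::random_device usually taps the operating system's entropy pool:
  // slowly harvested, hard to predict - the "crude oil".
  std::random_device entropy;

  // A Mersenne Twister is a deterministic algorithm: seeded once, it
  // mass-produces cheap pseudorandom numbers - the second-grade material.
  std::mt19937 generator(entropy());

  std::uniform_int_distribution<int> die(1, 6);
  for (int i=0; i<10; ++i)
    std::cout << die(generator) << ' ';
  std::cout << '\n';
}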

What now?

This blog post was meant to inspire and to share a question: are those all the natural resources in the field of IT? I thought long about it, but could only find derived products like blockchain blocks, which ultimately rely on brute-forcing one-way functions like hashes in the wrong direction. To mine a bitcoin, for example – the most prominent implementation of a blockchain – you only need commodities like processing power and some time. There is nothing inherently “unique” about a blockchain block. Nice choice of terminology with “mining bitcoins”, though.

So, the question goes to you: Can you think of another natural resource of IT? Please leave a comment if you do.

Generating an Icosphere in C++

If you want to render a sphere in 3D, for example in OpenGL or DirectX, it is often a good idea to use a subdivided icosahedron. It usually works better than the “UV sphere”, which simply tessellates the sphere by longitude and latitude: the triangles of an icosphere are much more evenly distributed over the final sphere. Unfortunately, the easiest way to get such a sphere seems to be to create it in a 3D editing program. But loading it into your application requires a parser for a 3D file format. That is a lot of overhead if you really just need the sphere, so generating it programmatically is preferable.

At this point, many people will just settle for the UV sphere, since it is easy to generate programmatically – especially because generating the sphere as an indexed mesh without vertex duplicates further complicates the problem. But it is actually not much harder to generate the icosphere!
Here I’ll show some C++ code that does just that.

C++ Implementation

We start with a few supporting definitions and a hard-coded indexed-mesh representation of the icosahedron:

#include <cmath>
#include <cstdint>
#include <vector>

// The index type and a minimal 3D vector type; in a real project these
// would typically come from your engine or math library.
using Index=std::uint32_t;

struct v3
{
  float x, y, z;
};

v3 operator+(const v3& a, const v3& b)
{
  return {a.x+b.x, a.y+b.y, a.z+b.z};
}

v3 normalize(const v3& v)
{
  const float l=std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
  return {v.x/l, v.y/l, v.z/l};
}

struct Triangle
{
  Index vertex[3];
};

using TriangleList=std::vector<Triangle>;
using VertexList=std::vector<v3>;

namespace icosahedron
{
const float X=.525731112119133606f;
const float Z=.850650808352039932f;
const float N=0.f;

static const VertexList vertices=
{
  {-X,N,Z}, {X,N,Z}, {-X,N,-Z}, {X,N,-Z},
  {N,Z,X}, {N,Z,-X}, {N,-Z,X}, {N,-Z,-X},
  {Z,X,N}, {-Z,X,N}, {Z,-X,N}, {-Z,-X,N}
};

static const TriangleList triangles=
{
  {0,4,1},{0,9,4},{9,5,4},{4,5,8},{4,8,1},
  {8,10,1},{8,3,10},{5,3,8},{5,2,3},{2,7,3},
  {7,10,3},{7,6,10},{7,11,6},{11,0,6},{0,1,6},
  {6,1,10},{9,0,11},{9,11,2},{9,2,5},{7,2,11}
};
}

[figure: the icosahedron]
Now we iteratively replace each triangle in this icosahedron by four new triangles:

[figure: subdivision of one triangle into four]

Each edge of the old model is subdivided, and the resulting vertex is moved onto the unit sphere by normalization. The key here is not to duplicate the newly created vertices. This is done by keeping a lookup table from each edge to the new vertex it generates. Note that the orientation of the edge does not matter, so we need to normalize the edge direction for the lookup. We do this by forcing the lower index to come first. Here’s the code that either creates or reuses the vertex for a single edge:

#include <map>
#include <utility>

using Lookup=std::map<std::pair<Index, Index>, Index>;

Index vertex_for_edge(Lookup& lookup,
  VertexList& vertices, Index first, Index second)
{
  // normalize the edge orientation: lower index first
  Lookup::key_type key(first, second);
  if (key.first>key.second)
    std::swap(key.first, key.second);

  // try to register a new vertex index for this edge
  auto inserted=lookup.insert({key, Index(vertices.size())});
  if (inserted.second)
  {
    // the edge is new: create its midpoint vertex
    // and push it onto the unit sphere
    auto& edge0=vertices[first];
    auto& edge1=vertices[second];
    auto point=normalize(edge0+edge1);
    vertices.push_back(point);
  }

  return inserted.first->second;
}

Now you just need to do this for all the edges of all the triangles in the model from the previous iteration:

#include <array>

TriangleList subdivide(VertexList& vertices,
  TriangleList triangles)
{
  Lookup lookup;
  TriangleList result;

  for (auto&& each:triangles)
  {
    // create or reuse the midpoint vertex on each of the three edges
    std::array<Index, 3> mid;
    for (int edge=0; edge<3; ++edge)
    {
      mid[edge]=vertex_for_edge(lookup, vertices,
        each.vertex[edge], each.vertex[(edge+1)%3]);
    }

    // replace the old triangle by four smaller ones
    result.push_back({each.vertex[0], mid[0], mid[2]});
    result.push_back({each.vertex[1], mid[1], mid[0]});
    result.push_back({each.vertex[2], mid[2], mid[1]});
    result.push_back({mid[0], mid[1], mid[2]});
  }

  return result;
}

using IndexedMesh=std::pair<VertexList, TriangleList>;

IndexedMesh make_icosphere(int subdivisions)
{
  VertexList vertices=icosahedron::vertices;
  TriangleList triangles=icosahedron::triangles;

  for (int i=0; i<subdivisions; ++i)
  {
    triangles=subdivide(vertices, triangles);
  }

  return{vertices, triangles};
}
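For example, three subdivision steps already yield a reasonably smooth sphere. Each step quadruples the triangle count (20, 80, 320, 1280, …), and by Euler’s polyhedron formula the vertex count works out to half the triangle count plus two:

// three subdivision steps: 20*4^3 = 1280 triangles and 642 vertices
auto mesh=make_icosphere(3);
VertexList& sphere_vertices=mesh.first;
TriangleList& sphere_triangles=mesh.second;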

There you go, a custom subdivided icosphere!
[figure: the resulting icosphere]

Performance

Of course, this implementation is not the most runtime-efficient way to get the icosphere, but it is decent and very simple. Its performance depends mainly on the type of lookup used. I used a map instead of an unordered_map here for brevity, only because there’s no premade hash function for a std::pair of indices. In practice, you would almost always use a hash map or some kind of spatial structure, such as a grid, which makes this method a lot tougher to compete with – and certainly feasible for most applications!
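For completeness, here is one possible hash functor for the edge key – a sketch assuming the 32-bit Index type from above, so that both indices can be packed losslessly into a single 64-bit value:

#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>

// Packs the (lower, higher) index pair into one 64-bit value and hashes
// that. With 32-bit indices this packing is collision-free.
struct EdgeHash
{
  std::size_t operator()(const std::pair<Index, Index>& edge) const
  {
    return std::hash<std::uint64_t>()(
      (std::uint64_t(edge.first)<<32) | edge.second);
  }
};

using FastLookup=
  std::unordered_map<std::pair<Index, Index>, Index, EdgeHash>;

Swapping FastLookup in for the Lookup type used by vertex_for_edge is all it takes.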

The general pattern

The lookup-or-create pattern used in this code is very useful when creating indexed meshes programmatically. I’m certainly not the only one who has discovered it, but I think it deserves to be more widely known. For example, I’ve used it when extracting voxel membranes and isosurfaces from volumes. It works very well whenever you create your vertices from some well-defined parameters. Usually, that is some tuple describing the edge you are creating the vertex on. This is the case with marching cubes or marching tetrahedrons. It can, however, also be grid coordinates, if you sparsely generate vertices on a grid – for example when meshing heightmaps.

ECMAScript 6 is coming

ECMAScript is the standardized specification of the scripting language that is the basis for JavaScript implementations in web browsers.

ECMAScript 5, which was released in 2009, is in widespread use today. Six years later, in 2015, ECMAScript 6 was released, which introduced a lot of new features, especially additions to the syntax.

However, browser support for ES 6 needed some time to catch up.
If you look at the compatibility table for ES 6 support in modern browsers, you can see that the current versions of the major browsers now support ES 6. This means that soon you will be able to write ES 6 code without the need for an ES6-to-ES5 cross-compiler like Babel.

In my opinion, the two features of ES 6 that will change the way JavaScript code is written are the new class syntax and the new lambda syntax with lexical scoping of ‘this’.

Class syntax

Up to now, if you wanted to define a new type with a constructor and methods on it, you would write something like this:

var MyClass = function(a, b) {
	this.a = a;
	this.b = b;
};
MyClass.prototype.methodA = function() {
	// ...
};
MyClass.prototype.methodB = function() {
	// ...
};
MyClass.prototype.methodC = function() {
	// ...
};

ES 6 introduces syntax support for this pattern with the class keyword, which makes class definitions shorter and more similar to class definitions in other object-oriented programming languages:

class MyClass {
	constructor(a, b) {
		this.a = a;
		this.b = b;
	}
	methodA() {
		// ...
	}
	methodB() {
		// ...
	}
	methodC() {
		// ...
	}
}

Arrow functions and lexical this

One of the biggest annoyances in JavaScript has always been the scoping rules of this. You had to remember to rebind this whenever you wanted to use it inside an anonymous function:

var self = this;
this.numbers.forEach(function(n) {
    self.doSomething(n);
});

An alternative was to use the bind method on an anonymous function:

this.numbers.forEach(function(n) {
    this.doSomething(n);
}.bind(this));

ES 6 introduces a new syntax for lambdas, the “fat arrow”:

this.numbers.forEach((n) => this.doSomething(n));

These arrow functions not only preserve the binding of this, they are also shorter – especially if the function body consists of only a single expression. In that case, the curly braces and the return keyword can be omitted:

// ES 5
numbers.filter(function(n) { return isPrime(n); });
// ES 6
numbers.filter((n) => isPrime(n));

Widespread support for ECMAScript 6 is just around the corner; it’s time to familiarize yourself with it if you haven’t done so yet. A nice overview of the new features can be found at es6-features.org.

4 questions you need to ask yourself constantly while programming

Most of today’s general purpose programming languages come with a plethora of features. Often there are different levels of abstraction and different intended use cases: some features are primarily for library designers, others ease the implementation of domain specific languages, and application developers mostly use yet another feature set.

Some language communities are discussing “language profiles” or “levels” to ban certain potentially harmful constructs. The typical audience, like application programmers, does not need them, but removing them from the language would limit its usefulness in other cases. Examples are the Scala levels (a bit dated), the Google C++ Style Guide or the profiles in the C++ Core Guidelines.

In the wild

When reading other people’s code, I often see novices dealing with low-level threading, or going overboard with templates, reflection or metaprogramming.

I have even seen custom ClassLoaders in Java written by normal application programmers. People use threads where workers, tasks, actors or other higher-level abstractions would fit much better.

Novices in particular seem unable to recognize their limits and to stay away from inappropriate and potentially dangerous features.

How do you decide what is appropriate in your situation?

Well, that is a difficult question. If the task at hand seems hard, you should probably take a step back, because:

There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.

-Jeff Atwood

Then ask yourself some simple questions:

  1. Someone must have done it before. Have I searched thoroughly for hints or solutions?
  2. Is there a (better) library, data structure or abstraction?
  3. Do I really have to do this? There must be a better/easier way!
  4. What do I gain using feature/library/tool X and what are its costs? What about the alternatives?

Conclusion

You need some experience to recognize that you are on the wrong path, solving problems you would not even have if you had done the right thing in the first place.

Experience is what you got by not having it when you needed it.

-Author Unknown

Try to know and admit your limits – there is nothing wrong with struggling to get things working, but it helps to frequently check your direction by taking a step back and reflecting.

From Agile to UX: a change in perspective

Usually a project starts with a client sending us a list of requirements in varying levels of detail. In my early days, I started by finding the most efficient way to fulfill these requirements with software.

Over time and with increasing experience, I learned to break the large requirements down into smaller ones. With every step I tried to get feedback from the client on whether my solution matched his imagination.

Step by step I refined this iterative process: developing more efficiently, getting feedback earlier, testing, and asking questions to learn more about the constraints of the solution. Sometimes I identified parts that weren’t needed in the solution at all.

On the journey to becoming more agile, I even reframed the initial question from ‘how can I reach a satisfying solution more efficiently’ to ‘which minimal solution brings the most value to the customer’.

This was the first change of perspective, from a process of solving to a process of value. But a problem still persisted: the solution was based on my assumptions about what brings value to the customer.
The only feedback I would get was whether the customer accepted the solution or stated some improvements. And to get even this feedback, I had to implement the whole solution in software.

The clash of worlds: agile and UX

Diving into the UX and product management world, my view of software development was questioned at its foundation. Agile software development, and development projects in general, start with a fatal assumption: the goal is to bring value through software that fulfills the requirements. But value is not created by software or by satisfying requirements. For the user, value is created by helping him get his jobs done, by helping him solve his problems in his context.

This might sound harsh, but software is not an end in itself; it is just one way to help users achieve their goals.
On top of that, requirements lack the reasons why the user needs them (which jobs they help the user do) and in which situations the user might need them (the context).
To account for this, I need to change my focus away from defining the solution (the requirements) towards exploring the users, their problems and their contexts.

The problem with finding problems

As a software developer I am skilled in finding solutions. Because of this, I find it difficult to explore the problem without proposing (even subconsciously) a solution right away. If you are like me, you tend to imagine possible solutions in your head while talking with a client or user, with missing details filled in by your experience or your assumptions. The problem is that assumptions are difficult to spot. One way is to look closely at the language being used. A more repeatable way is to use a process for finding them.

The product kata

Recently I stumbled upon an excellent blog post by Melissa Perri, a product manager and UX designer. In it she describes an approach called ‘the product kata’.

The product kata starts with defining the direction: the problem, job or task I want to address.
After a listening tour with the different stakeholders (including clients and users), a look at the requirements list and a contextual observation of the current solution, I can at least give a rough description of the problem.

These steps help me get a basic understanding of the domain and the current situation. In my old way of doing things I would now rush towards the solution. I would identify the next step(s) bearing the most value for the client and remove the shortcomings of the current solution along the way. But wait – the product kata proposes a different way.

A different way

Let’s use an example from a real project. Say the client needs a way to check incoming values measured by some sensors. The old solution plots these values over time in a chart. It lacks important contextual information, has no notion of what ‘checking the values’ means and therefore cannot document the check result. Since the process of checking the values is central to the business, we need to put it first and foremost. Following the product kata, I define the direction as ‘check the sensor values’.

Direction: check the sensor values

To see if I reached the goal the kata needs a target condition which I define as ‘the user should be able to check the sensor values and record the check result’.

Target condition: the user should be able to check the sensor values and record the check result

Currently the user isn’t able to check anything. So the next step of the kata is to look at the current condition. If the current condition matched the target condition, I would be done. The current condition in my example is that the user cannot check the sensor values in the right way.

Current condition: the user cannot check the values in the right way

The first obstacle to achieving the target condition is that I don’t know what the right way is. Since the old solution lacks some important information needed to fulfill the check, the first obstacle I want to address is finding out what information the user needs.

Obstacle: what additional information (besides the values themselves) does the user need

Since the product kata originates from lean product management, I need to find an efficient step which addresses this obstacle. In my case I choose to make a simple paper sketch of a chart and to interview the user about which data they needed in the past.

Step: paper sketch of chart (to frame the discussion) and interview about information needed in the past

I expect to collect a list with all the information needed.

Expected: a list of past data sources which helped the user in his check process

After doing this I learned what information should be displayed and which information (from the old solution) was not needed in the past.

Learned: two lists: what was needed and what was not

Now I repeat the kata from the start. The current condition still does not match the target condition. My next obstacle is that, out of the vast amount of information and the possible actions during the check, we do not know which are related, which form a group and which are the most important ones. So my next step is to do a card sort with the users to take a peek into their mental model. I expect to find out about the priorities and grouping of the possible information and actions.

After I have gathered and condensed the information from the card sorts, my next obstacle is to find out more about the journey of the user during the check and the struggles he has. From my earlier contextual observation I have the current user journey and the struggles along the way. Armed with the insights from the earlier steps, I can now create a design which maps the user journey and addresses the struggles, with the data and the actions arranged according to the mental model of the user.
For this I develop a (prototypical) implementation in software and test it with a group of users. I expect to verify the match between the users’ mental model and my solution, or to find problems with it.
This process of the product kata is repeated until the current condition meets the target condition.

Why this is needed

What changed for me is that I do not rush towards solving the problem, but first build a solid understanding by learning about the users, their jobs and their contexts in a directed way. The product kata helps me frame my thoughts towards the users and away from the solution and my assumptions about it. It helps me structure my discovery process and track the progress of the project.
My old way started from the requirements and the old solution, improving what was done before. The problem with this approach is that assumptions and decisions made in the past are never questioned. The old way of doing things may be an inspiration, but it is not a good starting point.
Requirements from the client are predefined solutions: they frame the solution space. If the client is a very good project manager this might work, but if I am responsible for leading the project, it can lead to disaster.
The agile way of developing software is great at execution. But without guidance and a way of learning about the users and their problems, agile software development is lost.