How do I start a project?

On my quest to build better software for people and their needs, I am trying to move my current agile project approach toward a more user-centered and outcome-oriented one.

This starts right at the beginning of a project. After getting the go-ahead from the client I start by meeting the project leads on the client side, the ones who will make decisions and steer the course of the project.
I like to take an assumption-driven, learning-focused approach: I ask questions and clear up my assumptions along the way.
The first questions I have are:

  • who will use the software
  • who will be affected by the software/project
  • what are their goals/expected outcomes, what if they could choose only one
  • what do they expect from the software
  • what will happen if the project stalls or even fails

The people using the software, a.k.a. the users, are one of the main focuses during the project, but the people who benefit from the software without directly using it are important as well and should not be neglected. These can be the people responsible for operating the software or managers getting reports from the actual users. I keep them in mind so that aspects which are often missed in a user-centered approach are considered.
All these people have expectations of how the software will affect them; some even have goals or need something specific to come out of the project. These outcomes cover a great range: from measurable business goals like increasing revenue or retention rate to personal benefits like visibility. It is important to get a rough priority, so I use a narrowing question like ‘what if you could choose only one’.
Besides goals and outcomes, people also have ideas about how they will use the software, in which context and how often.
These are the positive effects of the project and the software, but all is not sunshine, so I also look at what will happen if the project is delayed, stops or even *shudder* fails. These are the risks that I need to consider and maybe even plan for.
All these questions help me frame the project from the end: I know what goals to aim for and in which direction the journey goes.
This is my first step in building a shared understanding among the project participants and in learning what picture they have in mind. My questions and their answers help me clarify the direction. After that I need to plan the first phase. For this I have to clear my mind and start with a beginner’s mind to find my hidden assumptions. Every assumption I or others have needs to be called out explicitly. I have to capture it and formulate a corresponding learning step.

But this is a topic for another post…

The definition of done

From large to small, from projects to issues, a team needs to define when these are considered done.
This decision differs from team to team: some have several steps to done, others just one state. Even the words used in your issue tracker reflect your choices: what does ‘fixed’ mean, what is ‘closed’ used for…
Some practices like test-driven development define a state of done as well: the code is done when all tests are green and it is refactored.

What’s your definition of done?

Let’s take a look at some examples:

  • tests are green and code is refactored
  • QA says ok
  • customer/stakeholder/product owner accepts the issue
  • developer thinks the code reflects the description in the issue
  • a predefined spec, maybe even with an acceptance test, is fulfilled
  • no bugs were found while clicking through
  • the code is merged with the master branch
  • the continuous integration tool has found no errors

The problem with these ‘definitions of done’ is that they either rely on an external person accepting by their opinion or guideline, or they concentrate on some output. But the people needing the software do not want the software for its own sake. They want to reach a goal through the software. The software is a means to an end: their goals. Without defining the goals and needs beforehand, you are either doomed to guess them and are at the mercy of arbitrariness (from your point of view), or you concentrate on some measurable output like code, tests or a completed feature.

Defining what the user wants to do with the new feature or project should be the first thing in a project, right after the initial introductions. Who will use the app or the feature? (the intended audience, the users) What do they expect from it? (the benefits) What goal do they want to reach?
With these questions and answers you have a target. After completing the issues or the project you can see whether the target has been reached and the goals are met. It might feel similar to an acceptance process with a stakeholder, but here you know the target beforehand, not after.

About API astonishments

Nowadays we developers tend to stand on the shoulders of giants: we put powerful building blocks from different libraries together to build something worth man-years in hours. Or we fill in the missing pieces of a framework’s infrastructure to create a complete application in just a few days.

While it is great to have such tools in the form of application programming interfaces (APIs) at your disposal, it is hard to build high-quality APIs. There are many examples of widely used APIs, good and bad. What does “bad API” mean? It depends on your viewpoint:

Bad API for the API user

For the application programmer a bad API means things like:

  • Simple tasks/use cases are complicated
  • Complex tasks are impossible or require patching
  • Easy to misuse, producing bugs

A very simple real-life example of such an API is a C++ camera API I had to use in a project. Our users were able to change the area of interest (AOI) of the picture to produce images consisting of only a part of the full-resolution images. Our application crashed or did not work as expected without obvious reasons. It took many hours of debugging to spot the subtle API misuse, which could then be verified by reading the documentation:

The value of camera.Width.GetMax() changed instead of being constant! The reason is that it refers to the AOI width, not the sensor resolution width. The full-resolution width we actually wanted is obtained by calling camera.WidthMax.GetValue(). This kind of naming makes the properties almost indistinguishable and communicates nothing of the implications. Terms like AOI, sensor width or full resolution just do not appear in this part of the API.
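
To illustrate the confusion, here is a minimal C++ sketch with a mocked-up camera object. The property names follow the text above; everything else is made up for illustration, the real API is of course more involved:

#include <cassert>

// Hypothetical mock of the confusing camera API: two almost
// identically named properties with different meanings.
struct IntParam {
  int value;
  int max;
  int GetValue() const { return value; }
  int GetMax() const { return max; }
};

struct Camera {
  IntParam Width{1280, 1280};    // width of the current area of interest (AOI)
  IntParam WidthMax{1280, 1280}; // constant full sensor resolution width
  void SetAoiWidth(int w) {
    Width.value = w;
    Width.max = w; // the "maximum" now refers to the AOI, not the sensor!
  }
};

int main() {
  Camera camera;
  camera.SetAoiWidth(640);
  assert(camera.Width.GetMax() == 640);       // looked constant, but changed
  assert(camera.WidthMax.GetValue() == 1280); // the value we actually wanted
}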

Small things like the example above can really hurt the productivity and user experience of an API.

Bad API for the API programmer

API programmers can easily produce APIs that are bad for themselves, because their design takes away too much of their own freedom, resulting in:

  • Frequent breaking changes
  • API rewrites
  • Unimplementable features
  • Confusing, ill-fitting interfaces

Design your interfaces small and focused. Use types in the interface that leave as much freedom as possible without hurting usability (see Iterable vs. Collection vs. List vs. ArrayList for example). Try to build composable and extendable types, because adding types or methods is less of a problem than changing them.
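
The Iterable example is from Java, but the same idea translates to other languages. Here is a rough C++ analogue (a sketch using C++20’s std::span; the function names are made up): compare a parameter type that forces a concrete container on callers with one that leaves them free.

#include <array>
#include <numeric>
#include <span>
#include <vector>

// Too restrictive: every caller must store its data in a std::vector.
int sumOfVector(const std::vector<int>& values) {
  return std::accumulate(values.begin(), values.end(), 0);
}

// More freedom: any contiguous sequence of ints is accepted, and the
// implementation can still change without touching the callers.
int sumOf(std::span<const int> values) {
  return std::accumulate(values.begin(), values.end(), 0);
}

int main() {
  std::vector<int> v{1, 2, 3};
  std::array<int, 3> a{4, 5, 6};
  int total = sumOfVector(v); // fine
  // sumOfVector(a) would not compile, but sumOf accepts both:
  total += sumOf(v) + sumOf(a);
  return total == 27 ? 0 : 1;
}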

Conclusion

Developers should put extra care into interfaces they want to publish for others to use. Once the API is out there, breaking it means angry users. Be aware that good API design is hard and necessary for a painless evolution of an API. Consider reading books like “Practical API Design” or “Build APIs You Won’t Hate” if you want to target a wider audience.

Evolvability of Code: Uniform Access Principle

Most programmers like freedom, so there are many means of hiding implementations in modern programming languages, e.g. interfaces in Java, header files in C/C++ and visibility modifiers like private and protected in most object-oriented languages. Even your ordinary functions or the public interface of a class give you the freedom to change the implementation without needing to touch the clients. Evolvability in this sense means you can change and refine your implementations without requiring others, namely the clients of your code, to change.

Changing the class interface or function signatures within a project is often possible and feasible, at least if you have access to all client code and use powerful refactoring tools. If you have published your code as a library, or do not want to break all client code or force it to adapt to your changes, you have to consider your interface code to be fixed. This takes away some of your precious freedom, so you have to design your interfaces carefully, with evolvability in mind.

Some programming languages implement the uniform access principle (UAP), which eases evolvability in that it allows you to migrate from public attributes to properties/method calls without changing the clients: read and write access to the attribute uses the same syntax as invoking corresponding methods. For clarification, here is an example in Python, where you may start with a class like:

class Person:
  def __init__(self, name, age):
    self.name = name
    self.age = age

Using the above class is trivial:

>>> pete = Person("pete", 32)
>>> print(pete.age)
32
>>> # a year has passed
>>> pete.age = 33
>>> print(pete.age)
33

Now, if the age is not a plain value anymore but needs checking, like always being greater than zero, or is calculated based on some calendar, you can turn it into a property like so:

class Person:
  def __init__(self, name, age):
    self.name = name
    self._age = age

  @property
  def age(self):
    return self._age

  # validation now happens transparently on every assignment
  @age.setter
  def age(self, new_age):
    if new_age < 0:
      raise ValueError("Age under 0 is not possible")
    self._age = new_age

Now the nice thing is: The above client code still works without changes!

Scala uses a similar and quite concise mechanism for implementing the UAP, whereas .NET provides some special syntax for properties, but both still make the migration from public fields easy.

So, in languages supporting the UAP, you can start really simple, with public attributes holding the plain value, without worrying about some potential future. If you later need more sophisticated things like caching, computation of the value, validation or even remote retrieval, you can add them using language features without touching or bothering clients.

Unfortunately, some powerful and widespread languages like Java and C++ lack support for the UAP. Changing a public field to a more complex property means introducing getter and setter methods and changing all clients. Therefore you see, especially in Java, many data classes littered with trivial getter and setter pairs that do nothing interesting and introduce unnecessary bloat just to maintain the evolvability of the code.

Why I’m not using C++ unnamed namespaces anymore

Well okay, actually I’m still using them, but I thought the absolute statement would make for a better headline. I do not use them nearly as much as I used to, though. Almost exactly a year ago, I even described them as an integral part of my unit design. Nowadays, most units I write do not have an unnamed namespace at all.

What’s so great about unnamed namespaces?

Back when I still used them, my code would usually evolve gradually through a few different “stages of visibility”. The first of these stages was the unnamed namespace. Later stages would be either a free function or a private/public member function.

Let’s say I identify a bit of code that I could reuse. I refactor it into a separate function. Since that bit of code is only used in that translation unit, it makes sense to put this function into an unnamed namespace that is only visible in the implementation of that unit, as in the sketch below.
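
A minimal sketch of that first stage (all names are made up for illustration):

// unit.cpp - the implementation file of the unit
#include <algorithm>
#include <string>

namespace {
// Helper refactored out for reuse within this file only: it has internal
// linkage and is invisible to all other translation units.
std::string sanitize(std::string input) {
  input.erase(std::remove(input.begin(), input.end(), '\r'), input.end());
  return input;
}
} // unnamed namespace

// The unit's public function, declared in the corresponding header.
std::string normalizedName(const std::string& rawName) {
  return sanitize(rawName);
}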

Okay great, now we have reusability within this one translation unit, and we didn’t even have to recompile any of the unit’s clients. Also, we can just “hack away” on this code. It’s very local and exists solely to provide for our implementation needs. We can cobble it together without worrying that anyone else might ever have to use it.

This all feels pretty great at first. You are writing smaller functions and classes after all.

Whole class hierarchies are defined this way. Invisible to all but yourself. Protected and sheltered from the ugly world of external clients.

What’s so bad about unnamed namespaces?

However, there are two sides to this coin. Over time, one of two things usually happens:

1. The code is never needed again outside of the unit. Forgotten by all but the compiler, it exists happily in its seclusion.
2. The code is needed elsewhere.

Guess which one happens more often. The code is needed elsewhere. After all, that is usually the reason we refactored it into a function in the first place: its reusability. When this is the case, one of these scenarios usually happens:

1. People forgot about it, and solve the problem again.
2. People never learned about it, and solve the problem again.
3. People know about it, and copy-and-paste the code to solve their problem.
4. People know about it and make the function more widely available to call it directly.

Except for the last, that’s a pretty grim outlook. The first two cases are usually the result of bad discoverability. If you haven’t worked with that code extensively, it is pretty certain that you do not even know that it exists.

The third is often a consequence of the fact that the function was not initially written for reuse. This can mean that it cannot be called from the outside because it cannot be accessed. But often, there’s some small dependency on the exact place where it’s defined. People came to this function because they want to solve another problem, not to figure out how to make this function visible to them. Call it laziness or pragmatism, but they now have a case for just copying it. It happens and shouldn’t be incentivised.

A Bug? In my code?

Now imagine you don’t care much about such noble long term code quality concerns as code duplication. After all, deduplication just increases coupling, right?

But you do care about satisfied customers, possibly because your job depends on it. One of your customers provides you with a crash dump, and the stack trace clearly points to your hidden and protected function. Since you’re a good developer, you decide to reproduce the crash in a unit test.

Only that does not work. The function is not accessible to your test. You first need to refactor the code to actually make it testable. That’s a terrible situation to be in.

What to do instead

There are really only two choices: either make it a public function of your unit immediately, or move it to another unit.

For functional units, it’s usually not a problem to just make them public, at least as long as the function does not access any global data.

For class units, there is a decision to make, but it is simple: will using it preserve all class invariants? If so, you can move it or make it a public function. But if not, you absolutely should move it to another unit. Often, this actually helps with deciding what to create a new class for! The sketch below illustrates the distinction.
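
A small hypothetical sketch of that decision: one helper depends on the class invariant, the other does not.

#include <algorithm>
#include <vector>

// Hypothetical class unit with the invariant "values are always sorted".
class SortedValues {
  std::vector<int> m_values;
public:
  // Preserves the invariant, so it is a safe candidate for promotion
  // from the unnamed namespace to a public member function.
  void insert(int value) {
    m_values.insert(
        std::upper_bound(m_values.begin(), m_values.end(), value), value);
  }
  const std::vector<int>& values() const { return m_values; }
};

// Does not touch the invariant at all - better off in a separate,
// reusable unit.
int median(const std::vector<int>& sortedValues) {
  return sortedValues.empty() ? 0 : sortedValues[sortedValues.size() / 2];
}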

Note that private and protected functions suffer from many of the same drawbacks as functions in unnamed namespaces. Sometimes, either of these options is a valid shortcut. But if you can, please avoid them.

4 questions you need to ask yourself constantly while programming

Most of today’s general-purpose programming languages come with a plethora of features. Often there are different levels of abstraction and different intended use cases. Some features are primarily for library designers, others ease the implementation of domain-specific languages, and application developers mostly use yet another feature set.

Some language communities are discussing “language profiles/levels” to ban certain potentially harmful constructs. The typical audience, like application programmers, does not need them, but removing them from the language would limit its usefulness in other cases. Examples are Scala levels (a bit dated), the Google C++ Style Guide or the profiles in the C++ Core Guidelines.

In the wild

When reading other people’s code I often see novice code dealing with low-level threading, or going overboard with templates, reflection or metaprogramming.

I have even seen custom ClassLoaders in Java written by normal application programmers. People are using raw threads when workers, tasks, actors or other higher-level abstractions would fit much better, as in the sketch below.
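
A minimal C++ sketch of the difference (the computation is a made-up placeholder):

#include <future>
#include <iostream>
#include <thread>

int computeAnswer() { return 42; }

int main() {
  // Low-level: a raw thread plus hand-rolled result passing - easy
  // to get wrong once errors, lifetimes or many tasks are involved.
  int result = 0;
  std::thread worker([&result] { result = computeAnswer(); });
  worker.join();

  // Higher-level: a task abstraction handles the plumbing for us.
  std::future<int> answer = std::async(std::launch::async, computeAnswer);

  std::cout << result << " " << answer.get() << "\n";
}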

Especially novices seem to be unable to recognize their limits and to stay away from inappropriate and potentially dangerous features.

How do you decide what is appropriate in your situation?

Well, that is a difficult question. If the task at hand seems hard, you should probably take a step back, because:

There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.

-Jeff Atwood

Then ask yourself some simple questions:

  1. Someone must have done it before. Have I searched thoroughly for hints or solutions?
  2. Is there a (better) library, data structure or abstraction?
  3. Do I really have to do this? There must be a better/easier way!
  4. What do I gain using feature/library/tool X and what are its costs? What about the alternatives?

Conclusion

You need some experience to recognize that you are on the wrong path, solving problems you would not even have had if you had done the right thing in the first place.

Experience is what you got by not having it when you needed it.

-Author Unknown

Try to know and admit your limits – there is nothing wrong with struggling to get things working, but it helps to frequently check your direction by taking a step back and reflecting.

The rule of additive changes

Change is in the nature of software development. The most difficult aspects of the craft revolve around dealing with change: how does one keep software extensible? How do you adapt to new business requirements?

With experience comes the intuition that some kinds of changes are riskier than others. For example, it is often safer to add a new function or type to an application than to change an existing one.

This is because adding something new means that it is not yet strongly connected to the rest of the application. Or at least that’s the assumption: you have yet to decide how the new component interacts with the rest of the application. Usually this is done by a, preferably small, incision into the innards of your software. The first change, the adding, should not break anything. If anything, the small incision should be the only dangerous part of the change.

This is a very important concept: adding should not break things! It is so important that I want to give it a name:

The Rule of Additive Changes

Adding something to a well-designed software system should not break existing functionality. Exceptions should be thoroughly documented and communicated.

Systems should always be designed and taught so that the rule of additive changes holds. Failure to do so will lead to confusing surprises in the best cases, and to well-hidden bugs in worse ones.

The rule is nothing new, however: it is a foundation, an axiom, of many other rules, such as the Liskov Substitution Principle:

Inheritance

Quoting from Wikipedia:

“If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program”

This relies on subtyping being an additive change: S works at least as well as any T, so it is an extension, an addition. You should therefore design your systems in a way that the Liskov Substitution Principle, and therefore the rule of additive changes, both hold: the addition of a new type to a hierarchy cannot break anything.
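
As a minimal C++ sketch (the shape names are made up): client code written against the base type keeps working when a new, contract-honoring subtype is added later.

#include <memory>
#include <vector>

struct Shape {
  virtual ~Shape() = default;
  virtual double area() const = 0;
};

struct Square : Shape {
  double side;
  explicit Square(double s) : side(s) {}
  double area() const override { return side * side; }
};

// Existing client code, written before Circle existed:
double totalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
  double sum = 0;
  for (const auto& shape : shapes) sum += shape->area();
  return sum;
}

// Added later: breaks nothing, as long as it honors the Shape contract.
struct Circle : Shape {
  double radius;
  explicit Circle(double r) : radius(r) {}
  double area() const override { return 3.141592653589793 * radius * radius; }
};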

Whitelists vs. Blacklists

Blacklists will often violate the rule of additive changes: once you add a new element to the domain, the domain behind the blacklist changes as well, while the domain behind a whitelist is unaffected. Ultimately, both can be what you want, but usually the more contained change will break less – and you can still change the whitelist explicitly later!

Note that systems that filter classes from a hierarchy via RTTI or, even more subtly, via ask-interfaces, are blacklists. Those systems can break easily when new types are introduced to a hierarchy. Extra care needs to be taken to make sure the rule of additive changes holds for these systems.
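
A small hypothetical C++ sketch of such an RTTI-based blacklist and its whitelist counterpart:

struct Sensor { virtual ~Sensor() = default; };
struct LegacySensor : Sensor {};
struct TestSensor : Sensor {};

// Blacklist via RTTI: filters out the types we knew about when writing
// this. A mock sensor type added later will slip through silently.
bool isRealSensorBlacklist(const Sensor& s) {
  return dynamic_cast<const TestSensor*>(&s) == nullptr;
}

// Whitelist: unaffected by additions to the hierarchy. New types have
// to be admitted explicitly - a conscious, local change.
bool isRealSensorWhitelist(const Sensor& s) {
  return dynamic_cast<const LegacySensor*>(&s) != nullptr;
}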

Introspection and Reflection

Without introspection and reflection, programs cannot know that you are adding a new type or a new function. With introspection, however, they can: any additive change can also be an incision point. Therefore, you need to be extra careful when designing systems that use introspection: they should not break existing functionality just because something was added.

For example, adding a function to enable a specific new piece of functionality is okay. A common case of this would be adding a function to a controller in a web framework to add a new action. This will not interfere with existing functionality, so it is fine.

On the other hand, adding a member to a controller should not disable or change functionality. Adding a special member for “filtering” or some kind of security setting falls into this category: you think you’re merely adding something, but in fact you are modifying. A system that relies on such behavior therefore violates the rule of additive changes. Decorating the member is a much better alternative, as that makes it clear that you are indeed modifying something, which might break existing functionality.

Not all languages or frameworks provide this possibility though. In that case, the only alternative is good communication and documentation!

Refactoring

Many engineers implicitly assume that the rule of additive changes holds. In his book “Working Effectively with Legacy Code”, Michael Feathers proposes the sprout and wrap techniques for changing legacy software. The underlying idea is the same for both: formulating a potentially breaking change as a mostly additive one, with only a small incision point. In the presence of systems that do not follow the rule of additive changes, such risk minimization does not work at all. For example, adding a function can break a system that relies heavily on introspection – which goes against all intuition.
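
A tiny hypothetical sketch of the sprout technique: the new behavior lives in new, separately testable code, and the legacy function receives only a single-line incision.

#include <string>

// Sprouted function: new, small, easy to cover with unit tests.
std::string formatAmount(double amount) {
  return std::to_string(amount) + " EUR";
}

// Legacy function: the only change is the one call to the sprout.
std::string renderInvoice(const std::string& customer, double amount) {
  return "Invoice for " + customer + ": " + formatAmount(amount);
}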

Conclusion

This rule is not a new concept. It is something that many programmers already have in their heads, though possibly fractured into lots of smaller guidelines. But it is one overarching concept, and it needs a name to be accessible as such. For me, that makes things a lot clearer when reasoning about systems at large.