Evolution of programming languages

Programming languages evolve over time. They get new language features and their standard libraries are extended. Sounds great, doesn’t it? We all know that not going forward means you go backward.

But looking at several programming ecosystems we use, I observe very different approaches.

Featuritis

Java and especially C# added more and more “me too” features release after release, turning relatively lean languages into quite complex multi-paradigm languages. They started out object-oriented and added generics, functional programming features, declarative programming (LINQ in C#) and different UI toolkits (AWT, Swing, JavaFX in Java; WinForms, WPF in C#) to the mix.

Often the new language features add their own set of quirks because they are an afterthought and not designed carefully enough.

For me, this lack of focus makes these languages less attractive than more current approaches like Kotlin or Go.

In addition, deprecation often has no effect (see Java), where 20-year-old code and style still work, which increases the burden further. While this is great from a business perspective, because your effort to maintain compatibility is low, it does not help your code base: different styles and old ways of doing things tend to remain forever.

Revolution

In Grails (I know, it is not a programming language, but it has its own ecosystem) we see more of a revolution. The core concept of a full-stack framework stays the same, but significant components are replaced quite rapidly. We have seen many changes in technology: Jetty to Tomcat, Ivy to Maven, Selenium RC to Geb, Gant to Gradle, and the list goes on.

This causes many, sometimes subtle, changes in behaviour that are a real pain when maintaining larger applications over many years.

Framework updates are often a time-consuming hassle, but if you can afford them, your code base benefits and will eventually become cleaner.

Clean(er) evolution

I really like the evolution of C++. It was relatively slow – many will argue too slow – in the past, but it has picked up pace in the last few years. The goals are clearly stated and only features that support them make it in:

  • Make C++ a better language for systems programming and library building
  • Make C++ easier to teach and learn
  • Zero-cost abstractions
  • Better tool support

If you cannot make it zero-cost, your chances of getting your feature in are slim…

C at its core did not change much at all and remained focused on its merits. The upgrades mostly contained convenience features like line comments, additional data type definitions and multithreading.

Honest evolution – breaking backwards compatibility

In Python we have something I would call “honest evolution”. Python 3 introduced some breaking changes with the goal of a cleaner and more consistent language. Python 2 and 3 are incompatible, so the distinction in the version number is fair. I like this approach of moving forward as it clearly communicates what is happening and gets rid of the sins of the past.

The downside is that many systems still come with both a Python 2 and a Python 3 interpreter and the accompanying libraries. Fortunately, there are some options and tools to mitigate some of the incompatibilities in your code, like the __future__ module and python-six.

At some point in the future (expected in 2020) there will only be support for Python 3. So start making the switch.


Quick and dirty is a skill

Being clean coders we build our software based on quality and reflect on how we do it. We set internal standards in code and UIs, we write tests, we polish.
But there are times when all this focus on quality is obstructive. Times when we need to learn something. For example: at the start of a project, when fundamental questions like “is it feasible”, “how should this interaction work” or “what is the right order of steps” are unanswered, learning needs to be as cheap as possible.
Here quick and dirty is important. The problem is our ego. We want to polish it, we want to build real software with a sound structure. But quality takes time. The problem is that quality is not important when answering the fundamental project questions; learning is. Maybe a mockup in PowerPoint is enough? (Not even writing code? Ugh.) A simple sketch on a piece of paper. Or maybe just a quick demo hacked together in an afternoon.
I know these suggestions may insult our pride. But we need to focus on what’s important: sometimes that’s quality, sometimes that’s speed.
Decades ago when I started coding, quick and dirty wasn’t a problem. Everything I wrote was quick and dirty. I was learning all the time. Over time I got better at developing software, structuring applications and building robust systems. But quick and dirty was lost along the way.
When you write something for the purpose of learning it can happen that you are wrong and all the code has to be thrown away. If it was just 2 hours patching something together that’s okay, but what if you spent a whole week? Just like writing quality software, quick and dirty is a skill in itself and as with other skills we need to practice it.
But beware: this is not only a problem at the start of a project. Often, as developers, we tend to overthink things; we plan for every possible outcome, imagine scenarios with weirdly acting users or systems. This is the time to stop and implement something to learn. To get feedback. Not to overanalyse or overdesign. Just release something and test it with real users; it doesn’t need to be part of the software in production, just use a demo or a staging environment. But if you need to learn something, focus on that, not on quality.

The project jugglers

Image: Usien (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

In this blog post, I will shine some light on a feature of our company that is often met with disbelief: how five developers can work on twenty projects at once without being stressed. I try to use the metaphor of a juggler, though I know nothing about juggling other than that it can be done. I cannot hold more than one object in the air at any given time and even that is an optimistic estimate. But I’ve seen jugglers keep six to eight objects flying with seemingly no effort. We do this with software development projects.

A layman theory of juggling

A good metaphor can be applied from start to finish. I’ve probably chosen a bad metaphor, but it gives the right initial impression: Every developer at our company leads several projects at once. He (or she) keeps the projects alive and in the “green zone”, the ratio of remaining budget, time and scope (read: work left to be done) that promises little to no trouble in the foreseeable future.

In order to juggle without visible effort, you probably need to practice a lot. You probably drop your objects a lot. You probably need to watch the objects fly in the beginning.

In our case, we needed a lot of practice to reach our level of confidence. We have been leading development projects for 17 years now. Each developer finishes between four and seven projects per year. That’s up to a hundred projects to gain experience from. But we couldn’t drop (read: fail) a lot of projects, because it would directly hurt our bottom line. Just imagine you want to learn how to juggle, but all you have are expensive Ming vases that you bought with your own money, without insurance. That’s how it feels to “experiment” with projects. So we play it safe and only accept projects we know we can handle. And we watch our projects fly, very very closely. In fact, we have a dedicated position, the “project manager”, with the one duty to periodically ask a bunch of questions to assure that the project is still in the green zone.

Draw the trajectory

Every object that you can juggle has its own characteristics of how it behaves once it is in the air. A good juggler can feel its trajectory and grab it in just the right moment before it would fall out of reach. The trick to keep a software development project in the green zone is the same: Get a hold on it before it ventures too far in an unfortunate direction, which is the natural tendency of all projects. The project lead has to periodically apply effort to keep the project afloat. But when is the right time to invest this effort? Spend it too soon and it has only minimal effect. Spend it too often and you’ll exhaust your power and your budget. But because every project has its own trajectory, too, and you can’t afford to let it slip, you should make the trajectory as visible as possible. You should draw it!

We’ve experimented with a lot of tools and visualizations. The one setup that works best for us is a low technology, high visibility approach. The project lead takes one half of a whiteboard and draws a classic burn-down chart or a variation (we often use a simple vertical progress bar). This chart is hand-drawn and rather crude, but big enough that everybody can see it. It is updated at least every time the project manager comes around to ask his or her questions. One of the questions actually is: “Is this chart up to date?”. The remaining budget of your project needs to be available at a glance, from across the hall. The project lead needs to “feel” this budget. And if it makes him or her nervous, it’s high time to determine the remaining scope and calendar time of the project once again.

In doing this positioning by triangulation on a regular schedule, the project lead draws the trajectory of the project for all three axes and can probably extrapolate its future course. He can then apply effort to nudge (or yank, if things got worse fast) the project back on track.

Without the visibly drawn trajectory, your project lead is like a juggler in the dark, tossing unknown objects in the air and hoping that they’ll fall in place somehow.

Know your limit

As the complete noob to juggling that I am, I imagine that jugglers have a secret dress code like martial artists (watch their belts!), where other jugglers can read how many objects they can hold up at once. Something like buttons on the vest or the length of a scarf. So the beginning 3-objects juggler bows in awe to the master 12-juggler, who himself is star-struck by the mighty 18-juggler that happens to attend the same meet-up.

In our company, this “dress code” would be based on the number of projects you are leading. And just like with the jugglers, it is important that you know your current limit. There is no use in over-extending yourself: if you accidentally let one project slip, the impact is big enough that you’ll fail your other projects, too. Just like the juggler loses his or her rhythm, you’ll lose your “flow”.

The most important part of juggling many projects is that you always juggle one less than you are capable of. You need reaction time if one of them topples over. A good juggler can “rescue” the situation with subtle speedup or extra movements because the delay between necessary actions allows for it. A good project lead has emergency reserves to spend without compromising other projects.

There is nothing wrong with starting with two projects and adding more later on when you are more confident. But don’t start with only one. You can only form habits of resource sharing if you share from the beginning. Even I can pose as a competent 1-juggler, but the lowest bar to juggling has to be two objects.

Box your time

Again, I know nothing about juggling. But from a mathematical viewpoint, juggling is “just” an exercise in timeboxing. If you have four objects in the air, in an arc that requires one second to go around, you’ll be able to spend a quarter second (250 ms) of attention to each object on each rotation. The master 12-juggler from above can only afford 1/12th or 80 milliseconds for each object. If he takes longer for one object, the next one will suffer. If he has no time reserves, a jam will build up and ultimately break the routine.

So, as a project lead, you need to apply timeboxes to all of your projects. They don’t need to be of equal size, but small enough that you can multiplex between your projects fast and often enough. A time box is a fixed-size amount of time that you allot to a specific task. The juggler uses a time box to send the next falling object flying back up. In our lunch break, we allot 60 minutes to food, beverages and some amount of walking. And if the process of eating takes longer than usual (I’m a chronic slow-eater, I can’t help it)? Then I have to go back partially hungry because I’ll end my lunch break on time. That’s the most important characteristic of a time box: You either succeed “in time” or interrupt or even cancel the task. The juggler will drop the one iffy object instead of risking a complete breakdown of the arc. You need to let your problematic task go (for the moment) instead of spreading the problem onto your other projects, too.

We’ve found that the amount of “one workday” is the most natural and easiest to manage time box. So we try our best to partition our week in the granularity of days and not our days in the granularity of hours. One aspect that helps tremendously is to have different physical locations for different projects. So you can be physically present “in the project” or “too far away at the moment” from the project. You plan your work week in locations as much as you plan it in project time boxes. The correlation of workdays, locations and projects is so strong that it doesn’t even seem to be timeboxing or project multiplexing. You just happen to be in the right place to work on project X for today. This is how you can juggle up to five projects without having to compromise all that much (provided you have a five-day work week).

If you can’t physically relocate your work, at least try to have a fixed schedule for your projects, like the “project A monday” or the “project X friday”. This might also mean to postpone emerging issues with project X until next friday. You need to build up skills to negotiate these delays with your customers. If your customers can dictate your schedule, you’ll get torn to shreds in no time. It’s friday or no day for issues on project X – at least as a good start for heavy bargaining. But that’s a whole topic for another blog post. Please leave a comment if you are interested to hear more about it.

Stuff your box

The “one workday” time box has a strong implication: Every little thing you do for a project takes one day. That doesn’t mean you should work for five minutes on project A, completing the task, and then stare into the air and twiddle your thumbs. It means that you should accumulate enough tasks for project A that you can spend the better half of the day on the known tasks and the remaining time on the unknown problems that arise on the way. In the evening, you should be able to finish your work for project A with a feeling of closure. You can put project A aside until next week (or whenever your next cycle is). You can concentrate on project B tomorrow and project C the next day. Neither project bothered you today (well, perhaps a bit, but you only acknowledged some e-mails and deferred any real thoughts until you enter their time boxes).

Perpetual closure

The feeling of closure at the end of a successful work day is the most important thing that keeps you composed. You’ve done your thinking for project A this week and will think of project B tomorrow. But now, you can rest.

This must be the feeling that the juggler experiences with each object that goes up again. It is out of sight and only needs attention after it has nearly completed its arc again. And now for the next object, one at a time…

 

A Tale of Two Languages

Recently, I presented my mysteriously titled talk “A Tale of Two Languages” at our local C++ user group. Before the talk, I was not really sure whether it would resonate with my audience. But it did, and helped to engage people in a healthy discussion about how to use C++.

Essentially, the talk was about how I am using two different modes or dialects of C++ to write and maintain applications. The title suggests two languages – and it sure can be thought of that way, but for now I’m using the word “modes” to distinguish it from the term programming languages.

You write the application in one mode, keeping the style relatively easy. In the other mode, you leverage the full power of C++ to make sure that you can write easy and efficient code in the first.

I call the first application mode and the second library mode.

Library mode

As I said before, this is the power mode. One of C++’s design paradigms is self-extension: you extend the language from within the language itself. It’s a very powerful mechanism, the same one that drives the standard library. It’s also why C++ does not need a built-in string type.

This power is a bit of a double-edged sword. On the one hand, it allows you to adapt the language specifically for your needs, for example with application-specific value types. For a 3D application, a well-designed 2D vector or point type will make your code easier to work with and probably faster. On the other hand, a badly designed type on this level will haunt your application for years to come. I have seen both.
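To make this concrete, here is a minimal sketch of what such a library-mode value type could look like. The Point2 name and its operations are my own illustrative choices, not a prescription:

#include <cmath>

// Hypothetical library-mode value type: a small 2D point/vector.
// Designed and documented once, then used all over the application.
struct Point2
{
    double x = 0.0;
    double y = 0.0;

    Point2& operator+=(const Point2& other)
    {
        x += other.x;
        y += other.y;
        return *this;
    }
};

inline Point2 operator+(Point2 lhs, const Point2& rhs)
{
    lhs += rhs;
    return lhs;
}

inline double length(const Point2& p)
{
    return std::sqrt(p.x * p.x + p.y * p.y);
}

Small value types like this are cheap to pass by value and carry their invariants with them, which is exactly the kind of up-front effort that pays off in library mode.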

That’s a simple example though. More powerful primitives, such as domain-specific-language-like constructs, also belong in this mode. In general, things in this mode are less discoverable and less maintainable, but they strive to improve discoverability, efficiency and maintainability on the application side. As a consequence, this code needs more stringent documentation and specification.

Application mode

This is the mode that you use to write the majority of your application. Application mode is all about agility and leveraging opportunities. You intentionally restrict yourself in order to keep your development speed up. Simplicity trumps most other qualities in this mode. If you need another quality to be the defining factor, for example because you need some code to be a little more complex in order to run faster, you should put it into library mode instead.

Unlike code in library mode, this code needs to speak for itself. Therefore, documentation is usually nothing but a duplication.

One important aspect is that this code should be devoid of all subtleties.

Parameter passing and its consequences

That last bit is especially uncommon in C++, where most decisions are really a catch-22. Hence the resulting code hints at the struggles endured while writing it.

For example, to write an efficient function in C++, you need to decide whether to pass each parameter by value, by reference or by pointer. The decision on which to use depends largely on your implementation, i.e. what you are doing with the parameter after it has been passed to the function. That usually couples your implementation too tightly to its interface and degrades programmer productivity by giving too many options.
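As a rough illustration of that dilemma, here is the same kind of function written three ways; the names and bodies are invented for the example:

#include <iostream>
#include <string>

// By value: the function needs its own copy anyway, so it takes one
// up front and moves from the parameter.
void storeName(std::string name)
{
    static std::string storage;
    storage = std::move(name);
}

// By const reference: read-only access without a copy, but the caller's
// object must stay alive for the duration of the call.
void printName(const std::string& name)
{
    std::cout << name << '\n';
}

// By pointer: signals that the argument is optional, at the price of a
// null check inside the implementation.
void printNameOrDefault(const std::string* name)
{
    std::cout << (name ? *name : std::string("<anonymous>")) << '\n';
}

Each variant leaks an implementation detail into the interface, which is exactly the coupling mentioned above.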

Using a shared-ownership smart pointer such as std::shared_ptr by default is a good middle ground here. It does the right thing most of the time and is not too far off at most other times. Many other mainstream languages, such as Python, go this route. Some frameworks, such as Qt, use that semantic as well.
Like const-correctness, passing all parameters in a std::shared_ptr is viral. Objects thus passed need to be created on the heap, preferably with std::make_shared. You will also store those smart pointers in other objects, so shared_ptr will take up quite a lot of screen space. Therefore I usually define an alias:

template<typename T> using Ptr = std::shared_ptr<T>;

If it’s going to be the default, it should not clutter your code. Since objects are transported in a Ptr by default, they usually do not even need a copy constructor or other “value-like” semantics. These objects are less about maintaining invariants, and more about implementing abstract interfaces and bundling functionality in maintainable chunks. I usually use boost::noncopyable to mark them, though Herb Sutter’s Metaclasses proposal could make this even nicer.
Note that you can still promote them to value types in library mode, should the need arise. But they will become more costly to maintain.
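To make that default concrete, here is a minimal sketch of how such an object could be defined and passed around. The class name is invented, and deleted copy operations stand in for boost::noncopyable to keep the sketch free of external dependencies:

#include <memory>

template<typename T>
using Ptr = std::shared_ptr<T>;

// Hypothetical application-mode class: it implements behaviour rather than
// value semantics, so copying is simply disabled.
class ReportGenerator
{
public:
    ReportGenerator() = default;
    ReportGenerator(const ReportGenerator&) = delete;
    ReportGenerator& operator=(const ReportGenerator&) = delete;

    void run() { /* ... */ }
};

int main()
{
    // Objects are created on the heap and handed around as Ptr<T> by default.
    Ptr<ReportGenerator> generator = std::make_shared<ReportGenerator>();
    generator->run();
}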

Other simplifications

There are plenty of other things to avoid in application mode. Writing templated types makes your code inherently non-local and dependent on a type that can be anywhere. Note that instantiation of templates from library mode is fine – at that point, all the facts are known.

Another thing that makes your code non-local, and therefore unfit for application mode, is overloading, especially in the presence of ADL. For example, which functions are in your actual overload set depends on which headers you include and which using-directives and declarations are active. Sometimes that is desirable. But not in application mode.
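A minimal, self-contained sketch of this effect; the geometry namespace and the draw functions are invented for illustration:

#include <iostream>

namespace geometry
{
    struct Point
    {
        double x;
        double y;
    };

    // Found via argument-dependent lookup whenever a geometry::Point takes
    // part in the call, even without any using-directive.
    void draw(const Point&) { std::cout << "geometry::draw(Point)\n"; }
}

void draw(int) { std::cout << "::draw(int)\n"; }

int main()
{
    geometry::Point p{1.0, 2.0};
    draw(p);   // ADL adds geometry::draw to the overload set
    draw(42);  // only the global ::draw(int) is considered here
}

Whether geometry::draw is even a candidate depends on the argument types and the visible declarations – the kind of non-local reasoning best kept out of application mode.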

Resolution

Since adopting this “two modes” approach, I have found that my productivity is much higher – even in older code that has gone through a lot of evolution. The code does not actually get a lot slower, even with all the smart pointers. In fact, I am sure that I could only optimize a few cases because the design in application mode is a lot more flexible, and the structure more visible.

Text editing tricks

Multiple cursors

Recently I was in a pair programming session when I noticed that there were three cursors blinking in the editor window of the IDE. I initially assumed it was a bug in the IDE (which is not that uncommon), but it actually turned out to be a feature. In IntelliJ-based IDEs you can place multiple cursors in the editor window, start typing, and the typed text gets inserted at all of these positions. To achieve this, press Shift+Alt while clicking the mouse to position the additional carets.

When I start using a new IDE I usually look up and memorize the shortcuts for refactoring operations like “rename” or “extract method” or other essential operations like “quick fix”.

But of course there are many other neat tricks for advanced text editing in most IDEs and programming editors. Here are some of them, in this case for IntelliJ based IDEs.

Rectangular selection


This one comes in handy when you have to select a column of text, for example a common prefix of keys in a properties file. I initially knew this type of selection from Vim, but many other programming editors allow it too. For a rectangular selection in IntelliJ based IDEs press Ctrl+Shift+Alt (Shift+Alt+Cmd on Mac) and drag the mouse pointer.

Multiselection

Similar to the multiple cursors feature mentioned at the beginning of this post, you can also select multiple parts of a text at once and then cut, copy or delete them. To achieve this multi-selection press Shift+Alt while selecting text.


Extending selection

By pressing Ctrl+W repeatedly within a fragment of code the selection will progressively extend. First the current expression under the cursor is selected, then the surrounding code block, then the code block surrounding this code block, etc.


Conclusion

Take some time and look up some of the more advanced text editing capabilities of your IDE or text editor. Adopt them if you find them helpful and share them with your colleagues, for example in a pair programming session.

How do I start a project

On my quest to build better software for people and their needs, I try to move my current agile project approach towards a more user-centered and outcome-oriented one.

This starts right at the beginning of a project. After getting the go-ahead from the client, I start by meeting the project leads on the client side, the ones who will make decisions and steer the course of the project.
I like to take an assumption-driven or learning-focused approach, asking questions and clearing up my assumptions along the way.
The first questions I have are:

  • who will use the software
  • who will be affected by the software/project
  • what are their goals/expected outcomes, what if they could choose only one
  • what do they expect from the software
  • what will happen if the project stalls or even fails

The people using the software, aka the users, are one of the main focuses during the project, but the people who benefit from the software without directly using it are also really important and should not be neglected. These can be the people responsible for operating the software or managers getting reports from actual users. I keep them in mind so that other aspects, which are often missed during a user-centered approach, are considered.
All these people have some expectations of how the software will affect them; some even have goals or need something specific to come out of the project. These outcomes cover a great range: from measurable business goals like increasing revenue or retention rate to personal benefits like visibility. It is important to get a rough priority; I use a narrowing question like ‘what if you could choose only one?’.
Besides goals and outcomes, people also have ideas of how they will use the software, in which context and how often.
These are the positive effects of the project and the software, but not all is sunshine, so I also look at what will happen if the project is delayed, stops or even *shudder* fails. These are the risks that I need to consider and maybe even plan for.
All these questions help me frame the project from the end. I know what goals to aim for and in which direction the journey goes.
This is my first step in building a shared understanding among the project participants: learning what picture they have in mind. My questions and their answers help me clarify the direction. After that I need to plan the first phase. For this I have to clear my head and start with a beginner’s mind to find my hidden assumptions. Every assumption I or others have needs to be called out explicitly. I have to capture it and formulate a corresponding learning step.

But this is a topic for another post…

On the usefulness of volatile memory

I have a strange hobby: raising ants as pets. Not literally pets, because I don’t give them names and they probably don’t know I even exist, but as tended animals in a controlled and restricted environment, the formicarium. This doesn’t sound too strange if you know three additional things about me: I have been fascinated by ants since kindergarten, I play Dwarf Fortress for fun and I’m interested in computers.

My main colony is a Lasius niger nest with about 1500 individuals. Lasius niger, or the black garden ant, is a very common and easy-to-raise ant that is small enough to not require too much space but big enough to be observed and tracked.

I could tell you hours and hours of ant facts, but let’s concentrate on the topic of this blog post: An ant colony can be perceived as one big organism. Each individual ant has a clear role in the hierarchy:

  • The queen ant is the sole egg layer. In an established colony, she won’t do anything else. She will be fed, cleaned and protected by her workers. She was born a queen and cannot be replaced. If she dies, the whole colony will slowly fade away because nobody produces new ants anymore. Worker ants don’t know if they still have a queen and don’t care anyway. If a queen dies, she will be fed to the remaining larvae as soon as her scent disappears.
  • The male ants are born, fed and kicked out of the nest, preferably when the young queens fly out to find a mate. They won’t do anything else and have a lifespan of days.
  • The worker ants are all sterile females and do all the work. Literally all the work. They are born workers and spend every moment of their lives contributing to the hive or just sitting around scrounging food. Other workers are too busy to judge them.

But keep in mind that ants only have short-range communication means (scent, antennae drumming and ground vibrations), so no single ant has a complete overview of the situation. It wouldn’t have the brains to process that information anyway. Ants have a limited capability to remember things but no long-term memory. It isn’t necessary for the single worker ant to store any information. The information is stored in the hive – literally.

If you abstract some details away, you can also perceive and describe an ant colony as a computer, or at least as a massively parallel problem solver. The algorithms are ingrained in the worker ants and are executed ruthlessly. The problems are food, water, enemies, cleaning and nurture. If the environment (the problem space) is suitable for the algorithms, the colony will thrive. Otherwise the colony will ultimately fail. An ant colony at the side of a busy road will lose many workers in sudden “enemy” attacks, while a colony in the middle of a meadow might find fewer insects killed by cars. Not a single ant in either colony is aware of these relations. But they have found a way to share information: They mark their position (and therefore their way) with a special scent fluid. They use their scents to label the environment for other ants.

If you ever encountered an avid user of printed sticker labels, this is what ants do, too: They put a sticker label on everything they come in contact with. If you seem to be food, they label you as food and rush back to the hive, laying out a “food in this direction” lane. If you seem dead and inedible, they mark you as garbage and they or some other ant will pick you up and follow the garbage lane (finding the garbage lane often requires them to return near their hive, so it seems they mistook the garbage for food). The garbage area is marked with its own scent, preferably at a cliff. My ants love to throw things down a cliff! If you didn’t get the relationship to Dwarf Fortress yet, it should be clear by now. There are countless more examples, but you get the idea. The environment is not only an area, it is the long-term memory of the colony. The colony’s “memory brain” is scent on dirt, stones and sticks. The ant colony has outsourced its complete knowledge about its surroundings into the surroundings themselves.

The ants have invented something I would call an “inverse cloud”. The cloud in IT is a concept of data storage that exists mostly independent of physical location and provides access to that data from virtually anywhere. The ants’ inverse cloud is a concept of data storage that is tightly coupled to a physical location and provides access to that data only if you are in the immediate vicinity. If you remove a physical data storage part in the IT cloud, it gets replaced by other parts that contain mirrored copies of the data. The cloud never forgets. If you remove a physical data storage part in the ants’ inverse cloud, it is forgotten immediately. Ants accept their environment as it is right now and never look back.

Now think about what happens when it rains and all the scents are washed away.

After every rain, the ant colony enters a “new level”, a fresh environment to be discovered and labeled. They probably never tire of rediscovering the same cliff again and again. This is what happens to a computer when the power is lost. It loses its working memory. But keep in mind that the colony retains some memories: the hive is underground and often rain-proof thanks to overlaying stones or plants. So the colony remembers that it is currently in the hive, because it always smells like hive. The colony remembers the queen chamber because it always smells like queen. The colony remembers the brood chambers because they always smell like teen spirit (SCNR).

In my formicarium, it never rains. The ants get enough water by drinking troughs, but the marked lanes are never erased. This leads to all sorts of silly situations that the ants don’t even recognize as such. For example, a strong “food here” scent lane exists long after the food is gone. So a lot of enthusiastic workers run around searching in the target area. And remember that the hive smell never fades? My ants have assimilated area after area as “hive” after enough ants have marked it. So now they react excessively to disturbances because they think they are defending the hive – and therefore the queen! – but are ant miles away from the colony. Even better, a lot of young ants that usually never leave the hive until they are older wander out into the open (“it’s still the hive, just less dark”) and panic as soon as they encounter something that shouldn’t be in a hive. The panic spreads by, you’ve guessed it already, alarm scent and soon hundreds of battle-ready ants are running around frantically without any one of them knowing why.

So, to speak in computer terms, the main memory never gets erased and the caches never flushed. A little error can spread like a wildfire (like the panic example above) and cause disadvantages like energy consumption without any real gain (no enemy to defend against) or a lot of delicate ants sitting around in plain sight of their predators. The whole system is fragile and erratic and probably wouldn’t survive in the wild. A good measure of rain would remove the odd memories and probably ease the ants because their hive, the area that needs to be defended at all costs, would shrink again.

I’ve raised neurotic ants in need of a cold shower.

How does this give us any insight into modern computing? Well, my train of thought is this: If modern systems are unable to forget, because memory is cheap and permanent, we might be prone to design software that acts neurotic and hyped up. The ability to forget, to really not remember at all, might be crucial in designing resilient parallel systems. There is the cost of losing valuable information, but the benefit of losing all results of errors seems to match it. So volatile memory might be a nuisance for us programmers, but it also provides a “blank slate” every time the system starts and is the reason for the most important question in IT: “Have you tried turning it off and on again?”

Systems that rely on “place-oriented programming” seem to need regular reset phases where the working memory is cleared and the system goes into the next cycle fresh and rested. We might even call it sleep. And in case you wonder: The sleep of ants is an ongoing topic of research.

Disclaimer: I know that not all ants are as dumb as Lasius niger. Some ants even teach each other facts about their environment. The Wikipedia article mentions some wonderful examples. I had ant colonies with more complex ants and they were wonderful. But right now, as I’m typing this, there is a Lasius niger worker that heaves a wasp husk part to the top corner of the formicarium, throws it down (did I mention they love throwing things?) and runs back down to heave it up again, probably because the corner is somehow marked as a garbage dump zone. It has repeated this process at least half a dozen times now. This is what a biological infinite loop looks like. Some ants even parallelize such a loop to exhaustion: