The personal economics of programming languages

Recently, one of my students asked a good question: what programming languages would I recommend learning? His ideal language would be “syntactically ugly, but giving insights that are universal to programming”. My first reaction was to answer that he had just described Perl, but that was too easy an answer. So I tried to lay out the basics of choosing programming languages, starting with the personal economics.

Economics of programming languages

An organization that wants to produce a piece of software needs to answer a lot of questions like “what programming language will be best suited for the task?”. Often, these questions get diluted and rather sound like “what programming language should we stipulate for all our projects, now and forever?”. That’s when politics and economics overlap and intermingle. We can leave this problem for the organizations to solve themselves. But if we scale the question down to an individual programmer – you – what factors influence the answer to the question “what programming languages should I learn?”

I try to answer with the concept of utility: Learn those languages that, over a reasonable time, yield the most “utility”. There are at least two types of utility in our profession: money and joy. You can learn a programming language because your job requires it (money) or because you are curious and/or dig its particularities (joy). Most of the time, a specific programming language contains a mixture of both utilities for you. How you rate those utilities is up to you and probably varies from situation to situation. If you start a private fun project, picking the boring mainstream language from work might get things done faster, but why would you want the fun to be over sooner?

Let me give two extreme examples for this concept:

  • If you start to learn COBOL now, chances are high that you will achieve two things: You will be disgusted by the language and the existing codebase, but delighted by the salary and job security. COBOL is a high money-utility programming language. It ranks low in any survey or statistics about programming languages, but is widely used in big business today and tomorrow. You might refer to https://blog.hackerrank.com/the-inevitable-return-of-cobol/ for more information.
  • If you start to learn Esterel now, you might experience two things over time: an epiphany about how flawed our concept of time is in most programming languages and an existential crisis because your brain isn’t capable of wrapping itself around most source code. Whichever comes first will define your learning success. There are virtually no jobs that require Esterel (even if some might benefit from it) and you can only program and build so many bicycle computers in your spare time (this is a typical introduction project to Esterel). Esterel is a pure joy-utility programming language. You can claim to be proficient in synchronous programming afterwards, but nobody will know what that even is.

A third type of utility

But I think that there might be a third type of utility for personal learning choices based on economics: The stirrup iron utility. Knowledge of some programming languages isn’t useful from a money-driven viewpoint and may lack enjoyability, but it serves as a door-opener to more enjoyable or sellable languages. It serves as an interim utility because it doesn’t have value in itself, but serves as a multiplier for either the money or joy utility. To rate the value of this utility to your career, you need to be clear about your career goals, especially your anticipated skill portfolio.

Skill portfolio shapes

Modern recruitment differentiates between several skill portfolio shapes, most notably the “I” and “T” shape:

  • Programmers with “I”-shaped skill portfolios are experts in one specific field of programming. They might, for example, be the best C# programmer you’ve ever met. But they flop around like a fish out of water once they need to use another programming language. They will choose their familiar tools for every problem that needs to be solved and will solve it fast if possible – or not at all.
  • Programmers with “T”-shaped skill portfolios have knowledge across all fields of programming, albeit limited, and drilled down into one field specifically. Why they chose to master their field can mostly be explained with the money or joy utility. They probably gained their broad knowledge base by using stirrup irons.

If you happen to know what’s expected from you until your retirement (let’s say you chose to program in COBOL), the “I”-shape is a viable and efficient strategy to manage your skill portfolio. There is nothing wrong with this approach (as long as it works).

If you have a hunch that you don’t have the capability to invest in broad knowledge, the “I”-shaped skill portfolio is your logical choice. It takes a lot to be able to come to a self-assessment that shows your limitations. It’s a good thing to know your limits and build a career within them. A lot of programmers don’t know their limits and burn out, because not meeting the requirements produces a lot of stress (on both sides). Better to be yourself than to over-promise and under-deliver constantly.

The “T”-shape means that you need to invest your time wisely. And we are not talking “work time” only, but “life time”, because you’ll probably need to spend your spare time working on your portfolio, too. Becoming a “jack of all trades” programmer is an endeavour of at least ten years without any shortcuts. You need to select your jobs in accordance with your learning strategy and always be receptive to opportunities. You need to improve your learning abilities. You need to do so much at once that I suggest you start by watching Cory House’s talk about “Becoming an Outlier”. He’s spot on with so many things.

Stirrup iron programming languages

There are some programming languages that can be seen as the archetypes of a whole class of languages. Most knowledge of these archetypes can be directly applied or transferred to each language in the class. It’s the language’s concepts that are the real benefit. If you understand the synchronous programming aspect in Esterel, you’ll recognize it straight away in languages like LabVIEW or SIGNAL. It may even just be a part of the other language (like in many multi-paradigm programming languages), but it will be familiar to you.

So what are some stirrup iron languages?

That’s a tough question, and I want to put it out there. Can you drop a comment and name the programming language that had the most peculiar influence on your knowledge? I would like to refer to the book Seven Languages In Seven Weeks from the Pragmatic Bookshelf. It covers Ruby, Io, Prolog, Scala, Erlang, Clojure and Haskell. Do you agree with that selection? I would like to hear from you.

There are some ideas about this topic already: The talk “The Future of Programming” from Bret Victor (if you don’t know this guy already, please watch his legendary “Inventing on Principle” too). Richard Astbury presents three “new” hot programming languages (with matching outfits) in his talk “The State of the Art”. And Robert C. Martin is sure to have found “The Last Programming Language”.

One thing is sure: We should train the next generations of programmers in those stirrup iron languages, so they can quickly grasp the language flavour of the year. This is mostly done already, of course, but the students inevitably complain about the “weird” choices. So we need to explain upfront the economics of programming languages.

And, in a lighter tone at the end, there is always the ongoing competition for the worst programming language ever.

My C++ Tool Belt

I suspect that every developer has a “tool belt” that he or she uses to be productive. By that I mean a collection of tools, libraries and whatever else helps. With a few exceptions, these tool belts will probably be language specific, or at least platform specific. As my projects updated their compilers and transitioned to C++11 and beyond, my C++ tool belt changed quite a bit. Since things like threading, smart pointers and functional abstractions were added to the standard library, those are now included by default. Today I want to write about what is in my modernized C++11 tool belt.

The Standard Library

Ever since the tr1 extensions, the standard library has progressed into becoming truly powerful and exceptional. The smart pointers, containers and algorithms are more language extensions than “just” a library, and they play perfectly with actual language features, such as lambdas, auto and initializer lists.

fmtlib

fmtlib provides placeholder-based text formatting a la Python’s str.format. There have been a few implementations of this idea over the years, but this is the first where I think that it might just dethrone operator<< overloading for good. It’s fast, stable, portable and has a nice API.
I begin to miss this library the moment I need to work on a project that does not have it.
The next best thing is Qt’s QString::arg mechanism, with a slightly inferior API, a less inclusive license, and a much bigger dependency.

spdlog

Logging is a powerful tool, both for software development and maintenance. Chances are you are going to need it at some point. spdlog is my favorite choice for this task: it’s simple, fast and very nice to use because it reuses fmtlib’s formatting. I usually just include it in my projects and get fmtlib for free.

optional

This one is actually part of the recent C++17 standard, but since that is not widely available yet (meaning not many projects have adopted it), I’m going to list it explicitly. There are also a few alternative implementations, such as the one in Boost or akrzemi1’s single-header variant.
Unlike many other programming languages, C++ has a relatively high emphasis on value types. While reference types usually have a built-in “not available” state (a.k.a. nullptr, NULL, Nothing or nil), an optional can transport intent much more clearly. For value types, however, an optional type is absolutely mandatory: otherwise, you end up wrapping the value in a pointer just to make it optional.
Do not, however, fall into the trap of using optional for error handling. It’s not made for that, and other abstractions, such as expected are much better for that.

CMake

There is really only one choice when it comes to build tools, and that’s CMake. It’s got its own bunch of weaknesses, but the goods far outweigh the bads. With the target_ functions, it’s actually quite nice and scales really well to bigger projects. The main downside here is that it still does not play nice with some tools, most notably Visual Studio. CLion and QtCreator fare much better. Then again, CMake enables the use of other tools easily, such as clang-tidy.
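For the flavor of the target_ functions, here is a minimal sketch of a target-based setup (project and file names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.8)
project(example CXX)

# A library target carries its own include paths and language requirements
add_library(core src/core.cpp)
target_include_directories(core PUBLIC include)
target_compile_features(core PUBLIC cxx_std_11)

# Consumers inherit the PUBLIC properties simply by linking
add_executable(app src/main.cpp)
target_link_libraries(app PRIVATE core)
```

The nice part is that usage requirements travel with the target: linking against core is all app needs to do.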

A word on Boost

Boost is no longer the must-have it once was. Much of the mandated functionality has already been incorporated into the standard library. It is no longer a requirement for a sane C++ project. On the contrary, Boost is notoriously huge and somewhat cumbersome to integrate. Boost is not a library, it is a collection of libraries, therefore you can still decide whether to use Boost on a library by library basis. However, much of that is viral, and using a small part of Boost will easily drag in a few hundred other Boost headers. The libraries I tend to include most often are Boost.Utility (for boost::noncopyable) and Boost.Filesystem. The former is obviously easy to do without Boost, especially with = delete; and the latter is a part of the standard library since C++17. I hope to be doing the majority of my projects without it in the future. Boost was a catalyst for most of the C++ progress in recent years. It slowly becoming obsolete, either by being integrated into the standard or its idioms no longer being needed, is just a sign of its own success.

My honorable mentions are Qt and the stb single file libraries. What are your go-to tools?

Analyzing iOS crash dumps with Xcode

The best way to analyze a crash in an iOS app is if you can reproduce it directly in the iOS simulator in debug mode or on a local device connected to Xcode. Sometimes you have to analyze a crash that happened on a device that you do not have direct access to. Maybe the crash was discovered by a tester who is located in a remote place. In this case the tester must transfer the crash information to the developer and the developer has to import it in Xcode. The iOS and Xcode functionalities for this workflow are a bit hidden, so the following step-by-step guide may help.

Finding the crash dumps

iOS stores crash dumps for every crash that occurred. You can find them in the Settings app in the deeply nested menu hierarchy under Privacy -> Analytics -> Analytics Data.

There you can select the crash dump. If you tap on a crash dump you can see its contents in a JSON format. You can select this text and send it to the developer. Unfortunately there is no “Select all” option; you have to select the text manually. It can be quite long because it contains the stack traces of all the threads of the app.

Importing the crash dump in Xcode

To import the crash dump in Xcode you must save it first in a file with the file name extension “.crash”. Then you open the Devices dialog in Xcode via the Window menu:

To import the crash dump you must have at least one device connected to your Mac, otherwise you will find that you can’t proceed to the next step. It can be any iOS device. Select the device to open the device information panel:

Here you find the “View Device Logs” button to open the following Device Logs dialog:

To import the crash dump into this dialog select the “All Logs” tab and drag & drop the “.crash” file into the panel on the left in the dialog.

Initially the stack traces in the crash dump only contain memory addresses as hexadecimal numbers. To resolve these addresses to human readable symbols of the code you have to “re-symbolicate” the log. This functionality is hidden in the context menu of the crash dump:

Now you’re good to go and you should finally be able to find the cause of the crash.

About API astonishments

Nowadays we developers tend to stand on the shoulders of giants: We put powerful building-blocks from different libraries together to build something worth man-years in hours. Or we fill-in the missing pieces in a framework infrastructure to create a complete application in just a few days.

While it is great to have such tools in the form of application programming interfaces (APIs) at your disposal, it is hard to build high quality APIs. There are many examples of widely used APIs, good and bad. What does “bad API” mean? It depends on your viewpoint:

Bad API for the API user

For the application programmer a bad API means things like:

  • Simple tasks/use cases are complicated
  • Complex tasks are impossible or require patching
  • Easy to misuse, producing bugs

A very simple real life example of such an API is a C++ camera API I had to use in a project. Our users were able to change the area of interest (AOI) of the picture to produce images consisting of only a part of full resolution images. Our application crashed or did not work as expected without obvious reasons. It took many hours of debugging to spot the subtle API misuse that could then be verified by reading the documentation:

The value of camera.Width.GetMax() changed instead of being constant! The reason is that this property refers to the current AOI width, not the sensor resolution width. The full resolution width we actually wanted is obtained by calling camera.WidthMax.GetValue(). This kind of naming makes the properties almost indistinguishable and communicates nothing of the implications. Terms like AOI, sensor width or full resolution just do not appear in this part of the API.

Small things like the example above may really hurt productivity and user experience of an API.

Bad API for the API programmer

API programmers can easily produce APIs that are bad for themselves because they take away too much of their own freedom, resulting in:

  • Frequent breaking changes
  • API rewrites
  • Unimplementable features
  • Confusing, ill-fitting interfaces

Design your interfaces small and focused. Use types in the interface that leave as much freedom as possible without hurting usability (see Iterable vs. Collection vs. List vs. ArrayList for example). Try to build composable and extendable types because adding types or methods is less of a problem than changing them.

Conclusion

Developers should put extra care in interfaces they want to publish for others to use. Once the API is out there breaking it means angry users. Be aware that good API design is hard and necessary for a painless evolution of an API. Consider reading books like “Practical API Design” or “Build APIs You Won’t Hate” if you want to target a wider audience.

Mapping the user’s workflow

One of the most important things to understand before starting any design or development is the user’s workflow(s). A user uses your app to reach a goal. His starting point is the start of the workflow. His goal is its end. He takes steps in order to get from the start to his goal.
The order and the type of steps he takes helps us to understand how he reaches his goals at the moment. Visualizing these steps, often called mapping, is a great way to see the system from the user’s perspective: what does he do with the system, how and when does he do it.
This workflow helps us to keep the big picture in mind and organise planning and execution around the important part of the project: the user goals.

What does a workflow look like?

Use the visualization or tool that suits you most. A workflow can be a sketch of boxes and arrows. Or an excel sheet. You can use a diagramming software or a presentation software. The important point is that you see the start, the goals and the steps and can annotate each step with important details.

How can we create the workflow?

A workflow describes a series of actions. When the system supports the user to get from his start to his goal our application does its job. The user experience is how efficient and pleasant it is for the user to take each step.
One way to find out about the steps the user takes is to observe him doing so. At first: try to only watch and listen. Take notes. Be open. Record each step as if you were a beginner knowing nothing about the system or how the software works or should work. Especially watch out for the struggles.
Struggles can be seen in:

  • mistakes
  • back steps
  • pauses
  • changing applications
  • repeated steps

The struggles give us a hint where to put our energy. In the second run keep an eye open for unusual steps. Unusual steps are actions which seem complicated or unnecessary from a beginner’s mind. Start with the notion that every step is needed but find out the reasons why it is. In subsequent observations look for variations and ask what information led the user to decide differently this time.
Armed with your recordings you can now sketch the first version of what you understood about how the user reaches his goal using the current systems.

Eliminate the Water Carrier

Some years ago, an old lady with more than a hundred years of life experience in America was asked which technology changed her life the most. She didn’t hesitate to answer: running water. The ability to open the tap and have instant access to fresh water was the single most important technology in her life, even before electricity and all the household appliances it enables. Without running water, every household is forced to employ or pay a worker that does nothing else but carry water from the source to the sink.

In today’s physical world, with physical goods, there is still a profession that relies on a specific aspect of physical objects: They won’t move from A to B without a carrier. The whole field of logistics and transportation would be obsolete in the instant that physical goods learn to move themselves. The water carrier lives on, in the form of a cardboard or pallet carrier.

The three basic goods of IT are software, data and information. They all share a common trait: They can move without a human carrier. In the old days before the internet, software was distributed on physical objects like floppy disks (think of oddly shaped USB sticks) or CDs later. With the ubiquitous access to running data (often called the internet and mobile computing), we can draw our software straight from the tap. (And yes, I like the metaphor of the modem as an “information tap”.) As the data throughput of our internet connections grew, it became feasible to move large amounts of data into “the cloud”. The paper boy that brings the newspaper early every morning is replaced by a virtual newspaper that updates every few seconds. The profession of a data carrier never existed outside of very delicate data movements. And even those got replaced by strong cryptography.

Even information and knowledge, a classic carrier-bound good, is slowly being replaced by books and pre-recorded online courses. The “wise man” (or woman) still exists, but his range was extended from his immediate geographical surroundings and his arbitrary placement on the timeline to the whole world and all times after his publication. We don’t need to be physically present to attend a course anymore and we don’t need to synchronize our schedule with the lecturer. Knowledge and information is free to roam the planet.

With all this said and known, why are there still jobs and activities that resemble nothing more than the water carrier of our information age? Let me reiterate once more what a water carrier does: He takes something from position A and moves it to position B. In the ideal case, everything he picked up at A is delivered at B, in full and unchanged. We don’t want the carrier to lose part of the water underway and we surely don’t want him to tamper with our water.

As soon as you add something valuable to the payload (you augment it) while you carry it from A to B, you aren’t a water carrier anymore; you can be described in terms of your augmentation. But what if you add nothing? If you deliver the payload in the same condition as you picked it up? Then you are a water carrier. You don’t have a justification for your work in IT. Or you have one that I can’t see right now; then I’m eager to hear from you! Please leave a comment.

There is a classic movie that describes life and work in IT perfectly: Office Space. If you haven’t seen it yet, please put it on your watch list. I’m sure you can even draw it from your information tap. In the movie, a company with a generic IT name needs to “consolidate their staff” (as in lose some slackers). They hire some consultants that interview the whole crew. Each interview is hilarious in itself, but one is funny, tragic and suitable for our topic at hand, the water carrier:

The problem with Tom Smykowski (the guy trying to defend his job) is, that he’s probably better with people than most developers, but he still cannot sell his augmentations to the two consultants. They try to tie him down to a physical good that must be carried, but even Tom has to admit that somebody else covers the physical level. So he tries to sell his “good influence” on the process as the augmentation, but the consultants are too ignorant to recognize it. Needless to say, Tom loses his job.

Every time you just relay information without transforming it (like appending additional information or condensing it to its essence), you just carry water. Improve your environment by bypassing yourself. If you take yourself out of the communication queue, you will save time and effort and nobody has a disadvantage. You should only be part of a communication or work queue if you can augment the thing being passed through the queue. If you can’t specify your augmentation, perhaps somebody else behind you in the queue can give you hints about it. I would argue that being able to pinpoint one’s contribution to the result is the most important part of every workplace description. If you know your contribution, you can improve it. Otherwise, you may be carrying water without even knowing it.

Eliminate the middlemen in your work queues to improve efficiency. But be sure to keep anybody who contributes to the result. So, eliminate the water carriers.

Recap of the Schneide Dev Brunch 2017-04-09

Last Sunday, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. This brunch was well-attended and opened the sunroof season for us. We even had to take turns on the sunny places because we didn’t want to catch a sunburn in April. As usual, the main theme was that if you bring a software-related topic along with your food, everyone has something to share. Because we were very invested in our topics, we established an agenda for the event. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list incomplete:

Online courses

Our first topic was a report on an ongoing online course, a so-called MOOC (Massive Open Online Course) on the topic “Software Design for Non-Designers”. It aims at bringing basic knowledge of UX and UI design to programmers, who frequently lack even the most fundamental principles of design (other than code design, and even that is open for discussion). A great advantage of these MOOCs is that you can minimize your gross time investment and therefore maximize your net yield. You are not bound to a certain place, free from specific times (other than the interaction with other participants) and yet free to engage in a community of peers. The question that remains is how valuable the certificate will be. But the initial expectations are met: The specific course is very practical and requires moderate effort in reasonable periods.

One crucial aspect is the professionality of the presenting lecturer. In this MOOC, there are talk-oriented presenters and then there is Scott Klemmer. His lectures stand out because he writes on an invisible wall before him. The camera looks through the wall. What seems like nice CGI turns out to be a real glass pane. Mr. Klemmer puts down his notes in mirror writing! Once you realize that, you cannot help but be in awe.

There are a lot of MOOCs nowadays. Other courses that got mentioned cover the topic of machine learning https://www.coursera.org/learn/machine-learning and Getting Started with Redux (a famous Javascript framework) by Dan Abramov on Egghead: https://egghead.io/courses/getting-started-with-redux. Some courses even take place on Youtube, if you manage to avoid the comment sections, like the talks from Geoffrey Hinton about neural networks and machine learning. Mr. Hinton is part of the Google Brain team.

The critical part of each MOOC is the final examination. Some courses require online or even real-time tests, some only provide certificates for test results achieved within a certain timespan. Usually, the training assignments are peer reviewed by other course participants.

We will probably see this type of knowledge transfer more often in the future.

Interesting websites

While we talked about a lot of topics at once, some websites and projects got mentioned. I include them here without full coverage of the topics that led to it:

  • jsfiddle: A website that provides a quick sketchboard for web technologies like Javascript, HTML and CSS. It’s like a repl for the web.
  • regex101: A website that provides a quick sketchboard (and debugger) for regular expressions in different languages. It’s like an online IDE for regular expressions.
  • codefights: A website that puts you in the fighting pit for developers. Prove your programming skills against competition all around the globe!
  • vimgolf: A website that lets you prove your proficiency in the only text editor that counts: vim. Every keystroke counts and a mouse cannot be found!

Some of these websites might be a lot more fun in a team, except the regex one. Don’t use regular expressions in a team project! It’s a violation of the sane developer’s rules of engagement.

Workplace conflicts

One participant reported about his latest insights in conflict management during work. He applied the concepts of warfare and the four steps of complex tasks to recent disputes and had tremendous results. Even the introduction chapter of the Strategies of War book was enough to install new notions and terms into his planning and acting. He was astounded by the positive effects of his new portfolio.

The new terminology seems to be the essential part. European (or even western) adults don’t learn the terminology of conflict and therefore cannot process disputes on a rational level, only with emotions. You cannot plan or communicate with emotions, so you cannot plan your conflict behaviour. As soon as you have the language to describe the things you perceive, you can analyze them, reflect on them and plan for them. Making a solid plan (other than “go in and win somehow”) is the best preparation for an upcoming conflict. Words shape our world. I’ve seldom seen it clearer than in this report.

Just for starters, there is a difference between a “friend” and an “ally”.

Project documentation

An open question to all participants was our handling of documentation efforts in a project, be it for the user, customer or following developer. We discussed it with this open scope and came up with some tools that I can repeat here:

  • The arc42 software architecture template can help to shape the documentation effort for future developers or current developers if they aren’t included in the architecture effort.
  • The user manual is often written in TeX. Developers are used to the tool through constant exposure during their academic studies.
  • One idea was to generate the requirements for the developers from the user manual, as in “user manual first” or “user manual driven development”.
  • The good old Markdown syntax is usable but has its limits in top-notch aesthetics.
  • We see some potential in ASCIIDoc, but it needs to improve further to play in the same league as other tools.
  • Several participants have tried to automate the process of taking screenshots of the software for usage in various documents. If you want to try this, be warned! There are many detail problems that need to be solved before your solution will be fully automatic and reliable. A good starting point for thoughts is the “handbook data set” that can reproduce the same screenshot content (like entries in lists, etc.) in a different software version.

In the outskirt area of this discussion, the worthwhile talk “Stop Refactoring!” by Nat Pryce was mentioned. He presents an interesting take on the old question of “good enough”.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei in June. We even have some topics already on the agenda (like a report about first-hand experiences with the programming language Rust). And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.