Zoom out early, zoom out often


A common pitfall of working long hours under high concentration or stress is the “yak shaving” effect. You start with a clear goal, dive down into the details and encounter an unforeseen obstacle. No big problem, you just need to adjust focus for a moment and fix this little… but wait, in order to fix it, you first need to change this minor circumstance. And this change is prohibited by that effect over there, which needs to be adjusted first, but relies on something else. Much later, you’ll wake up from your dive and find yourself happily shaving a yak. But how exactly did you get there?

Avoid the yak

The best approach to counter yak shaving is “zooming out” of your current work in regular, externally triggered intervals and rehashing three aspects of your current work:

  • What do I want to achieve? (“Goal”)
  • What is my current task? (“Task”)
  • How does my task relate to my goal? (“Relationship”)

This “Goal/Task-Relationship” shouldn’t get too complicated. To describe your current Goal/Task-Relationship to a random person who just arrived at the scene (ok, let’s be clear: I’m talking about your boss), you should need at most two simple sentences. Any longer description is a sign of an unclear goal or inefficient steps (tasks) towards it.

To make sure that your Goal/Task-Relationship stays explainable, you could use the Pomodoro technique, which partitions your concentrated work into intervals of roughly half an hour (including the rehash phase).

Target fixation

The approach above helps against yaks, but not against target fixation. Target fixation occurs when you are so sure about your goal that you don’t question it even when the cost of achieving it rises to obscene levels. There are many stories I could tell about target fixation, but one sticks out for me because it happened to me personally, and it happened recently.

The tragedy

In the midst of winter, during a cold spell, the gas heater for my whole apartment broke down – on a late Saturday evening. No amount of reading the manual, turning it off and on again or running maintenance routines could bring it back to life. The rooms grew colder. A long, cold weekend lay before me, but I couldn’t just sit it out – I had work to do with a tight deadline. So I frantically contacted one “24h emergency service” after the other with no success at all (this is in a rather big city, the experience really shocked me). My efforts to reach anybody who could help consumed time and nerves until I finally gave up. The backup option was to move to a hotel for two nights, and I was ready to pack my things.

The remedy

But before I made the final decision to temporarily abandon the place, I called a friend to congratulate him on his birthday, totally unrelated to the heating disaster (it was on my todo list and needed to be done, so why not now?). After he asked me what’s up (he always senses misery) and I told him the whole catastrophe, he laughed and said: “Your problem can be solved with some money and a DIY store: just buy an oil radiator and plug it in – voilà, heating for one room”. I was baffled and excited: ten minutes later, and the store would have been closed. At the last minute, I bought the radiator and had enough heat until the gas heater could be fixed during normal working days.

My target fixation is easily explained: “The gas heater broke down, so I need to repair it or have it repaired”. The solution is also easy: “You need heating, but not necessarily from that broken gas heater”. It’s the same problem, just at different zoom levels. By zooming out of the narrow problem space (or rather, being zoomed out by somebody else providing the external view), I could see the whole picture and solve the real problem, not my perceived one.

Bird’s eye view

To counter target fixation, you have to zoom out regularly. But you need to zoom out even more and ask a different set of questions:

  • What problem do I want to solve? (“Problem”)
  • Can I think of a related, more generalized problem? (“Root”)
  • Do both problems have the same cause? (“Cause”)

The “Problem Root Cause” approach helps to find a more abstract formulation of the problem at hand. You basically ask whether you are really solving the problem or merely a symptom of a hidden cause. In my story, I wanted to solve the problem of the broken gas heater. The generalized problem was the lack of heating, regardless of which device provides it. The cause was identical: cold weather without proper heating. Now I keep an oil radiator in reserve.

Zoom out often

You really need to zoom out of your current work, take a few steps back and broaden your view to be sure you are on the path to the best solution. So my advice is to “zoom out early, zoom out often” (adapted from “commit early, commit often”). If you can manage the bird’s eye view of your path to the goal yourself, you’ll fall prey to yak shaving and target fixation less often.

Streaming images from your application to the web with GStreamer and Icecast – Part 1

Streaming existing media files such as videos to the web is a common task solved by streaming servers. But maybe you would like to encode and stream a sequence of images originating from inside your application on the fly as video to the web. This two-part article series shows how to use the GStreamer media framework and the Icecast streaming server to achieve this goal.

GStreamer

GStreamer is an open source framework for setting up multimedia pipelines. The idea of such a pipeline is that it is constructed from elements, each performing a processing step on the multimedia data that flows through them. Each element can be connected to other elements (source and sink elements), forming a directed acyclic graph. GStreamer pipelines are comparable to Unix pipelines for text processing. In the simplest case a pipeline is a linear sequence of elements, each element receiving data as input from its predecessor element and sending the processed output data to its successor element. Here’s a GStreamer pipeline that encodes data from a video test source with the VP8 video codec, wraps (“multiplexes”) it into the WebM container format and writes it to a file:

videotestsrc ! vp8enc ! webmmux ! filesink location=test.webm

In contrast to Unix pipelines the notation for GStreamer pipelines uses an exclamation mark instead of a pipe symbol. An element can be configured with attributes denoted as key=value pairs. In this case the filesink element has an attribute specifying the name of the file into which the data should be written. This pipeline can be directly executed with a command called gst-launch-1.0 that is usually part of a GStreamer installation:

gst-launch-1.0 videotestsrc ! vp8enc ! webmmux ! filesink location=test.webm

If we wanted to use a different codec and container format, for example Theora/Ogg, we would simply have to replace the two elements in the middle:

gst-launch-1.0 videotestsrc ! theoraenc ! oggmux ! filesink location=test.ogv

Icecast

If we want to stream this video to the Web instead of writing it into a file we can send it to an Icecast server. This can be done with the shout2send element:

gst-launch-1.0 videotestsrc ! vp8enc ! webmmux ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/test.webm

This example assumes that an Icecast server is running on the local machine (127.0.0.1) on port 8000. On a Linux distribution this is usually just a matter of installing the icecast package and starting the service, for example via systemd:

systemctl start icecast

Note that WebM streaming requires at least Icecast version 2.4, while Ogg Theora streaming has been supported since version 2.2. The Icecast server can be configured in a config file, usually located at /etc/icecast.xml or /etc/icecast2/icecast.xml. Here we can set the port number or the password. We can check if our Icecast installation is up and running by browsing to its web interface at http://127.0.0.1:8000/.
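For orientation, here is an abridged excerpt of such a config file. The element names follow a stock icecast.xml; your distribution’s defaults may differ slightly:

<icecast>
  <authentication>
    <!-- password that stream sources such as our shout2send element must provide -->
    <source-password>hackme</source-password>
    <!-- credentials for the admin area of the web interface -->
    <admin-user>admin</admin-user>
    <admin-password>hackme</admin-password>
  </authentication>
  <listen-socket>
    <!-- port used by the shout2send element and in the stream URL -->
    <port>8000</port>
  </listen-socket>
</icecast>

With Icecast up and running, let’s go back to our pipeline: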

gst-launch-1.0 videotestsrc ! vp8enc ! webmmux ! shout2send ip=127.0.0.1 port=8000 password=hackme mount=/test.webm

The mount attribute in the pipeline above specifies the path in the URL under which the stream will be available. In our case the stream will be available at http://127.0.0.1:8000/test.webm. You can open this URL in a media player such as VLC or MPlayer, or you can open it in a WebM capable browser such as Chrome or Firefox, either directly from the URL bar or from an HTML page with a video tag:

<video src="http://127.0.0.1:8000/test.webm"></video>

If we go to the admin area of the Icecast web interface we can see a list of streaming clients connected to our mount point. We can even kick unwanted clients from the stream.

Conclusion

This part showed how to use GStreamer and Icecast to stream video from a test source to the web. In the next part we will replace the videotestsrc element with GStreamer’s programmable appsrc element, in order to feed the pipeline with raw image data from our application.

Configuration of TANGO devices

In the previous part of our TANGO tutorial trail we put our TANGO device into production by registering it with a TANGO database. The TANGO tools allowed for basic interaction with our device. Now we want to improve the device with the TANGO way of configuration: properties.

TANGO device configuration

TANGO devices are configured with properties, which are not to be confused with OO properties or TANGO attributes. TANGO properties are read on initialisation of a device and saved to the TANGO database. That way they survive server restarts. TANGO properties replace simple configuration files or registry-like configuration frameworks. Because they are saved in the TANGO database, our devices become location-agnostic – they can run on any host system on the network. Let us add a format property to our TimeDevice to change the output to our liking. Again, we use pogo to define the property:

[Screenshot: defining the device property in Pogo]

The property will be generated as a member variable of our TANGO device, managed by the framework. We do not need to read it from the database ourselves – the corresponding code is generated by pogo – we just use it (see the format variable passed to timeProvider.now() below):

/*----- PROTECTED REGION ID(TimeDevice::read_CurrentTime) ENABLED START -----*/

  attr_CurrentTime_read = new Tango::DevString;
  TimeProvider timeProvider;
  *attr_CurrentTime_read = Tango::string_dup(timeProvider.now(format).c_str());
  //	Set the attribute value
  attr.set_value(attr_CurrentTime_read, 1, 0, true);

/*----- PROTECTED REGION END -----*/	//	TimeDevice::read_CurrentTime

Managing device state

The state of a TANGO device is extremely important for TANGO clients because they often decide how to interact with a device based on its state. We will cover state and the TANGO state machine in a later post, but for now we make our TimeDevice sane by setting its state to ON after correct initialisation, so that it reflects the operating state of the TimeDevice:

/*----- PROTECTED REGION ID(TimeDevice::init_device) ENABLED START -----*/

  set_state(Tango::ON);
  set_status("Ready to accept time queries.");

/*----- PROTECTED REGION END -----*/    //    TimeDevice::init_device

Here is the result in Jive and AtkPanel:

[Screenshot: the configured device in Jive and AtkPanel]

Conclusion

We extended our device with some real-world features like configuration by means of device properties and rudimentary state management. Real state management is an important topic of its own and deserves a separate blog post. Feel free to play with the full source code.

What developers can learn from designers

Slow down

Technology demands speed. Our industry focuses on speed and efficiency. Even our processes measure speed; Scrum calls it velocity. But thinking needs time. Planning takes time. Caring needs time. Details need time. Testing needs time. Hearing, researching, observing, listening. All these need time. Designers know this.
We need to slow down. In order to see and design the details without losing the big picture we need to slow down. Great designs come from thinking hard. How do you do that? You concentrate on the essence. What matters most. How do you identify the essence? By thinking hard. And that needs time.

Design is about intention

Take a look at your code: is every line there for a reason? Every line? The order of the methods. The names of the variables. The separation into classes, interfaces, packages. How much of it is accidental? Good designers choose everything for a reason. The place of this button? No coincidence. This color? This control? This flow of actions? Everything has an intention behind it. The information presented. Even the information not presented. The wording? Part of the overall character. The menu structure? Grounded in good decisions.
On the other hand, when I look at my code (especially after some months) it doesn’t look so organized and deliberate. The order of the methods? Grown. The reason for this interface when there is only one implementation? Maybe I thought there would be more. Using this pattern here? What part of your code tells you its intent? And how much of it cries: incidental complexity? Think about it: did you choose what to include and what to leave out?

Test for change, build to learn

What was the subtitle of the first XP book? Embrace change. This sounds like we are victims. Change is coming and we need to cope with it. But what happens when change really comes? Are we prepared? 58 unit tests straight into the garbage?! The whole architecture and patterns I developed, tested and refactored countless times? Delete them?! In reality we still fear change.
But it does not have to be this way. What do designers do? They test for change. They build wireframes, mockups, prototypes. If some of them don’t work out, they can abandon them. The cost to create them is low. And even when it was not the right design, they learned something. They build prototypes to test their hypotheses. They build them to prove or falsify their assumptions. They build to learn.
The learning effect is more important than the artifact itself.
And when the application is in production? They still test for change. They do A/B tests (again, for learning). Designers don’t wait until change comes to them so that they have to embrace it; they test for change.

Listen

Listen. Truly listen. Shut down your preconceptions. How often do we ask too fast, too much? Suggestive questions? Questions with constrained possibilities to answer? I often ask goal-directed questions. To find out more. To narrow down what the requirements are.
Then one day I made a mistake. I asked an open-ended question. And got an answer. Not what I expected. I thought I knew the shape of the problem. I thought: okay, we need a chart, the possibility to switch between different scales and a second view for the deviation. But no. Suddenly the customer tells me: just show one series in one scale. The deviation can be displayed in a table. We do not need other scales. In previous meetings he had nodded in agreement when I presented the other solution. What happened? Did the customer change his mind? No. This time he told me his thoughts, not the other way around. I did not tell him what I think so he could just agree. He had to think for himself. He had to shape his thoughts in order to explain them to me. He had to think it through.

Net effect matters most

Developers like to think in features. When you ask a developer what he did for customer X, he might tell you: we created a system to manage the complex process of submitting proposals for a great variety of technologies in an efficient manner. Features: submission of proposals, complexity management, flexibility and efficiency. The what.
A designer might answer: through our work scientists all over the world have access to advanced technology to explore the future of science. The effect on the world, users and customers.
Think what is made possible through our creations, how it improves lives. Start with why.

Documentation is essential

There is this notion in our craft that the code is all the documentation you need. Why is this the way it is? Take a look at the code. The code is the documentation. Look at the commit message. This is all you need.
No. In our experience, code as documentation sucks. It is too low level. What is the goal you want to reach with this piece? What information did you collect? What decisions did you make? What was omitted? What was rethought? What alternatives were abandoned?
Designers use all kinds of artifacts to learn and to record their findings and decisions. They create and keep only the essential ones and keep them pragmatic. Easy to create. Easy to update. Easy to note down what you learned and what was wrong in your assumptions. The code is just one level of abstraction and usually the end result of the thought and decision process. Record and keep the path to the decisions, not just the end result.

Focus on the whole

Developers like to divide and conquer. To separate everything into small manageable pieces. Agile demands that. First services. Then microservices. What’s next? Nanoservices?
Designers on the other hand keep the complete experience in mind. For them the whole product matters. The whole is more than the sum of its parts. The dream of the developer is that in the end all pieces fit together like Lego bricks. But developers forget to imagine and plan the whole creation they wanted to build. One house is not the same as another house. The composition of rooms matters. The lighting. The connections between rooms and floors. The placement of windows and doors. The whole experience. The same holds for applications that people use.

Solution alternatives

As developers we are natural problem solvers. We are given a problem and create a solution. Designers are problem solvers, too. They identify a problem and create many solutions, test them, rate them and present them. They explore. They test and learn. They collect data and evidence. They know that every solution has its trade-offs. The most promising ones are evaluated. With a plan. With hypotheses. They crave feedback.

Reduced and emphasized – It’s about the connection

YAGNI. KISS. We know them. But what do we do with the time saved? We solve other problems. Designers carve out the details. They think of interactions, clear wording, better defaults. The little things that delight the user. Going the extra mile. The user of the application feels cared for. He feels that there was a human who thought about his situation. There’s a connection between designers and users through the application.
When we saw Bret Victor give his jaw-dropping talk “Inventing on Principle”, he made one important point: creators must feel a connection to their creation. I think everyone should feel a connection to the software he uses. He should feel cared for and delighted. Applications are not just tools, they are experiences; they create emotions, they connect us.

Recap of the Schneide Dev Brunch 2015-02-08

Yesterday, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was well-attended, but there was enough space for everyone. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list incomplete:

Thoughts on the new brunch mechanics

We changed our appointment-finding process for the Dev Brunch this year. It’s now held on a fixed date, an appreciated remedy for the long Doodle sessions we had before. The reminder mail on the brunch mailing list is appreciated nonetheless. I hope not to forget it.

Thoughts on secure software development

Sparked by a talk about secure software development at the Objektforum series in Stuttgart, hosted by andrena Objects, we discussed typical weak points of development environments. Habits like “not my concern” or “somebody surely has approved of this” lead to situations where intruders (malicious or not) gain access to sensitive resources. Secure development begins with a security audit of the development area itself. We also want to note that just hanging out at the cafeteria of big IT companies and listening in often yields crucial information that can be used in social engineering scenarios. We call the counter-measure “context awareness”. And for the Softwareschneiderei itself, being situated right next to a funeral parlor often calls for “social context awareness” (aka no laughter, no loud jokes) on our way to lunch.

Internal developer days

Two participating companies regularly hold internal “developer days” where the developers can do whatever they like, as long as it’s connected to software development. Both companies report very positive results from it. We want to expand the Dev Brunch into something called the “Dev Event”, where we moderate workshops for developers. To start with, we plan to hold the “Mäxchen” game event in March. Details and a doodle for finding the date (yes, we try to maximize participants here) will follow on the brunch mailing list.

IT security strategies

Based on the earlier discussion about secure software development, we talked about different security strategies for IT products and IT environments. The “walled castle” doctrine was highlighted. We touched on topics like the recent BMW hack, the Heartbleed debacle and ready-to-use “secure” home cloud servers. Another discussion point was the TOR router that actually weakens the TOR effect. An example of top-notch obfuscation in source code was a little piece of code that was thoroughly examined, but still contained a surprising side effect (citation needed).

Experiences with Docker

The Docker virtualization tool is steadily climbing the hype cycle. So it’s only natural that we talk about it and share some tricks and insights. One topic was the use of Docker for High Performance Computing and a comparison of the performance loss. The rule-of-thumb result was that Docker runs at “nearly native speed” (95%) while full virtual machines range in the 70% area. If you put different container tools under stress, they break in different ways. Docker will show increased latency, others lag in terms of CPU cycles, etc. The first rule of High Performance Computing is: there will be a bottleneck, and it won’t be where you expect it to be.

Another tool mentioned was Docker Fig (a rather unlucky name for German ears). It’s the sugar coating needed to be productive with Docker, just like Vagrant for VirtualBox.

Tools for managing and orchestrating Docker containers are still in their infancy. We can’t wait for second-generation tools to emerge.

One magic ingredient to get the most out of virtualization is an SSD drive on the host. The cloud hosting provider DigitalOcean has a nifty offer where you can set up a virtual machine in one minute and pay a few cents for an hour of use. We truly live in exciting times.

New doctrines

We also talked about changes in the way computers are viewed and treated. The “pets vs. cattle” metaphor was an interesting take on the hardware admin’s realm. The “precious snowflake” syndrome is a sure sign of (too) old habits. For software applications to become “containerizable”, the “Twelve-Factor App” rules are the way to think and act. Plenty of food for thought!

New gadgets

The Softwareschneiderei is the first company in Germany to get hold of a Myo armband. This wireless gesture controller is worn like an oversized fitness tracker bracelet and combines a gyroscope with electromyographic data (the electric current in your arm muscles). This makes for an intuitive pointing device and a not-as-intuitive-yet finger/hand gesture detector. We each played a round of our custom game “Myo Huhn” (think Moorhuhn programmed over the weekend) and reached impressive scores on the first try. Sadly, the Myo isn’t ready for serious applications yet. Let’s see what future versions of this cool little device will bring. The example usages shown in their official video aren’t viable at the moment.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open to guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Domain model design with food coupons

A recent customer requirement for the implementation of an application specified that every data-modifying user action has to be confirmed by the user through a confirmation prompt.
The application in question is a single page web application with client/server communication over an HTTP JSON API. The domain model is located on the server side; the client side is the user interface.

One option to accomplish the requirement could have been to implement the confirmation process exclusively on the client side. The client code would show a confirmation dialog right before every HTTP POST, PUT, PATCH or DELETE request and perform the request only after confirmation. This would be fairly easy to implement. The downside of this approach is that the requirement is not reflected in the application’s domain model. The requirement, however, is so crucial that it should be part of the domain model, not just an implementation detail of the client user interface. So we opted for a different approach, which makes the confirmation process part of the domain model and exposes it through the HTTP API.

Coupon system

The basic idea is a coupon system, analogous to the ones that can be found at some food and beverage booths at festivals: you choose a food or drink item, pay for it at the pay booth and get a coupon. This coupon can be redeemed at a different booth where you receive the actual item.


Transferred to our web application, the implementation looks like this: the client sends a request for an action to the server. But instead of performing the action immediately, the server stalls the action and responds with a unique confirmation token that identifies the waiting action. The client receives the token and can finally trigger the action by sending the confirmation token to a separate confirmation API endpoint. The server recognizes the pending user action based on the confirmation token and executes it. Of course, some care has to be taken that pending actions which are never confirmed time out after a while and that a malicious user can’t flood the server with waiting actions. On the client side, the confirmation dialog can be triggered via an HTTP response interceptor that checks for a confirmation token in the response, opens the confirmation dialog if a token is present and hands the token to the confirmation endpoint if the user clicks “Ok”.
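As a rough illustration, a client-side interceptor along these lines could handle the token round-trip. This is only a sketch: the confirmation endpoint path, the use of HTTP status 202 and the JSON field names are assumptions made here, not part of the original API.

// Hypothetical sketch: endpoint path, status code and field names are assumed.
async function requestWithConfirmation(url: string, init: RequestInit): Promise<Response> {
  const response = await fetch(url, init);

  // The server stalled the action and answered with a confirmation token instead.
  if (response.status === 202) {
    const { confirmationToken } = await response.json();
    if (!window.confirm("Do you really want to perform this action?")) {
      // Not confirmed: the pending action simply times out on the server.
      return response;
    }
    // Redeem the "coupon": only now does the server execute the stalled action.
    return fetch("/api/confirmations", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ token: confirmationToken }),
    });
  }
  return response;
}

A wrapper like this can sit behind all data-modifying requests of the single page application, so the rest of the client code stays unaware of the confirmation protocol.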

Conclusion

With this design the requirement is encoded in the server-side domain model and becomes apparent through the API. Any user of the API is guided towards the confirmation step by the design itself. Of course, an implementor of a new client could choose to ignore the hint and send the token back to the server without prompting the user for confirmation, but that would be a deliberate and conscious choice and not a mere oversight.

Using your TANGO devices

Now that we have built a nice TANGO device server in the previous part of this tutorial we finally want to use it.

After installing TANGO from the sources or binaries provided on www.tango-controls.org and running the TANGO database device server, you need to register your device with the database to use it fully. There is, however, a nodb-mode if you absolutely cannot communicate with the database device due to networking restrictions. We assume normal operation with an accessible database for the rest of this article.

Registering a device server at a TANGO database

The database to use is specified by the environment variable TANGO_HOST. So first you run the tool Jive and start the Server Wizard from the Tools menu:

[Screenshot: Server Wizard, step 1]

The server name equals the executable name for C++ device servers, but can be set by the programmer for Python and Java device servers. We use time_device_server for our tutorial. The instance name may be chosen quite freely – let’s call our server instance localtime. In the next step we have to start the server with the same TANGO_HOST and the instance name as parameter. That way you can register and run the same server multiple times on the same or even on different machines and still distinguish them.
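For example, starting our server instance from a shell could look like this (a minimal sketch assuming the TANGO database runs on the local machine on its default port 10000 and the executable lies in the current directory):

# point the device server to the same TANGO database that Jive uses
export TANGO_HOST=localhost:10000
# start the server executable with the instance name as its only argument
./time_device_server localtime

Back in the Server Wizard, you then have to declare the device classes and name the device instances of this server: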

[Screenshots: Server Wizard, steps 2 and 3]

The device name is a three-part identifier which is used to communicate with the device. In our example we use the first part to differentiate between real/hardware devices and virtual/logical devices implemented completely in software. It could also be used for the different departments in your institution, for example. It is up to you to fill the identifier with meaningful information.

At the end of the wizard the device server is reinitialised and ready to use. Now we can use Jive to find our device:

[Screenshot: our device in Jive and AtkPanel]

Our device implementation is very basic, so it provides only the meaningless state information UNKNOWN, but also our read-only attribute providing us with the current machine time in ISO format. AtkPanel polls all attributes of our device and gives us a generic overview of the actual device state. Writable attributes can be changed through AtkPanel or with “Test device” from the Jive context menu (bottom window of the screenshot above). Feel free to experiment a bit with both tools.

In the next post we will improve our device server and add configuration via device properties.