Configuration of TANGO devices

In the previous part of our TANGO tutorial trail we put our TANGO device into production by registering it with a TANGO database. The TANGO tools allowed for basic interaction with our device. Now we want to improve the device with the TANGO way of configuration: properties.

TANGO device configuration

TANGO devices are configured with properties, which are not to be confused with OO properties or TANGO attributes. TANGO properties are read on initialisation of a device and saved to the TANGO database, so they live across server restarts. TANGO properties replace simple configuration files or registry-like configuration frameworks. Because they are stored in the TANGO database, our devices become location-agnostic – they can run on any host system on the network. Let us add a format property to our TimeDevice to change the output to our liking. Again, we use pogo to define the property:

Pogo-Device Property

The property will be generated as a member variable of our TANGO device, managed by the framework. We do not need to read it from the database ourselves – the corresponding code is generated by pogo – we just use it (the format member passed to timeProvider.now() below):

/*----- PROTECTED REGION ID(TimeDevice::read_CurrentTime) ENABLED START -----*/

  attr_CurrentTime_read = new Tango::DevString;
  TimeProvider timeProvider;
  *attr_CurrentTime_read = Tango::string_dup(timeProvider.now(format).c_str());
  //	Set the attribute value
  attr.set_value(attr_CurrentTime_read, 1, 0, true);

/*----- PROTECTED REGION END -----*/	//	TimeDevice::read_CurrentTime
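For reference, the code that pogo generates to fill the member from the database (in get_device_property(), which is called from init_device()) looks roughly like this – a simplified sketch, assuming the property is named Format and generated as a string member called format; the exact output depends on your pogo version:

void TimeDevice::get_device_property()
{
  //  Ask the TANGO database for the value of the "Format" device property
  Tango::DbData dev_prop;
  dev_prop.push_back(Tango::DbDatum("Format"));
  get_db_device()->get_property(dev_prop);

  //  Extract the value into the generated member variable, if it is set
  if (!dev_prop[0].is_empty())
    dev_prop[0] >> format;
}

The important point is that we never have to write or maintain this code ourselves; pogo regenerates it whenever we change the property definition.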

Managing device state

The state of a TANGO device is extremely important for TANGO clients because they often decide how to interact with a device based on its state. We will cover state and the TANGO state machine in a later post, but for now we make our TimeDevice sane by setting its state to ON after correct initialisation, so that it reflects the operating state of the TimeDevice:

/*----- PROTECTED REGION ID(TimeDevice::init_device) ENABLED START -----*/

  set_state(Tango::ON);
  set_status("Ready to accept time queries.");

/*----- PROTECTED REGION END -----*/    //    TimeDevice::init_device

Here is the result in Jive and AtkPanel:

Configure Device in Jive-AtkPanel

Conclusion
We extended our device with some real-world features like configuration by means of device properties and rudimentary state management. Real state management is an important topic on its own and deserves a separate blog post. Feel free to play with the full source code.

What developers can learn from designers

Slow down

Technology demands speed. Our industry focuses on speed and efficiency. Even our processes measure speed (Scrum calls it velocity). But thinking needs time. Planning takes time. Caring needs time. Details need time. Testing needs time. Hearing, researching, observing, listening. All these need time. Designers know this.
We need to slow down. In order to see and design the details without losing the big picture we need to slow down. Great designs come from thinking hard. How do you do that? You concentrate on the essence. What matters most. How do you identify the essence? By thinking hard. And that needs time.

Design is about intention

Take a look at your code: is every line there for a reason? Every line? The order of the methods. The names of the variables. The separation into classes, interfaces, packages. How much of it is accidental? Good designers choose everything for a reason. The placement of this button? No coincidence. This color? This control? This flow of actions? Everything has an intention behind it. The information presented. Even the information not presented. The wording? Part of the overall character. The menu structure? Grounded in good decisions.
On the other hand, when I look at my code (especially after some months) it doesn’t look so organized and deliberate. The order of the methods? Grown. The reason for this interface when there is only one implementation? Maybe I thought there would be more. Using this pattern here? What part of your code tells you its intent? And how much of it cries: incidental complexity? Think about it: did you choose what to include and what to leave out?

Test for change, build to learn

What was the subtitle of the first XP book? Embrace change. This sounds like we are victims: change is coming and we need to cope with it. But what happens when change really comes? Are we prepared? Throw 58 unit tests in the garbage?! The whole architecture and patterns I developed, tested and refactored countless times? Delete them?! In reality we still fear change.
But it does not have to be this way. What do designers do? They test for change. They build wireframes, mockups, prototypes. If some of them don’t work out, they can abandon them. The cost to create them is low. And even when it was not the right design, they learned something. They build the prototypes to test their hypotheses. They build them to prove or falsify their assumptions. They build to learn.
The learning effect is more important than the artifact itself.
And when the application is in production? They still test for change. They do A/B tests (again, for learning). Designers don’t wait until change comes to them and they have to embrace it – they test for change.

Listen

Listen. Truly listen. Set aside your preconceptions. How often do we ask too fast, too much? Suggestive questions? Questions with a constrained set of possible answers? I often ask goal-directed questions. To find out more. To pin down the requirements.
Then one day I made a mistake. I asked an open-ended question. And got an answer. Not what I expected. I thought I knew the shape of the problem. I thought: okay, we need a chart, the possibility to switch between different scales and a second view for the deviation. But no. Suddenly the customer tells me: just show one series in one scale. The deviation can be displayed in a table. We do not need other scales. In previous meetings he had nodded in agreement when I presented the other solution. What happened? Did the customer change his mind? No. He told me his thoughts, not the other way around. I did not tell him what I think so that he could agree. He had to think for himself. He had to shape his thoughts in order to explain them to me. He had to think it through.

Net effect matters most

Developers like to think in features. When you ask a developer what he did for customer X, he might tell you: we created a system to manage the complex process of submitting proposals for a great variety of technologies in an efficient manner. Features: submission of proposals, complexity management, flexibility and efficiency. The what.
A designer might answer: through our work scientists all over the world have access to advanced technology to explore the future of science. The effect on the world, users and customers.
Think about what is made possible through our creations and how they improve lives. Start with why.

Documentation is essential

There is this notion in our craft that the code is all the documentation you need. Why is something the way it is? Take a look at the code. The code is the documentation. Look at the commit message. That is all you need.
No. In our experience code as documentation sucks. It is too low level. What is the goal you want to reach with this piece? What information did you collect? What decisions did you make? What was omitted? What was rethought? What alternatives were abandoned?
Designers use all kinds of artifacts to learn and to record their findings and decisions. They create and keep only the essential ones and keep it pragmatic. Easy to create. Easy to update. Easy to note down what you learned and what was wrong in your assumptions. The code is just one level of abstraction and usually the end result of the thought and decision process. Record and keep the path to the decisions, not just the end result.

Focus on the whole

Developers like to divide and conquer. To separate everything into small manageable pieces. Agile demands that. First services. Then microservices. What’s next? Nanoservices?
Designers on the other hand keep the complete experience in mind. For them the whole product matters. The whole is more than the sum of its parts. The dream of the developer is that in the end all pieces fit together like Lego bricks. But they forget to imagine and plan the whole creation they wanted to build. One house is not the same as another house. The composition of rooms matters. The lighting. The connections between rooms and floors. The placement of windows and doors. The whole experience. The same holds for applications that people use.

Solution alternatives

As developers we are natural problem solvers. We are given a problem and create a solution. Designers are problem solvers, too. They identify a problem and create many solutions, test them, rate them and present them. They explore. They test and learn. They collect data and evidence. They know that every solution has its trade-offs. The most promising ones are evaluated. With a plan. With hypotheses. They crave feedback.

Reduced and emphasized – It’s about the connection

YAGNI. KISS. We know them. But what do we do with the time saved? We solve other problems. Designers carve out the details. They think of interactions, clear wording, better defaults. The little things that delight the user. Going the extra mile. The user of the application feels cared for. He feels that there was a human who thought about his situation. There’s a connection between designers and users through the application.
When we saw Bret Victor present his jaw-dropping talk “Inventing on Principle”, he made one important point: creators must feel a connection to their creation. I think everyone should feel a connection to the software he uses. He should feel cared for and delighted. Applications are not just tools, they are experiences, they create emotions, they connect us.

Recap of the Schneide Dev Brunch 2015-02-08

Yesterday, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was well-attended, but there was enough space for everyone. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list incomplete:

Thoughts on the new brunch mechanics

We changed our appointment-finding process for the Dev Brunch this year. It’s now a fixed date, a welcome remedy for the long Doodle sessions before. But the reminder mail on the brunch mailing list is appreciated nonetheless. I hope not to forget it.

Thoughts on secure software development

Sparked by a talk about secure software development at the Objektforum series in Stuttgart, hosted by andrena Objects, we discussed typical weak points of development environments. Habits like “not my concern” or “somebody surely has approved of this” lead to situations where intruders (malicious or not) gain access to sensitive resources. Secure development begins with a security audit of the development area itself. We also want to note that just hanging out at the cafeteria of big IT companies and listening often yields crucial information that can be used in social engineering scenarios. We call the counter-measure “context awareness”. And for the Softwareschneiderei itself, being situated right next to a funeral parlor often calls for “social context awareness” (aka no laughter, no loud jokes) on our way to lunch.

Internal developer days

Two participating companies regularly hold internal “developer days” where the developers can do whatever they like, as long as it’s connected to software development. Both companies experience very positive results from it. We want to expand the Dev Brunch into something called the “Dev Event”, where we moderate workshops for developers. To start with, we plan to hold the “Mäxchen” game event in March. Details and a Doodle poll for finding the date (yes, we try to maximize participation here) will follow on the brunch mailing list.

IT security strategies

Based on the earlier discussion about secure software development, we talked about different security strategies for IT products and IT environments. The “walled castle” doctrine was highlighted. We touched on topics like the recent BMW hack, the Heartbleed debacle and ready-to-use “secure” home cloud servers. Another discussion point was the TOR router that actually weakens the TOR effect. An example of top-notch obfuscation in source code was a little piece of code that was thoroughly examined, but still contained a surprising side effect (citation needed).

Experiences with Docker

The Docker virtualization tool is steadily climbing the hype cycle. So it’s only natural that we talk about it and share some tricks and insights. One topic was the use of Docker for High Performance Computing and a comparison of performance loss. The rule of thumb result was that Docker is “nearly native speed” (95%) while full virtual machines range in the 70% area. If you put different container tools under stress, they break in different ways. Docker will show increased latency, others lag in terms of CPU cycles, etc. The first rule of High Performance Computing is: there will be a bottleneck and it won’t be where you expect it to be.

Another tool mentioned is Docker Fig (a rather unlucky name for German ears). It’s the sugar coating needed to be productive with Docker, just like Vagrant for VirtualBox.

Tools for managing and orchestrating Docker containers are still in their infancy. We can’t wait for second-generation tools to emerge.

One magic ingredient to get the most out of virtualization is an SSD drive on the host. The cloud hosting provider DigitalOcean has a nifty offer where you can set up a virtual machine in one minute and pay a few cents for an hour of use. We truly live in exciting times.

New doctrines

We also talked about changes in the way computers are viewed and treated. The “pets vs. cattle” metaphor was an interesting take on the hardware admin’s realm. The “precious snowflake” syndrome is a sure sign of (too) old habits. For software applications to become “containerizable”, the “Twelve-Factor App” rules are the way to think and act. Plenty of food for thought!

New gadgets

The Softwareschneiderei is the first company in Germany to get hold of a Myo armband. This wireless gesture controller is worn like an oversized fitness tracker bracelet and combines a gyroscope with electromyographic data (the electric current in your arm muscles). This makes for an intuitive pointing device and a not-yet-as-intuitive finger/hand gesture detector. We each played a round of our custom game “Myo Huhn” (think Moorhuhn programmed over the weekend) and reached impressive scores on the first try. Sadly, the Myo isn’t ready for serious applications yet. Let’s see what future versions of this cool little device will bring. The example usages in their official video aren’t viable at the moment.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open to guests and future regulars. Just drop us a notice and we’ll invite you over next time.

Domain model design with food coupons

A recent customer requirement for the implementation of an application specified that every data-modifying user action has to be confirmed by the user through a confirmation prompt.
The application in question is a single page web application with client/server communication over an HTTP JSON API. The domain model is located on the server side, the client side is the user interface.

One option to accomplish the requirement could have been to implement the confirmation process exclusively on the client side. The client code would show a confirmation dialog right before every HTTP POST, PUT, PATCH or DELETE request and perform the request only after confirmation. This would be fairly easy to implement. The downside of this approach is that the requirement is not reflected in the application’s domain model. The requirement, however, is so crucial that it should be part of the domain model, not just an implementation detail of the client user interface. So we opted for a different approach, which makes the confirmation process part of the domain model and exposes it through the HTTP API.

Coupon system

The basic idea is a coupon system, analogous to the ones that can be found at food and beverage booths at festivals: you choose a food or drink item, pay for it at the pay booth and get a coupon. This coupon can be redeemed at a different booth where you receive the actual item.

coupon

Source: http://de.wikipedia.org/wiki/Verzehrbon | License: Public domain

 

Transferred to our web application, the implementation looks like this: The client sends a request for an action to the server. But instead of performing the action immediately, the server stalls the action and responds with a unique confirmation token that identifies the waiting action. The client receives the token and can finally trigger the action by sending the confirmation token to a separate confirmation API endpoint. The server recognizes the pending user action based on the confirmation token and executes it. Of course, some care has to be taken that pending actions which are never confirmed time out after a while, and that a malicious user can’t flood the server with waiting actions. On the client side, the confirmation dialog can be triggered via an HTTP response interceptor that checks for a confirmation token in the response, opens the confirmation dialog if a token is present and hands the token to the confirmation endpoint if the user clicks “Ok”.
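A minimal sketch of the server-side half of this coupon mechanism could look like the following (illustrative C++ only, not our actual implementation; the names ConfirmationGate, stall and confirm are made up for this example, and per-user flood protection is omitted):

#include <chrono>
#include <functional>
#include <map>
#include <random>
#include <string>

// Keeps stalled actions until they are confirmed or time out.
class ConfirmationGate {
public:
  using Action = std::function<void()>;

  // Stall an action and hand out the confirmation token ("coupon") for it.
  std::string stall(Action action) {
    std::string token = generateToken();
    pending[token] = {std::move(action), std::chrono::steady_clock::now()};
    return token;
  }

  // Redeem the coupon: execute the pending action if the token is known and not expired.
  bool confirm(const std::string& token) {
    auto it = pending.find(token);
    if (it == pending.end())
      return false;
    bool expired = std::chrono::steady_clock::now() - it->second.stalledAt > timeout;
    if (!expired)
      it->second.action();
    pending.erase(it);
    return !expired;
  }

private:
  struct Pending {
    Action action;
    std::chrono::steady_clock::time_point stalledAt;
  };

  std::string generateToken() {
    // A real implementation would use a cryptographically secure random token.
    static std::mt19937_64 rng{std::random_device{}()};
    return std::to_string(rng());
  }

  std::map<std::string, Pending> pending;
  std::chrono::minutes timeout{5};
};

The endpoint handling, say, a DELETE request would then call stall() with the actual deletion wrapped in a closure and return the token in its response; the confirmation endpoint simply calls confirm() with the token it receives.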

Conclusion

With this design the requirement is encoded in the server-side domain model and becomes apparent through the API. Any user of the API is guided toward the intended flow by its design. Of course, an implementor of a new client could choose to ignore the hint and send the token right back to the server without prompting the user for confirmation, but that would be a deliberate and conscious choice and not a mere oversight.

Using your TANGO devices

Now that we have built a nice TANGO device server in the previous part of this tutorial we finally want to use it.

After installing TANGO from the sources or binaries provided on www.tango-controls.org and running the TANGO database device server, you need to register your device with the database to use it fully. There is, however, a nodb mode if you absolutely cannot communicate with the database device due to networking restrictions. We assume normal operation with an accessible database for the following steps.

Registering a device server at a TANGO database

The database to use is specified by the environment variable TANGO_HOST. So first you run the tool Jive and start the Server Wizard from the Tools menu:

Server Wizard1

The server name equals the executable name for C++ device servers but can be set by the programmer for Python and Java device servers. We use time_device_server for our tutorial. The instance name may be chosen quite freely – let’s call our server instance localtime. In the next step we have to start the server with the same TANGO_HOST and the instance name as a parameter. That way you can register and run the same server multiple times on the same or even different machines and still distinguish them. Then you have to declare the device classes and name the device instances of this server:

Server Wizard2 Server Wizard3

The device name is a three-part identifier which is used to communicate with the device. In our example we use the first part to differentiate between real/hardware devices and virtual/logical devices implemented completely in software. It could also be used for the different departments in your institution, for example. It is up to you to fill the identifier with meaningful information.

At the end of the wizard the device server is reinitialised and ready to use. Now we can use Jive to find our device:

Device in Jive-AtkPanel

Our device implementation is very basic, so it only provides the meaningless state information UNKNOWN, but it also exposes our read-only attribute providing the current machine time in ISO format. AtkPanel polls all attributes of our device and gives us a generic overview of the current device state. Writable attributes can be changed through AtkPanel or with Test device from the Jive context menu (bottom window of the screenshot above). Feel free to experiment a bit with both tools.
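Besides the GUI tools, the registered device can of course also be used from your own programs. A minimal C++ client could look like this sketch – the device name logical/timedevice/1 is just an example and has to match whatever you registered in the Server Wizard:

#include <tango.h>
#include <iostream>

int main()
{
  try
  {
    // Connect to the device via the TANGO database given by TANGO_HOST
    Tango::DeviceProxy device("logical/timedevice/1");

    // Read the CurrentTime attribute and extract its string value
    Tango::DeviceAttribute attribute = device.read_attribute("CurrentTime");
    std::string current_time;
    attribute >> current_time;

    std::cout << "CurrentTime: " << current_time << std::endl;
  }
  catch (Tango::DevFailed &e)
  {
    Tango::Except::print_exception(e);
  }
  return 0;
}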

In the next post we will improve our device server and add configuration via device properties.

The developer experience (DX): Our development process

Personally I like reading stories about how others tackle problems, their way to the solution, their process.
Recently our team sat down together to brainstorm what is not so good or even bad about our way of solving things. To address those problems and to improve our developer experience I researched and documented our typical development process. This post is intended to give you a glimpse of how we work.
First you have to know that we are a service business, so usually a project starts with a (potential or recurring) customer coming to us with his concerns, his business needs. Over the course of a typical project the following phases occur (some in iterations):

1. Identifying the problem (aka analysis)

Often the first expressed need is not the problem to be solved. We need to dig deeper and identify the core. At this point in the process we ask the customer goal-oriented questions. We guide the conversation to find any constraints of the solution. One of these constraints is often the technological platform (web, mobile, desktop).
If it is a bigger problem we need to work through different levels of abstraction, one iteration at a time.
We document all the results in functional specs. These are small blocks of text in the language of the customer describing the functional side of the problem and its constraints. These specs form the basis of every project and are entered as issues in our JIRA.
Alongside, we record the concepts of the domain in the form of a glossary in our wiki.
Besides the text we draw complex or core parts of the domain as diagrams on paper. A rough domain model. A module concept. A high-level architecture of the system.

2. Planning

In a planning poker session we estimate each issue from the analysis. The customer sets a priority. Together we develop a milestone plan for our iterations: smaller (1-week) iterations for explorative or fast-paced projects and longer (2–3 week) iterations for slow-paced or well-defined projects.

3. Finding a solution

Before implementing a solution to a problem in code, we create different concepts which fit the constraints we recorded earlier.
These concepts can be paper prototypes or sketches. Sometimes these are HTML mockups or simulations.
From these we choose the best-suited concept by talking with the customer or based on our experience.

4. Development

At the start of the project we set up the necessary internal infrastructure. This usually consists of a git repository and Jenkins jobs for continuous integration. We install the libraries, frameworks and applications. We create a project in the IDE and start implementing. While developing we write automated tests where appropriate. We use manual testing to tell us what the users will see and whether it behaves as we expected. When we think we are done, we look for feedback.

5. Feedback and acceptance

For feedback we deploy the project on a staging or test system and email the customer so that he can perform a manual user acceptance test.
If the customer accepts the solution we continue with the next phase. If bigger problems arise we reopen our JIRA issues or create new ones and start with the analysis again.

6. Delivery

First we need to set up a machine according to our project-specific needs, either automatically or manually (snowflaking). This is called provisioning.
We create scripts and jobs in Jenkins for the following steps: one for the release and one for deployment and commissioning.
The release step packages the project and creates documents from the JIRA issues which were resolved in this version. The deploy step takes the release artifact and transfers it to the target machine. Usually this step also runs the deployed project. Every installation (release and deploy/commissioning) is documented manually on paper to be traceable. An email to the customer tells him what the new release includes. Tags in the repository identify the deployed code.

7. Monitoring

Every error or exception that occurs is sent via email to a project-specific account. Additionally we create a job in Jenkins which supervises the system periodically. The job tries to reach the application and executes some health checks.
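One of these health checks can be as simple as the following sketch (using libcurl; the URL is a placeholder, and the real checks are of course project-specific):

#include <curl/curl.h>

// Minimal reachability check: exit code 0 means the application answered with HTTP 200.
int main()
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();
  if (!curl)
    return 1;

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.org/health");
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   // a HEAD request is enough here
  curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L); // fail fast if the application hangs

  int result = 1;
  if (curl_easy_perform(curl) == CURLE_OK)
  {
    long status = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    if (status == 200)
      result = 0;
  }

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return result; // a non-zero exit code lets the Jenkins job fail and notify us
}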

How we distribute our backups geographically

We are a software development company, so all of our most valuable assets are constantly endangered by hardware failure. We regularly do risk assessments with regard to data security and over the years have created a fine-tuned system of duplication and doubled duplication to prevent data loss. Those assessments aren’t really complicated: you basically sit down, relax and think about your deepest fears on a certain topic. Then you write them down and act on their avoidance or circumvention. Here’s an example of some results:

  • No data transfer over unsecured internet connections
  • No single point of failure
  • No single area of failure

The last result is of particular interest today: we want to prevent data loss in case of an “area-based disaster”, like a whole-building fire or a meteorite impact. Well, to be clear on the meteorite scenario, it is both highly improbable and dangerous. If the meteorite happens to be just a bit bigger than average, we won’t worry about backups anymore because we all live within a perimeter around our company. Yes, worst-case scenarios are always morbid.

Stages of data-loss prevention

We have several measures in place to prevent data loss. Technologies like RAID drives and processes like daily backups and several copies of those backups make sure that we always have at least one copy of all important data, even in the most drastic locally confined disaster. But to adhere to the first rule that no data transfer may happen over unsecured internet connections and to make sure that an internet connection isn’t a single point of failure that may compromise data security, we had to come up with a way to distribute our backups physically without much effort.

The backup export disks

Our system relies on three facts:

  • Small and resilient hard drives with high capacity are affordable
  • Every employee’s home can be a unique backup storage location
  • If we take turns, the effort is low for everybody, but high enough to be effective

So we bought a “backup export disk” for every employee. It’s a 2.5″ USB-powered hard drive with enough storage capacity to hold our most important data. All export disks are registered at the backup distribution system, which can, upon connection, provide them with the most current backup. And there is a little “backup export token” that gets passed from employee to employee in a predetermined order. The token is just a piece of cardboard that says “tag, you are it!”.

Our backup export process

So what do you have to do when you find the “backup export token” on your desk? Just five easy steps:

  • Bring your backup export disk the next day (this is the hardest part: remembering to bag the disk at home)
  • Plug it into the backup distribution system (a specific computer, kept switched off, with a USB cable) and switch it on
  • Wait for the system to do its job. This will take a while, but you’ll get an e-mail at completion, so just wait for the e-mail to arrive
  • Unplug the backup export disk and take it back home (store it in a dry and safe place)
  • Forward the backup export token to the next employee in line

That’s all there is to the obvious process. Some more things happen behind the scenes, but the process mostly relies on the effect of repetition by several operators.

Simple and effective

This process ensures that our backup gets “exported” at least thrice a week to different locations. All in all, we store our backup in at least five locations with a maximum age of two weeks. The system can scale up (or down) without limitation, so it won’t change even if we double or triple the location count or the export frequency. And no individual disk can be compromised, as the data is secured by strong encryption, so there is no need to restrict physical access to it at the storage locations (like using a safe) or to fret if a disk gets lost.

Decentralized, but supervised

Every time a backup export disk is connected to the backup distribution system, the disk’s health figures and remaining space are reported to the administrators. Using this information, we can also reconstruct the distribution history and fetch the most current disk in an emergency. If a disk shows its age, it gets replaced by a new one without effort. We only need to tell the backup distribution system about it and associate it with an employee so that the e-mail is sent to the right person.

Conclusion

By entrusting our employees with the core mechanics of keeping the backups distributed and automating the rest, we have reached a level of data security that even protects against area-effect scenarios.