Domain model design with food coupons

A recent customer requirement for an application we were implementing specified that every data-modifying user action has to be confirmed by the user through a confirmation prompt.
The application in question is a single page web application with client/server communication over an HTTP JSON API. The domain model is located on the server side; the client side is the user interface.

One option to accomplish the requirement could have been to implement the confirmation process exclusively on the client side. The client code would show a confirmation dialog right before every HTTP POST, PUT, PATCH or DELETE request and perform the request only after confirmation. This would be fairly easy to implement. The downside of this approach is that the requirement is not reflected in the application’s domain model. The requirement, however, is so crucial that it should be part of the domain model, not just an implementation detail of the client user interface. So we opted for a different approach, which makes the confirmation process part of the domain model and exposes it through the HTTP API.

Coupon system

The basic idea is a coupon system, analogous to the ones found at some food and beverage booths at festivals: you choose a food or drink item, pay for it at the pay booth and get a coupon. This coupon can be redeemed at a different booth where you receive the actual item.

[Image: a food coupon – source: http://de.wikipedia.org/wiki/Verzehrbon | License: Public domain]

Transferred to our web application, the implementation looks like this: the client sends a request for an action to the server. But instead of performing the action immediately, the server stalls the action and responds with a unique confirmation token that identifies the waiting action. The client receives the token and can finally trigger the action by sending the confirmation token to a separate confirmation API endpoint. The server recognizes the pending user action based on the confirmation token and executes it. Of course, some care has to be taken that pending actions which are never confirmed time out after a while and that a malicious user can’t flood the server with waiting actions. On the client side, the confirmation dialog can be triggered via an HTTP response interceptor: it checks for a confirmation token in the response, opens the confirmation dialog if a token is present and hands the token to the confirmation endpoint if the user clicks “Ok”.
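To make the mechanism more tangible, here is a minimal sketch of such a server-side “coupon desk”. The server technology isn’t specified here, so the language (C++) and all names (ConfirmationDesk, request, confirm) are illustrative assumptions; real token generation, persistence and the HTTP layer are assumed to be handled elsewhere:

#include <chrono>
#include <functional>
#include <map>
#include <string>

// Sketch of the server-side "coupon desk": it parks a data-modifying action,
// hands out a confirmation token and executes the action only when that token
// is confirmed. Unconfirmed actions expire after a timeout.
class ConfirmationDesk
{
public:
    using Action = std::function<void()>;
    using Clock = std::chrono::steady_clock;

    explicit ConfirmationDesk(std::chrono::seconds timeout)
        : timeout(timeout) {}

    // called by the normal API endpoint: park the action, return the token
    std::string request(Action action)
    {
        std::string token = nextToken();
        pending[token] = {Clock::now() + timeout, std::move(action)};
        return token;
    }

    // called by the confirmation endpoint: execute the action if still valid
    bool confirm(const std::string& token)
    {
        auto found = pending.find(token);
        if (found == pending.end() || Clock::now() > found->second.expiry)
        {
            pending.erase(token);
            return false;
        }
        found->second.action();
        pending.erase(found);
        return true;
    }

private:
    struct Pending
    {
        Clock::time_point expiry;
        Action action;
    };

    std::string nextToken()
    {
        // a real implementation would use an unguessable random token
        return "token-" + std::to_string(++counter);
    }

    std::chrono::seconds timeout;
    std::map<std::string, Pending> pending;
    unsigned long counter = 0;
};

In this sketch, the action endpoint would call request() and put the returned token into the HTTP response, while the confirmation endpoint would call confirm() with the token it receives from the client. A periodic cleanup of expired entries would also be needed to keep the map of pending actions from growing.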

Conclusion

With this design the requirement is encoded in the server-side domain model and becomes apparent through the API. Any user of the API is made aware of the confirmation step by the design of the API itself. Of course, an implementor of a new client could choose to ignore the hint and send the token back to the server without prompting the user for confirmation, but that would be a deliberate and conscious choice and not a mere oversight.

Using your TANGO devices

Now that we have built a nice TANGO device server in the previous part of this tutorial, we finally want to use it.

After installing TANGO from the sources or binaries provided on www.tango-controls.org and running the TANGO database device server, you need to register your device with the database to use it fully. There is, however, a no-db mode if you absolutely cannot communicate with the database device due to networking restrictions. For the rest of this post we assume normal operation with an accessible database.

Registering a device server at a TANGO database

The database to use is specified by the environment variable TANGO_HOST. So first you run the tool Jive and start the Server Wizard from the Tools menu:

Server Wizard1

The server name equals the executable name for C++ device servers but can be set by the programmer for Python and Java device servers. We use time_device_server for our tutorial. The instance name may be chosen quite freely – let’s call our server instance localtime. In the next step we have to start the server with the same TANGO_HOST and the instance name as parameter, e.g. time_device_server localtime. That way you can register and run the same server multiple times on the same or even different machines and still distinguish them. Then you have to declare the device classes and name the device instances of this server:

Server Wizard2 Server Wizard3

The device name is a three-part identifier (domain/family/member) which is used to communicate with the device. In our example we use the first part to differentiate between real/hardware devices and virtual/logical devices implemented completely in software – a device name could thus look like software/time/localtime. It could also be used to distinguish the different departments of your institution, for example. It is up to you to fill the identifier with meaningful information.

At the end of the wizard the device server is reinitialised and ready to use. Now we can use Jive to find our device:

Device in Jive-AtkPanel

Our device implementation is very basic: it provides only the meaningless state information UNKNOWN, plus our read-only attribute that delivers the current machine time in ISO format. AtkPanel polls all attributes of our device and gives us a generic overview of the current device state. Writable attributes can be changed through AtkPanel or with Test device from the Jive context menu (bottom window of the screenshot above). Feel free to experiment a bit with both tools.
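Besides these generic tools, the device can of course also be used programmatically. The following minimal C++ client is only a sketch for illustration – the device name software/time/localtime is an assumption and has to match whatever you registered in the Server Wizard:

#include <tango.h>
#include <iostream>
#include <string>

int main()
{
    try
    {
        // connect to the device via its three-part name
        Tango::DeviceProxy device("software/time/localtime");

        // read our attribute and print its value
        Tango::DeviceAttribute attribute = device.read_attribute("CurrentTime");
        std::string currentTime;
        attribute >> currentTime;
        std::cout << currentTime << std::endl;
    }
    catch (Tango::DevFailed& e)
    {
        Tango::Except::print_exception(e);
        return 1;
    }
    return 0;
}

Such a client is linked against the same TANGO libraries as the device server and needs a matching TANGO_HOST to find the database.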

In the next post we will improve our device server and add configuration via device properties.

The developer experience (DX): Our development process

Personally I like reading stories about how others tackle problems, their way to the solution, their process.
Recently our team sat down together to brainstorm what is not so good or even bad about our way of solving things. To address those problems and to improve our developer experience I researched and documented our typical development process. This post is intended to give you a glimpse of how we work.
First you have to know that we are a service business, so usually a project starts with a (potential or recurring) customer coming to us with his concerns, his business needs. Over the course of a typical project the following phases run, some of them in iterations:

1. Identifying the problem (aka analysis)

Often the first expressed need is not the problem to be solved. We need to dig deeper and identify the core. At this point in the process we ask the customer goal-oriented questions. We guide the conversation to find any constraints on the solution. One of these constraints is often the technological platform (web, mobile, desktop).
If it is a bigger problem we need to work through different levels of abstraction, one iteration at a time.
We document all the results in functional specs. These are small blocks of text in the language of the customer describing the functional side of the problem and its constraints. These specs form the basis of every project and are entered as issues in our JIRA.
Alongside, we record the concepts of the domain in the form of a glossary in our wiki.
Besides the text we draw complex or core parts of the domain as diagrams on paper. A rough domain model. A module concept. A high level architecture of the system.

2. Planning

In a planning poker session we estimate each issue from the analysis. The customer sets a priority. Together we develop a milestone plan for our iterations: smaller (1 week) iterations for explorative or fast-paced projects and longer (2–3 weeks) iterations for slow-paced or well-defined projects.

3. Finding a solution

Before implementing a solution to a problem in code, we create different concepts which fit the constraints we recorded earlier.
These concepts can be paper prototypes or sketches. Sometimes these are HTML mockups or simulations.
From these we choose the best-suited concept, either by talking with the customer or based on our experience.

4. Development

At the start of the project we set up the necessary internal infrastructure. This usually consists of a git repository and Jenkins jobs for continuous integration. We install the libraries, frameworks and applications. We create a project in the IDE and start implementing. While developing we write automated tests where appropriate. We use manual testing to tell us what the users will see and whether it behaves as we expected. When we think we are done, we look for feedback.

5. Feedback and acceptance

For feedback we deploy the project on a staging or test system and email the customer so that he can perform a manual user acceptance test.
If the customer accepts the solution we continue with the next phase. If bigger problems arise we reopen our JIRA issues or create new ones and start with the analysis again.

6. Delivery

First we need to set up a machine to our project-specific needs, either automatically or manually (snowflaking). This is called provisioning.
We create scripts and jobs in Jenkins for the following steps: one for release and one for deployment and commissioning.
The release step packages the project and creates documents from the JIRA issues which were resolved in this version. The deploy step takes the release artifact and transfers it to the target machine. Usually this step also runs the deployed project. Every installation (release and deploy/commissioning) is documented manually on paper to be traceable. An email to the customer tells him what the new release includes. Tags in the repository identify the deployed code.

7. Monitoring

Every error or exception occurring is sent via email to a project-specific account. Additionally we create a job in Jenkins which supervises the system periodically. The job tries to reach the application and executes some health checks.

How we distribute our backups geographically

We are a software development company, so all of our most valuable assets are constantly endangered by hardware failure. We regularly do risk assessments with regard to data security and over the years created a fine-tuned system of duplication and doubled duplication to prevent data loss. Those assessments aren’t really complicated: you basically sit down, relax and think about your deepest fears on a certain topic. Then you write them down and act on their avoidance or circumvention. Here’s an example of some results:

  • No data transfer over unsecured internet connections
  • No single point of failure
  • No single area of failure

The last result is of particular interest today: we want to prevent data loss in case of an “area-based disaster”, like a whole-building fire or a meteorite impact. Well, to be clear on the meteorite scenario, it is both highly improbable and dangerous. If the meteorite happens to be just a bit bigger than average, we won’t worry about backups anymore because we all live in a perimeter around our company. Yes, worst-case scenarios are always morbid.

Stages of data-loss prevention

We have several measures in place to prevent data loss. Technologies like RAID drives and processes like daily backups and several copies of those backups make sure that we always have at least one copy of all important data, even in the most drastic locally confined disaster. But to adhere to the first rule that no data transfer can happen over unsecured internet connections, and to make sure that an internet connection isn’t a single point of failure that may compromise data security, we had to come up with a way to distribute our backups in a physical manner without much effort.

The backup export disks

Our system relies on three facts:

  • Small and resilient hard drives with high capacity are affordable
  • Every home of our employees can be a unique backup storage location
  • If we take turns, the effort is low for everybody, but high enough to be effective

So we bought a “backup export disk” for every employee. It’s a 2.5″ USB-powered hard drive with enough storage capacity to keep our most important data. All export disks are registered at the backup distribution system that can, upon connect, provide them with the most current backup. And there is a little “backup export token” that gets passed from employee to employee in a predetermined order. The token is just a piece of cardboard that says “tag, you are it!”.

Our backup export process

So what do you have to do when you find the “backup export token” on your desk? Just five easy steps:

  • Bring your backup export disk the next day (this is the hardest part: remembering to bag the disk at home)
  • Plug it into the backup distribution system (a specific computer in off-state with a USB cable) and switch it on
  • Wait for the system to do its job. This will take a while, but you’ll get an e-mail at completion, so just wait for the e-mail to arrive
  • Unplug the backup export disk and take it back home (store it in a dry and safe place)
  • Forward the backup export token to the next employee in line

That’s all there is to the obvious process. Some more things happen behind the scenes, but the process mostly relies on the effect of repetition by several operators.

Simple and effective

This process ensures that our backup gets “exported” at least thrice a week to different locations. All in all, we store our backup in at least five locations with a maximum age of two weeks. The system can scale up (or down) without limitation, so it won’t change even if we double or triple the location count or the export frequency. And no individual disk can be compromised, as the data is secured by strong encryption, so there is no need to restrict physical access to it at the storage locations (like using a safe) or to fret if a disk should get lost.

Decentralized, but supervised

Every time a backup export disk is connected to the backup distribution system, the disk’s health figures and remaining space are reported to the administrators. Using this information, we can also reconstruct the distribution history and fetch the most current disk in an emergency. If a disk shows its age, it gets replaced by a new one without effort. We only need to tell the backup distribution system about it and associate it with an employee so that the e-mail is sent to the right person.

Conclusion

By entrusting our employees with the core mechanics of keeping the backups distributed and automating the rest, we reached a level of data security that even protects against area-effect scenarios.

The work experience improvement budget (“Kreativbudget”)

We at the Softwareschneiderei are a small team of software developers working in a founder-owned company. We have been developing software for 15 years now and have experimented with a lot of management ideas and concepts. We can conclude that a lot of things don’t work for us while others are highly effective. There is no guarantee that anything we do works anywhere else, so don’t expect wonders just because it works wonders for us. But we are willing to share nearly every detail of our management style, and here is another bit of it: the “creativity budget”.

I’ve already blogged about this idea five years ago, but it’s still a good (and fairly uncommon) idea, so why not do it again? The name “creativity budget” (“Kreativbudget” in German) is actually really bad, but it stuck and we cannot realistically change it anymore. A more fitting name would be “work experience improvement budget” or something similar. The core of the idea is simple: every employee can spend a certain amount of money every year to improve his/her own work experience. The investment doesn’t need to be profitable, the improvement doesn’t need to be effective, and the employee never needs to justify whatever was bought. It’s just company money that the employee can rule over to improve the company in his/her fashion.

The actual ruleset is fairly simple: in recent years, the amount was defined to be 1000 EUR per year for each employee, regardless of the actual job (development or administration, for example). Our students could invest half the amount (500 EUR). You don’t need to buy coffee or food, or your work computer or laptop – all the basics are provided outside of the budget. You shouldn’t spend the budget on silly things just to get rid of it, but if you have an idea – even a crazy one – and think, “hey, that would be cool to have”, you just need to create a “purchase order” issue in our administration issue tracker and flag it as “on creativity budget”. We will buy it right away, without further discussion.

Why the creativity budget?

The most competent person to improve the work experience of an employee is the employee himself. Every hurdle we impose between him and his improvement ideas, like bureaucratic overhead or reviews, will only damage the improvement effect, but not improve the financial situation of the company. Our financial situation is directly linked to the productivity and happiness of all employees, so we would actually damage it by trying to go cheap. Not spending money won’t buy us happiness. And remember, we are a small company. The maximum amount of all creativity budgets combined is still only a small percentage of our total revenue (under 2%). If we can improve our total revenue just a little bit, it is totally worth it. But why speculate? We have hard numbers from the last dozen years that show that it works for us.

What did the budget gain us?

The most important gain is making room for errors. If you have to plead your case and convince higher-ups of an improvement, it had better have convincing figures and a realistic chance of success. If not, you are the moron that suggested it. Using our budget, we can try crazy things and never need to explain ourselves. If it doesn’t work – who cares? If it works – well, you were the first, now we need to implement it for everyone.
We try things earlier. New technologies like solid state disks were frowned upon in the beginning – how long do they last, etc. We tried them early and got convinced quicker than most (but that’s another blog post).
We don’t calculate improvements first. One of the most common objections to a new idea is the worry “what if everybody wants one?”. That’s the fear of upscaling paired with the fear of failure. What if the idea works and is a huge improvement and nobody wants it? We would rather err on the side of monetary losses than on the side of productivity losses.

But what did it gain us precisely?

Well, to answer that, I have to present you the three categories of improvements we identified (without limiting the budget to them!):

  • Hardware: A certain piece of technology believed to make work easier or more enjoyable. Examples are computer mouses (everyone has his favorite mouse), keyboards, monitor upgrades (if the default double 24″ aren’t enough), SSDs (before we got rid of spindle disks) or even your favorite computer brand. It gained us fine-tuned workplaces that fit perfectly with the developers using them – no “one size fits all”.
  • Software: A computer program that you’d like to use even if that requires license costs. Examples are IDEs, editors, version control clients or even screenshot utilities. Don’t get me wrong – we had all these things before, but mostly open source products. If you want a commercial twin of a software, you don’t have to argue. It made our software landscape more diverse and introduced some products for the whole company – SmartGit is the example of choice.
  • Wetware: An activity you’d like to undertake – in the professional context of your job. You want to attend that certain conference? Have paid training on a specific topic? This category introduced us to some conferences that are worth revisiting and some we’ve already forgotten again. We got trainings and went to workshops, without any upfront filtering or “strategic planning”.

We’ve gained a lot of agility in pursuing technical excellence, each of us on his/her own course. We gained the insight that “work experience” is something we can directly influence and steer. It makes already self-confident employees even more confident. And it relieves the boss from important, but highly individual micro-management (but that’s just my own personal gain from it all).

Summary

By giving every employee the power to improve his/her direct work experience, we improved our overall experience even more. In all these years, we never used up the budgets completely, but the effect is very noticeable. We acted on impulse, tried things out, reflected and adopted them if worthwhile. And it was very worthwhile indeed. Currently, we are discussing the idea of doubling or even tripling the budget per year and seeing where it leads us.

Recap of the Schneide Dev Brunch 2014-12-14

In mid-December, we held another Schneide Dev Brunch, a regular brunch on a Sunday, only that all attendees want to talk about software development and various other topics. If you bring a software-related topic along with your food, everyone has something to share. The brunch was well-attended and we didn’t even think about using the roof garden (cold and rainy). There were lots of topics and chatter. As always, this recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list incomplete:

International brunch

We tried to establish a video conference with a guest from San Francisco and had tested the technical setup beforehand. But we didn’t succeed, mostly because of a sudden Christmas party on the US side. So we can’t really say whether the brunch character is preserved even if you join us in the middle of the (local) night.

How much inheritance do you use?

One question was how inheritance is used in the initial development of systems. Is it a pre-planned design feature or something that helps to resolve difficult programming situations in an ad-hoc manner? How deep are the inheritance levels?
The main response was that inheritance is seldom used upfront. The initial implementations are mostly free of class hierarchies. Inheritance is often used after the fact to extract abstractions (or generalizations) from the code. The hierarchies mostly grow “upwards” from the concrete level to abstract superclasses.
Another use case of inheritance is the handling of special cases with further specialization through subclasses. The initial class is modified just enough to enable proper insertion of the new code in its own subclass.
A third use case of inheritance, upfront this time, was proposed with regard to the domain model. Behavioural typing is a common motivation for the use of inheritance in the model, as contrasted with the technical use of inheritance to solve non-domain problems. At the domain level, inheritance resembling a “behaves-like” relation can be the most powerful expression of actual connections between types.

Book review “Analysis patterns”

The discussion about inheritance led to questions about domain models and their expression through formal notation. An example about accounts resulted in a short review of the book “Analysis Patterns”, written by Martin Fowler in 1996. The book introduces its own notation for models to be able to express their interrelations without being dragged down to the implementation level. UML isn’t suited as it’s a notation from the technical domain. Overall, the book seems to be mostly overlooked and under-appreciated. It contains a lot of valuable wisdom in the area of domain analysis, an activity that has to be done before any larger project. This “upfront activity” characteristic might have led to it being ignored in most agile processes. The book is a perfect companion to Eric Evans’ “Domain-Driven Design”.

Book review “Agile!”

Another book review of this brunch was a deep review of Bertrand Meyer’s book “Agile! The Good, the Hype and the Ugly”. The book is the written opinion of Mr. Meyer on all current agile processes and very polarizing as such – he does state his points clearly. But it’s also a very well-researched assessment of nearly all aspects of agile software development. You might want to argue with certain conclusions, but you’ll have to admit that Mr. Meyer knows what he’s talking about and got his facts right (even if his temper shines through sometimes). This book is the perfect companion to all the major agile books you’ve read. It serves as a counter-balance to the dogmatic views that sometimes come across. And it serves as an (albeit personal) rating of all agile practices, a gold mine for every project manager out there. The book itself is rather short, with some reiterations (you’ll get the major points even if you skip some pages), and written in an informal tone, so it’s an easy read as long as you’re neutral towards the topic.
When we reviewed the rating of agile practices on a big whiteboard, ranging from ugly to brilliant, it didn’t take long until discussions started. If nothing else, this book will help you review your practices and beliefs.

Embedded Agile on the rise

The next topic was related to agile software development, too. In the large field of embedded software development, the adoption of agile practices has lagged behind substantially. This has many reasons, of which we discussed a few, but the more interesting trend is that this is changing. While there is still a considerable lack of literature for embedded software overall, the number of publications advocating modifications to the agile processes to fit the intricacies of embedded software development is steadily increasing.
A similar trend can be observed in the user experience community (think: user interface designers), termed “lean UX”.

Mobile game presentation

A long-awaited highlight of this brunch was the presentation of a mobile platform game under development by one attendee. It’s a cool-looking jump-and-run game in the tradition of Super Mario, with lots of gimmicks and innovative effects. The best part of the presentation was the gameplay, controlled by the developer from behind the device, upside down and with live commentary. The game is developed in a platform-agnostic manner using several frameworks and suitable coding habits. Right now, it’s in its final phase of development and will be released soon. I don’t want to spoil too much beforehand and invite Martin (the author) to insert a comment below with links leading to more information.

A change in the Dev Brunch mechanics

The last topic on our agenda was a short review of the Dev Brunch series in the last years. In 2013, we introduced the extra “workshop events” that were adapted into the “game nights” of 2014. We want to return to more serious topics in 2015 and revive the workshops. Attendees (and future ones) are invited to suggest which workshops they would like to see. The Dev Brunch itself will be formalized further by introducing a steady pace of bi-monthly dates.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei. And as always, we are open for guests and future regulars. Just drop us a notice and we’ll invite you over next time.

TANGO device server step-by-step tutorial

Now that we have learned about TANGO in general and the architecture of device servers, it is time to get our hands dirty. Here is a step-by-step tutorial for making your software remotely accessible as TANGO devices.

We will develop a small C++ class that provides the current time and date as a string and then build a device server that makes this functionality available over TANGO to remote clients. Our plain C++ project structure looks like this:

$PROJECT_ROOT/
  CMakeLists.txt
  TimeProvider/
    CMakeLists.txt
    TimeProvider.h
    TimeProvider.cpp
    main.cpp

Here are our CMake build files.
The toplevel one:

project(Time)
cmake_minimum_required(VERSION 2.8)

find_package(PkgConfig)

add_subdirectory(TimeProvider)

and the one for the TimeProvider:

project(TimeProvider)

add_library(time TimeProvider.cpp)

add_executable(timeprovider main.cpp)
target_link_libraries(timeprovider time)

And the C++ sources for our standalone application:
TimeProvider.h

#pragma once

#include <string>

class TimeProvider
{
public:
    TimeProvider() {}

    const std::string now();
};

TimeProvider.cpp

#include "TimeProvider.h"

#include <ctime>

const std::string TimeProvider::now()
{
    time_t now = time(0);
    struct tm localTime = *localtime(&now);
    char timeString[100];
    strftime(timeString, sizeof(timeString), "%Y-%m-%d %X", &localTime);
    return timeString;
}

main.cpp

#include <iostream>
#include "TimeProvider.h"

int main()
{
    TimeProvider tp;
    std::cout << tp.now() << std::endl;
    return 0;
}

Next we create a new subdirectory “TimeDevice” and add it to our toplevel CMakeLists.txt along with the TANGO package lookup:

...
find_package(PkgConfig)
pkg_check_modules(TANGO tango>=7.2.6 REQUIRED)

add_subdirectory(TimeProvider)
add_subdirectory(TimeDevice)

In this newly created directory we now run the Pogo application with pogo TimeDevice from our TANGO installation to generate our device server skeleton:

Pogo-Create Device

and add the attribute:

Pogo-AddAttribute

so that the result looks like:

Pogo-TimeDevice

Now we need to add the generated sources to our CMake build like this:

project(TimeDevice)

set(SOURCES
    ${PROJECT_NAME}.cpp
    ${PROJECT_NAME}Class.cpp
    ${PROJECT_NAME}StateMachine.cpp
    ClassFactory.cpp
    main.cpp
)

# this is needed because of wrong generation of include statements
# you may correct them in generated code because they are in protected regions
include_directories(.)

include_directories(
    ${TimeProvider_SOURCE_DIR}
    ${TANGO_INCLUDE_DIRS}
)

add_definitions("-std=c++11")

add_executable(time_device_server ${SOURCES})
target_link_libraries(time_device_server
    time
    ${TANGO_LIBRARIES}
)

As the last step, we implement the code for the CurrentTime attribute like this:

void TimeDevice::read_CurrentTime(Tango::Attribute &attr)
{
	DEBUG_STREAM << "TimeDevice::read_CurrentTime(Tango::Attribute &attr) entering... " << endl;
	/*----- PROTECTED REGION ID(TimeDevice::read_CurrentTime) ENABLED START -----*/

    attr_CurrentTime_read = new Tango::DevString;
    TimeProvider timeProvider;
    *attr_CurrentTime_read = Tango::string_dup(timeProvider.now().c_str());
    //	Set the attribute value
    attr.set_value(attr_CurrentTime_read, 1, 0, true);

	/*----- PROTECTED REGION END -----*/	//	TimeDevice::read_CurrentTime
}

For other correct implementations of string attributes see the documentation on the TANGO website.
Now we should end up with a ready-to-run TANGO device server executable.

Conclusion

If you structure your project with foresight, you can integrate your drivers or services into your TANGO control system with very little effort. In the next post we will show how to add a device server to a TANGO database and use its facilities, like device properties for configuration or Jive for inspection of a device.

Feel free to download the full source code of this tutorial.