Mapping the user’s workflow

One of the most important things to understand before starting any design or development is the user’s workflow(s). A user uses your app to reach a goal. His starting point is the start of the workflow, his goal its end. He takes steps in order to get from the start to his goal.
The order and the type of the steps he takes help us to understand how he currently reaches his goal. Visualizing these steps, often called mapping, is a great way to see the system from the user’s perspective: what he does with the system, and how and when he does it.
This workflow helps us to keep the big picture in mind and to organise planning and execution around the important part of the project: the user’s goals.

What does a workflow look like?

Use the visualization or tool that suits you best. A workflow can be a sketch of boxes and arrows, or an Excel sheet. You can use diagramming software or presentation software. The important point is that you can see the start, the goal and the steps in between, and can annotate each step with important details. For an ordering process, such a map could read: start (order received) → check stock → pick items → print shipping label → goal (parcel shipped).

How can we create the workflow?

A workflow describes a series of actions. When the system supports the user in getting from his start to his goal, our application does its job. The user experience is how efficient and pleasant it is for the user to take each step.
One way to find out about the steps the user takes is to observe him taking them. At first, try to only watch and listen. Take notes. Be open. Record each step as if you were a beginner who knows nothing about the system or about how the software works or should work. Watch out especially for struggles.
Struggles can be seen in:

  • mistakes
  • back steps
  • pauses
  • changing applications
  • repeated steps

The struggles give us a hint where to put our energy. In a second run, keep an eye open for unusual steps. Unusual steps are actions which seem complicated or unnecessary to a beginner’s mind. Start with the notion that every step is needed, but find out the reasons why it is. In subsequent observations, look for variations and ask what information led the user to decide differently this time.
Armed with your recordings, you can now sketch a first version of what you understood about how the user reaches his goal with the current systems.

Eliminate the Water Carrier

Some years ago, an old lady in America with more than a hundred years of life experience was asked which technology had changed her life the most. She didn’t hesitate to answer: running water. The ability to open the tap and have instant access to fresh water was the single most important technology in her life, ahead even of electricity and all the household appliances it enables. Without running water, every household is forced to employ or pay a worker who does nothing but carry water from the source to the sink.

In today’s physical world, with physical goods, there is still a profession that relies on a specific aspect of physical objects: they won’t move from A to B without a carrier. The whole field of logistics and transportation would become obsolete the instant physical goods learned to move themselves. The water carrier lives on, in the form of a cardboard-box or pallet carrier.

The three basic goods of IT are software, data and information. They all share a common trait: they can move without a human carrier. In the old days before the internet, software was distributed on physical objects like floppy disks (think of oddly shaped USB sticks) or, later, CDs. With ubiquitous access to running data (often called the internet and mobile computing), we can draw our software straight from the tap. (And yes, I like the metaphor of the modem as an “information tap”.) As the data throughput of our internet connections grew, it became feasible to move large amounts of data into “the cloud”. The paper boy who brings the newspaper early every morning is replaced by a virtual newspaper that updates every few seconds. The profession of a data carrier never existed outside of very delicate data movements, and even those got replaced by strong cryptography.

Even for information and knowledge, classic carrier-bound goods, the carrier is slowly being replaced by books and pre-recorded online courses. The “wise man” (or woman) still exists, but his range was extended from his immediate geographical surroundings and his arbitrary placement on the timeline to the whole world and all times after his publication. We don’t need to be physically present to attend a course anymore, and we don’t need to synchronize our schedule with the lecturer’s. Knowledge and information are free to roam the planet.

With all this said and known, why are there still jobs and activities that resemble nothing more than the water carrier of our information age? Let me reiterate what a water carrier does: he takes something from position A and moves it to position B. In the ideal case, everything he picked up at A is delivered at B, in full and unchanged. We don’t want the carrier to lose part of the water along the way, and we surely don’t want him to tamper with our water.

As soon as you add something valuable to the payload (you augment it) while you carry it from A to B, you aren’t a water carrier anymore; you can be described in terms of your augmentation. But what if you add nothing? If you deliver the payload in the same condition as you picked it up? Then you are a water carrier, and you don’t have a justification for your work in IT. Or, if you have one that I can’t see right now, I’m eager to hear from you! Please leave a comment.

There is a classic movie that describes life and work in IT perfectly: Office Space. If you haven’t seen it yet, please put it on your watch list. I’m sure you can even draw it from your information tap. In the movie, a company with a generic IT name needs to “consolidate their staff” (as in: lose some slackers). They hire some consultants who interview the whole crew. Each interview is hilarious in itself, but one is funny, tragic and suitable for our topic at hand, the water carrier: the interview with Tom Smykowski.

The problem with Tom Smykowski, the guy trying to defend his job, is that he’s probably better with people than most developers, but he still cannot sell his augmentations to the two consultants. They try to tie him down to a physical good that must be carried, but even Tom has to admit that somebody else covers the physical level. So he tries to sell his “good influence” on the process as the augmentation, but the consultants are too ignorant to recognize it. Needless to say, Tom loses his job.

Every time you just relay information without transforming it (by appending additional information, for example, or condensing it to its essence), you just carry water. Improve your environment by bypassing yourself: if you take yourself out of the communication queue, you save time and effort and nobody is worse off. You should only be part of a communication or work queue if you can augment the thing being passed through it. If you can’t specify your augmentation, perhaps somebody else behind you in the queue can give you hints about it. I would argue that being able to pinpoint one’s contribution to the result is the most important part of every workplace description. If you know your contribution, you can improve it. Otherwise, you may be carrying water without even knowing it.

Eliminate the middlemen in your work queues to improve efficiency, but be sure to keep everybody who contributes to the result. In other words: eliminate the water carriers.

Recap of the Schneide Dev Brunch 2017-04-09

Last Sunday, we held another Schneide Dev Brunch, a regular brunch on the second Sunday of every other (even) month, only that all attendees want to talk about software development and various other topics. This brunch was well attended and opened the sunroof season for us. We even had to take turns in the sunny places because we didn’t want to catch a sunburn in April. As usual, the main theme was that if you bring a software-related topic along with your food, everyone has something to share. Because we were very invested in our topics, we established an agenda for the event. As usual, a lot of topics and chatter were exchanged. This recapitulation tries to highlight the main topics of the brunch, but cannot reiterate everything that was spoken. If you were there, you will probably find this list incomplete:

Online courses

Our first topic was a report on an ongoing online course, a so-called MOOC (Massive Open Online Course), on the topic “Software Design for Non-Designers”. It aims at bringing basic knowledge of UX and UI design to programmers, who frequently lack even the most fundamental principles of design (other than code design, and even that is open for discussion). A great advantage of these MOOCs is that you can minimize your gross time investment and thereby maximize your net yield. You are not bound to a certain place, free from specific times (other than the interaction with other participants) and yet free to engage in a community of peers. The question that remains is how valuable the certificate will be. But the initial expectations are met: the specific course is very practical and requires moderate effort in reasonable periods.

One crucial aspect is the professionalism of the presenting lecturer. In this MOOC, there are talk-oriented presenters, and then there is Scott Klemmer. His lectures stand out because he writes on an invisible wall in front of him. The camera looks through the wall. What seems like nice CGI turns out to be a real glass pane: Mr. Klemmer puts down his notes in mirror writing! Once you realize that, you cannot help but be in awe.

There are a lot of MOOCs nowadays. Other courses that got mentioned cover the topic of machine learning (https://www.coursera.org/learn/machine-learning) and Getting Started with Redux (a famous Javascript library) by Dan Abramov on Egghead: https://egghead.io/courses/getting-started-with-redux. Some courses even take place on Youtube, if you manage to avoid the comment sections, like the talks from Geoffrey Hinton about neural networks and machine learning. Mr. Hinton is part of the Google Brain team.

The critical part of each MOOC is the final examination. Some courses require online or even real-time tests, some only provide certificates for test results achieved within a certain timespan. Usually, the training assignments are peer-reviewed by other course participants.

We will probably see this type of knowledge transfer more often in the future.

Interesting websites

While we talked about a lot of topics at once, some websites and projects got mentioned. I include them here without full coverage of the topics that led to them:

  • jsfiddle: A website that provides a quick sketchboard for web technologies like Javascript, HTML and CSS. It’s like a REPL for the web.
  • regex101: A website that provides a quick sketchboard (and debugger) for regular expressions in different languages. It’s like an online IDE for regular expressions.
  • codefights: A website that puts you in the fighting pit for developers. Prove your programming skills against competition from all around the globe!
  • vimgolf: A website that lets you prove your proficiency in the only text editor that counts: vim. Every keystroke counts and a mouse cannot be found!

Some of these websites might be a lot more fun in a team, except the regex one. Don’t use regular expressions in a team project! It’s a violation of the sane developer’s rules of engagement.

Workplace conflicts

One participant reported on his latest insights into conflict management at work. He applied the concepts of warfare and the four steps of complex tasks to recent disputes and had tremendous results. Even the introductory chapter of the Strategies of War book was enough to install new notions and terms into his planning and acting. He was astounded by the positive effects of his new portfolio.

The new terminology seems to be the essential part. European (or even western) adults don’t learn the terminology of conflict and therefore cannot process disputes on a rational level, only with emotions. You cannot plan or communicate with emotions, so you cannot plan your conflict behaviour. As soon as you have the language to describe the things you perceive, you can analyze them, reflect on them and plan for them. Making a solid plan (other than “go in and win somehow”) is the best preparation for an upcoming conflict. Words shape our world. I’ve seldom seen it clearer than in this report.

Just for starters, there is a difference between a “friend” and an “ally”.

Project documentation

An open question to all participants was our handling of documentation efforts in a project, be it for the user, the customer or subsequent developers. We discussed it with this open scope and came up with some tools that I can repeat here:

  • The arc42 software architecture template can help to shape the documentation effort for future developers, or for current developers if they aren’t included in the architecture effort.
  • The user manual is often written in TeX. Developers are used to the tool through constant exposure during their academic studies.
  • One idea was to generate the requirements for the developers from the user manual, as in “user manual first” or “user manual driven development”.
  • The good old Markdown syntax is usable but has its limits when top-notch aesthetics are required.
  • We see some potential in AsciiDoc, but it needs to improve further to play in the same league as the other tools.
  • Several participants have tried to automate the process of taking screenshots of the software for use in various documents. If you want to try this, be warned! There are many detail problems that need to be solved before your solution will be fully automatic and reliable. A good starting point for thoughts is the “handbook data set” that can reproduce the same screenshot content (like entries in lists, etc.) in a different software version.

On the outskirts of this discussion, the worthwhile talk “Stop Refactoring!” by Nat Pryce was mentioned. He presents an interesting take on the old question of “good enough”.

Epilogue

As usual, the Dev Brunch contained a lot more chatter and talk than listed here. The number of attendees makes for a unique experience every time. We are looking forward to the next Dev Brunch at the Softwareschneiderei in June. We even have some topics on the agenda already (like a report about first-hand experiences with the programming language Rust). And as always, we are open to guests and future regulars. Just drop us a note and we’ll invite you over next time.

Internationalization of a React application with react-intl

For the internationalization of a React application I have recently used the seemingly popular react-intl package by Yahoo.

The basic usage is simple. To resolve a message, use the FormattedMessage tag in the render method of a React component:

import React from "react";
import {FormattedMessage} from "react-intl";

class Greeting extends React.Component {
  render() {
    return (
      <div>
        <FormattedMessage id="greeting.message"
            defaultMessage={"Hello, world!"}/>
      </div>
    );
  }
}
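
For these messages to actually resolve, the component tree must be rendered inside an IntlProvider that supplies the current locale and the message catalog. A minimal sketch of that wiring (the German catalog object and the mount point are assumptions for illustration):

import React from "react";
import ReactDOM from "react-dom";
import {IntlProvider} from "react-intl";

// hypothetical message catalog; a real application would have one
// such object per supported locale
const messagesDe = {
  "greeting.message": "Hallo, Welt!",
  "search.field.placeholder": "Suche"
};

ReactDOM.render(
  <IntlProvider locale="de" messages={messagesDe}>
    <Greeting/>
  </IntlProvider>,
  document.getElementById("app")
);

Messages missing from the catalog fall back to the defaultMessage given at the usage site.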

Injecting the “intl” property

If you have a text in your application that can’t simply be resolved with a FormattedMessage tag, because you need it as a string variable in your code, you have to inject the intl property into your React component and then resolve the message via the formatMessage method on that property.

To inject this property you have to wrap the component class via the injectIntl() function and then re-assign the wrapped class to the original class identifier:

import React from "react";
import {intlShape, injectIntl} from "react-intl";

class SearchField extends React.Component {
  render() {
    const intl = this.props.intl;
    const placeholder = intl.formatMessage({
        id: "search.field.placeholder",
        defaultMessage: "Search"
      });
    return (<input type="search" name="query"
               placeholder={placeholder}/>);
  }
}
SearchField.propTypes = {
    intl: intlShape.isRequired
};
SearchField = injectIntl(SearchField);

Preserving references to components

In one of the components I had captured a reference to a child component with the React ref attribute:

ref={(component) => this.searchInput = component}

After wrapping the parent component class via injectIntl() as described above in order to internationalize it, the internal reference stopped working. It took me a while to figure out how to fix it, since it’s not directly mentioned in the documentation. You have to pass the “withRef: true” option to the injectIntl() call:

SearchForm = injectIntl(SearchForm, {withRef: true});

Here’s a complete example:

import React from "react";
import {intlShape, injectIntl} from "react-intl";

class SearchForm extends React.Component {
  render() {
    const intl = this.props.intl;
    const placeholder = intl.formatMessage({
        id: "search.field.placeholder",
        defaultMessage: "Search"
      });
    return (
      <form>
        <input type="search" name="query"
               placeholder={placeholder}
               ref={(c) => this.searchInput = c}/>
      </form>
    );
  }
}
SearchForm.propTypes = {
  intl: intlShape.isRequired
};
SearchForm = injectIntl(SearchForm,
                        {withRef: true});
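
A related pitfall: a ref taken on the wrapped SearchForm from the outside now points at the injectIntl wrapper, not at your own component. With the withRef option enabled, the wrapper exposes the inner instance via getWrappedInstance(). A hypothetical fragment of a parent component that rendered <SearchForm ref={(form) => this.searchForm = form}/>:

// hypothetical: reach through the injectIntl wrapper to the inner
// SearchForm instance and its captured input reference
const inner = this.searchForm.getWrappedInstance();
inner.searchInput.focus();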

Conclusion

Although react-intl appears to be one of the more mature internationalization packages for React, the overall experience isn’t great. Unfortunately, you have to litter your components with dependency injection boilerplate, and the documentation is lacking.

Simple build triggers with secured Jenkins CI

The Jenkins continuous integration (CI) server provides several ways to trigger builds remotely, for example from a git hook. Things are easy on an open Jenkins instance with no security enabled. It gets a little more complicated if you want to protect your Jenkins build environment.

Git plugin notifyCommit URL

For Git there is the notifyCommit URL, which you can use in combination with the “Poll SCM” setting:

$JENKINS_URL/git/notifyCommit?url=http://$REPO/project/myproject.git

Note two things regarding this approach:

  1. The URL of the source code repository given as a parameter must match the repository URL of the Jenkins job.
  2. You have to check the “Poll SCM” setting, but you do not need to provide a schedule.

Another drawback is its restriction to Git-based jobs.
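
For Git-based jobs, though, it integrates nicely with server-side hooks. A minimal post-receive hook could look like this sketch, reusing the placeholders from above:

#!/bin/sh
# post-receive hook on the Git server: notify Jenkins that new
# commits arrived, so matching jobs with SCM polling enabled get
# scheduled
curl -s "$JENKINS_URL/git/notifyCommit?url=http://$REPO/project/myproject.git"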

Jenkins remote access API

Then there is the more general and more modern Jenkins remote access API, where you may trigger builds regardless of the source code management system you use:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build?token=$TOKEN

It even allows triggering parameterized builds with HTTP POST requests like:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build \
--user USER:TOKEN \
--data-urlencode json='{"parameter": [{"name":"id", "value":"123"}, {"name":"verbosity", "value":"high"}]}'

Both approaches work great as long as your Jenkins instance is not secured and everyone can do everything. Such a setting may be fine in your company’s intranet, but becomes a no-go in more heterogeneous environments or with a public Jenkins server.

So the way to go is securing Jenkins with user accounts and restricted access. If you do not want to supply username and password as part of the URL for HTTP basic auth, nor create users just for your repository triggers, there is another easy option:

Using the Build Authorization Token Root Plugin!

Build authorization token root plugin

The plugin introduces a configuration setting in the “Build Triggers” section to define an authentication token.

It also exposes a URL you can access without being logged in, to trigger builds by just providing the token specified in the job:

$JENKINS_URL/buildByToken/build?job=$JOB_NAME&token=$TOKEN

Or for parameterized builds something like:

$JENKINS_URL/buildByToken/buildWithParameters?job=$JOB_NAME&token=$TOKEN&Type=Release
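
Note that job names or parameter values containing spaces or other special characters must be URL-encoded. curl can take care of that if you let it assemble the query string itself, as in this sketch with the placeholders from above:

curl -G "$JENKINS_URL/buildByToken/buildWithParameters" \
  --data-urlencode "job=$JOB_NAME" \
  --data-urlencode "token=$TOKEN" \
  --data-urlencode "Type=Release"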

Conclusion

The token root plugin does not need HTTP POST requests, but also works fine using HTTP GET. It requires neither a user account nor the awkward Poll SCM setting. In my opinion it is the simplest and most pragmatic choice for triggering builds on a secured Jenkins instance.

Look at the automated tests to diagnose the project’s ailments

A cornerstone of modern software development is developer testing, meaning that developers are the primary authors of automated test code. In theory, that is a good thing, and it might look as if the quality assurance department will be out of work soon. In practice, we as a profession have tried for nearly twenty years to install a culture of developer testing in our work and still end up with software projects that feature no automated tests at all (side note: JUnit 1.0 was released in February of 1998).

What we know about automated tests

One piece of common understanding about developer testing is the test pyramid. Let’s quickly reiterate what we know about it. There are different kinds of automated tests, and the test pyramid differentiates three of them:

  • Acceptance tests or UI tests are the heaviest type of automated test. They operate on the software from the outside, with the means of a real user, and try to assert that real use cases are accomplishable.
  • Integration tests often use several parts of the system in a test scenario that asserts the correct collaboration of the parts. Integration tests may take some time to come to a conclusion and utilize real hardware like the network or disks.
  • Unit tests tend to be small and quick and focus on a particular aspect of a “unit” like a class or entity aggregate. Their reach into the system should be short and might be forcefully restricted by employing mocks.

These three types, the A, I and U of automated tests, should come in different numbers. A good rule of thumb is that for every acceptance test, there might be up to one thousand unit tests. If you draw the quantities as areas, they appear in the form of a pyramid. A small top of acceptance tests rests on a broader seating of integration tests, which in turn relies on a groundwork of many unit tests. A healthy test pyramid looks like this:
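
        /\
       /  \        few acceptance tests (A)
      /----\
     /      \      some integration tests (I)
    /--------\
   /          \    many unit tests (U)
  /------------\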

Take this picture as an orientation, not as an absolute scale. But be sure to count your different test types from time to time.

Outlining the tests

This is actually one of the first things I do when I get introduced to a new and unknown code base, which happens quite often when I do consulting work for existing development teams. Have a look at the automated tests, determine their type and count their numbers. If the result resembles anything close to the test pyramid, you’ve got a chance. If the resulting shape looks different, you might find this blog entry useful:

The Tower

If you have a hard time finding any tests (because there are none) or you find only some half-assed attempts at a meaningful automated test suite, you are looking at a tower project. The tower is rather small in diameter; in the case of absent tests it is nothing more than a thin vertical line (the “stick”). If, on the other hand, you find a solid number of tests of every type, you’ve found a “block” project. Block projects usually don’t have a problem, just a history of test effort migration, either from unit to acceptance tests or, more commonly, in the other direction. If you find a block, you are fine.

The tower, though, is a case of neglect. The project team might have started serious efforts to automate their tests, but got demotivated by intrinsic or extrinsic influences and abandoned the tests soon after their creation. Nobody has looked after them since, and the only reason they still pass green is that they didn’t really test anything to begin with, or only cover an area of the system that is as finished as it is boring. Topics like user management or utility classes are usually the first and only things that get tests in a tower scenario.

Don’t get me wrong: the tower indicates a lack of tests, but not the absence of willingness to write automated tests, unless the tower is really a stick. A team willing to invest in automated tests may only lack knowledge and coaching on the topic. Be sure to lead them bottom-up (unit tests first), though.

The Egg

If you’ve categorized and counted the tests and couldn’t find many acceptance or unit tests, you’ve found an egg. The egg consists mostly of integration tests that may lean into unit testing territory by asserting the smallest bits of functionality here and there (often embedded in an overarching test storyline) or dip their toes into GUI-based testing by asserting presentation-specific properties of widget objects. While they provide ample test coverage for the system, they also tie application logic and presentation details together and don’t help to separate domain code from the use cases.

The project team is probably proud of its test coverage and doesn’t see any value in differentiating the automated test types, because “every test improves the situation”. This blindness to test types is the core problem. It may be cured with training and coaching (I’ve found the ATRIP rules to be particularly effective in distinguishing integration and unit tests), but the symptoms, especially the lack of separation of concerns, have to be mitigated soon, too.

One way to start is to break the tests down into their integration and their unit test parts. You can work from assertion to assertion and ask: is this assertion necessary to ensure the current use case? If not, extract a new unit test focussed on only this one assertion, as sketched below.
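
As an illustration, here is what such an extracted unit test might look like in JUnit; all names are hypothetical, and the production logic is inlined to keep the sketch self-contained:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceNumberFormatTest {

  // hypothetical production logic under test; before the extraction,
  // the assertion below was buried in an integration test that booted
  // half the system to walk through a whole storyline
  static String formatInvoiceNumber(int number) {
    return String.format("%08d", number);
  }

  @Test
  public void padsInvoiceNumbersToEightDigits() {
    assertEquals("00001234", formatInvoiceNumber(1234));
  }
}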

As soon as you add a pedestal of unit tests to your egg, you are well on your way to a healthy test pyramid.

The Ice Cream Cone

This is the most fearsome automated test outline in existence, even more dramatic than the stick. Usually, the project team is really enthusiastic about writing tests, or at least follows orders to do so, but cannot test parts of the application in isolation. A really tragic case was a complex system that was so entangled with its database, through countless stored procedures that contributed to the application logic, that it was hopeless to think about tests without the database. And because every automated test had to start the whole system, including the database, there was really no need to differentiate between application logic and presentation logic. It all became a Gordian knot of dependencies that enforced the habit of writing elaborate automated GUI-based tests to exercise the smallest logic bits deep inside the core. It felt like eating single rice grains with overly long, flimsy wooden chopsticks that break often.

The ice cream cone is problematic because the project team needs to realize that their effort was misled and that the tests are all telling the bitter truth: the system’s architecture isn’t fit for proper automated tests. It’s not the tests, it’s you (or your architecture)! Nobody wants to hear that, and even more so, nobody wants to untangle the mess (without the help of a proper safety net consisting of automated tests, that is). Pinning tests are probably helpful in this scenario.

But you need to turn the test pyramid the right way up, or the project team will suffocate under the overly costly test tax while technical debt increases.

Epilogue

Please keep in mind that it’s not a problem in itself if your project doesn’t have a normal test pyramid. It’s great that you have automated tests at all! But your current test type distribution might not be as effective as possible, might be more expensive than necessary and might not be the right automated test setup for your development goals.

What are your stories with automated test setups? Care to share them with us in the comments?

CSS 3D transforms

If you are like me, when thinking about 3D in the browser you immediately think of WebGL. But what most developers forget is that we already use a simple 3D mechanism in our web sites and applications: the z-index.
While the z-index only stacks flat containers above each other, almost all modern browsers can use CSS to create simple 3D models.
Let’s start with a cuboid.

<div class="container">
  <div id="cuboid">
    <div class="front">1</div>
    <div class="back">2</div>
    <div class="right">3</div>
    <div class="left">4</div>
    <div class="top">5</div>
    <div class="bottom">6</div>
  </div>
</div>

We have six sides, and for easier recognition each one has a number on it. We make them bigger and give each one a different background to distinguish them further.

.container {
  width: 300px;
  height: 300px;
  position: relative;
  margin: 0 auto 40px;
  padding-top: 100px;
}

#cuboid {
  width: 100%;
  height: 100%;
  position: absolute;
}

#cuboid div {
  display: block;
  position: absolute;
  /* the sides need explicit dimensions; the original sizes were not
     shown, so assume 196px, which plus the 2px borders gives 200px
     and matches the line-height below */
  width: 196px;
  height: 196px;
  border: 2px solid black;
  line-height: 196px;
  font-size: 120px;
  font-weight: bold;
  color: white;
  text-align: center;
  /* per-side background colors (.front, .back, ...) tell the faces
     apart; they are omitted here */
}

Until now we didn’t use any 3D transformations. To arrange the sides, we rotate and translate each one into its place.

#cuboid .front  { transform: translateZ(100px); }
#cuboid .back   { transform: rotateX(-180deg) translateZ(0px); }
#cuboid .right  { transform: rotateY(90deg) translateZ(150px) translateX(-50px); }
#cuboid .left   { transform: rotateY(-90deg) translateZ(50px) translateX(50px); }
#cuboid .top    { transform: rotateX(90deg) translateZ(50px) translateY(50px); }
#cuboid .bottom { transform: rotateX(-90deg) translateZ(200px) translateY(-50px); }

This brings the back side to the top, but nothing looks 3D yet. Next we tell the browser to preserve the 3D placement of the children and move the scene back a bit.

#cuboid {
  transform-style: preserve-3d;
  transform: translateZ( -100px );
}

Still we are trapped in flatland: we are looking straight at the front face. So we rotate the scene.

#cuboid {
  transform-style: preserve-3d;
  transform: translateZ( -100px ) rotateX(20deg) rotateY(20deg);
}

Now we have depth, but something is still not quite right. If we remember one thing from our OpenGL days, it is that we need another ingredient to make it look 3D: a perspective.

.container {
  perspective: 1200px;
}

Last but not least, we add an animation to see the cuboid spinning.

#cuboid {
  transform-style: preserve-3d;
  transform: translateZ(-100px) rotateX(20deg) rotateY(20deg);
  animation: spinCuboid 5s infinite ease-out;
}
@keyframes spinCuboid {
  0% { transform: translateZ(-100px) rotateX(0deg) rotateY(0deg); }
  100% { transform: translateZ(-100px) rotateX(360deg) rotateY(360deg); }
}