Evolution of programming languages

Programming languages evolve over time. They get new language features and their standard libraries are extended. Sounds great, doesn’t it? We all know that not going forward means going backward.

But looking at several programming ecosystems we are using, I observe very different approaches.

Featuritis

Java and especially C# added more and more “me too” features release after release, turning relatively lean languages into quite complex multi-paradigm languages. They started object-oriented and added generics, functional programming features, declarative programming (LINQ in C#) and different UI toolkits (AWT, Swing, JavaFX in Java; WinForms, WPF in C#) to the mix.

Often the new language features add their own set of quirks because they are an afterthought and not designed carefully enough.

For me, this lack of focus makes these languages less attractive than more recent approaches like Kotlin or Go.

In addition, deprecation often has no effect (see Java): 20-year-old code and style still work, which increases the burden further. While this is great from a business perspective, because your effort to maintain compatibility is low, it does not help your code base. Different styles and old ways of doing things tend to remain forever.

Revolution

In Grails (I know, it is not a programming language, but it has its own ecosystem) we see more of a revolution. The core concept of a full-stack framework stays the same, but significant components are replaced quite rapidly. We have seen many changes in technology: Jetty to Tomcat, Ivy to Maven, Selenium-RC to Geb, Gant to Gradle, and the list goes on.

This causes many, sometimes subtle, changes in behaviour that are a real pain when maintaining larger applications over many years.

Framework updates are often a time-consuming hassle, but if you can afford them, your code base benefits and will eventually become cleaner.

Clean(er) evolution

I really like the evolution in C++. It was relatively slow in the past – many will argue too slow – but it has picked up pace in the last few years. The goals are clearly stated and only features that support them make it in:

  • Make C++ a better language for systems programming and library building
  • Make C++ easier to teach and learn
  • Zero-cost abstractions
  • Better tool support

If you cannot make it zero-cost, the chances of getting your feature in are slim…

C at its core did not change much at all and remained focused on its strengths. The upgrades mostly contained convenience features like line comments, additional data type definitions and multithreading support.

Honest evolution – breaking backwards compatibility

In Python we have something I would call “honest evolution”. Python 3 introduced some breaking changes with the goal of a cleaner and more consistent language. Python 2 and 3 are incompatible, so the distinction in the major version number is fair. I like this approach of moving forward as it clearly communicates what is happening and gets rid of the sins of the past.

The downside is that many systems still come with both a Python 2 and a Python 3 interpreter and accompanying libraries. Fortunately, there are some options and tools to mitigate some of the incompatibilities in your code, like the __future__ module and python-six.

At some point in the future (expected in 2020) there will only be support for Python 3. So start making the switch.


Getting Shibboleth SSO attributes securely to your application

Accounts and user data are a matter of trust. Single sign-on (SSO) can improve the user experience (UX), convenience and security, especially if you are offering several web applications often used by the same user. If you do not want to force your users onto big vendors offering SSO, like Google or Facebook, or do not trust them, you can implement SSO for your offerings with open-source software (OSS) like Shibboleth. With Shibboleth it may even be feasible to join an existing federation like SWITCH, DFN or InCommon, thus enabling logins for thousands of users without creating new accounts and login data.

If you are implementing your SSO with Shibboleth you usually have to enable your web applications to deal with Shibboleth attributes. Shibboleth attributes are information about the authenticated user provided by the SSO infrastructure, e.g. the Apache web server and mod_shib in conjunction with associated identity providers (IDP). In general there are two options for accessing these attributes:

  1. HTTP request headers
  2. Request environment variables (not to be confused with system environment variables!)

Using request headers should be avoided as it is less secure and prone to spoofing. Access to the request environment, on the other hand, depends on the framework your web application is using.

Shibboleth attributes in Java Servlet-based apps

In Java Servlet-based applications like Grails or Java EE, access to the Shibboleth attributes is really easy as they are provided as request attributes. So simply calling request.getAttribute("AJP_eppn") will provide you with the value of the eppn (“eduPersonPrincipalName”) attribute set by Shibboleth, if a user is authenticated and the attribute is made available. There are two caveats though (a short sketch follows below):

  1. Request attributes are prefixed by default with AJP_ if you are using mod_proxy_ajp to connect Apache with your servlet container.
  2. Shibboleth attributes are not contained in request.getAttributeNames()! You have to access them directly, knowing their name.
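
Putting both caveats together, a minimal sketch of a Grails controller action reading the attribute could look like this (the controller and its action are hypothetical and only for illustration):

// hypothetical controller; assumes mod_proxy_ajp and an authenticated user
class ProfileController {
    def index() {
        // direct access by name – the attribute does not appear in getAttributeNames()
        String eppn = request.getAttribute('AJP_eppn')
        render eppn ?: 'no authenticated user'
    }
}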

Shibboleth attributes in WSGI-based apps

If you are using a WSGI-compatible Python web framework for your application you can get the Shibboleth attributes from the wsgi.environ dictionary that is part of the request. In CherryPy, for example, you can use the following code to obtain the eppn:

eppn = cherrypy.request.wsgi_environ['eppn']

I did not find the name of the WSGI environment dictionary clearly documented in my efforts to make Shibboleth work with my CherryPy application, but after that everything was bliss.

Conclusion

Accessing Shibboleth attributes in a safe manner is straightforward in web environments like Java servlets and Python WSGI applications. Nevertheless, you have to know the above aspects regarding naming and visibility or you will be puzzled by the behaviour of the Shibboleth service provider.

Explicit types – and when to use them

Many modern programming languages offer a way to declare variables without an explicit type if the type can be inferred, either dynamically or statically. Many also allow for variables to be explicitly defined with a type. For example, Scala and C# let you omit the explicit variable type via the var keyword, but both also allow defining variables with explicit types. I’m coming from the C++ world, where auto has been available for this purpose since the relatively recent C++11. However, people are still debating whether you should actually use it.

Pros

Herb Sutter popularised the almost-always-auto style. He advocates that using more type inference is good because it is roughly equivalent to programming against interfaces instead of implementations. He says that “Overcommitting to explicit types makes code less generic and more interdependent, and therefore more brittle and limited.” However, he also mentions that you might sometimes want to use explicit types.

Now what exactly is overcommitting here? When is the right time to use explicit types?

Cons

Opponents of implicit typing, many of them experienced veterans, often state that they want the actual type visible in the source code. They don’t want to rely on type inference being right. They want the code to explicitly state what’s going on.

At first, I figured that was just conservatism in the face of a new “scary” feature that they did not fully understand. After all, IDEs can usually infer the type on-the-fly and you can hover on a variable to let it show you the type.

For C++, the function signature is a natural boundary where you often insert explicit types, unless you want to commit to the compile-time and physical-dependency cost that comes with templates. Other languages, such as Groovy, do not have this trade-off and let you skip explicit types almost everywhere. After working with Groovy/Grails for a while, where the dominant style seems to be to omit types wherever possible, it dawned on me that the opponents of implicit typing have a point. Not only does the IDE often fail to show me the inferred type (even though it still works way more often than I would have anticipated), but I also found it harder to follow and modify code that did not mention explicit types. Seemingly contrary to Herb Sutter’s argument, that code felt more brittle than I would have liked.

Middle-ground

As usual, the truth seems to be somewhere in the middle. I propose the following rule for when to use explicit types:

  • Explicit typing for domain-types
  • Implicit typing everywhere else

Code using types from the problem domain should be as specific as possible. There’s no need for it to be generic – it’s actually counter-productive, as otherwise the code model would be inconsistent with the model of the problem domain. This is also the most important aspect to grok when reading code, so it should be explicit. The type is as important as the action on it.

On the other hand, for pure-fabrication types that do not represent a concept in the domain, the action is important, while the type is merely a means to achieve this action. Typically, most of the elements from a language’s standard library fall into this category: all your containers, iterators and callables. Their types are merely implementation details: an associative container could be an array, a hash map or a tree structure. Exchanging it rarely changes the meaning of the code in the problem domain – it just changes its performance characteristics.

Containers will occasionally contain domain types in their type. What do you do about those? I think they belong in the “everywhere else” category, but you should take extra care to name the contained type when working with it – for example when declaring the variable of the for-each loop on it, or when inserting something into it. This way, the “collection of domain type” aspect becomes clear, but the specific container implementation stays implicit – like it should.
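
Here is a minimal Groovy-flavoured sketch of the rule (Invoice is a hypothetical domain type made up for illustration):

// hypothetical domain type – named explicitly wherever it matters
class Invoice {
    String customer
    BigDecimal total
}

// pure fabrication: the grouping container is an implementation detail, so its type stays implicit
def invoicesByCustomer = [new Invoice(customer: 'ACME', total: 100.0G)].groupBy { it.customer }

// the contained domain type is spelled out in the for-each loop
for (Invoice invoice in invoicesByCustomer['ACME']) {
    println "${invoice.customer}: ${invoice.total}"
}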

What do you think? Is this a useful proposition for your code?

Updating from Grails 2.3 to something newer

We have been developing, running and maintaining a moderately sized Grails web application with more than 120 domain classes since 2008, or Grails 1.0.3. The web application is still in production, running on Grails 2.3.8. Just recently we wanted Java 8 support and the usual bugfixes and improvements you get by updating the framework. Since time and budget are very limited (as always…) we decided not to move to 3.x but only to the latest 2.x version. It seemed the safer and easier option and opened up the way to 3.x, where many things changed completely.

Trying to go to 2.5.4

The upgrade procedure is generally well documented in Grails. That allowed us to upgrade from 1.0 to 1.3, from 1.3 to 2.2 and finally from 2.2 to 2.3. We skipped 2.0 because of too many problems we faced during the upgrade. As usual, the major changes and tasks are mentioned in the upgrade guide. It started smoothly, but we finally had to abort the upgrade process because we were bitten by https://github.com/grails/grails-data-mapping/issues/581 and did not have the time to dig fully into it and resolve the issue.

Trying to go to 2.4.5

Many of the changes and improvements, most notably a Groovy version supporting the Java 8 runtime, are already available in Grails 2.4.5. So we gave it a shot, hoping for fewer problems than with 2.5.4. We actually got our application running in less than an hour, but quite a few of our unit, integration and functional tests failed. After finding some advice in http://stackoverflow.com/questions/16532631/grails-unit-test-mock-domain-with-assigned-id we changed our unit tests to use the @Mock() mixin instead of mockDomain(), which works in 2.3 but is broken in 2.4.

When trying to fix our integration tests we saw that some of our HQL queries failed. Something was wrong when navigating/querying multiple association levels, so we finally gave up on this one, too.

Conclusion

Even though we have managed to keep our Grails application alive for many years and across several framework versions, each upgrade carries a significant risk of breakage and requires considerable effort. This time we are stuck again and will have to invest more time to bring the application up-to-date.

I would advise anyone already using or deciding on Grails as the web framework of choice to start with the latest and greatest release and to budget several person-days for upgrades of medium-sized projects. The devil is in the details…

How to speed up your ORM queries of n x m associations

What causes a speedup like this? (All numbers are in ms.)

Disclaimer: the absolute benchmark numbers are for illustration purposes; the relationships and speedups between the different approaches are what matter. (Just for the curious: I measured 500 entries per table in a PostgreSQL database, with both Rails 4.1.0 and Grails 2.3.8 running on Java 7 on a recent MBP running OSX 10.8.)

Say you have the model classes Book and (Book)Writer, which are connected via an n x m table named Authorship:
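
A minimal Grails-style sketch of these classes could look like this (property names follow the queries used below):

class Book {
    String title
}

class BookWriter {
    String firstname
    String lastname
}

class Authorship {
    Book book
    BookWriter writer
}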

A typical query would be to list all books with their authors, like:

Fowler, Martin: Refactoring

A straightforward way is to query all authorships:

In Rails:

# 1500 ms
Authorship.all.map {|authorship| "#{authorship.writer.lastname}, #{authorship.writer.firstname}: #{authorship.book.title}"}

In Grails:

// 585 ms
Authorship.list().collect {"${it.writer.lastname}, ${it.writer.firstname}: ${it.book.title}"}

This is unsurprisingly not very fast: this approach causes the famous n+1 select problem. The first option we have is to use eager fetching. In Rails we can use ‘includes’ or ‘joins’. ‘Includes’ loads the associated objects via additional queries, one for authorship, one for writer and one for book.

# 2300 ms
Authorship.includes(:book, :writer).all

‘Joins’ uses SQL inner joins to load the associated objects.

# 1000 ms
Authorship.joins(:book, :writer).all
# returns only the first element
Authorship.joins(:book, :writer).includes(:book, :writer).all

The additional queries of ‘includes’ slow down the whole request in our case, but with ‘joins’ we can more than halve our time. The combination of both directives causes Rails to return just one record and is therefore ruled out.

In Grails using ‘belongsTo’ on the associations speeds up the request considerably.

class Authorship {
    static belongsTo = [book:Book, writer:BookWriter]

    Book book
    BookWriter writer
}

// 430 ms
Authorship.list()

Also, we can implement eager loading by specifying ‘lazy: false’ in our mapping, which yields a mild performance increase.

class Authorship {
    static mapping = {
        book lazy: false
        writer lazy: false
    }
}

// 416 ms
Authorship.list()

Can we do better? The normal approach is to use ‘has_many’ associations and query from one side of the n x m association. Since we use more properties from the writer we start from here.

class Writer < ActiveRecord::Base
  has_many :authorships
  has_many :books, through: :authorships
end

Testing the different combinations of ‘includes’ and ‘joins’ yields interesting results.

# 1525 ms
Writer.all.joins(:books)

# 2300 ms
Writer.all.includes(:books)

# 196 ms
Writer.all.joins(:books).includes(:books)

With both options our request is now faster than ever (196 ms), a speedup of 7.

What about Grails? Adding ‘hasMany’ and the authorship table as a join table is easy.

class BookWriter {
    static mapping = {
        books joinTable:[name: 'authorships', key: 'writer_id']
    }

    static hasMany = [books:Book]
}
// 313 ms, adding lazy: false results in 295 ms
BookWriter.list().collect {"${it.lastname}, ${it.firstname}: ${it.books*.title}"}

The result is rather disappointing: only a mild speedup (2x), and even slower than Rails.

Is this the most we can get out of our queries?
Looking at the benchmark results and the detailed numbers Rails shows us, there are hints that the query per se is not the problem anymore, but the deserialization is. What if we try to limit the created object graph and use a model class backed by a database view? We can create a view containing all the attributes we need, even with associations to the books and writers.

create view author_views as (
  SELECT "authorships"."writer_id" AS writer_id,
         "authorships"."book_id"   AS book_id,
         "books"."title"           AS book_title,
         "writers"."firstname"     AS writer_firstname,
         "writers"."lastname"      AS writer_lastname
  FROM "authorships"
  INNER JOIN "books" ON "books"."id" = "authorships"."book_id"
  INNER JOIN "writers" ON "writers"."id" = "authorships"."writer_id"
)
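
On the Grails side, a read-only domain class can be mapped onto this view – a minimal sketch, assuming default GORM naming and using the two foreign keys as a composite id:

class AuthorView implements Serializable {
    Long writerId
    Long bookId
    String bookTitle
    String writerFirstname
    String writerLastname

    static mapping = {
        table 'author_views'
        id composite: ['writerId', 'bookId']
        version false
    }
}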

Let’s take a look at our request time:

# 15 ms
AuthorView.select(:writer_lastname, :writer_firstname, :book_title).all.map { |author| "#{author.writer_lastname}, #{author.writer_firstname}: #{author.book_title}" }

// 13 ms
AuthorView.list().collect {"${it.writerLastname}, ${it.writerFirstname}: ${it.bookTitle}"}

13 ms and 15 ms. This surprised me a lot. Seeing these numbers in comparison shows how much the object mapping impacts the performance of our request.

The lesson here is that sometimes the performance can be improved outside of our code and that mapping database results to objects is a costly operation.

Grails Update from 2.2 to 2.3

An update in the minor version does not seem like a big step, but this one brought a lot of changes, so here is a step-by-step guide which highlights some pitfalls.

First, update the version of Grails in your application.properties:

app.grails.version=2.3.8

The tomcat and hibernate plugins are now versioned after their respective frameworks, not after Grails:

plugins.tomcat=7.0.52.1
plugins.hibernate=3.6.10.13

Grails 2.3 has a new databinding mechanism. To use the old one, especially if you use custom property editors, you have to add this option to your Config.groovy:

grails.databinding.useSpringBinder = true

But even with the old databinding something changed: the field id is not bound in command objects anymore, so you need to bind it explicitly:

def action = { MyCommand command ->
  command.id = params['id']?.toLong()
}

Besides the databinding mechanism, the dependency resolution also changed. You can still use the old Ivy mechanism by including this in BuildConfig.groovy:

grails.project.dependency.resolver="ivy"

Nonetheless, all dependencies must be declared in application.properties or BuildConfig.groovy. If you have a lib directory with local jars in your application, you need to add it to your repositories as a local directory:

grails.project.dependency.resolution = {
    repositories {
        flatDir name:'myRepo', dirs:'lib'
    }
}

When you have all dependencies declared your application should start.

Tests

Grails 2.3 features a new test mode: forking. It causes some problems and is better deactivated in BuildConfig.groovy:

grails.project.fork = [
        test: false,
]

With the new version only JUnit4-style tests are supported. This means that you no longer extend GroovyTestCase or GrailsUnitTestCase. All rules must be public and non-static. All test methods need to be annotated with @Test. Set-up methods are annotated with @Before and must be public; tear-down methods must be annotated with @After and be public as well. A bug in Grails prevents you from naming the set-up and tear-down methods freely: the names must be setUp and tearDown. All test methods must be public void; the old def declaration is not supported anymore. Since you no longer extend GroovyTestCase, you lose its assertion methods and need to add a static import:

import static groovy.util.GroovyTestCase.*
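
Putting these rules together, a minimal test skeleton looks like this (class and method names are placeholders):

import static groovy.util.GroovyTestCase.*
import org.junit.After
import org.junit.Before
import org.junit.Test

class MyServiceTests {
    @Before
    public void setUp() {
        // must be named setUp and be public, due to the naming bug mentioned above
    }

    @After
    public void tearDown() {
        // must be named tearDown and be public
    }

    @Test
    public void testSomething() {
        // public void instead of def
        assertEquals(4, 2 + 2)
    }
}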

Unit Tests

All tests should be annotated with @TestMixin([GrailsUnitTestMixin]). If you need to mock domain classes, you change mockDomain to @Mock:

// Grails 2.3 style with mockDomain, broken in 2.4:
class MyTest {
    public void testThis() {
        mockDomain(MyDomainClass, [new MyDomainClass()])
    }
}

// Grails 2.4 style with the @Mock mixin:
@Mock([MyDomainClass])
class MyTest {
    public void testThis() {
        new MyDomainClass().save()
    }
}

Configuration is now already mocked and your properties can be added easily:

config.my.property.value.is='123'

Integration tests

As mentioned before, setUp method naming is affected by a bug: you have to name the method setUp, otherwise the changes to your database aren’t rolled back.

Acceptance Tests with Selenium

You need to patch the Remote Control Plugin because of a ClassNotFoundException. Add an additional constructor to RemoteControl.groovy to support setting the classloader:

RemoteControl(ClassLoader loader) {
  super(new HttpTransport(getFunctionalTestReceiverAddress(), loader), loader)
} 

In your tests you call this new constructor with the classloader of your class:

new RemoteControl(getClass().classLoader)

Using a Groovy Mixin in your application does not work in your tests and needs to be replaced with grails.util.Mixin. But this only works one way: the target class can access the mixin, but the mixin cannot access the target class. For this to work you need to let your mixin implement MixinTargetAware and declare a field named target:

class MyMixin implements MixinTargetAware {
    def target
}

Subtle changes and pitfalls

If you have a class name with a Controller suffix and a corresponding test, but the class isn’t a Grails controller, Grails nevertheless tries to mock the class in your unit tests. If you rename the test to something without “Controller”, everything works fine.

In our pre-2.3 project we had some select tags in our views and used fieldValue for the selection:

<g:select value="${fieldValue(bean: object, field: 'value')}">

But now the select tag uses equals, which fails if the values aren’t Strings. Just use the unescaped value:

<g:select value="${object?.value}">

I hope this guide and these hints help others to avoid headaches when upgrading their Grails application.

Automatic deployment of (Grails) applications

What was your most embarrassing moment in your career as a software engineer? Mine was when I deployed an application to production and it didn’t even start.

Early in my career, deploying an application usually involved a fair number of manual steps: logging in to a remote server via SSH and executing various commands. After a while, repetitive steps were bundled into shell scripts. But mistakes still happened. That’s normal. The solution is to automate as much as we can. So here are the steps to automatic deployment happiness.

Build

One of the oldest requirements for software development, mentioned in The Joel Test, is that you can build your app in one step. With Grails that’s easy: just create a build file (we use Apache Ant here, but others will do) in which you call grails clean, grails test and then grails war:

<project name="my_project" default="test" basedir=".">
  <property name="grails" value="${grails.home}/bin/grails"/>
  
  <target name="-call-grails">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="${grails.task}"/><arg value="${grails.file.path}"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>
  
  <target name="-call-grails-without-filepath">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="${grails.task}"/><env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>

  <target name="clean" description="--> Cleans a Grails application">
    <antcall target="-call-grails-without-filepath">
      <param name="grails.task" value="clean"/>
    </antcall>
  </target>
  
  <target name="test" description="--> Run a Grails applications tests">
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="test-app"/>
      <arg value="-echoOut"/>
      <arg value="-echoErr"/>
      <arg value="unit:"/>
      <arg value="integration:"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>

  <target name="war" description="--> Creates a WAR of a Grails application">
    <property name="build.for" value="production"/>
    <property name="build.war" value="${artifact.name}"/>
    <chmod file="${grails}" perm="u+x"/>
    <exec dir="${basedir}" executable="${grails}" failonerror="true">
      <arg value="-Dgrails.env=${build.for}"/><arg value="war"/><arg value="${target.directory}/${build.war}"/>
      <env key="GRAILS_HOME" value="${grails.home}"/>
    </exec>
  </target>
  
</project>

Here we call Grails via its shell script, but you can also use the Grails Ant task and generate a starting build file with

grails integrate-with --ant

and modify it accordingly.

Note that we specify the environment for building the war because we want to build two wars: one for production and one for our functional tests. The environment for the functional tests mimics the deployment environment as closely as possible, but in practice there are small differences, such as having no database cluster or no SMTP server.
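
In Grails, such differences live in environment-specific configuration blocks. A sketch with made-up values, e.g. in DataSource.groovy:

environments {
    production {
        dataSource {
            // hypothetical: the real clustered database
            url = "jdbc:postgresql://db-cluster/myapp"
        }
    }
    test {
        dataSource {
            // hypothetical: a plain local database, no cluster
            url = "jdbc:postgresql://localhost/myapp_test"
        }
    }
}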

Now we can put all this into our continuous integration tool Jenkins, and every time a checkin is made our Grails application is built.

Test

Unit and integration tests are already run when building and packaging, but we also have functional tests which deploy to a local Tomcat and test against it. Here we fetch the test war of the last successful build from our CI server:

<target name="functional-test" description="--> Run functional tests">
  <mkdir dir="${target.base.directory}"/>
  <antcall target="-fetch-file">
    <param name="fetch.from" value="${jenkins.base.url}/job/${jenkins.job.name}/lastSuccessfulBuild/artifact/_artifacts/${test.artifact.name}"/>
    <param name="fetch.to" value="${target.base.directory}/${test.artifact.name}"/>
  </antcall>
  <antcall target="-run-tomcat">
    <param name="tomcat.command.option" value="stop"/>
  </antcall>
  <copy file="${target.base.directory}/${test.artifact.name}" tofile="${tomcat.webapp.dir}/${artifact.name}"/>
  <antcall target="-run-tomcat">
    <param name="tomcat.command.option" value="start"/>
  </antcall>
  <chmod file="${grails}" perm="u+x"/>
  <exec dir="${basedir}" executable="${grails}" failonerror="true">
    <arg value="-Dselenium.url=http://localhost:8080/${product.name}/"/>
    <arg value="test-app"/>
    <arg value="-functional"/>
    <arg value="-baseUrl=http://localhost:8080/${product.name}/"/>
    <env key="GRAILS_HOME" value="${grails.home}"/>
  </exec>
</target>

Stopping and starting Tomcat and deploying our application war in between fixes the perm gen space errors which are thrown after a few hot deployments. The baseUrl and selenium.url parameters tell the functional test plugins to run against an externally running Tomcat. When you omit them, they start Tomcat and the Grails application themselves in their own process.

Release

Now all tests have passed and you are ready to deploy. So you fetch the last build… but wait! What happens if you have to redeploy, and in between new builds have happened in the CI? To prevent this we introduce a step before deployment: a release. This step just copies the artifacts from the last build and gives them the correct version. It also fetches the list of issues fixed for this version from our issue tracker (Jira) as a PDF. This list can be sent to the customer after a successful deployment.

Deploy

After releasing we can now deploy. This means fetching the war from the release job on our CI server and copying it to the target server. Then the procedure is similar to the functional-test one, with some slight but important differences. First, we make a backup of the old war in case anything goes wrong and we have to roll back. Second, we also copy the context.xml file which Tomcat needs for the JNDI configuration. Note that we don’t need to copy over local data files like PDF reports or search indexes which were produced by our application: these lie outside our web application root.

<target name="deploy">
  <antcall target="-fetch-artifacts"/>

  <scp file="${production.war}" todir="${target.server.username}@${target.server}:${target.server.dir}" trust="true"/>
  <scp file="${target.server}/context.xml" todir="${target.server.username}@${target.server}:${target.server.dir}/${production.config}" trust="true"/>

  <antcall target="-run-tomcat-remotely"><param name="tomcat.command.option" value="stop"/></antcall>

  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${tomcat.webapps.dir}/${production.war}"/>
    <param name="remote.tofile" value="${tomcat.webapps.dir}/${production.war}.bak"/>
  </antcall>
  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${target.server.dir}/${production.war}"/>
    <param name="remote.tofile" value="${tomcat.webapps.dir}/${production.war}"/>
  </antcall>
  <antcall target="-copy-file-remotely">
    <param name="remote.file" value="${target.server.dir}/${production.config}"/>
    <param name="remote.tofile" value="${tomcat.conf.dir}/Catalina/localhost/${production.config}"/>
  </antcall>

  <antcall target="-run-tomcat-remotely"><param name="tomcat.command.option" value="start"/></antcall>
</target>

Different Environments: Staging and Production

If you look closely at the deployment script you notice that it uses the context.xml file from a directory named after the target server. In practice you have multiple deployment targets, not just one. At the very least you have what we call a staging server. This server is used for testing the deployment and the deployed application before interrupting or corrupting the production system. It can even be used to publish a pre-release version for the customer to try. We use a separate job in our CI server for this. We separate the configurations needed for the different environments into directories named after the target server. What you shouldn’t do is include all those configurations in your development configuration: you don’t want to corrupt a production application when using the staging one, when your tests run, or even while you are developing. So keep the configurations needed for the deployment environments separate from development and from each other.

Celebrate

Now you can deploy over and over again with just one click. This is something to celebrate. No more headaches, no more bitten fingernails. But you should nevertheless take care when you access a production system, even if it happens automatically. Something you didn’t foresee in your process could go wrong, or you could make a mistake when you try out the application via the browser. Since we need to be aware of this responsibility, everybody who interacts with a production system has to wear one of our cowboy hats. This is a conscious step to remind oneself to take care, and it also reminds everybody else not to disturb someone interacting with a production system. So don’t mess with the cowboy!