Analyzing iOS crash dumps with Xcode

The best way to analyze a crash in an iOS app is to reproduce it directly in the iOS simulator in debug mode or on a local device connected to Xcode. Sometimes, however, you have to analyze a crash that happened on a device you do not have direct access to, for example because it was discovered by a tester working at a remote location. In this case the tester has to transfer the crash information to the developer, and the developer has to import it into Xcode. The iOS and Xcode functionality for this workflow is a bit hidden, so the following step-by-step guide can help.

Finding the crash dumps

iOS stores crash dumps for every crash that occurred. You can find them in the Settings app in the deeply nested menu hierarchy under Privacy -> Analytics -> Analytics Data.

There you can select the crash dump. Tapping on a crash dump shows its contents in a JSON format. You can select this text and send it to the developer. Unfortunately there is no “Select all” option; you have to select the text manually, and it can be quite long because it contains the stack traces of all threads of the app.

Importing the crash dump in Xcode

To import the crash dump into Xcode you must first save it to a file with the file name extension “.crash”. Then open the Devices dialog in Xcode via the Window menu:

To import the crash dump you must have at least one device connected to your Mac, otherwise you can’t proceed to the next step. It can be any iOS device. Select the device to open the device information panel:

Here you find the “View Device Logs” button to open the following Device Logs dialog:

To import the crash dump, select the “All Logs” tab in this dialog and drag & drop the “.crash” file into the panel on the left.

Initially the stack traces in the crash dump only contain memory addresses as hexadecimal numbers. To resolve these addresses to human-readable symbols of the code you have to “re-symbolicate” the log. This functionality is hidden in the context menu of the crash dump:
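If Xcode cannot resolve all addresses this way, individual addresses can also be symbolicated on the command line with the atos tool, provided you still have the app binary and the dSYM of the exact build. A small sketch with placeholder names, load address and crash address:

xcrun atos -arch arm64 -o MyApp.app.dSYM/Contents/Resources/DWARF/MyApp -l 0x100040000 0x10004a2f4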

Now you’re good to go and you should finally be able to find the cause of the crash.


Simple build triggers with secured Jenkins CI

The Jenkins continuous integration (CI) server provides several ways to trigger builds remotely, for example from a git hook. Things are easy on an open Jenkins instance without security enabled. It gets a little more complicated if you want to protect your Jenkins build environment.

Git plugin notify commit url

For git there is the “notifyCommit” URL you can use in combination with the Poll SCM setting:

$JENKINS_URL/git/notifyCommit?url=http://$REPO/project/myproject.git

Note two things regarding this approach:

  1. The URL of the source code repository given as a parameter must match the repository URL of the Jenkins job.
  2. You have to check the Poll SCM setting, but you do not need to provide a schedule.

Another drawback is its restriction to git-hosted jobs.
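To wire this up with a repository, a post-receive hook on the git server can simply request the URL after each push. A minimal sketch with placeholder host names:

#!/bin/sh
# hooks/post-receive in the hosted repository
curl -s "https://jenkins.example.com/git/notifyCommit?url=http://repo.example.com/project/myproject.git"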

Jenkins remote access api

Then there is the more general and more modern Jenkins remote access API, which lets you trigger builds regardless of the source code management system you use:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build?token=$TOKEN

It even allows triggering parameterized builds with HTTP POST requests like:

curl -X POST $JENKINS_URL/job/$JOB_NAME/build \
--user USER:TOKEN \
--data-urlencode json='{"parameter": [{"name":"id", "value":"123"}, {"name":"verbosity", "value":"high"}]}'

Both approaches work great as long as your Jenkins instance is not secured and everyone can do everything. Such a setup may be fine in your company’s intranet but becomes a no-go in more heterogeneous environments or with a public Jenkins server.

So the way to go is securing Jenkins with user accounts and restricted access. If you do not want to supply username/password as part of the URL for HTTP BASIC auth, or to create users just for your repository triggers, there is another easy option:

Using the Build Authorization Token Root Plugin!

Build authorization token root plugin

The plugin introduces a configuration setting in the Build triggers section to define an authentication token:

It also exposes a URL you can access without being logged in to trigger builds, providing just the token specified in the job:

$JENKINS_URL/buildByToken/build?job=$JOB_NAME&token=$TOKEN

Or for parameterized builds something like:

$JENKINS_URL/buildByToken/buildWithParameters?job=$JOB_NAME&token=$TOKEN&Type=Release
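Triggering a job from a script or hook then boils down to a single request, for example (Jenkins URL, job name and token are placeholders):

curl -s "https://jenkins.example.com/buildByToken/build?job=myproject&token=SECRET_TOKEN"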

Conclusion

The token root plugin does not need HTTP POST requests but also works fine with HTTP GET. It requires neither a user account nor the awkward Poll SCM setting. In my opinion it is the simplest and most pragmatic choice for build triggering on a secured Jenkins instance.

Remote development with PyCharm

PyCharm is a fantastic tool for Python development. One cool feature that I quite like is its support for remote development. We have quite a few projects that need to interact with special hardware, and that hardware is often not attached to the computer we’re developing on.
In order to test your programs, you still need to run them on that computer, and doing this without tool support can be especially painful. You need a tool like scp or rsync to transfer your code to the target machine and then execute it via ssh. This results in painfully long and error-prone iterations.
Fortunately, PyCharm offers tool support for this in its professional edition. After some setup, it allows you to develop just as you would on a local machine. Here’s a small guide on how to set it up with an Ubuntu Vagrant virtual machine, connecting over ssh. It works just as nicely with remote computers.

1. Create a new deployment configuration

In the Tools->Deployment->Configurations dialog, click the small + in the top left corner. Pick a name and choose the SFTP type.

In the “Connection” tab of the newly created configuration, make sure to uncheck “Visible only for this project”. Then, set up your host and login information. The root path is usually a central location you have access to, like your home folder. You can use the “Autodetect” button to set this up.

(Screenshot: the connection settings for my Vagrant VM.)
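If the target is a Vagrant box like in this example, you can read the host, port, user and private key for the connection tab directly from Vagrant:

vagrant ssh-config
# prints HostName, Port, User and IdentityFile for the box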

On the “Mappings” Tab, set the deployment path for your project. This would be the specific folder of your project within the root you set on the previous page. Clicking the associated “…” button here helps, and even lets you create the target folder on the remote machine if it does not exist yet.

2. Activate the upload

Now check “Tools->Deployment->Automatic Upload”. This uploads a file whenever you change it, but you still need to do the initial upload manually via “Tools->Deployment->Upload to “.

3. Create a project interpreter

Now the files are synced up, but the runtime environment is not on the remote machine. Go to the “Project Interpreter” page in File->Settings and click the little gear in the top-right corner. Select “Add Remote”.

It should have the Deployment configuration you just created already selected. Once you click ok, you’re good to go! You can run and debug your code just like on a local machine.

Have fun developing python applications remotely.

Using passwords with Jenkins CI server

For many of our projects the Jenkins continuous integration (CI) server is an important cornerstone. The well-known “works on my machine” means nothing in our company. Only code in repositories and built, tested and packaged by our CI servers counts. In addition to building, testing, analyzing and packaging our projects, we use CI jobs for deployment and supervision, too. In such jobs you often need some sort of credentials like username/password or public/private keys.

If you are using username/password they do not only appear in the job configuration but also in the console build logs. In most cases this is undesirable but luckily there is an easy way around it: using the Environment Injector Plugin.

In the plugin you can “inject passwords to the build as environment variables” for use in your commands and scripts.
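Inside a build step the injected variable is then used like any other environment variable. A small sketch of an “Execute shell” step, with a made-up variable name and deployment URL:

# DEPLOY_PASSWORD was defined as an injected password in the job configuration
curl --user "deploy:$DEPLOY_PASSWORD" --upload-file target/myapp.war https://deploy.example.com/myapp.war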

The nice thing about this is that the passwords are not only masked in the job configuration (like above) but also in the console logs of the builds!

Another alternative doing mostly the same is the Credentials Binding Plugin.

There is a lot more to explore when it comes to authentication and credential management in Jenkins: you can define credentials at the global level, use public/private key pairs and ssh agents, connect to an LDAP directory and much more. Just do not sit back and provide security-related stuff in plaintext in job configurations or your deployment scripts!

My favorite Unix tool

Awk is a little language designed for the processing of lines of text. It is available on every Unix (since Version 7) or Linux system. The name is an acronym of the names of its creators: Aho, Weinberger and Kernighan.

Since spending a couple of minutes learning awk, I have found it quite useful in my daily work. It is my favorite tool in the base set of Unix tools due to its simplicity and versatility.

Typical use cases for awk scripts are log file analysis and the processing of character-separated value (CSV) formats. Awk allows you to easily filter, transform and aggregate lines of text.

The idea of awk is very simple. An awk script consists of a number of patterns, each associated with a block of code that gets executed for an input line if the pattern matches:

pattern_1 {
    # code to execute if pattern matches line
}

pattern_2 {
    # code to execute if pattern matches line
}

# ...

pattern_n {
    # code to execute if pattern matches line
}

Patterns and blocks

The patterns are usually regular expressions:

/error|warning/ {
    # executed for each line, which contains
    # the word "error" or "warning"
}

/^Exception/ {
    # executed for each line starting
    # with "Exception"
}

There are some special patterns, namely the empty pattern, which matches every line …

{
    # executed for every line
}

… and the BEGIN and END patterns. Their blocks are executed before and after the processing of the input, respectively:

BEGIN {
    # executed before any input is processed,
    # often used to initialize variables
}

END {
    # executed after all input has been processed,
    # often used to output an aggregation of
    # collected values or a summary
}

Output and variables

The most common operation within a block is the print statement. The following awk script outputs each line containing the string “error”:

/error/ { print }

This is basically the functionality of the Unix grep command: filtering. It gets more interesting with variables. Awk provides a couple of useful built-in variables. Here are some of them:

  • $0 represents the entire current line
  • $1 … $n represent the 1st to n-th field of the current line
  • NF holds the number of fields in the current line
  • NR holds the number of the current line (“record”)

By default awk interprets whitespace sequences (spaces and tabs) as field separators. However, this can be changed by setting the FS variable (“field separator”).
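For example, to process a comma-separated file you set FS accordingly, usually in a BEGIN block (on the command line the -F option does the same). A small sketch, assuming comma-separated input:

BEGIN { FS = "," }
{ print $2 }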

The following script outputs the second field for each line:

{ print $2 }

Input:

John 32 male
Jane 45 female
Richard 73 male

Output:

32
45
73

And this script calculates the sum and the average of the second fields:

{
    sum += $2
}

END {
    print "sum: " sum ", average: " sum/NR
}

Output:

sum: 150, average: 50

The language

The language that can be used within a block of code is based on C syntax without types and is very similar to JavaScript. All the familiar control structures like if/else, for, while, do and operators like =, ==, >, &&, ||, ++, +=, … are there.

Semicolons at the end of statements are optional, like in JavaScript. Comments start with a #, not with //.

Variables do not have to be declared before usage (no ‘var’ or type). You can simply assign a value to a variable and it comes into existence.
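This makes simple aggregations very compact. A small sketch that counts lines containing “error” without any declaration of the counter variable:

/error/ { errors++ }
END { print errors " error lines" }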

String concatenation does not have an explicit operator like “+”. Strings and variables are concatenated by placing them next to each other:

"Hello " name ", how are you?"
# This is wrong: "Hello" + name + ", how are you?"

print is a statement, not a function. Parentheses around its parameter list are optional.

Functions

Awk provides a small set of built-in functions. Some of them are:

length(string), substr(string, index, count), index(string, substring), tolower(string), toupper(string), match(string, regexp).
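For example, the following block (a small sketch) prints the first field in lower case together with its length:

{ print tolower($1), length($1) }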

User-defined functions look like JavaScript functions:

function min(number1, number2) {
    if (number1 < number2) {
        return number1
    }
    return number2
}

In fact, JavaScript adopted the function keyword from awk. User-defined functions can be placed outside of pattern blocks.
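Such a function can then be called from any block. A small sketch that uses the min function above to track the smallest value of the second field:

NR == 1 { smallest = $2 }
NR > 1 { smallest = min(smallest, $2) }
END { print "smallest: " smallest }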

Command-line invocation

An awk script can either be read from a script file with the -f option:

$ awk -f myscript.awk data.txt

… or it can be supplied in-line within single quotes:

$ awk '{sum+=$2} END {print "sum: " sum " avg: " sum/NR}' data.txt

Conclusion

I hope this short introduction helps you add awk to your toolbox if you weren’t familiar with it yet. Awk is a neat alternative to full-blown scripting languages like Python and Perl for simple text processing tasks.

Introduction to Debian packaging

In earlier posts I wrote about packaging your software as RPM packages for a variety of use cases. The other big binary packaging system on Linux is DEB, used by Debian, Ubuntu and friends. Both serve the purpose of convenient distribution, installation and updating of binary software artifacts. Packages define their dependencies and describe what they provide.

How do you provide your software as DEB packages?

The master guide to Debian packaging can be found at https://www.debian.org/doc/manuals/maint-guide/. It is a very comprehensive guide spanning many pages and providing loads of information. Building a usable package of your software for your clients can be a matter of minutes if you know what to do. So I want to show you the basic steps; for refinement you will need to consult the guide or other resources specific to the software you want to package. For example, there are several guides specific to packaging software written in Python. I find it hard to determine what the current and recommended way to build Debian packages for Python is, because there have been differing guides over the last 10 years or so. You may of course say “just use pip for Python” 🙂

Basic tools

The basic tools for building Debian packages are debhelper and dh-make. To check the resulting package you will additionally need lintian. You can install them and the basic software build tools using:

  sudo apt-get install build-essential dh-make debhelper lintian

The Debian packaging system uses make and several scripts under the hood to help build the binary packages out of source tarballs.
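The central makefile is debian/rules; dh_make generates a skeleton for it, and with current debhelper a minimal version simply delegates all build steps to dh:

#!/usr/bin/make -f
# debian/rules: let debhelper figure out the individual build steps
%:
	dh $@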

Your first package

First you need a tar archive of the software you want to package. With Python setuptools you could use python setup.py sdist to generate the tarball. Then you run dh_make on it to generate the metadata and the package build environment for your package. Now you have to edit the metadata files, namely control, copyright and changelog. Finally you run dpkg-buildpackage to generate the package itself. Here is an example of the necessary commands:

mkdir hello-deb-1.0
cd hello-deb-1.0
# generate the debian/ build environment from the source tarball
dh_make -f ../hello-deb-1.0.tar.gz
# edit deb metadata files
vi debian/control
vi debian/copyright
vi debian/changelog
# build the package without signing source and changes files
dpkg-buildpackage -us -uc
# check the result for packaging problems
lintian -i -I --show-overrides hello-deb_1.0-1_amd64.changes

The control file roughly resembles RPM’s SPEC file. Package name, description, version and dependency information belong there. Note that Debian is very strict when it comes to the naming of packages, so make sure you use the pattern ${name}-${version}.tar.gz for the archive and that it extracts into a corresponding directory without the extension, i.e. ${name}-${version}.
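For orientation, a stripped-down control file for the hello-deb example might look roughly like this (maintainer, dependency versions and description are placeholders):

Source: hello-deb
Section: utils
Priority: optional
Maintainer: Jane Developer <jane@example.com>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.6

Package: hello-deb
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: short summary of hello-deb
 Longer description of what the package provides,
 indented by one space.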

If everything went OK, several files have been generated in your base directory:

  • The package itself as .deb file
  • A file containing the changelog and checksums between package versions and revisions ending with .changes
  • A file with the package description ending with .dsc
  • A tarball with the original sources, renamed according to the Debian convention to hello-deb_1.0.orig.tar.gz (note the underscore!)

Going from here

Of course there is a lot more to the tooling and workflow when maintaining Debian packages. In future posts I will explore additional means of improving and updating your packages, like the quilt patch management tool, signing the package, symlinking, scripts for pre- and post-installation and so forth.

All .NET assemblies for one and one for all

Sometimes you have developed a simple utility tool that doesn’t need the directory structure of a full-blown application for resources and other configuration. However, this tool might have a couple of library dependencies. On the .NET platform this usually means that you have to distribute the .dll files for the libraries along with the executable (.exe) file of the tool.

Wouldn’t it be nice to distribute your tool only as a single .exe file, so that users don’t have to drag around a lot of files when they move the tool from one location to another?

In the C++ world you would use static linking to link library dependencies into the resulting executable. For the .NET platform Microsoft provides a command-line tool called ILMerge. It can merge multiple .NET assemblies into a single assembly:

(Diagram: ILMerge merges the application assembly and its library assemblies into a single standalone executable.)

You can either download ILMerge from Microsoft as an .msi package or install it as a NuGet package from the package manager console (accessible in Visual Studio under Tools: Library Package Manager):

PM> Install-Package ilmerge

The basic command line syntax of ILMerge is:

> ilmerge /out:filename <primary assembly> [...]

The primary assembly would be the original executable of your tool. It must be listed first, followed by the library assemblies (.dll files) to merge. Here’s an example, which represents the scenario from the diagram above:

> ilmerge /out:StandaloneApplication.exe Application.exe A.dll B.dll C.dll
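Depending on the target framework of the assemblies you may also have to tell ILMerge which platform to merge against via the /targetplatform option, e.g. for .NET 4 (the framework directory below is the usual default and may differ on your system):

> ilmerge /targetplatform:v4,C:\Windows\Microsoft.NET\Framework64\v4.0.30319 /out:StandaloneApplication.exe Application.exe A.dll B.dll C.dll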

Keep in mind that the resulting executable still depends on the .NET framework being installed on the system; it’s not completely self-contained.

Graphical user interface

There’s also a graphical user interface for ILMerge available. It’s an open-source tool by a third-party developer and it’s called ILMerge-GUI, published on Microsoft’s CodePlex project hosting platform.

(Screenshot: the ILMerge-GUI main window.)

You simply drag and drop the assemblies to merge on the designated area, choose a name for the output assembly and click the “Merge!” button.