Paper Prototyping is a highly effective tool for examining the usability of software even before it is written. The basic idea of Paper Prototyping is that you have real users perform real (software) tasks, but replace everything technical with low-fi substitutes.
Adventures in Low-fi
A typical Paper Prototyping session looks like this:
- The user gets a task description and has to perform the task with the software
- The computer screen is replaced by an arrangement of paper sheets on a desk
- The graphical user interface is replaced by hand-drawn copies on paper
- The computer itself is replaced by a human who mimics the software's responses to input
- The user operates by finger-pointing or writing with a pen
Advantages of Paper Prototyping
The whole situation described above seems awkward at first glance, but it is really rewarding for a project in its early stages. The customer has to provide real end-user tasks and enough detail about the solution to build a prototype. The team has to produce reasonable drafts of the software's GUI and develop enough understanding of the processes and tasks involved to survive the session without major breakdowns.
The result of a Paper Prototyping session can be used in various ways:
- Detailed specification of the GUI
- Use Case or User Story (acted out already)
- Mock screenshots for the user manual
- Data for initial acceptance tests
Classical user interface vs. gestures
This approach worked very well as long as the user only had a mouse (pointing, clicking) and a keyboard (typing) at hand. Even then, advanced features like grabbing (drag & drop) or automatic scrolling challenged the prototypers' creativity. But most GUIs were rather dull and static: the perfect playground for Paper Prototyping.
With the advent of touchscreens, we soon realized that pointing and clicking only needs one finger out of ten. Gestures were introduced to keep all our fingertips busy and to enrich the interaction between user and GUI. We instantly understand the “zoom in” or “scroll down” gestures because they resemble natural behaviour (at least for some of us).
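To make the contrast with single-finger pointing concrete, a pinch gesture like “zoom in” can be sketched as a comparison of fingertip distances over time. The following is a minimal illustrative sketch, not taken from any real gesture framework; the function name `classify_pinch` and the `threshold` parameter are assumptions for this example.

```python
from math import hypot

def classify_pinch(start, end, threshold=1.2):
    """Classify a two-finger gesture from start and end touch positions.

    start, end: pairs of (x, y) tuples, one per fingertip.
    Returns "zoom in" if the fingers moved apart, "zoom out" if they
    moved together, and "none" if the distance barely changed.
    The threshold is a hypothetical tolerance against finger jitter.
    """
    # Distance between the two fingertips at the start of the gesture
    d0 = hypot(start[0][0] - start[1][0], start[0][1] - start[1][1])
    # Distance between the two fingertips at the end of the gesture
    d1 = hypot(end[0][0] - end[1][0], end[0][1] - end[1][1])
    if d1 > d0 * threshold:
        return "zoom in"
    if d1 < d0 / threshold:
        return "zoom out"
    return "none"

# Fingers spreading from 10 to 20 units apart reads as a zoom-in
print(classify_pinch([(0, 0), (10, 0)], [(0, 0), (20, 0)]))
```

Even this toy classifier shows why paper struggles: the gesture's meaning lives in continuous finger motion, which a human "computer" shuffling paper cannot replay.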
In the wake of gestures, software GUIs are getting more and more dynamic. The GUI has to be minimalistic so we can control it even with stubby fingers (the new handicap of our generation; compare cell phone keypads). Detailed information has to be provided on demand and only temporarily. Everything can be manipulated. The classical approach of tabbing through a form (following a carefully designed tab order) isn't that suitable anymore.
Gestures vs. Paper Prototyping
When using a Paper Prototype, the throughput of scribbled paper is enormous even with classical GUIs. The more dynamic a dialog is, the more parts need to be prepared in various states and locations (depending on how fragmented the paper screen is). With gestures on a touchscreen, the user needs to be able to express them on the screen. Most touchscreen interfaces depend heavily on the (simulated) physical interaction between the fingers and drawn “objects” on the interface. This is the moment when Paper Prototyping falls short of resembling the real interaction: you just can't fiddle that fast with all the paper shreds.
No solution yet
I observed this effect when running a Paper Prototyping workshop with my students. The interfaces with classical mouse/keyboard handling performed well in the sessions. Interfaces for touchscreens (iPhone apps were the big newcomer here) just didn't work out well, especially when downsized to palm size. We weren't able to come up with a viable way to make Paper Prototyping work for touchscreens and gestures.
Any ideas out there?