Eight design patterns for errorproofing


One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, with much of its influence in the design profession coming from Don Norman’s seminal The Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke), where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but it’s a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. Whether the behaviour change is mindful or not is a distinction I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, with the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or where a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are the two best ‘inspirations’ I think, in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

[Images: Default print quality settings; donor card]

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though draft mode would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries an ‘opt-out’ system is used instead, which can lead to higher rates of donation.
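
To make the pattern concrete in software terms, here’s a minimal Python sketch (the class and field names are hypothetical) of how a ‘good’ default does most of the work, since many users never change it:

```python
# A minimal sketch of the Defaults pattern: hypothetical print settings where
# the frugal option is the default, so users who never open the settings get
# the 'good' behaviour automatically.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrintSettings:
    quality: str = "draft"       # errorproofing choice: frugal by default
    double_sided: bool = True    # another 'good' default
    colour: bool = False

def print_document(doc: str, settings: Optional[PrintSettings] = None) -> None:
    # Users who don't care inherit the defaults; overriding them takes
    # deliberate effort (finding the dialogue, changing the value).
    settings = settings or PrintSettings()
    print(f"Printing {doc!r}: quality={settings.quality}, "
          f"double-sided={settings.double_sided}, colour={settings.colour}")

print_document("report.pdf")                        # default path: draft
print_document("photo.jpg", PrintSettings("best"))  # deliberate opt-out
```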

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

[Images: Interlock on microwave oven door; interlock on ATM – card returned before cash dispensed]

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you’ve removed your card (so you’re less likely to forget it).
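
In software, an interlock is essentially a precondition that makes the ‘wrong’ sequence unavailable rather than merely warning about it; a minimal Python sketch, with a hypothetical class standing in for the oven, might look like this:

```python
# A rough software analogue of an interlock (hypothetical class): the start
# operation is simply unavailable until the precondition holds, rather than
# relying on a warning after the fact.
class MicrowaveOven:
    def __init__(self):
        self.door_closed = False
        self.running = False

    def close_door(self):
        self.door_closed = True

    def open_door(self):
        self.door_closed = False
        self.running = False   # opening the door also cuts power

    def start(self, seconds):
        if not self.door_closed:
            raise RuntimeError("Interlock: close the door before starting")
        self.running = True
        print(f"Cooking for {seconds} seconds")

oven = MicrowaveOven()
try:
    oven.start(30)             # blocked: door still open
except RuntimeError as err:
    print(err)
oven.close_door()
oven.start(30)                 # now permitted
```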

[column width="47%" padding="6%"]

Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

[Image: Right-click disabled]

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.
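
A minimal Python sketch (hypothetical names, purely for illustration) of both halves of the pattern: a runner that won’t start a second job while one is active (lock-out), and a running job that ignores casual cancellation unless it is explicitly confirmed (lock-in):

```python
# Hypothetical sketch: lock-out (can't start a second job while one is running)
# and lock-in (a running job ignores casual cancellation).
class JobRunner:
    def __init__(self):
        self.active_job = None

    def start(self, name):
        if self.active_job is not None:
            raise RuntimeError(f"Lock-out: {self.active_job!r} is still running")
        self.active_job = name
        print(f"Started {name!r}")

    def cancel(self, confirmed=False):
        if self.active_job is None:
            return
        if not confirmed:
            # Lock-in: an accidental cancel is ignored; breaking out requires
            # an explicit, deliberate confirmation.
            print(f"Ignoring cancel: {self.active_job!r} is locked in")
            return
        print(f"Cancelled {self.active_job!r}")
        self.active_job = None

runner = JobRunner()
runner.start("firmware update")
runner.cancel()                 # lock-in: ignored
runner.cancel(confirmed=True)   # deliberate override succeeds
```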


Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

[Image: British Rail train door extra step]

Example: Train door handles (on older British Rail slam-door stock) that required passengers to lower the window and reach the handle from outside – an extra step before opening the door.
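
In interface code the extra step is usually a confirmation; a slightly stronger variant, sketched below with hypothetical function and parameter names, makes the user retype something, which is harder to do on autopilot than clicking OK:

```python
# A sketch of the Extra step pattern: a destructive action goes through a
# deliberately inserted 'speed-hump'. Retyping the name is harder to do on
# autopilot than clicking OK. (Function and parameter names are hypothetical.)
def delete_account(account_id: str, confirm: str) -> None:
    if confirm != account_id:
        raise ValueError("Confirmation text does not match; nothing deleted")
    print(f"Account {account_id!r} deleted")

try:
    delete_account("alice", confirm="yes")   # reflexive confirmation fails
except ValueError as err:
    print(err)

delete_account("alice", confirm="alice")     # deliberate, slowed-down action
```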


Specialised affordances

 
■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc.) intentionally incompatible with those from other manufacturers; rarely user-friendly design

[Image: Bevelled corners on various media cards and disks]

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round.
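
A loose software parallel is using the ‘shape’ of a component, here its type, so that wrong combinations simply don’t fit; a small Python sketch with hypothetical classes:

```python
# A loose software parallel (hypothetical classes): the 'shape' of a component,
# here its type, only fits where it is meant to fit, so wrong combinations fail
# immediately instead of causing subtle errors later. In a statically typed
# language the wrong combination wouldn't even compile.
class SDCard:
    capacity_gb = 32

class SIMCard:
    pass

class CameraCardSlot:
    def insert(self, card):
        if not isinstance(card, SDCard):
            raise TypeError("This slot only accepts SD cards")
        print(f"Mounted {card.capacity_gb} GB card")

slot = CameraCardSlot()
slot.insert(SDCard())       # fits
try:
    slot.insert(SIMCard())  # the 'wrong' element simply doesn't fit
except TypeError as err:
    print(err)
```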


Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

[Image: Partial self-correction (with an undo) on eBay]

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction.
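
As a very rough sketch of the idea, with Python’s difflib standing in for whatever eBay or Google actually use, the system suggests a correction but lets the user override it:

```python
# A very rough sketch of "Did you mean...?": difflib stands in for the real
# (far more sophisticated) spelling models. The suggestion is offered, but the
# user can ignore it. KNOWN_TERMS and the cutoff are illustrative.
import difflib

KNOWN_TERMS = ["turntable", "cafetiere", "typewriter", "nintendo"]

def search(query: str, accept_suggestion: bool = True) -> str:
    matches = difflib.get_close_matches(query, KNOWN_TERMS, n=1, cutoff=0.8)
    if matches and matches[0] != query:
        print(f"Did you mean {matches[0]!r}?")
        if accept_suggestion:
            return matches[0]
        print(f"Ignoring suggestion; searching for {query!r} instead")
    return query

print(search("nintedno"))                           # corrected to 'nintendo'
print(search("nintedno", accept_suggestion=False))  # user overrides correction
```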


Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

[Image: Snack portion packs]

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go.
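
A tiny Python sketch (hypothetical dispenser, illustrative figures) of the ‘one portion per actuation’ version of the pattern:

```python
# A tiny sketch of the Portions pattern: one actuation releases one fixed dose,
# so the portion size itself (not a warning or a rule) limits consumption.
# Dose and reservoir figures are illustrative.
class SoapDispenser:
    def __init__(self, dose_ml=1.5, reservoir_ml=250.0):
        self.dose_ml = dose_ml
        self.reservoir_ml = reservoir_ml

    def pump(self):
        # One press, one portion: taking 'a bit more' requires a deliberate
        # second press rather than just holding the nozzle down.
        released = min(self.dose_ml, self.reservoir_ml)
        self.reservoir_ml -= released
        return released

dispenser = SoapDispenser()
print(dispenser.pump(), "ml released")   # 1.5 ml per press, however much is wanted
```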


Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.

[Image: Seatbelt warning light]

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
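
In code, the defining feature is that the warning is emitted but the action still proceeds, unlike an interlock; a minimal Python sketch, with an invented version threshold purely for illustration:

```python
# A minimal sketch of a conditional warning: the condition is detected and
# flagged, but, unlike an interlock, the action still goes ahead.
# MIN_SUPPORTED_VERSION is an invented threshold for illustration.
import warnings

MIN_SUPPORTED_VERSION = 90

def serve_page(browser_version: int) -> str:
    if browser_version < MIN_SUPPORTED_VERSION:
        # Feedback only: the user would benefit from upgrading, but is not
        # forced to do so before proceeding.
        warnings.warn(f"Browser version {browser_version} is outdated; "
                      "some features may not work", UserWarning)
    return "<html>page content</html>"

print(serve_page(85))    # warns, but still returns the page
print(serve_page(120))   # no warning
```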


Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.

The Convention on Modern Liberty

[Image: Barricades, London]

Britain’s supposedly on the verge of a summer of rage, and while, like Mary Riddell, I am of course reminded of Ballard, it’s not quite the same. I don’t think this represents the ‘middle class’ ennui of Chelsea Marina.

Instead, I think we may have reached a tipping point where more people than not are, frankly, fed up (and scared) about what’s happening, whether it’s the economic situation, the greed of the feckless, the intransigent myopia of those who were supposed to ‘oversee’ what’s going on, the use of fear to intimidate away basic freedoms, or a home secretary who treats the entire country like the naughty schoolchildren she left behind. In short: we’re basically losing our liberty very rapidly indeed. This PDF, compiled by the UCL Student Human Rights Programme, provides a withering summary. As many have repeated, 1984 was not supposed to be an instruction manual. But, as Cardinal Wolsey warned, “be well advised and assured what matter ye put in his head; for ye shall never pull it out again”.

The Convention on Modern Liberty, taking place across the UK this Saturday 28th February, aims to demonstrate the dissatisfaction with what’s happening, and hopefully raise awareness of just what’s going on right under our noses. It features an interesting cross-section of speakers, and the speeches will be streamed on the site (tickets for the London session sold out very quickly).

I’m a normal person, trying my best to advance the progress of humanity, yet I feel that the government has contempt for me as a member of the public in general, on an everyday basis. Everywhere we go, we are watched, monitored, surveilled, threatened, considered guilty. We shouldn’t have to live like this.

P.S. I apologise for the lack of posts over the last week: my laptop’s graphics card finally gave in – it had been kind of usable at a low resolution by connecting the output to another monitor for a while, but that too has now failed. Thanks to everyone who’s e-mailed and sent things: I will get round to them as soon as I can.

The Hacker’s Amendment

[Image: Screwdrivers]

Congress shall pass no law limiting the rights of persons to manipulate, operate, or otherwise utilize as they see fit any of their possessions or effects, nor the sale or trade of tools to be used for such purposes.

From Artraze commenting on this Slashdot story about the levels of DRM in Windows 7.

I think it maybe needs some qualification about not using your things to cause harm to other people, but it’s an interesting idea. See also Mister Jalopy’s Maker’s Bill of Rights from Make magazine a couple of years ago.

Designed environments as learning systems

[Image: West London from Richmond Park – Trellick Tower in the centre]

How much of designing an environment is consciously about influencing how people use it? And how much of that influence is down to users learning what the environment affords them, and acting accordingly?

The first question is central to what this blog’s been about over the last four years (with ‘products’, ‘systems’, ‘interfaces’ and so on variously standing in for ‘environment’), but many of the examples I’ve used, from anti-sit features to bathrooms and cafés designed to speed up user throughput, only reveal the architect’s (presumed) behaviour-influencing intent in hindsight, i.e. by reviewing them and trying to understand, if it isn’t obvious, the motivation behind a particular design feature. While there are examples where the intent is explicitly acknowledged, such as crime prevention through environmental design, and traffic management, it can still cause surprise when a behaviour-influencing agenda is revealed.

Investigating what environmental and ecological psychology have to say about this, a few months ago I came across The Organization of Spatial Stimuli, an article by Raymond G. Studer, published in 1970 [1] – it’s one of the few explicit calls for a theory of designing environments to influence user behaviour, and it raises some interesting issues:

“The nature of the environmental designer’s problem is this: A behavioral system has been specified (within the constraints imposed by the particular human participants and by the goals of the organization of which they are members.) The participants are not presently emitting the specified behaviors, otherwise there would be no problem. It is necessary that they do emit these behaviors if their individual and collective goals are to be realized. The problem then is to bring about the acquisition or modification of behaviors towards the specified states (without in any way jeopardizing their general well-being in the process). Such a change in state we call learning. Designed environments are basically learning systems, arranged to bring about and maintain specified behavioral topologies. Viewed as such, stimulus organization becomes a more clearly directed task. The question then becomes not how can stimuli be arranged to stimulate, but how can stimuli be arranged to bring about a requisite state of behavioral affairs.

[E]vents which have traditionally been regarded as the ends in the design process, e.g. pleasant, exciting, stimulating, comfortable, the participant’s likes and dislikes, should be reclassified. They are not ends at all, but valuable means which should be skilfully ordered to direct a more appropriate over-all behavioral texture. They are members of a class of (designed environmental) reinforcers. These aspects must be identified before behavioral effects of the designed environment can be fully understood.”

Now, I think it’s probably rare nowadays for architects or designers to talk of design features as ‘stimuli’, even if they are intended to influence behaviour. Operant conditioning and B.F. Skinner’s behaviourism are less fashionable than they once were. But the “designed environments are learning systems” point Studer makes can well be applied beyond simply ‘reinforcing’ particular behaviours.

Think how powerful social norms and even framing can be at influencing our behaviour in environments – the sober environment of a law court gives (most of) us a different range of perceived affordances to our own living room (social norms, mediated by architecture) – and that’s surely something we learn. Frank Lloyd Wright intentionally designed dark, narrow corridors leading to large, bright open rooms (e.g. in the Yamamura House) so that the contrast – and people’s experience of it – was heightened (framing, of a sort), though this effect would probably be lessened by repeated exposure. It still influenced user behaviour, even if only the first few times – and the memory of the effect such a room had on those first few occasions probably lasted a lifetime. Clearly, the process of forming a mental model of how to use a product, or how to behave in an environment, or how to behave socially, is about learning, and the design of the systems around us does educate us, in one way or another.

Stewart Brand’s classic How Buildings Learn (watch the series too) perhaps suggests (among other insights) an extension of the concept: if, when we learn what our environment affords us, this no longer suits our needs, the best architecture may be that which we can adapt, rather than being constrained by the behavioural assumptions designed into our environments by history.

I’m not an architect, though, or a planner, and – as I’ve mentioned a few times on the blog – it would be very interesting to know, from people who are: to what extent are notions of influencing behaviour taught as part of architectural training? This series of discussion board posts suggests that the issue is definitely there for architecture students, but is it framed as a conscious, positive process (e.g. “funnel pedestrians past the shops”), a reactionary one (e.g. “use pebbled paving to make it painful for hippies to congregate”), one of educating users through architectural features (as in Studer’s suggestion), or as something else entirely?

[1] Studer, R.G. ‘The Organization of Spatial Stimuli.’ In Pastalan, L.A. and Carson, D.H. (eds.), Spatial Behavior of Older People. Ann Arbor: University of Michigan, 1970.


Angular measure

[Image: OXO Good Grips Mini Angled Measuring Jug]

A few years ago I went to a talk at the RCA by Alex Lee, president of OXO International. Apart from a statistic about how many bagel-slicing finger-chopping accidents happen each year in New York City, what stuck in my mind were the angled measuring jugs he showed us, part of the well-known Good Grips range of inclusively designed kitchen utensils.

The clever angled measuring scale – easily visible from above, as the jug is filled – seems such an obvious idea. As the patents (US 6,263,732; US 6,543,284) put it:

The indicia on an upwardly directed surface of the at least one ramp allows a user to look downwardly into the measuring cup to visually detect the volume level of the contents in the measuring cup, thereby eliminating the need to look horizontally at the cup at eye level.


Now, this is an extremely simple way to improve the process of using a measuring cup / jug. It’s good if you find it hard to bend down to look at the side of the vessel. It’s helpful if you’re standing over it, pouring stuff into it. It reduces parallax error – so potentially improving accuracy – and it also, simply, makes it easier to be accurate.

In this sense, then, improved / easier-to-read scales can influence user behaviour. I guess that’s obvious: if it’s easy to use something in a particular way, it’s more likely that it will be used that way. It’s a persuasive interface, in an extremely simple form.

[Image: Kenwood JK450/455 kettle]

So, the question is, if I build an electric kettle with an angled scale like this, will it make it more likely that people use it more efficiently, i.e. fill it with the amount of water they need? If you’re standing with the kettle under the tap, putting water in, is this kind of angled scale going to make it easier to put the right amount in?

Kenwood sells a kettle which has angled scale markings, the JK450/455 (pictured above), though they’re implemented differently to (and more cheaply than) the OXO method, simply being printed on the side of the kettle body. It’s still a clever idea. This review suggests an energy saving of around 10%, compared with Kenwood’s claimed “up to 35%”, but of course the saving very much depends on how inefficient the user was previously.
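
As a back-of-envelope check on why the filling level matters so much (figures illustrative, not taken from the review, and ignoring heat losses and the kettle’s own thermal mass):

```python
# Back-of-envelope only: energy = mass x specific heat x temperature rise,
# ignoring heat losses and the kettle's own thermal mass. The litre figures
# are illustrative, not taken from the review.
SPECIFIC_HEAT_WATER = 4186        # J/(kg.K)
TEMP_RISE = 100 - 15              # K: tap water to boiling

def boil_energy_kwh(litres):
    joules = litres * SPECIFIC_HEAT_WATER * TEMP_RISE   # 1 litre of water ~ 1 kg
    return joules / 3.6e6

full, needed = 1.7, 0.5           # litres: a full kettle vs. two mugs
waste = boil_energy_kwh(full) - boil_energy_kwh(needed)
print(f"Full kettle: {boil_energy_kwh(full):.2f} kWh, "
      f"two mugs: {boil_energy_kwh(needed):.2f} kWh, "
      f"wasted: {waste:.2f} kWh per boil")
```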

I think something along the lines of either the OXO or Kenwood designs (but not infringing the patents!) is worth an extended trial later this year – watch this space.

Thanks to Michael for the Buckfast.

Design with Intent links 2009-02-04