All posts filed under “Mistake-proofing”

Eight design patterns for errorproofing

Go straight to the patterns

One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, much of its influence in the design profession coming from Don Norman’s seminal Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are the two best ‘inspirations’ I think, in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

Default print quality settings / Donor card

Examples: With most printer installations, the default print quality is not ‘Draft’, even though draft mode would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation.
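To make the pattern a little more concrete in software terms, here’s a minimal Python sketch – hypothetical names, not taken from any real printer driver – of a designer choosing resource-saving defaults on the assumption that most users will never change them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrintSettings:
    # The defaults decide the outcome for most users, so make the
    # 'good' options the defaults: draft quality saves ink, time and money.
    quality: str = "draft"      # rather than "normal" or "best"
    double_sided: bool = True   # duplex by default to save paper
    colour: bool = False        # greyscale unless the user asks for colour

def print_document(doc: str, settings: PrintSettings = PrintSettings()) -> None:
    # Most users never open the settings dialogue at all.
    print(f"Printing {doc!r}: quality={settings.quality}, "
          f"double-sided={settings.double_sided}, colour={settings.colour}")

print_document("report.pdf")  # sticks with the resource-saving defaults
print_document("photos.pdf", PrintSettings(quality="best", colour=True))  # easy to opt out
```

How easy the opt-out is matters almost as much as the default itself: if changing the setting is buried three menus deep, the default starts to feel more like a constraint than a suggestion.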

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

Interlock on microwave oven door / Interlock on ATM - card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so you’re less likely to forget it).
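In software, an interlock is simply a precondition the interface refuses to proceed without. Here’s a minimal sketch (a hypothetical Microwave class, not modelled on any real controller):

```python
class DoorOpenError(Exception):
    """Raised when cooking is attempted with the door open."""

class Microwave:
    def __init__(self) -> None:
        self.door_closed = False

    def close_door(self) -> None:
        self.door_closed = True

    def open_door(self) -> None:
        self.door_closed = False

    def start(self, seconds: int) -> None:
        # The interlock: cooking cannot begin until the door is shut,
        # whatever the user intends or understands.
        if not self.door_closed:
            raise DoorOpenError("Close the door before starting the oven")
        print(f"Cooking for {seconds} seconds")

oven = Microwave()
try:
    oven.start(30)      # refused: the forcing function kicks in
except DoorOpenError as e:
    print(e)
oven.close_door()
oven.start(30)          # the target behaviour has been performed
```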


Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.
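A rough software sketch of both halves of the pattern (hypothetical FirmwareUpdater and PowerSaw classes, invented for illustration): a lock-in that refuses to cancel an operation once a critical phase has begun, and a lock-out that keeps an operation unavailable until a cooling-off period has passed.

```python
import time

class FirmwareUpdater:
    """Lock-in: once flashing has started, cancelling is refused."""
    def __init__(self) -> None:
        self.flashing = False

    def start_flash(self) -> None:
        self.flashing = True
        print("Flashing firmware...")

    def cancel(self) -> None:
        if self.flashing:
            # Aborting mid-flash would brick the device,
            # so the request is simply not honoured.
            print("Cannot cancel while flashing is in progress")
            return
        print("Cancelled")

class PowerSaw:
    """Lock-out: restarting is unavailable during a cooling-off period."""
    COOLDOWN_S = 5.0

    def __init__(self) -> None:
        self._stopped_at: float | None = None

    def stop(self) -> None:
        self._stopped_at = time.monotonic()
        print("Saw stopped")

    def start(self) -> None:
        if self._stopped_at is not None:
            remaining = self.COOLDOWN_S - (time.monotonic() - self._stopped_at)
            if remaining > 0:
                print(f"Locked out: wait another {remaining:.1f} s before restarting")
                return
        print("Saw running")

updater = FirmwareUpdater()
updater.start_flash()
updater.cancel()   # refused: the lock-in holds

saw = PowerSaw()
saw.stop()
saw.start()        # refused until the cooldown has elapsed
```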


Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Train door handles requiring passengers to lower the window and reach outside to use the handle.
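As a sketch of the same speed-hump idea in software (a hypothetical delete_file function; the real deletion call is deliberately left out), here’s a destructive action gated by an ‘Are you sure?’ step:

```python
def delete_file(path: str, confirm=input) -> bool:
    # The extra step: a confirmation the user must actively pass through.
    # `confirm` is injectable so the prompt can be simulated below.
    answer = confirm(f"Are you sure you want to delete {path!r}? [y/N] ")
    if answer.strip().lower() != "y":
        print("Deletion cancelled")
        return False
    print(f"{path!r} deleted")   # a real implementation would call os.remove(path) here
    return True

# Simulated runs with canned answers instead of real keyboard input:
delete_file("thesis_final_v2.doc", confirm=lambda prompt: "n")
delete_file("old_draft.doc", confirm=lambda prompt: "y")
```

The weakness noted above applies directly: if every action triggers a prompt like this, ‘y’ becomes a reflex and the speed-hump stops working.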


Specialised affordances

 
■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc.) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round
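The nearest software analogue I can offer is a sketch using distinct types – purely illustrative, with invented CustomerId/OrderId names – so that a value only ‘fits’ the slot it was made for, much as the bevelled corner only fits one way round:

```python
from typing import NewType

CustomerId = NewType("CustomerId", int)
OrderId = NewType("OrderId", int)

def cancel_order(order: OrderId) -> None:
    print(f"Order {order} cancelled")

customer = CustomerId(42)
order = OrderId(1001)

cancel_order(order)       # fine: the 'shapes' match
# cancel_order(customer)  # a static type checker (e.g. mypy) rejects this line,
#                         # much as a memory card refuses to go in backwards
```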


Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction
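A minimal sketch of the ‘Did you mean…?’ behaviour in Python (an invented search function and term list, nothing to do with eBay’s or Google’s actual implementations), using simple string similarity to suggest a correction while leaving the user free to ignore it:

```python
import difflib

KNOWN_TERMS = ["nintendo", "nikon", "necklace", "notebook"]

def search(query: str, accept_suggestion: bool = True) -> str:
    # Partial self-correction: suggest a fix, but let it be overridden.
    matches = difflib.get_close_matches(query.lower(), KNOWN_TERMS, n=1, cutoff=0.8)
    if matches and matches[0] != query.lower():
        suggestion = matches[0]
        print(f"Did you mean {suggestion!r}? (searching for it; "
              f"or search for {query!r} instead)")
        return suggestion if accept_suggestion else query
    return query

print(search("nintendp"))                           # corrected to 'nintendo'
print(search("nintendp", accept_suggestion=False))  # the user insists on the typo
```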


Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go


Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
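In code, the distinction from an interlock is simply that nothing is blocked: the condition is detected and reported, and that is all. A small sketch (a hypothetical check_seatbelts function, not drawn from any real vehicle system):

```python
def check_seatbelts(engine_running: bool, belts_fastened: dict[str, bool]) -> list[str]:
    # Conditional warning: feedback only; the car still drives.
    warnings = []
    if engine_running:
        for seat, fastened in belts_fastened.items():
            if not fastened:
                warnings.append(f"Warning: {seat} seatbelt unfastened")
    return warnings

for w in check_seatbelts(True, {"driver": True, "passenger": False}):
    print(w)   # the user is nagged towards the target behaviour, not forced
```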


Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.

Staggering insight

Staggered crossing in Bath

I’ve mentioned a few times, perhaps more often in presentations than on the blog, the fact that guidelines for the design of pedestrian crossings in the UK [PDF] recommend that where a crossing is staggered, pedestrians should be routed so that they have to face traffic, thus increasing the likelihood of noticing oncoming cars, and indeed of oncoming drivers noticing the pedestrians:

5.2.5 Staggered crossings on two-way roads should have a left handed stagger so that pedestrians on the central refuge are guided to face the approaching traffic stream.

When I gave this example of Design with Intent at Lancaster, the discussion – led, I think, by Lucy Suchman and Patricia Clough – turned to how this arrangement inevitably formalised and reinforced the embedded hegemony of the motor car in society, and so on: that the motorist is privileged over the pedestrian and the pedestrian must submit by watching out for cars, rather than the other way around.

Now, all that is arguably true – I had seen this example as merely a clever, sensible way to use design to influence user behaviour for safety, for everyone’s benefit (both pedestrians and drivers), without it costing any more than, say, a crossing staggered the opposite way round – but this is, maybe, the nature of this whole field of Design with Intent: lots of disciplines potentially have perspectives on it and what it means. What a traffic engineer or an ergonomist or a mistake-proofer sees as a safety measure, a sociologist may see as a designed-in power relation. What Microsoft saw as a tool for helping users was seen as patronising and annoying (at least by the most vociferous users). It’s all interesting, because it all broadens the number of interpretations and considerations applied to everything, and – if I’m honest – forces me to think on more levels about every example.

Multiple lenses are helpful to designers otherwise stuck at whatever focal length the client’s prescribed.

Back to the crossings, though: the above crossing in Bath is a bit unusual in how it’s arranged with so many control panels for pedestrians. But in general, with simple Pelican and Puffin crossings in the UK, there is a design feature even more obvious, which only struck me* the same day I photographed the above crossing in Bath: the pedestrian signal control panel is usually also to the right of where pedestrians stand waiting to cross, i.e. (with UK driving on the left), in order to press the button, pedestrians have to turn to face the oncoming traffic.

The guidelines actually mention this as helping people with poor vision, but it would seem that it really assists all users, even if only slightly. It means you can watch the traffic as you decide whether or not you actually need to press the button, and will be more likely to be standing in a position where you can see the oncoming traffic at the point when you walk out into the road.

5.1.7 To assist blind and partially sighted pedestrians, as they approach the crossing, the primary push button/indicator panel should normally be located on the right hand side. The alignment should encourage them to face oncoming vehicles. The centre of the push button should be between 1.0 and 1.1 metres above the footway level.

This is the sort of ‘hidden’ intentional, strategic design detailing which fascinates me. It is obvious, it is quotidian, but it’s also thoughtful.

Staggered crossing in Bath

*Looking back through my notebooks, I see that someone actually mentioned this to me at a seminar at Sheffield Hallam in September 2007 but I forgot about it: many thanks to whoever it was, and I should be better at reading through my notes next time!

Hard to handle

Open door using outside handle

British Rail’s drop-the-window-then-stick-your-hand-outside-to-use-the-handle doors, puzzled over by Don Norman in The Design of Everyday Things, are still very much around, though often refurbished and repainted, as with this delightful/vile pink First Great Western-liveried example.

I’m assuming that this design was intended to introduce an extra step into the door-opening procedure, a speed-hump, if you like, to make it less likely that a door was opened accidentally while the train was in motion (before central door locking was introduced – which makes it less necessary). From a usability point of view, we might immediately dismiss any system which has to have such detailed instructions to inform the user about performing such a simple task, but it’s certainly interesting to consider this kind of poka-yoke. Being forced to lower the window to get to the handle is almost like a modal ‘Are you sure you want to delete this file?’ dialogue box.

Open door using outside handle

However, other concerns now come into play and need to be considered as well: this sticker suggests keeping the window closed to cut drag and save fuel, but as I walked along the train, almost all these windows were dropped down, left in that position by the last person to close the door. The urgency of scrabbling to lower the window, stick your hand out and use the handle, with a crowd of commuters behind you, probably overrides any intention to close the window again engendered by the ‘Make a small change’ sticker.

Open door using outside handle

Designing Safe Living

New Sciences of Protection logo

Lancaster University’s interdisciplinary Institute for Advanced Studies (no, not that one) has been running a research programme, New Sciences of Protection, culminating in a conference, Designing Safe Living, on 10-12 July, “investigat[ing] ‘protection’ at the intersections of security, sciences, technologies, markets and design.”

The keynote speakers include the RCA’s Fiona Raby, Yahoo!’s Benjamin Bratton and Virginia Tech’s Timothy Luke, and the conference programme [PDF, 134 kB] includes some intriguing sessions on subjects such as ‘The Art/Design/Politics of Public Engagement’, ‘Designing Safe Citizens’, ‘Images of Safety’ and even ‘Aboriginal Terraformation (performance panel)’.

I’ll be giving a presentation called ‘Design with Intent: Behaviour-Shaping through Design’ on the morning of Saturday 12 July in a session called ‘Control, Design and Resistance’. There isn’t a paper to accompany the presentation, but here’s the abstract I sent in response to being invited by Mark Lacy:

Design with Intent: Behaviour-Shaping through Design
Dan Lockton, Brunel Design, Brunel University, Uxbridge, Middlesex UB8 3PH

“Design can be used to shape user behaviour. Examples from a range of fields – including product design, architecture, software and manufacturing engineering – show a diverse set of approaches to shaping, guiding and forcing users’ behaviour, often for intended socially beneficial reasons of ‘protection’ (protecting users from their own errors, protecting society from ‘undesirable’ behaviour, and so on). Artefacts can have politics. Commercial benefit – finding new ways to extract value from users – is also a significant motivation behind many behaviour-shaping strategies in design; social and commercial benefit are not mutually exclusive, and techniques developed in one context may be applied usefully in others, all the while treading the ethical line of persuasion-vs-coercion.

Overall, a field of ‘Design with Intent’ can be identified, synthesising approaches from different fields and mapping them to a range of intended target user behaviours. My research involves developing a ‘suggestion tool’ for designers working on social behaviour-shaping, and testing it by application to sustainable/ecodesign product use problems in particular, balancing the solutions’ effectiveness at protecting the environment, with the ability to cope with emergent behaviours.”

The programme’s rapporteur, Jessica Charlesworth, has been keeping a very interesting blog, Safe Living, throughout the year.

I’m not sure what my position on the idea of ‘designing safe living’ is, really – whether that’s the right question to ask, or whether ‘we’ should be trying to protect ‘them’, whoever they are. But it strikes me that any behaviour, accidental or deliberate, however it’s classified, can be treated/defined as an ‘error’ by someone, and design can be used to respond accordingly, whether viewed through an explicit mistake-proofing lens or simply designing choice architecture to suggest the ‘right’ actions over the ‘wrong’ ones.

Lights reminding you to turn things off

Standby indicators - Duncan Drennan

Duncan Drennan, who writes the very thoughtful Art of Engineering blog, notes something extremely interesting: standby lights, if they’re annoying/visible enough, can actually motivate users to switch the device off properly:

Our DVD player has (to me) the most irritating standby light that I have ever seen on any device. When on, the light is constantly illuminated, but when in standby the light flashes continuously (at a slow rate). This drives me mad, but results in an interesting action – it causes me to turn it off at the plug when I am not using it (which is most of the time). Suddenly one little flashing light has resulted in more energy saving than having no light.

As he notes, designing a system with an indicator which draws power to inform you of… ‘nothing’… may not actually be as inefficient as a from-first-principles efficiency design process would suggest, because of that human reaction. Similarly to the Static! project’s Power-Aware Cord, you may need to use a little extra energy to make people realise how much they’re using without thinking. Although:

There is one problem with this, it only works on people who care. If I did not care about saving energy, then I would just leave the laptop plugged in and the DVD player on. That means that you have to consider how your users will handle this kind of subtle feedback and determine whether turning the light off, or encouraging unplugging, results in more energy savings.

Sometimes the most obvious design decisions may not be the ones which result in the greatest energy saving.

This is a very astute observation indeed.

Are there any other examples where this sort of effect can be usefully employed? How similar is this to the ‘useful landmine’ concept where you deliberately force/provoke/annoy yourself into taking actions you otherwise wouldn’t bother/would forget to do?

Design with Intent presentation from Persuasive 2008

EDIT: I’ve now added the audio! Thanks everyone for the suggestions on how best to do it; the audio is hosted on this site rather than the Internet Archive as the buffering seemed to stall a bit too much. Let me know if you have any problems.

I’ve put my presentation from Persuasive 2008 on SlideShare – because of the visual style it really needs to be listened to, or viewed alongside the text (below, or in the comments when viewing it on the SlideShare site). Alternatively, just download it [PPT, 11.6 Mb] – it comes with the notes.
