Category Archives: Defaults

The ‘You Are Here’ Use-mark

You are here - Florence, Italy

Who really needs a “You Are Here” marker when other visitors’ fingers have done the work for you?

(Above, in Florence; below, in San Francisco)

You are here - San Francisco, California

Use-marks, like desire paths, are an emergent record of previous users’ perceptions (and perceived affordances), intentions, behaviours and preferences. (Much as Google’s search history is a database of intentions.)

Indeed, while we’d probably expect the “You Are Here” spot to be worn (so it’s not telling us anything especially new), can we perhaps think of use-marks / desire paths as a physical equivalent of revealed preferences? (Carl Myhill almost makes this point in this great paper [PDF].)

And (I have to ask), to what extent does the presence of wear and use-marks by previous users influence the use decisions and behaviour of new users (social proof)? If you see a well-trodden path, do you follow it? Do you pick a dog-eared library book to read because it is presumably more interesting than the ones that have never been read? What about where you’re confused by a new interface on, say, a ticket machine? Can you pick it up more quickly by (consciously or otherwise) observing how others have worn or deformed it through prior use?

Can we design public products / systems / services which intentionally wear to give cues to future users? How (other than “Most read stories today”) can we apply this digitally?
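One very rough sketch of how it might be applied digitally (entirely hypothetical: the storage key, data attribute and opacity mapping below are my own assumptions, and a real system would aggregate wear across all visitors rather than per browser): interface elements could accumulate a ‘wear’ score from use and render it visibly, so later visitors can see where earlier ones have been.

```typescript
// Hypothetical sketch only: the storage key, data attribute and opacity mapping
// are illustrative assumptions. A real implementation would aggregate counts
// across all visitors server-side; localStorage just keeps the sketch short.
const WEAR_KEY = "link-wear-counts";

function loadWear(): Record<string, number> {
  return JSON.parse(localStorage.getItem(WEAR_KEY) ?? "{}");
}

function recordClick(id: string): void {
  const wear = loadWear();
  wear[id] = (wear[id] ?? 0) + 1;
  localStorage.setItem(WEAR_KEY, JSON.stringify(wear));
}

function applyWearMarks(): void {
  const wear = loadWear();
  document.querySelectorAll<HTMLAnchorElement>("a[data-wear-id]").forEach((link) => {
    const id = link.dataset.wearId ?? "";
    // Render more-used links darker/bolder, like a worn path or a dog-eared book.
    link.style.opacity = String(Math.min(1, 0.4 + (wear[id] ?? 0) * 0.1));
    link.addEventListener("click", () => recordClick(id));
  });
}

applyWearMarks();
```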

Eight design patterns for errorproofing

Go straight to the patterns

One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, with much of its influence in the design profession coming from Don Norman’s seminal The Design of Everyday Things. It’s often the view of influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, with the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or where a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are, I think, the two best ‘inspirations’ in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

          Default print quality settings  Donor card

Examples: In most printer installations, the default print quality is not ‘Draft’, even though choosing ‘Draft’ as the default would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. Some other countries use an ‘opt-out’ system, which can lead to higher rates of donation.
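To make the pattern concrete, here’s a minimal sketch in code; the setting names and values are illustrative assumptions rather than anything from a real printer driver. The point is simply that whatever the designer pre-selects is what most users end up with:

```typescript
// Illustrative only: whichever value is pre-selected here is the one most users
// will keep, so the default itself is a behaviour-shaping decision.
type PrintQuality = "draft" | "normal" | "best";

interface PrintSettings {
  quality: PrintQuality;
  doubleSided: boolean;
}

// A 'good' default from a resource point of view: draft quality, double-sided.
const defaultSettings: PrintSettings = {
  quality: "draft",
  doubleSided: true,
};

function printJob(overrides: Partial<PrintSettings> = {}): PrintSettings {
  // Users only depart from the default if they actively choose to.
  return { ...defaultSettings, ...overrides };
}

printJob();                    // most users: draft, double-sided
printJob({ quality: "best" }); // a deliberate opt-out from the default
```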

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

          Interlock on microwave oven door  Interlock on ATM - card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so you’re less likely to forget it).
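A minimal sketch of an interlock as a forcing function, using the microwave example (the class and method names here are my own, purely for illustration):

```typescript
// Illustrative interlock: starting is impossible until the door is closed.
class Microwave {
  private doorClosed = false;
  private running = false;

  closeDoor(): void {
    this.doorClosed = true;
  }

  openDoor(): void {
    this.doorClosed = false;
    this.running = false; // opening the door also cuts power: a lock-out
  }

  start(): boolean {
    if (!this.doorClosed) {
      // The forcing function: the 'error' of cooking with the door open
      // simply cannot happen, regardless of the user's attitude to safety.
      return false;
    }
    this.running = true;
    return true;
  }
}

const oven = new Microwave();
oven.start();      // false: door open, nothing happens
oven.closeDoor();
oven.start();      // true: target behaviour achieved
```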


Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.
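For what it’s worth, the typical implementation is a one-line event handler, sketched below with the standard DOM API; part of what makes it misguided is that it only blocks the casual route (the context menu) while leaving plenty of others, from dragging the image to disabling scripts:

```typescript
// The usual (misguided) lock-out: suppress the context menu so "Save image as…"
// never appears. It inconveniences ordinary visitors far more than it stops
// anyone determined to save the image.
document.addEventListener("contextmenu", (event: MouseEvent) => {
  event.preventDefault();
});
```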


Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Older British Rail slam-door trains, where the door has no interior handle: passengers must lower the window and reach outside to open it
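A minimal sketch of the confirmation kind of extra step (the function name and message are illustrative assumptions): the destructive action only goes ahead after the speed-hump, though over-use trains people to click OK without reading:

```typescript
// Illustrative 'extra step': a destructive action goes through a confirmation
// speed-hump. Over-use teaches people to click OK reflexively, so it is best
// reserved for genuinely costly or irreversible actions.
function deleteAccount(performDelete: () => void): void {
  const confirmed = window.confirm(
    "This will permanently delete your account. Are you sure?"
  );
  if (confirmed) {
    performDelete();
  }
}

deleteAccount(() => console.log("Account deleted"));
```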


Specialised affordances

 
■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc.) intentionally incompatible with those from other manufacturers; rarely a user-friendly design choice

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round


Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction
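As a rough sketch of the pattern (not eBay’s or Google’s actual method; the tiny dictionary and edit-distance threshold are illustrative assumptions), the system suggests a likely correction but leaves the user free to ignore it:

```typescript
// Illustrative "Did you mean…?" sketch: suggest a likely correction but let the
// user ignore it. The dictionary and distance threshold are assumptions.
const knownTerms = ["guitar", "keyboard", "amplifier"];

// Standard Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,
        d[i][j - 1] + 1,
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)
      );
    }
  }
  return d[a.length][b.length];
}

function didYouMean(query: string): string | null {
  const best = knownTerms
    .map((term) => ({ term, dist: editDistance(query, term) }))
    .sort((x, y) => x.dist - y.dist)[0];
  // Only suggest, never silently replace: the user can still search as typed.
  return best && best.dist > 0 && best.dist <= 2 ? best.term : null;
}

console.log(didYouMean("guitr"));   // "guitar" offered as a correction
console.log(didYouMean("guitar"));  // null: nothing to correct
```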


Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go


Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
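A minimal sketch of the difference (the names are illustrative assumptions): the warning fires when the condition is detected, but nothing stops the driver carrying on, unlike the interlock pattern above:

```typescript
// Illustrative conditional warning: feedback is given when the belt is unbuckled
// while driving, but nothing prevents driving: weaker than an interlock.
interface CarState {
  engineRunning: boolean;
  seatbeltFastened: boolean;
}

function seatbeltWarning(state: CarState): string | null {
  if (state.engineRunning && !state.seatbeltFastened) {
    return "Warning: seatbelt not fastened"; // a light and/or chime in a real car
  }
  return null;
}

console.log(seatbeltWarning({ engineRunning: true, seatbeltFastened: false }));
// → "Warning: seatbelt not fastened" (the driver can still ignore it)
```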


Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.

Placebo buttons, false affordances and habit-forming

Elevator graph


This is a great graph from GraphJam, by ‘Bloobeard’. It raises the question, of course, of whether the ‘door close’ buttons on lifts/elevators really do anything, or whether they are simply there to ‘manage expectations’ or act as a placebo.

The Straight Dope has quite a detailed answer from 1986:

The grim truth is that a significant percentage of the close-door buttons [CDB] in this world, for reasons that we will discuss anon, don’t do anything at all.

In the meantime, having consulted with various elevator repairmen, I would say that apparent CDB nonfunctionality may be explained by one of the following:

(1) The button really does work, it’s just set on time delay.
Suppose the elevator is set so that the doors close automatically after five seconds. The close-door button can be set to close the doors after two or three seconds. The button may be operating properly when you push it, but because there’s still a delay, you don’t realize it.

(2) The button is broken. Since a broken close-door button will not render the elevator inoperable and thus does not necessitate an emergency service call, it may remain unrepaired for weeks.

(3) The button has been disconnected, usually because the building owner received too many complaints from passengers who had somebody slam the doors on them.

(4) The button was never wired up in the first place. One repair type alleges that this accounts for the majority of cases.

Gizmodo, more recently, contends that:

…the Door Close button is there mostly to give passengers the illusion of control. In elevators built since the early ’90s, the button is only enabled in emergency situations, with a key held by an authority.

Door close button

This is clearly not always true; I’ve just tested the button in the lift down the corridor here at Brunel (installed around a year ago) and it works fine. So it would seem that enabling the functionality (or not) or modifying it (e.g. time delays) is a decision that can be made for each installation, along the lines of the Straight Dope information.

If there’s a likelihood (e.g. in a busy location) that people running towards a lift will become antagonised by those already inside pressing the button (deliberately or otherwise) and closing the door on them, maybe it’s sensible to disable it, or introduce a delay. If the installation’s in a sparsely populated corner of a building where there’s only likely to be one lift user at a time, it makes sense for the button to be functional. Or maybe for the doors to close more quickly, automatically.

But thinking about this more generally: how often are deceptive buttons/controls/options – deliberate false affordances – used strategically in interaction design? What other examples are there? Can it work when a majority of users ‘know’ that the affordance is false, or don’t believe it any more? Do people just give up believing after a while – the product has “cried Wolf” too many times?

Matt Webb (Mind Hacks, Schulze & Webb) has an extremely interesting discussion of the extinction burst in conditioning, which seems relevant here:

There’s a nice example I read, I don’t recall where, about elevators. Imagine you live on the 10th floor and you take the elevator up there. One day it stops working, but for a couple of weeks you enter the elevator, hit the button, wait a minute, and only then take the stairs. After a while, you’ll stop bothering to check whether the elevator’s working again–you’ll go straight for the stairs. That’s called extinction.

Here’s the thing. Just before you give up entirely, you’ll go through an extinction burst. You’ll walk into the elevator and mash all the buttons, hold them down, press them harder or repeatedly, just anything to see whether it works. If it doesn’t work, hey, you’re not going to try the elevator again.

But if it does work! If it does work then bang, you’re conditioned for life. That behaviour is burnt in.

I think this effect has a lot more importance in everyday interaction with products/systems/environments than we might realise at first – a kind of mild Cargo Cult effect – and designers ought to be aware of it. (There’s a lot more I’d like to investigate about this effect, and how it might be applied intentionally…)

We’ve looked before at the thermostat wars and the illusion of control in this kind of context. It’s related to the illusion-of-control effect studied by Ellen Langer and others, where people are shown to believe they have some control over things they clearly don’t. In most cases a button does afford us control, and we would rationally expect it to; presumably an expectation builds up that similar buttons will do similar things in all the lifts we step into, and if we’re used to a button not doing anything, we either no longer bother pressing it, or we still press it every time “on the off-chance that one of these days it’ll work”.

How those habits form can have a large effect on how the products are, ultimately, used, since they often shake out into something binary (you either do something or you don’t): if you got a bad result the first time you used the 30 degree ‘eco’ mode on your washing machine, you may not bother ever trying it again, on that machine or on any others. If pressing the door close button seems to work, that behaviour gets transferred to all lifts you use (and it takes some conscious ‘extinction’ to change it).

There’s no real conclusion to this post, other than that it’s worth investigating this subject further.

Salt licked?

Salt shakers. Image from Daily Mail

UPDATE: See the detailed response below from Peter of Gateshead Council, which clarifies, corrects and expands upon some of the spin given by the Mail articles. The new shakers were supplied to the chip shop staff for use behind the counter: “Our main concern was around the amount of salt put on by staff seasoning food on behalf of customers before wrapping it up… Our observations… confirmed that customers were receiving about half of the recommended daily intake of salt in this way. We piloted some reduced hole versions with local chip shops who all found that none of their customers complained about the reduced saltiness.”

A number of councils in England have given fish & chip shops replacement salt shakers with fewer holes – from the Daily Mail:

Research has suggested that slashing the holes from the traditional 17 to five could cut the amount people sprinkle on their food by more than half.

And so at least six councils have ordered five-hole shakers – at taxpayers’ expense – and begun giving them away to chip shops and takeaways in their areas. Leading the way has been Gateshead Council, which spent 15 days researching the subject of salty takeaways before declaring the new five-hole cellars the solution.

Officers collected information from businesses, obtained samples of fish and chips, measured salt content and ‘carried out experiments to determine how the problem of excessive salt being dispensed could be overcome by design’. They decided that the five-hole pots would reduce the amount of salt being used by more than 60 per cent yet give a ‘visually acceptable sprinkling’ that would satisfy the customer.

OK. This is interesting. This is where the unit bias, defaults, libertarian paternalism and industrial design come together, in the mundanity of everyday interaction. It’s Brian Wansink’s ‘mindless margin’ being employed strategically, politically – and just look at the reaction it’s got from the public (and from Littlejohn). A BBC story about a similar initiative in Norfolk also gives us the industry view:

A spokesman for the National Federation of Fish Friers called the scheme a “gimmick” and said customers would just shake the containers more.

Graham Adderson, 62, who owns the Downham Fryer, in Downham Market, said: “I think the scheme is hilarious. If you want to put salt on your fish and chips and there are only four holes, you’re just going to spend longer putting more on.”

I’m assuming Gateshead Council’s research took account of this effect, although there are so many ways that users’ habits could have been formed through prior experience that this ‘solution’ won’t apply to all users. There might be some customers who always put more salt on, before even tasting their food. There might be people who almost always think the fish & chips they get are too heavily salted anyway – plenty of people, anecdotally at least, used to buy Smith’s Salt ‘n’ Shake and not use the salt at all.

And there are probably plenty of people who will, indeed, end up consuming less salt, because of the heuristic of “hold salt shaker over food for n seconds” built up over many years of experience.
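(A rough sanity check on those figures: if the habitual behaviour really is “hold and shake for the same length of time”, and we assume, crudely, that the amount dispensed scales with the number of holes, a five-hole shaker delivers about 5/17, roughly 30%, of what a seventeen-hole one does: a reduction of around 70%, consistent with the ‘more than 60 per cent’ claim.)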

Overall: I actually quite like this idea: it’s clever, simple, and non-intrusive, but I can see how the interpretation, the framing, is crucial. Clearly, when presented in the way that the media have done here (as a government programme to eliminate customer choice, and force us all down a road decided by health bureaucrats), the initiative’s likely to elicit an angry reaction from a public sick of a “nanny state” interfering in every area of our lives. Politicians jumping on the Nudge bandwagon need to be very, very careful that this isn’t the way their initiatives are perceived and portrayed by the press (and many of them will be, of course): it needs to be very, very clear how each such measure actually benefits the public, and that message needs to be delivered extremely persuasively.

Final thought: Many cafés, canteens and so on have used sachets of salt, that customers apply themselves, for many years. The decision made by the manufacturers about the size of these portions is a major determinant of how much salt is used, because of the unit bias (people assume that one portion is the ‘right’ amount), and, just as with washing machine detergent, manipulation of this portion size could well be used as part of a strategy to influence the quantity used by customers. But would a similar salt sachet strategy (perhaps driven by manufacturers rather than councils) have provoked similar reactions? I’m not sure that it would. ‘Nanny manufacturer’ is less despised than ‘nanny state’, I think, certainly in the UK.

What do you think?

The asymmetry of the indescribable

Like the itchy label in my shirt, there’s something which has been niggling away at the back of my mind, ever since I started being exposed to ‘academic fields’, and boundaries between ‘subjects’ (probably as a young child). I’m sure others have expressed it much better, and, ironically, it probably has a name itself, and a whole discipline devoted to studying it.

It’s this:
The set of things/ideas/concepts/relationships/solutions/sets that have been named/defined is much, much, much smaller than the set of actual things/ideas/concepts/relationships/solutions/sets.

And yet without a name or definition for what you’re researching, you’ll find it difficult to research it, or at least to tell anyone what you’re doing. The set of things we can comprehend researching is thus limited to what we’ve already defined.

How do we ever advance, then? Are we not just forever sub-dividing the same limited field with which we’re already familiar? Or am I missing something? Is this a kind of (obvious) generalisation of the Sapir-Whorf hypothesis?

Relating it to my current research, as I ought to, the problems of choice architecture, defaults, framing, designed-in perceived affordances and so on are clearly special cases of the idea: the decision options people perceive as available to them can be, and are, used strategically to limit what decisions people make and how they understand things (e.g. Orwell’s Newspeak). But whether it’s done deliberately or not, the problem exists anyway.

Richard Thaler at the RSA

Richard H Thaler at the RSA

Richard Thaler, co-author of Nudge (which is extremely relevant to the Design with Intent research), gave a talk at the RSA in London today, and, though design was only mentioned briefly, he clearly drew the links between design and behaviour change. Some notes/quotes I scribbled down:
Continue reading