All posts filed under “Lock-in”

Eight design patterns for errorproofing


One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, much of its influence in the design profession coming from Don Norman’s seminal Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are the two best ‘inspirations’ I think, in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

          Default print quality settings  Donor card

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though this would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation.
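As a toy illustration (the numbers here are hypothetical, not from any study), the power of a default reduces to simple arithmetic: if most users stick with whatever they are given, the designer’s choice of default largely determines the aggregate outcome.

```python
def enrolment_rate(enrolled_by_default, stickiness=0.95):
    """Fraction of users who end up enrolled, given a default.

    stickiness is the (hypothetical) fraction of users who never
    change the setting; the rest actively flip it.
    """
    if enrolled_by_default:
        return stickiness        # opt-out: most users stay enrolled
    return 1.0 - stickiness      # opt-in: only the few who act enrol

# Same users, same preferences: the default decides the aggregate outcome
opt_in_rate = enrolment_rate(enrolled_by_default=False)   # only ~5% enrol
opt_out_rate = enrolment_rate(enrolled_by_default=True)   # ~95% stay enrolled
```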

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity – e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

          Interlock on microwave oven door  Interlock on ATM - card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so it’s less likely you forget it).
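In software terms, an interlock is simply a guard condition that makes the out-of-order action impossible, rather than merely warned against. A minimal sketch, using the milling machine example from earlier (the class and method names are mine, not from any real API):

```python
class MillingMachine:
    def __init__(self):
        self.guard_closed = False
        self.running = False

    def close_guard(self):
        self.guard_closed = True

    def start(self):
        # Interlock: starting is impossible until the guard is in place
        if not self.guard_closed:
            raise RuntimeError("Close the guard before starting")
        self.running = True

machine = MillingMachine()
try:
    machine.start()          # blocked: the guard is still open
except RuntimeError:
    pass
machine.close_guard()
machine.start()              # now permitted
```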

[column width=”47%” padding=”6%”]

Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.
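In code, a lock-in of the helpful kind is just a refusal to interrupt. A sketch (class and names hypothetical) of the ‘preventing accidental cancellation’ case: once a critical phase begins, cancellation is simply refused.

```python
class BatchJob:
    def __init__(self):
        self.committing = False

    def cancel(self):
        # Lock-in: once the commit phase has started, cancellation is
        # refused rather than risk leaving data half-written.
        if self.committing:
            raise RuntimeError("Cannot cancel during commit")
        return "cancelled"

job = BatchJob()
job.cancel()             # fine before the commit phase begins
job.committing = True    # commit phase starts
# job.cancel() would now raise RuntimeError
```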

[/column][column width=”47%” padding=”0%”]

Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Train door handles requiring passengers to lower the window to open the door from inside.
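The software equivalent is the confirmation step: a destructive action only proceeds if the caller gets past the ‘speed-hump’. A minimal sketch (function and names hypothetical):

```python
def delete_everything(confirm):
    """Destructive action guarded by an extra step.

    confirm is a callable standing in for an "Are you sure?" dialogue;
    the action only proceeds if it returns True.
    """
    if not confirm():
        return "cancelled"
    return "deleted"

delete_everything(lambda: False)   # "cancelled": the speed-hump worked
delete_everything(lambda: True)    # "deleted": the user confirmed
```

The second bullet’s caveat applies directly: if every action is guarded this way, users learn to pass `lambda: True` reflexively, and the extra step stops protecting anyone.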

[/column][column width=”47%” padding=”6%”]

Specialised affordances

 
■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round.

[/column][column width=”47%” padding=”0%”]

Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction.
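A “Did you mean…?” step can be approximated in a few lines with Python’s standard-library difflib: suggest the closest known term, but leave the user free to ignore it. (This is only a sketch of the pattern, not how Google or eBay actually implement their suggestions.)

```python
import difflib

def did_you_mean(query, known_terms):
    """Return a suggested correction, or None if the query looks fine."""
    matches = difflib.get_close_matches(query, known_terms, n=1, cutoff=0.6)
    if matches and matches[0] != query:
        return matches[0]   # offered as a suggestion only; the user may ignore it
    return None

did_you_mean("recieve", ["receive", "deceive"])   # suggests "receive"
did_you_mean("receive", ["receive", "deceive"])   # None: nothing to fix
```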

[/column]
[column width=”47%” padding=”6%”]

Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go.

[/column][column width=”47%” padding=”0%”]

Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
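In code, the distinction from an interlock is simply that the check warns and carries on, rather than refusing to proceed. A sketch (function and names hypothetical):

```python
import warnings

def start_engine(seatbelt_fastened):
    # Conditional warning: feedback is given, but the action proceeds
    # regardless, unlike a seatbelt-ignition interlock.
    if not seatbelt_fastened:
        warnings.warn("Seatbelt not fastened")
    return "engine running"

start_engine(seatbelt_fastened=False)   # warns, but the engine still starts
```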

[/column][end_columns]

Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.

Swoopo: Irrational escalation of commitment

Swoopo

Swoopo, a new kind of “entertainment shopping” auction site, takes Martin Shubik’s classic Dollar Auction game to a whole new, automated, mass participation level. It’s an example of the escalation of commitment, or a sunk cost fallacy, where we increase our commitment (in this case with real money) even though (in this case) most users’ positions are becoming less and less valuable.

The Cake Scraps has a good analysis of how this works:

It is a ‘auction’ site…sort of. Swoopo sells bids for $1. Each time you use a bid on an item the price is increased by $0.15 for that item. So here is an example:

Person A buys 5 bids from Swoopo for $5 total. Person A sees an auction for $1000 and places the first bid. The auction is now at $0.15. Person A now has a sunk cost of $1 (the cost of the bid they used). There is no way to get that dollar back, win or lose. If Person A wins they must pay the $0.15.

Person B also purchased $5 of bids. Person B sees the same auction and places the second bid. The auction price is now $0.30 (because each bid increases the cost by exactly 15 cents). Person B now has a sunk cost of $1. If Person B wins they must pay the $0.30. Swoopo now has $2 in the bank and the auction is at 30 cents.

This can happen with as many users as there are suckers to start accounts. Why are they suckers? Because everybody that does not have the top spot just loses the money they spent on bids. *Poof* Gone. If you think this sounds a little like gambling or a complete scam you are not alone. People get swept up into the auction and don’t want to get nothing for the money they spent on bids.

The key thing seems to be that some bidders will win items at lower than RRP, i.e. they get a good deal, but for every one of those, there are many, many others who have all paid for their bids (money going to Swoopo) and received nothing as a result. The house will always win.
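The arithmetic behind ‘the house will always win’ is easy to sketch. Working in integer cents (to avoid floating-point surprises), every 15-cent increment of the final price corresponds to one $1 bid somebody paid for, win or lose:

```python
def swoopo_takings(final_price_cents, bid_fee_cents=100, increment_cents=15):
    """Total revenue to the site for one auction, in cents."""
    n_bids = final_price_cents // increment_cents   # one paid bid per increment
    bid_revenue = n_bids * bid_fee_cents            # sunk costs, win or lose
    return bid_revenue + final_price_cents          # plus what the winner pays

# An item 'won' for $150.00 took 1000 bids to get there, so the site
# banks $1000 in bid fees plus $150 from the winner:
swoopo_takings(15000)   # 115000 cents, i.e. $1,150
```

So even when the winner gets a bargain relative to retail price, the losing bidders have collectively paid the difference, and more.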

Swoopo staff respond here and here (at Crunchgear).

As is obligatory with this blog, I need to ask: where else have systems been designed to use this behaviour-shaping technique? There must be many examples in auctions, games and gambling in general – but can the idea be applied to consumer products/services, using escalating commitment to shape user behaviour? Can this be applied to help users save energy, do more exercise, etc as opposed merely to extracting value from them with no benefit in return?

How to fit a normal bulb in a BC3 fitting and save £10 per bulb

BC3 and 2-pin bayonet fitting compared
Standard 2-pin bayonet cap (left) and 3-pin bayonet cap BC3 (right) fittings compared

Summary for mystified international readers: In the UK new houses/flats must, by law, have a number of light fittings which will ‘not accept incandescent filament bulbs’ (a ‘green’ idea). This has led to the development of a proprietary, arbitrary format of compact fluorescent bulb, the BC3, which costs a lot more than standard compact fluorescents, is difficult to obtain, and about which the public generally doesn’t know much (yet). If you’re so minded, it’s not hard to modify the fitting and save money.

A lot of visitors have found this blog recently via searching for information on the MEM BC3 3-pin bayonet compact fluorescent bulbs, where to get them, and why they’re so expensive. The main posts here discussing them, with background to what it’s all about, are A bright idea? and some more thoughts – and it’s readers’ comments which are the really interesting part of both posts.

There are so many stories of frustration there, of people trying to ‘do their bit’ for the environment, trying to fit better CFLs in their homes, and finding that instead of the subsidised or even free standard 2-pin bayonet CFLs available all over the place in a variety of improved designs, styles and quality, they’re locked in to having to pay 10 or 15 times as much for a BC3 bulb, and order online, simply because the manufacturer has a monopoly, and does not seem to supply the bulbs to normal DIY or hardware stores.

Frankly, the system is appalling, an example of exactly how not to design for sustainable behaviour. It’s a great ‘format lock-in’ case study for my research, but a pretty pathetic attempt to ‘design out’ the ‘risk’ of the public retro-fitting incandescent bulbs in new homes. This is the heavy-handed side of the legislation-ecodesign nexus, and it’s clearly not the way forward. Trust the UK to have pushed ahead with it without any thought of user experience.

Digital control round-up

An 'Apple' dongle

Mac as a giant dongle

At Coding Horror, Jeff Atwood makes an interesting point about Apple’s lock-in business model:

It’s almost first party only– about as close as you can get to a console platform and still call yourself a computer… when you buy a new Mac, you’re buying a giant hardware dongle that allows you to run OS X software.

There’s nothing harder to copy than an entire MacBook. When the dongle — or, if you prefer, the “Apple Mac” — is present, OS X and Apple software runs. It’s a remarkably pretty, well-designed machine, to be sure. But let’s not kid ourselves: it’s also one hell of a dongle.

If the above sounds disapproving in tone, perhaps it is. There’s something distasteful to me about dongles, no matter how cool they may be.

Of course, as with other dongles, there are plenty of people who’ve got round the Mac hardware ‘dongle’ requirement. Is it true to say (à la John Gilmore) that technical people interpret lock-ins (/other constraints) as damage and route around them?

Screenshot of Mukurtu archive website

Social status-based DRM

The BBC has a story about the Mukurtu Wumpurrarni-kari Archive, a digital photo archive developed by/for the Warumungu community in Australia’s Northern Territory. Because of cultural constraints, social status, gender and community background have been used to determine whether or not users can search for and view certain images:

It asks every person who logs in for their name, age, sex and standing within their community. This information then restricts what they can search for in the archive, offering a new take on DRM.

For example, men cannot view women’s rituals, and people from one community cannot view material from another without first seeking permission. Meanwhile images of the deceased cannot be viewed by their families.

It’s not completely clear whether it’s intended to help users perform self-censorship (i.e. they ‘know’ they ‘shouldn’t’ look at certain images, and the restrictions are helping them achieve that) or whether it’s intended to stop users seeing things they ‘shouldn’t’, even if they want to. I think it’s probably the former, since there’s nothing to stop someone putting in false details (but that does assume that the idea of putting in false details would be obvious to someone not experienced with computer login procedures; it may not).
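Mechanically, what the BBC describes is a form of attribute-based access control: each item carries restrictions that are checked against the viewer’s declared attributes. A very rough sketch of the kind of rules quoted above (the field names and rule logic are my invention, not Mukurtu’s actual data model):

```python
def may_view(viewer, item):
    """Check a viewer's declared attributes against an item's restrictions."""
    # e.g. men cannot view records of women's rituals
    if item.get("restricted_to_sex") and viewer["sex"] != item["restricted_to_sex"]:
        return False
    # material from another community requires permission first
    if item.get("community") and viewer["community"] != item["community"]:
        return item.get("permission_granted", False)
    return True

viewer = {"sex": "male", "community": "A"}
may_view(viewer, {"restricted_to_sex": "female"})         # False
may_view(viewer, {"community": "B"})                      # False without permission
may_view(viewer, {"community": "A"})                      # True
```

Of course, as noted above, the whole scheme rests on viewers entering true details about themselves.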

While from my western point of view, this kind of social status-based discrimination DRM seems complete anathema – an entirely arbitrary restriction on knowledge dissemination – I can see that it offers something aside from our common understanding of censorship, and if that’s ‘appropriate’ in this context, then I guess it’s up to them. It’s certainly interesting.

Nevertheless, imagining for a moment that there were a Warumungu community living in the EU, would DRM (or any other kind of access restriction) based on a) gender or b) social status not be illegal under European Human Rights legislation?

Disabled buttons

Disabling buttons

From Clientcopia:

Client: We don’t want the visitor to leave our site. Please leave the navigation buttons, but remove the links so that they don’t go anywhere if you click them.

It’s funny because the suggestion is such a crude way of implementing it, but it’s not actually that unlikely – a 2005 patent by Brian Shuster details a “program [that] interacts with the browser software to modify or control one or more of the browser functions, such that the user computer is further directed to a predesignated site or page… instead of accessing the site or page typically associated with the selected browser function” – and we’ve looked before at websites deliberately designed to break in certain browsers and at disabling right-click menus for arbitrary purposes.

Persuasion & control round-up

  • New Scientist: Recruiting Smell for the Hard Sell
    Image from New Scientist
    Samsung’s coercive atmospherics strategy involves the smell of honeydew melon:

    THE AIR in Samsung’s flagship electronics store on the upper west side of Manhattan smells like honeydew melon. It is barely perceptible but, together with the soft, constantly morphing light scheme, the scent gives the store a blissfully relaxed, tropical feel. The fragrance I’m sniffing is the company’s signature scent and is being pumped out from hidden devices in the ceiling. Consumers roam the showroom unaware that they are being seduced not just via their eyes and ears but also by their noses.

    In one recent study, accepted for publication in the Journal of Business Research, Eric Spangenberg, a consumer psychologist and dean of the College of Business and Economics at Washington State University in Pullman, and his colleagues carried out an experiment in a local clothing store. They discovered that when “feminine scents”, like vanilla, were used, sales of women’s clothes doubled; as did men’s clothes when scents like rose maroc were diffused.

    A spokesman from IFF revealed that the company has developed technology to scent materials from fibres to plastic, suggesting that we can expect a more aromatic future, with everything from scented exercise clothing and towels to MP3 players with a customised scent. As more and more stores and hotels use ambient scents, however, remember that their goal is not just to make your experience more pleasant. They want to imprint a positive memory, influence your future feelings about particular brands and ultimately forge an emotional link to you – and more importantly, your wallet.

    (via Martin Howard‘s very interesting blog, and the genius Mind Hacks)

  • Consumerist: 5 Marketing Tricks That Unleash Shopping Frenzies
    Beanie Babies
    The Consumerist’s Ben Popken outlines “5 Marketing Tricks That Unleash Shopping Frenzies”:

    * Artificially limit supply. They had a giant warehouse full of Beanie Babies, but released them in squirts to prolong the buying orgy.
    * Issue press releases about limited supply so news vans show up
    * Aggressively market to children. Daddy may not play with his kids as much as he should but one morning he can get up at the crack of dawn, get a Teddy Ruxpin, and be a hero.
    * Make a line of minute variations on the same theme to create the “collect them all” effect.
    * Make it only have one highly specialized function so you can sell one that laughs, one that sings, one that skydives, etc, ad nauseam.

    All of us are familiar with these strategies – whether consciously or not – but can similar ideas ever be employed in a way which benefits the consumer, or society in general, without actual deception or underhandedness? For example, can artificially limiting supply to increase demand ever be helpful? Certainly, artificially limiting supply to decrease demand might sometimes be helpful to consumers – if you knew you could get a healthy snack in 5 minutes, but an unhealthy one took an hour to arrive, you might be more inclined to go for the healthy one; if the number of parking spaces wide enough to take a large 4 x 4 in a city centre were artificially restricted, it might discourage someone from choosing to drive into the city in such a vehicle.

    But is it helpful – or ‘right’ – to use these types of strategy to further an aim which, perhaps, deceives the consumer, for the ‘greater good’ (and indeed the consumer’s own benefit, ultimately)? Should energy-saving devices be marketed aggressively to children, so that they pressure their parents to get one?

    (Image from Michael_L‘s Flickr stream)

  • Kazys Varnelis: Architecture of Disappearance
    Architecture of disappearance
    Kazys Varnelis notes “the architecture of disappearance”:

    I needed to show a new Netlab intern the maps from Banham’s Los Angeles, Architecture of Four Ecologies and realized that I had left the original behind. Luckily, Google Books had a copy here, strangely however, in their quest to remove copyrighted images, Google’s censors (human? algorithmic?) had gone awry and had started producing art such as this image.

    It’s not clear here whether there’s a belief that the visual appearance of the building itself is copyrighted (which surely cannot be the case – photographers’ rights (UK at least) are fairly clear on this) or whether, by effectively making the image useless, it prevents someone using an image from Google Books elsewhere. The latter is probably the case, but then why bother showing it at all?

    (Thanks to Katrin for this)

  • Fanatic Attack
    Finally, in self-regarding nonsense news, this blog’s been featured on Fanatic Attack, a very interesting, fairly new site highlighting “entrancement, entertainment, and an enhancement of curiosity”: people, organisations and projects that display a deep passion or obsession with a particular subject or theme. I’m grateful to be considered as such!
  • Biting Apple

    BBC News headline, 28 September 2007

    Interesting to see the BBC’s summary of the current iPhone update story: “Apple issues an update which damages iPhones that have been hacked by users”. I’m not sure that’s quite how Apple’s PR people would have put it, but it’s interesting to see that whoever writes those little summaries for the BBC website found it easiest to sum up the story in this way. This is being portrayed as Apple deliberately, strategically damaging the phones, rather than an update unintentionally causing problems with unlocked or modified phones.

    Regardless of what the specific issue is here, and whether unmodified iPhones have also lost functionality because of some problem with the update, can’t we just strip out all this nonsense? How many people who wanted an iPhone also wanted to be locked in to AT&T or whatever the local carrier will be in each market? Anyone? Who wants to be locked in to anything? What a waste of technical effort, sweat and customer goodwill: it’s utterly pathetic.

    This is exactly what Fred Reichheld‘s ‘Bad profits’ idea calls out so neatly:

    Whenever a customer feels misled, mistreated, ignored, or coerced, then profits from that customer are bad. Bad profits come from unfair or misleading pricing. Bad profits arise when companies save money by delivering a lousy customer experience. Bad profits are about extracting value from customers, not creating value.

    If bad profits are earned at the expense of customers, good profits are earned with customers’ enthusiastic cooperation. A company earns good profits when it so delights its customers that they willingly come back for more–and not only that, they tell their friends and colleagues to do business with the company.

    What is the question that can tell good profits from bad? Simplicity itself: How likely is it that you would recommend this company to a friend or colleague?

    If your iPhone’s just turned into the most stylish paperweight in the office, are you likely to recommend it to a colleague?

    More to the point, if Apple had moved – in the first place – into offering telecom services to go with the hardware, with high levels of user experience and a transparent pricing system, how many iPhone users and Mac evangelists wouldn’t have at least considered changing?