
Eight design patterns for errorproofing


One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, much of its influence in the design profession coming from Don Norman’s seminal Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, with the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or where a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are, I think, the two best ‘inspirations’ in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

Default print quality settings; donor card

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though this would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation.
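As a rough sketch of the pattern in code (the settings names here are hypothetical, not any real printer API), the default values do most of the behaviour-shaping work, while changing them stays possible but requires a deliberate act:

```python
from dataclasses import dataclass

@dataclass
class PrintSettings:
    # 'Draft' as the default: many users never change settings,
    # so the default itself does most of the behaviour-shaping
    quality: str = "draft"       # rather than "normal" or "high"
    double_sided: bool = True    # paper-saving default
    colour: bool = False         # ink-saving default

# A user who does nothing gets the 'good' behaviour...
settings = PrintSettings()
assert settings.quality == "draft"

# ...while opting out remains possible, but requires an explicit act
settings = PrintSettings(quality="high", colour=True)
```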

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity – e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

Interlock on microwave oven door; interlock on ATM – card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so it’s less likely you forget it).
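A minimal sketch of an interlock as a forcing function, using a hypothetical `MillingMachine` class (not any real control API): the start operation simply cannot proceed until the guard step is complete:

```python
class MillingMachine:
    def __init__(self):
        self.guard_closed = False
        self.running = False

    def close_guard(self):
        self.guard_closed = True

    def start(self):
        # The interlock: starting without the guard is impossible,
        # regardless of the operator's attitude to safety
        if not self.guard_closed:
            raise RuntimeError("guard must be closed before starting")
        self.running = True

machine = MillingMachine()
machine.close_guard()  # the forced first step
machine.start()        # only now permitted
```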


Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.


Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Older British Rail train doors, which required passengers to lower the window and reach outside to use the handle.
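A sketch of the extra-step pattern (hypothetical function, invented for illustration): a destructive operation demands a typed confirmation rather than a single click, which also resists the “always click OK” habit mentioned above:

```python
def delete_all_files(confirm: str = "") -> None:
    # The extra step: a typed word, not a click, stands between
    # the user and the destructive action
    if confirm != "delete":
        raise ValueError("type 'delete' to confirm")
    print("deleting...")

try:
    delete_all_files()          # an accidental invocation is blocked
except ValueError as err:
    print(err)

delete_all_files("delete")      # a deliberate one succeeds
```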


Specialised affordances

 
■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc.) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round.
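The physical keying can be mimicked in code; a toy sketch (the `MemoryCard` model is hypothetical, not a real driver): only one ‘orientation’ fits, so wrong insertion is an impossibility rather than a warning:

```python
class MemoryCard:
    def __init__(self, bevel: str):
        self.bevel = bevel  # position of the bevelled corner

def insert_card(card: MemoryCard) -> None:
    # The slot is keyed to match the bevel: only one way round 'fits',
    # so wrong insertion is impossible rather than merely warned against
    if card.bevel != "top-right":
        raise ValueError("card does not fit this way round")
    print("card seated correctly")

insert_card(MemoryCard(bevel="top-right"))
```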


Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction.
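A minimal sketch of the suggest-don’t-replace behaviour, using Python’s standard difflib for the fuzzy matching (the term list and search function are invented for illustration):

```python
import difflib

KNOWN_TERMS = ["nikon", "canon", "olympus", "pentax"]

def search(query: str) -> str:
    # Suggest a likely correction, but leave the user in control
    matches = difflib.get_close_matches(query.lower(), KNOWN_TERMS, n=1)
    if matches and matches[0] != query.lower():
        return f"Did you mean '{matches[0]}'? (still showing results for '{query}')"
    return f"Showing results for '{query}'"

print(search("nikkon"))  # -> Did you mean 'nikon'? ...
```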


Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go.
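A sketch of the one-portion-at-a-time idea, on the model of a soap dispenser (the class and its constants are hypothetical): each press releases exactly one fixed unit, with a short lock-out between presses:

```python
import time

class SoapDispenser:
    PORTION_ML = 1.5        # the 'correct' unit, decided by the designer
    MIN_INTERVAL_S = 2.0    # lock-out between portions

    def __init__(self):
        self._last_press = float("-inf")

    def press(self) -> float:
        now = time.monotonic()
        if now - self._last_press < self.MIN_INTERVAL_S:
            return 0.0      # too soon: no extra portion released
        self._last_press = now
        return self.PORTION_ML

dispenser = SoapDispenser()
print(dispenser.press())  # 1.5 – one portion
print(dispenser.press())  # 0.0 – a second press straight away gets nothing
```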


Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
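A sketch contrasting this with the interlock above (hypothetical function, not real vehicle firmware): the condition triggers feedback, but nothing is blocked:

```python
def seatbelt_check(engine_on: bool, belt_fastened: bool) -> None:
    if engine_on and not belt_fastened:
        # Feedback only – a light or chime in a real car.
        # Crucially nothing is raised or blocked: unlike an interlock,
        # the user is warned but can still proceed
        print("WARNING: seatbelt not fastened")

seatbelt_check(engine_on=True, belt_fastened=False)  # warns, doesn't stop you
```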


Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.

Friday quote: Friction

“If the point of contact between the product and the people becomes a point of friction, then the Industrial Designer has failed.”

Henry Dreyfuss, Designing for People, 1955


Cognitive friction is one thing, and generally a result rather than a deliberate strategy; process friction is something else, and can very much be a deliberate strategy, as well as an accidental consequence of poor or badly thought-out interaction design. This is closer to what Dreyfuss is getting at, I think.

In default, defiance

‘Choice of default’ is a theme which has come up a few times on the blog: in general, many people accept the options/settings presented to them, and do not question or attempt to alter them. The possibilities for controlling or shaping users’ behaviour in this way are, clearly, enormous; two interesting examples have recently been brought to my attention (thanks to Chris Weightman and Patrick Kalaher):

Send to FedEx Kinko's button in Adobe Reader

Recent versions of Adobe’s PDF creation and viewing software, Acrobat Professional and Adobe Reader (screenshot above), have ‘featured’ a button on the toolbar (and a link in the File menu) entitled “Send to FedEx Kinko’s”, which uploads the document to FedEx Kinko’s online printing service. As Gavin Clarke reports in The Register, this choice of default (the result of a tie-in between Adobe and FedEx) has irritated other printing companies and trade bodies sufficiently for Adobe to agree to remove the element from the software:

Adobe Systems has scrapped the “send to FedEx Kinkos” print button in Adobe Reader and Acrobat Professional, in the face of overwhelming opposition from America’s printing companies.

Adobe said today it would release an update to its software in 10 weeks that will remove the ability to send PDFs to FedEx Kinkos for printing at the touch of a button.

No doubt the idea of linking to a service that’s often the only choice presented to consumers in the tract towns of Silicon Valley made eminent sense to Adobe, itself based in San Jose, California. But the company quickly incurred the wrath of printers outside the Valley for including a button to their biggest competitor, in software used widely by the design and print industry.

I wonder how many users of Acrobat/Reader actually used the service? Did its inclusion change any users’ printing habits (i.e. they stopped using their current printer and used Kinko’s instead)? And was this due to pure convenience/laziness? Presumably Kinko’s could identify which of their customers originated from clicking the button – were they charged exactly the same as any other customer, or was this an opportunity for price discrimination?

As some of the comments – both on the Register story and on Adobe’s John Loiacono’s blog – have noted, the idea of a built-in facility to send documents to an external printing service is not bad in itself, but allowing the user to configure this, or allowing printing companies to offer their own one-click buttons to users, would be much more desirable from a user’s point of view.

In a sense, ‘choice of default’ could be the other side of process friction as a design strategy. By making some options deliberately easier – much easier – than the alternatives (which might actually be more beneficial to the user), the other options appear harder in comparison, which is effectively the same as making some options or methods harder in the first place. The new-PCs-pre-installed-with-Windows example is probably the most obvious modern instance of choice of default having a major effect on consumer behaviour, as an anonymous commenter noted here last year:

Ultimately, though, you can sum up the free-software tug-of-war this way: it’s easiest to get a Windows computer and use it as such. Next easiest to get a MacOS one and use it as such. Next easiest is a Linux computer, where the large barrier of having to install and configure an operating system yourself must be leapt. Also, it’s likely you don’t actually save any money upfront, because you probably end up buying a Windows box and wiping it to install Linux. Microsoft exacts their tax even if you won’t use the copy of Windows you’re supposedly paying them for.

Starbucks mug (photo by veryfotos)

Sometimes ‘choice of default’ can mean actually hiding the options which it’s undesirable for customers to choose:

Here’s a little secret that Starbucks doesn’t want you to know: They will serve you a better, stronger cappuccino if you want one, and they will charge you less for it. Ask for it in any Starbucks and the barista will comply without batting an eye. The puzzle is to work out why. The drink in question is the elusive “short cappuccino”–at 8 ounces, a third smaller than the smallest size on the official menu, the “tall,” and dwarfed by what Starbucks calls the “customer-preferred” size, the “Venti,” which weighs in at 20 ounces and more than 200 calories before you add the sugar.

The short cappuccino has the same amount of espresso as the 12-ounce tall, meaning a bolder coffee taste, and also a better one. The World Barista Championship rules, for example, define a traditional cappuccino as a “five- to six-ounce beverage.” This is also the size of cappuccino served by many continental cafés. Within reason, the shorter the cappuccino, the better.

This secret cappuccino is cheaper, too–at my local Starbucks, $2.35 instead of $2.65. But why does this cheaper, better drink–along with its sisters, the short latte and the short coffee–languish unadvertised? The official line from Starbucks is that there is no room on the menu board, although this doesn’t explain why the short cappuccino is also unmentioned on the comprehensive Starbucks Web site, nor why the baristas will serve you in a whisper rather than the usual practice of singing your order to the heavens.

The rest of this Slate article* from 2006, by Tim Harford, advances the idea that this kind of tactic is designed specifically to allow price discrimination:

This is the Starbucks way of sidestepping a painful dilemma over how high to set prices. Price too low and the margins disappear; too high and the customers do. Any business that is able to charge one price to price-sensitive customers and a higher price to the rest will avoid some of that awkward trade-off… Offer the cheaper product but make sure that it is available only to those customers who face the uncertainty and embarrassment of having to request it specifically.

Initially, one might think it a bit odd that the lower-priced item has survived at all as an option, given that it can only be a very small percentage of customers who are ‘in the know’ about it. But unlike a shop or company carrying a ‘secret product line’, which requires storage and so on, the short cappuccino can be made without needing any different ingredients, so it presumably makes sense to continue offering it.

Thinking about other similarly hidden options (especially ‘delete’ options when buying equipment) reveals how common this sort of practice has become. I’m forever unticking (extra-cost) options for insurance or faster delivery when ordering products online; even when in-store, the practice of staff presenting extended warranties and insurance as if they’re the default choice on new products is extremely widespread.

Perhaps a post would be in order rounding up ways to save money (or get a better product) by requesting hidden options, or requesting the deletion of unnecessary options – please feel free to leave any tips or examples in the comments. Remember, all progress depends on the unreasonable man (or woman).

*There is another tactic raised in the article, pertinent to our recent look at casino carpets, which I will get around to examining further in due course.

Process friction

WD-40

Koranteng Ofosu-Amaah kindly sent me a link to this article by Ben Hyde:

I once had a web product that failed big-time. A major contributor to that failure was the tedium of getting new users through the sign-up process. Each screen they had to step through triggered the loss of 10 to 20% of the users. Reducing the friction of that process was key to survival. It is a thousand times easier to get a cell phone or a credit card than it is to get a passport or a learner’s permit. That wasn’t the case two decades ago.

Public health experts have done a lot of work over the decades to create barriers between the public and dangerous items and to lower barriers to access to constructive ones. So we make it harder to get liquor, and easier to get condoms. Traffic calming techniques are another example of engineering that makes a system run more slowly.

I find these attempts to shift the temperature of entire systems fascinating. This is at the heart of what you’re doing when you write standards, but it’s entirely scale free… In the sphere of internet identity it is particularly puzzling how two countervailing forces are at work. One trying to raise the friction and one trying to lower it. Privacy and security advocates are attempting to lower the temp and increase the friction. On the other hand there are those who seek in the solution to the internet identity problem a way to raise the temperature and lower the friction, so that more rather than less transactions would take place.

The idea of ‘process friction’ is especially pertinent as applied to architectures of control. Simply, if you design a process to be difficult to carry out, fewer people will complete it, since – just as with frictional forces in a mechanical system – energy (whether real or metaphorical) is lost by the user at each stage.
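The compounding effect Hyde describes is easy to make concrete: if each screen loses a fixed fraction of the remaining users, completion decays geometrically. A quick sketch (the 15% figure is just the middle of his quoted range):

```python
def completion_rate(loss_per_step: float, steps: int) -> float:
    # Each screen keeps (1 - loss) of the users who reached it,
    # so the losses compound multiplicatively
    return (1 - loss_per_step) ** steps

for steps in (3, 5, 8):
    print(f"{steps} screens -> {completion_rate(0.15, steps):.0%} complete")
# 3 screens -> 61%, 5 -> 44%, 8 -> 27%: modest per-screen friction
# compounds into losing most users
```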

This is perhaps obvious, but is a good way to think about systems which are designed to prevent users carrying out certain tasks which might otherwise be easy – from copying music or video files, to sleeping on a park bench. Just as friction (brakes) can stop or slow down a car which would naturally roll down a hill under the force of gravity, so friction (DRM, or other architectures of control) attempts to stop or slow down the tendency for information to be copied, or for people to do what they do naturally. Sometimes the intention is actually to stop the proscribed behaviour (e.g. an anti-sit device); other times the intention is to force users to slow down or think about what they’re doing.

From a designer’s point of view, there are far more examples where reducing friction in a process is more important than introducing it deliberately – in a sense, isn’t this what usability is? Affordances are more valuable than disaffordances, hence the comparative rarity of architectures of control in design, but also why they stand out so much as frustrating or irritating.

The term cognitive friction is more specific than general ‘process friction’, but still very much relevant – as explained on the Cognitive Friction blog:

Cognitive Friction is a term first used by Alan Cooper in his book The Inmates are Running the Asylum, where he defines it like this:

“It is the resistance encountered by a human intellect when it engages with a complex system of rules that change as the problem permutes.”

In other words, when our tools manifest complex behaviour that does not fit our expectations, the result can be very frustrating.

Going back to the Ben Hyde article, the use of the temperature descriptions is interesting – he equates cooling with increasing the friction, making it more difficult to get things done (similarly to the idea of chilling effects), whereas my instinctive reaction would be the opposite (heat is often energy lost due to friction, hence a ‘hot’ system, rather than a cold system, is one more likely to have excessive friction in it – I see many architectures of control as, essentially, wasting human effort and creating entropy).

But I can see the other view equally well: after all, lubricating oils work better when warmed to reduce their viscosity, and ‘cold welds’ are an important subject of tribological research. Perhaps the best way to look at it is that, just as getting into a shower that’s too hot or too cold is uncomfortable, so a system which is not at the expected ‘temperature’ is also uncomfortable for the user.