All posts filed under “Choice Architecture”

Salt licked?

Salt shakers. Image from Daily Mail

UPDATE: See the detailed response below from Peter of Gateshead Council, which clarifies, corrects and expands upon some of the spin given by the Mail articles. The new shakers were supplied to the chip shop staff for use behind the counter: “Our main concern was around the amount of salt put on by staff seasoning food on behalf of customers before wrapping it up… Our observations… confirmed that customers were receiving about half of the recommended daily intake of salt in this way. We piloted some reduced hole versions with local chip shops who all found that none of their customers complained about the reduced saltiness.”

A number of councils in England have given fish & chip shops replacement salt shakers with fewer holes – from the Daily Mail:

Research has suggested that slashing the holes from the traditional 17 to five could cut the amount people sprinkle on their food by more than half.

And so at least six councils have ordered five-hole shakers — at taxpayers’ expense — and begun giving them away to chip shops and takeaways in their areas. Leading the way has been Gateshead Council, which spent 15 days researching the subject of salty takeaways before declaring the new five-hole cellars the solution.

Officers collected information from businesses, obtained samples of fish and chips, measured salt content and ‘carried out experiments to determine how the problem of excessive salt being dispensed could be overcome by design’. They decided that the five-hole pots would reduce the amount of salt being used by more than 60 per cent yet give a ‘visually acceptable sprinkling’ that would satisfy the customer.
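The council’s ‘more than 60 per cent’ figure is consistent with a very simple model in which the salt dispensed per shake is proportional to the number of holes. This is a back-of-the-envelope sketch only, assuming identical hole sizes and unchanged shaking behaviour (neither of which the articles confirm):

```python
# Back-of-the-envelope check: if salt flow per shake is proportional
# to the number of holes (ignoring hole size, clogging and shaking
# vigour), how big a cut does going from 17 holes to 5 give?

def flow_reduction(old_holes: int, new_holes: int) -> float:
    """Fractional reduction in salt dispensed per shake,
    assuming flow scales linearly with hole count."""
    return 1 - new_holes / old_holes

reduction = flow_reduction(17, 5)
print(f"{reduction:.0%}")  # 71% -- comfortably 'more than 60 per cent'
```

Of course, this only holds if customers don’t change how they shake, which is exactly the assumption questioned below.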

OK. This is interesting. This is where the unit bias, defaults, libertarian paternalism and industrial design come together, in the mundanity of everyday interaction. It’s Brian Wansink’s ‘mindless margin’ being employed strategically, politically – and just look at the reaction it’s got from the public (and from Littlejohn). A BBC story about a similar initiative in Norfolk also gives us the industry view:

A spokesman for the National Federation of Fish Friers called the scheme a “gimmick” and said customers would just shake the containers more.

Graham Adderson, 62, who owns the Downham Fryer, in Downham Market, said: “I think the scheme is hilarious. If you want to put salt on your fish and chips and there are only four holes, you’re just going to spend longer putting more on.”

I’m assuming Gateshead Council’s research took account of this effect, although there are so many ways that users’ habits could have been formed through prior experience that this ‘solution’ won’t apply to all users. There might be some customers who always put more salt on, before even tasting their food. There might be people who almost always think the fish & chips they get are too heavily salted anyway – plenty of people, anecdotally at least, used to buy Smith’s Salt ‘n’ Shake and not use the salt at all.

And there are probably plenty of people who will, indeed, end up consuming less salt, because of the heuristic of “hold salt shaker over food for n seconds” built up over many years of experience.
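To make the distinction concrete, here is an illustrative model (with entirely hypothetical numbers) of the two user strategies above: a time-based habit, for which the reduced flow rate translates directly into less salt, versus a target-amount strategy, where the user simply shakes for longer and compensates fully:

```python
# Illustrative model, hypothetical numbers: two salting strategies
# under the old (17-hole) and new (5-hole) shakers. Flow is assumed
# proportional to hole count; 1.0 is an arbitrary unit per hole-second.

def salt_dispensed(holes: int, seconds: float, flow_per_hole: float = 1.0) -> float:
    """Salt dispensed, assuming flow scales with hole count and time held."""
    return holes * seconds * flow_per_hole

# Strategy A: habit of 'hold salt shaker over food for n seconds'
habit_old = salt_dispensed(17, seconds=2)   # 34 units
habit_new = salt_dispensed(5, seconds=2)    # 10 units: about 71% less

# Strategy B: shake until a familiar target amount is reached
target = habit_old                          # user compensates fully...
seconds_needed_new = target / salt_dispensed(5, seconds=1)  # ...by shaking 6.8 s
```

In this toy model only strategy A consumes less salt; strategy B just takes longer, which is precisely the Federation’s “customers would just shake the containers more” objection.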

Overall: I actually quite like this idea: it’s clever, simple and non-intrusive, but I can see how the interpretation, the framing, is crucial. Clearly, when presented in the way the media have done here (as a government programme to eliminate customer choice, and force us all down a road decided by health bureaucrats), the initiative’s likely to elicit an angry reaction from a public sick of a “nanny state” interfering in every area of our lives. Politicians jumping on the Nudge bandwagon need to be very, very careful that this isn’t the way their initiatives are perceived and portrayed by the press (and many of them will be, of course): it needs to be very clear how each such measure actually benefits the public, and that message needs to be delivered extremely persuasively.

Final thought: Many cafés, canteens and so on have used sachets of salt, which customers apply themselves, for many years. The decision made by the manufacturers about the size of these portions is a major determinant of how much salt is used, because of the unit bias (people assume that one portion is the ‘right’ amount), and, just as with washing machine detergent, manipulation of this portion size could well be used as part of a strategy to influence the quantity used by customers. But would a similar salt sachet strategy (perhaps driven by manufacturers rather than councils) have provoked similar reactions? I’m not sure that it would. ‘Nanny manufacturer’ is less despised than ‘nanny state’, I think, certainly in the UK.

What do you think?

The asymmetry of the indescribable

Like the itchy label in my shirt, there’s something which has been niggling away at the back of my mind, ever since I started being exposed to ‘academic fields’, and boundaries between ‘subjects’ (probably as a young child). I’m sure others have expressed it much better, and, ironically, it probably has a name itself, and a whole discipline devoted to studying it.

It’s this:
The set of things/ideas/concepts/relationships/solutions/sets that have been named/defined is much, much, much smaller than the set of actual things/ideas/concepts/relationships/solutions/sets.

And yet without a name or definition for what you’re researching, you’ll find it difficult to research it, or at least to tell anyone what you’re doing. The set of things we can comprehend researching is thus limited to what we’ve already defined.

How do we ever advance, then? Are we not just forever sub-dividing the same limited field with which we’re already familiar? Or am I missing something? Is this a kind of (obvious) generalisation of the Sapir-Whorf hypothesis?

Relating it to my current research, as I ought to, the problems of choice architecture, defaults, framing, designed-in perceived affordances and so on are clearly special cases of the idea: the decision options people perceive as available to them can be, and are, used strategically to limit what decisions people make and how they understand things (e.g. Orwell’s Newspeak). But whether it’s done deliberately or not, the problem exists anyway.

Richard Thaler at the RSA

Richard H Thaler at the RSA

Richard Thaler, co-author of Nudge (which is extremely relevant to the Design with Intent research), gave a talk at the RSA in London today, and, though he touched on it only briefly, he clearly drew the links between design and behaviour change; I scribbled down some notes and quotes.

Nudges and the power of choice architecture

Nudge book cover
An ‘advance uncorrected page proof’ of Nudge I managed to get off Abebooks. Thanks to Hien Nguyen for the photo.

Nudge, by Richard Thaler and Cass Sunstein, is a publishing sensation of the moment, no doubt helped by Thaler’s work advising Barack Obama (many thanks to Johan Strandell for originally pointing me in Thaler and Sunstein’s direction). I’ve been reading the book in some detail over the last month or so, and while a full section-by-section review of its implications/applicability to ‘Design with Intent’ is in the works, this morning I saw that the Nudge blog’s John Balz had linked here with a post about the Oxford benches, so it seemed apposite to talk about it briefly.

Behavioural economics has (or ought to have) a lot of parallels with design psychology and usability research: it is effectively looking at how people’s cognitive biases actually cause them to understand, interpret and use economic systems, not necessarily in line with the intentions of the systems’ designers, and not necessarily in accordance with rational-man theory.

There is clearly a lot in common with examining how people actually understand and use technology and designed elements of the world around them, and there would seem to be a continual bottom-up and top-down iteration of understanding as the field develops: what users actually do is studied, then inferences are made about the thought processes that lead to that behaviour, then the experiment/system/whatever is refined to take those thought processes into account, and what users actually do is tested again, and so on. This is very much the way that many conscientious user-focused design consultancies work, in fact, often using ethnography and in-context user observation to determine what’s really going on in users’ heads and in their interactions with technology.

Dan Ariely‘s Predictably Irrational is an excellent recent book which lays bare many of the cognitive biases and heuristics guiding everyday human decision-making, and he does take the step of suggesting a number of extremely interesting ‘improvements’ to systems which would enable them to match the way people really make decisions – which are, effectively, examples of Design with Intent as I’d define it.

But Thaler and Sunstein go further: Nudge is pretty much an elaborated series of applications of techniques, derived from understanding these biases, to various social and economic ‘problems’, and a discussion of how guiding (nudging) people towards ‘better’ choices could have a great impact overall without restricting individual freedom to make different choices. They call it libertarian paternalism, and in itself the idea is not without controversy, at least when presented politically, even if it seems intuitively to be very much a part of everyday life already: when we ask someone, anyone, for advice, we are asking to have our decision guided. BJ Fogg might call it tunnelling; Seth Godin might express it in terms of permission marketing.

Choice architecture

For Thaler and Sunstein, choice architecture is the key: the way that sets of choices are designed, and the way that they are presented to people(/users) is the basis of shaping decisions. (There’s a massive parallel here with designing affordances and perceived affordances into systems, which isn’t difficult to draw.) The establishment of ‘choice architects’, as Thaler and Sunstein describe them, within companies and governments – people with specialised domain knowledge, but also understanding of biases, heuristics and how they affect their customers’ decisions, and how to frame the choices in the ‘right’ way – is an intriguing suggestion.

Clearly, any system which intentionally presents a limited number of choices is in danger of creating false dichotomies and decoy effects – either accidentally or deliberately (e.g. this [PDF, 300 kB]). Manipulation of defaults raises similar questions (Rajiv Shah is doing some great work in this area). But, depending on the degree of ‘paternalism’ (or coercion) intended, it may be that intentionally misleading choice architecture might be considered ‘ethical’ under some circumstances. Who knows?

We’ll look at Nudge in more detail in a future post, but suffice to say: it is a very interesting book – my copy’s annotated with over a hundred torn-up bits of Post-It note at present – and it seems to be placing designers, of various kinds, at the centre of taking these ideas further for social benefit.