Category Archives: Choice Architecture

Report: Most people just trying to get by

Cubicles (image by Michael Lokner, used under CC licence)

Most people, for most of their day, are trying to get by. Every day is essentially a series of problems, some minor, some major, some requiring more thought than others. Some we care a lot about; some we wish we didn’t have to. Some are welcome; some we even bring on ourselves because we enjoy solving them; others are deeply unwelcome. Some we care about initially, but then find we no longer do; some we don’t care about to start with, but they become important to us over time.

If…

(introducing behavioural heuristics)

Some heuristics extracted by workshop participants

EDIT (April 2013): An article based on the ideas in this post has now been published in the International Journal of Design – which is open-access, so it’s free to read/share. The article refines some of the ideas in this post, using elements from CarbonCulture as examples, and linking it all to concepts from human factors, cybernetics and other fields.

There are lots of models of human behaviour, and as the design of systems becomes increasingly focused on people, modelling behaviour has become more important for designers. As Jon Froehlich, Leah Findlater and James Landay note, “even if it is not explicitly recognised, designers [necessarily] approach a problem with some model of human behaviour”, and, of course, “all models are wrong, but some are useful”. One of the points of the DwI toolkit (post-rationalised) was to try to give designers a few different models of human behaviour relevant to different situations, via pattern-like examples.

I’m not going to get into what models are ‘best’ / right / most predictive for designers’ use here. There are people doing that more clearly than I can; also, there’s more to say than I have time to do at present. What I am going to talk about is an approach which has emerged out of some of the ethnographic work I’ve been doing for the Empower project, working on CarbonCulture with More Associates, where asking users questions about how and why they behaved in certain ways with technology (in particular around energy-using systems) led to answers which were resolvable into something like rules: I’m talking about behavioural heuristics.
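To make this concrete, here's a minimal, purely illustrative sketch (in TypeScript, with invented fields and an invented heuristic; not taken from the Empower/CarbonCulture work itself) of how an elicited heuristic might be expressed as an if-then rule that a designer can inspect:

```typescript
// Purely illustrative: a behavioural heuristic expressed as an if-then rule.
// The context fields and the heuristic itself are invented for this sketch.
interface Context {
  feelsCold: boolean;
  understandsThermostat: boolean;
}

interface Heuristic {
  statement: string;                    // the rule as a user might report it
  applies: (ctx: Context) => boolean;   // when the user acts on it
  action: string;                       // what they actually do
}

const thermostatHeuristic: Heuristic = {
  statement: "If I feel cold, I turn the thermostat up as far as it will go",
  applies: (ctx) => ctx.feelsCold && !ctx.understandsThermostat,
  action: "set thermostat to maximum",
};
```

Writing heuristics out this explicitly makes it easier to spot where a user's rule diverges from how the system actually works (a thermostat is not a throttle), and hence where design could usefully intervene.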

dConstructing a workshop

dConstruct 2011 workshop

A couple of weeks ago, at dConstruct 2011 in Brighton, 15 brave participants took part in my full-day workshop ‘Influencing behaviour: people, products, services and systems’, with which I was very kindly assisted by Sadhna Jain from Central Saint Martins. As a reference for the people who took part, for me, and for anyone else who might be intrigued, I thought I would write up what we did. The conference itself was extremely interesting, as usual, with a few talks which provoked more discussion than others, as much about presentation style as content, I think (others have covered the conference better than I can). And, of course, I met (and re-connected with) some brilliant people.

I’ve run quite a few workshops in both corporate and educational settings using the Design with Intent cards or worksheets (now also available as a free iPad app from James Christie) but this workshop aimed to look more broadly at how designers can understand and influence people’s behaviour. This is also the first ‘public’ workshop that I’ve done under the Requisite Variety name, which doesn’t mean much different in practice, but is something of a milestone for me as a freelancer.

In the previous post I outlined what I had planned, and while in the event the programme deviated somewhat from this, I think overall it was reasonably successful. Rather than using a case study (when people are paying to come to a workshop, I feel uneasy about effectively asking them to do work for someone else), we ran through a series of exercises intended to explore different aspects of how design and people’s behaviour relate to each other, and perhaps uncover some insights which would make it easier to incorporate a consideration of this into a design process.


dConstruct workshop: Influencing behaviour: people, products, services and systems

Sign above a radiator, Brighton Dome

I’m running a workshop on Wednesday 31st August at dConstruct 2011 in Brighton, and I thought it would be worthwhile explaining in a bit more detail what it’s about, and what we’ll be doing.

Here’s the summary from the dConstruct website:

dConstruct 2011
Whether we choose to do it or not, what we design is going to affect how users behave, so we might as well think about it, and—if we can—actually get good at it. Bridging the gap between physical and digital product design, a systems approach can help us understand how people interact with the different touchpoints they experience, how mental models and cognitive biases and heuristics influence the way people make decisions about what to do, and hence how we might apply that knowledge (for good).

In this full-day practical workshop, we’ll try a novel approach to design and behaviour, using ourselves as both designers and cybernetic guinea pigs in exploring and developing a combination of physical and digital experiences. You’ll learn how to improve your own decision-making and understanding of how your behaviour is influenced by the systems around you, as well as ways to influence others’ behaviour, through a new approach to designing at the intersection of people, products, services and systems.

So what will the day actually involve? (You’re entitled to ask: the above is admittedly vague.) I’ve run quite a lot of workshops in the last couple of years, mainly using the Design with Intent toolkit in one form or another to help groups generate concepts for specific behaviour change contexts, but this one is slightly different, taking advantage of a full day to explore more areas of how design and behaviour interact, in a way which I hope complements dConstruct’s overall theme this year of “bridging the gap between physical and digital product design” usefully and interestingly. Also, the concept of ‘design for behaviour change’ is probably no longer new and exciting (at least to the dConstruct audience) in quite the way it might have been a few years ago: a more nuanced, developed, thoughtful exploration is needed. We’ll be using some of the Design with Intent cards throughout the workshop, but they’re not the main focus.

My plan is for the workshop to have four stages (3 shorter ones in the morning, 1 longer one for the afternoon):

frog design on Design with Intent

Robert Fabricant of frog design – with whom I had a great discussion a couple of weeks ago in London – has an insightful new article up at frog’s Design Mind, titled, oddly enough, ‘Design with Intent: how designers can influence behaviour’ – which tackles the question of how, and whether, designers can and should see their work as being directed towards behaviour change, and the power that design can have in this kind of application.

It builds on a trend evident in frog’s own work in this field, most notably the Project Masiluleke initiative (which seems to have been incredibly successful in behaviour change terms), as well as a theme Robert’s identified talking to a range of practitioners as well as young designers: “We’re experiencing a sea change in the way designers engage with the world. Instead of aspiring to influence user behaviour from a distance, we increasingly want the products we design to have more immediate impact through direct social engagement.”

The recognition of this nascent trend echoes some of the themes of transformation design – a manifesto developed by Hilary Cottam’s former RED team at the Design Council – and also fits well into what’s increasingly called social design, or socially conscious design – a broad, diverse movement of designers from many disciplines, from service design to architecture, who are applying their expertise to social problems from healthcare to environment to education to communication. With the mantra that ‘we cannot not change the world’, groups such as Design21 and Project H Design, along with alert chroniclers such as Kate Andrews, are inspiring designers to see the potential for ‘impact through direct social engagement’: taking on the mantle of Victor Papanek and Buckminster Fuller, motivated by the realisation that design can be more than ‘the high-pitched scream of consumer selling’, more than simply reactive. Nevertheless, Robert’s focus on influencing people’s behaviour (much as I’ve tried to make clear with my own work on Design with Intent over the last few years) is an explicit emerging theme in itself, and is catching the interest of forward-looking organisations such as the RSA.

People

User centred design, constraint and reality

One of the issues Robert discusses is a question I’ve put to the audience in a number of presentations recently – fundamentally, is it still ‘User-Centred Design’ when the designer’s aim is to change users’ behaviour rather than accommodating it? As he puts it, “we influence behaviour and social practice from a distance through the products and services that we create based on our research and understanding of behaviour. We place users at the centre and develop products and services to support them. With UCD, designers are encouraged not to impose their own values on the experience.” Thus, “committing to direct behaviour design [my italics] would mean stepping outside the traditional frame of user-centred design (UCD), which provides the basis of most professional design today.”

Now, ‘direct behaviour design’ as a concept is redolent of determinism in architecture, or the more extreme end of behaviourism, where people (users / inhabitants / subjects) are seen as, effectively, components in a designed system which will respond to their environment / products / conditioning in a known, predictable way, and can thus be directed to behave in particular ways by changing the design of the system. It privileges the architect, the designer, the planner, the hidden persuader, the controller as a kind of director of behaviour, standing on the top floor observing what he’s wrought down below.

I’ll acknowledge that, in a less extreme form, this is often the intent (if not necessarily the result) behind much design for behaviour change (hence my definition for Design with Intent: ‘design that’s intended to influence, or result in, certain user behaviour’). But in practice, people don’t, most of the time, behave as predictably as this. Our behaviour – as Kurt Lewin, James Gibson, Albert Bandura, Don Norman, Herbert Simon, Daniel Kahneman, Amos Tversky and a whole line of psychologists from different fields have made clear – is a (vector) function of our physical environment (and how we perceive and understand it), our social environment (and how we perceive and understand it) and our cognitive decision processes about what to do in response to our perceptions and understanding, working within a bounded rationality that (most of the time) works pretty well. If we perceive that a design is trying to get us to behave in a way we don’t want, we display reactance to it. This is going to happen when you constrain people against pursuing a goal: even the concept of ‘direct behaviour design’ itself is likely to provoke some reactance from you, the reader. Go on: you felt slightly irritated by it, didn’t you?*

SIM Card poka-yoke

In some fields, of course, design’s aim really is to constrain and direct behaviour absolutely – e.g. “safety critical systems, like air traffic control or medical monitors, where the cost of failure [due to user behaviour] is never acceptable” (from Cairns & Cox, p.16). But decades of ergonomics, human factors and HCI research suggest that errorproofing works best when it helps the user achieve the goal he or she already has in mind. It constrains our behaviour, but it also makes it easier to avoid errors we don’t want. We don’t mind not being able to run the microwave oven with the door open (even though we resented seatbelt interlocks). We don’t mind only being able to put a SIM card in one way round. The design constraint doesn’t conflict with our goal: it helps us achieve it. (It would be interesting to know of cases in Japanese vs. Western manufacturing industry where employees resented the introduction of poka-yoke measures – were there any? What were the specific measures that irritated?)

Returning to UCD, then, I would argue that in cases where design with intent, or design for behaviour change, is aligned with what the user wants to achieve, it’s very much still user-centred design, whether enabling, motivating or constraining. It’s the best form of user-centred design, supporting a user’s goals while transforming his or her behaviour. Some of the most insightful current work on influencing user behaviour, from people such as Ed Elias at Bath and Tang Tang at Loughborough [PPT], starts with achieving a deeper understanding of user behaviour with existing products and systems, to identify better how to improve the design; it seems as though companies such as Onzo are also taking this approach.

Is design ever neutral?

Robert also makes the point that “every [design] decision we make exerts an influence of some kind, whether intended or not”. This argument parallels one of the defences made by Richard Thaler and Cass Sunstein to criticism of their libertarian paternalism concept: however you design a system, whatever choices you decide to give users, you inevitably frame users’ understanding and influence their behaviour. Even not making a design decision at all influences behaviour.

staggered crossing

If you put chairs round a table, people will sit down. You might see it as supporting your users’ goals – they want to be able to sit down – but by providing the chairs, you’ve influenced their behaviour. (Compare Seth Godin’s ‘no chair meetings’.) If you constrain people to three options, they will pick one of the three. If you give them 500 options, they won’t find it easy to choose well. If you give them no options, they can’t make a choice, but might not realise that they’ve been denied it. And so on. (This is sometimes referred to as ‘choice editing’, a phrase which provokes substantial reactance!) If you design a pedestrian crossing to guide pedestrians to make eye contact with drivers, you’ve privileged drivers over pedestrians and reinforced the hegemony of the motor car. If you don’t, you’ve shown contempt for pedestrians’ needs. Richard Buchanan and Johan Redström have both also dealt with this aspect of ‘design as rhetoric’, while Kristina Niedderer’s ‘performative objects’ are intended to increase users’ mindfulness of the interactions occurring.

Thaler and Sunstein’s argument (heavily paraphrased, and transposed from economics to design) is that as every decision we make about designing a system will necessarily influence user behaviour, we might as well try and put some thought into influencing the behaviour that’s going to be best for users (and society)**. And that again, to me, seems to come within the scope of user-centred design. It’s certainly putting the user – and his or her behaviour – at the centre of the design process. But then to a large extent – as Robert’s argued before – all (interaction) design is about behaviour. And perhaps all design is really interaction design (or ought to be considered as such during at least part of the process).

Persuasion, catalyst and performance design

Robert identifies three broad themes in using design to influence behaviour – persuasion design, catalyst design and performance design. ‘Persuasion design’ correlates very closely with the work on persuasive technology and persuasive design which has grown over the past decade, from B.J. Fogg’s Persuasive Technology Lab at Stanford to a world-wide collaboration of researchers and practitioners – including designers and psychologists – meeting at the Persuasive conferences (2010’s will be in Copenhagen), of which I’m proud to be a very small part. Robert firmly includes behavioural economics and choice architecture in his description of Persuasion Design, which is something that (so far at least) has not received an explicit treatment in the persuasive technology literature, although individual cognitive biases and heuristics have of course been invoked. I think I’d respectfully argue that choice architecture as discussed in an economic context doesn’t really care too much about persuasion itself: it aims to influence behaviours, but doesn’t explicitly see changing attitudes as part of that, which is very much part of persuasion.

‘Catalyst design’ is a great term – I’m not sure (other than as the name of lots and lots of small consultancies) whether it has any precedent in the design literature or whether Robert coined it himself (something Fergus Bisset asked me the other day on reading the article). On first sight, catalyst design sounds as though it might be identical with Buckminster Fuller’s trimtab metaphor – a small component added to a system which initiates or enables a much larger change to happen more easily (what I’ve tried to think of as ‘enabling behaviour‘). However, Robert broadens the discussion beyond this idea to talk about participatory and open design with users (such as Jan Chipchase‘s work – or, if we’re looking further back, Christopher Alexander and his team’s groundbreaking Oregon Experiment). In this sense, the designer is the catalyst, facilitating innovation and behaviour change. User-led innovation is a massive, and growing, field, with examples of both completely ground-up development (with no ‘designer as catalyst’ involved) and programmes where a designer or external expert can, through engaging with people who use and work with a system, really help transform it (Clare Brass’s SEED Foundation’s HiRise project comes to mind here). But it isn’t often spoken about explicitly in terms of behaviour change, so it’s interesting to see Robert present it in this context.

Finally, ‘performance design’, as Robert explains it, involves designers performing in some way, becoming immersed in the lives of the people for whom they are designing. From a behaviour change perspective, empathising with users’ mental models, understanding what motivates users during a decision-making process, and why certain choices are made (or not made), must make it easier to identify where and how to intervene to influence behaviour successfully.

Implications for designers working on behaviour change

It’s fantastic to see high-profile, influential design companies such as frog explicitly recognising the opportunities and possibilities that designers have to influence user behaviour for social benefit. The more this is out in the open as a defined trend, a way of thinking, the more examples we’ll have of real-life thinking along these lines, embodied in a whole wave of products and services which (potentially) help users, and help society solve problems with a significant behavioural component. (And, more to the point, give us a degree of evidence about which techniques actually work, in which contexts, with which users, and why – there are some great examples around at present, both concepts and real products – e.g. as collated here by Debra Lilley – but as yet we just don’t have a great body of evidence to base design decisions on.) It will also allow us, as users, to become more familiar with the tactics used to influence our behaviour, so we can actively understand the thinking that’s gone into the systems around us, and choose to reject or opt out of things which aren’t working in our best interests.

The ‘behavioural layer’ (credit to James Box of Clearleft for this term) is something designers need to get to grips with – even knowing where to start, when you’re faced with a design problem involving influencing behaviour, is something we don’t currently have a very good idea about. With my Design with Intent toolkit work, I’m trying to help with this bit of the process, alongside a lot of people interested, on many levels, in how design influences behaviour. It will be interesting over the next few years to see how frog and other consultancies develop expertise and competence in this field, how they choose to recruit the kind of people who are already becoming experts in it – and how they sell that expertise to clients and governments.

Update: Robert responds – The ‘Ethnography Defense’

Dan Lockton, Design with Intent / Brunel University, June 2009

*TU Eindhoven’s Maaike Roubroeks used this technique to great effect in her Persuasive 2009 presentation.
**The debate comes over who decides – and how – what’s ‘best’ for users and for society. Governments don’t necessarily have a good track record on this; neither do a lot of companies.

Eight design patterns for errorproofing


One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, much of its influence in the design profession coming from Don Norman’s seminal Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, with the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or where a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are the two best ‘inspirations’ I think, in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.


Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

Default print quality settings / Donor card

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though this would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation.
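As a hypothetical sketch of the pattern in code (names and values invented), the point is that whatever ships in the defaults object is what the majority of users will effectively ‘choose’:

```typescript
// Hypothetical print-dialogue settings: many users never open the advanced
// panel, so the defaults effectively decide their behaviour.
interface PrintSettings {
  quality: "draft" | "normal" | "high";
  doubleSided: boolean;
}

// A 'good' default saves ink and paper for everyone who never changes it;
// users who need high quality can still deliberately opt in.
const defaults: PrintSettings = { quality: "draft", doubleSided: true };

function print(overrides: Partial<PrintSettings> = {}): PrintSettings {
  return { ...defaults, ...overrides };
}

console.log(print());                    // most users: the defaults as shipped
console.log(print({ quality: "high" })); // a deliberate opt-in
```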

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

Interlock on microwave oven door / Interlock on ATM – card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so it’s less likely you’ll forget it).
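A minimal sketch of an interlock as a forcing function, using the microwave example (class and method names invented):

```typescript
// Hypothetical microwave model: start() simply refuses to run until the
// precondition (door closed) is met - the essence of an interlock.
class Microwave {
  private doorClosed = false;
  private running = false;

  closeDoor(): void { this.doorClosed = true; }
  openDoor(): void { this.doorClosed = false; this.running = false; } // opening also stops it

  start(): boolean {
    if (!this.doorClosed) {
      console.log("Interlock: close the door first");
      return false; // the 'error' (cooking with the door open) is impossible
    }
    this.running = true;
    return true;
  }
}

const oven = new Microwave();
oven.start();     // refused: door is open
oven.closeDoor();
oven.start();     // runs
```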


Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.
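For the lock-in side – keeping an operation going by preventing accidental cancellation, as the bullet above mentions – here is a hypothetical sketch (invented names):

```typescript
// Hypothetical lock-in: while an update is in progress, cancel() is locked
// out so the operation can't be aborted accidentally.
class FirmwareUpdater {
  private updating = false;

  begin(): void { this.updating = true; }
  finish(): void { this.updating = false; }

  cancel(): boolean {
    if (this.updating) {
      console.log("Lock-in: update in progress, cancel is disabled");
      return false;
    }
    return true;
  }
}

const updater = new FirmwareUpdater();
updater.begin();
updater.cancel();  // refused while the update runs
updater.finish();
updater.cancel();  // allowed once it's complete
```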


Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Train doors where passengers must lower the window and reach outside to use the handle
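A tiny hypothetical sketch of the confirmation flavour of this pattern (function names invented):

```typescript
// Hypothetical 'extra step': a destructive action only proceeds after an
// explicit confirmation - the "Are you sure?" speed-hump.
function deleteAccount(confirm: () => boolean): string {
  if (!confirm()) {
    return "deletion cancelled"; // the extra step caught a possible slip
  }
  return "account deleted";
}

// In a browser this might be wired to window.confirm("Really delete?").
console.log(deleteAccount(() => false)); // user declined at the extra step
```

The risk in the second bullet applies directly here: if every action demands confirmation, users soon learn to supply the equivalent of `() => true` without reading.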


Specialised affordances

■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc.) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round
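The format lock-in variant translates straightforwardly to software; here is a hypothetical sketch (the ‘magic’ header and names are invented):

```typescript
// Hypothetical format lock-in: a loader that only accepts files carrying a
// proprietary header, making everything else intentionally incompatible.
const MAGIC = "ACME1";

function load(file: string): string {
  if (!file.startsWith(MAGIC)) {
    throw new Error("Unsupported format: not an ACME file");
  }
  return file.slice(MAGIC.length); // the payload, for compatible files only
}

console.log(load("ACME1hello")); // works: the 'right shape' of file
// load("OTHERhello");           // throws: incompatible by design
```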


Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction
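A hypothetical sketch of the same idea (a toy typo table stands in for whatever matching eBay or Google actually use):

```typescript
// Hypothetical partial self-correction: suggest a likely intended query,
// but let the user ignore the correction ("Did you mean...?").
const typoMap: Record<string, string> = { nikcon: "nikon", cannon: "canon" };

function search(query: string): { results: string; searchedInstead?: string } {
  const correction = typoMap[query.toLowerCase()];
  if (correction) {
    // Search the corrected term, but keep the original available so the
    // user can undo the correction and search their term verbatim.
    return { results: `results for "${correction}"`, searchedInstead: query };
  }
  return { results: `results for "${query}"` };
}

console.log(search("nikcon"));
// -> results for "nikon", plus the option to search for "nikcon" as typed
```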


Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go
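The one-portion-at-a-time mechanism can be sketched hypothetically (numbers and names invented):

```typescript
// Hypothetical dispenser: each activation releases one fixed portion, and a
// refractory delay stops repeated activations releasing more straight away.
class SoapDispenser {
  private lastDispense = -Infinity;

  constructor(private portionMl: number, private delayMs: number) {}

  dispense(now: number): number {
    if (now - this.lastDispense < this.delayMs) {
      return 0; // too soon: no extra portion released
    }
    this.lastDispense = now;
    return this.portionMl; // exactly one 'unit' per activation
  }
}

const dispenser = new SoapDispenser(2, 3000);
console.log(dispenser.dispense(0));    // 2 ml
console.log(dispenser.dispense(1000)); // 0: within the delay window
console.log(dispenser.dispense(5000)); // 2 ml again
```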


Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
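As a final hypothetical sketch (invented names), the key contrast with the interlock pattern above is that nothing here blocks the user:

```typescript
// Hypothetical conditional warning: feedback when a condition holds, but no
// forcing function - the user can carry on regardless.
interface CarState {
  engineRunning: boolean;
  seatbeltFastened: boolean;
}

function seatbeltWarningLightOn(state: CarState): boolean {
  // Warn, don't force: unlike an interlock, the engine still runs.
  return state.engineRunning && !state.seatbeltFastened;
}

console.log(seatbeltWarningLightOn({ engineRunning: true, seatbeltFastened: false })); // true: light on
console.log(seatbeltWarningLightOn({ engineRunning: true, seatbeltFastened: true }));  // false: light off
```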


Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.