Category Archives: Product design

Sort some cards and win a copy of The Hidden Dimension

The Hidden Dimension

UPDATE: Thanks everyone – 10 participants in just a few hours! The study’s closed now – congratulations to Ville Hjelm whose book is now on its way…

If you’ve got a few minutes spare, are interested in the Design with Intent techniques, and fancy having a 1/10 chance of winning a brand-new copy of The Hidden Dimension, Edward T Hall’s classic 1966 work on proxemics (very worthwhile reading if you’re involved in any way with the design of environments, either architecturally or in an interaction design sense), then please do have a go at this quick card-sorting exercise [now closed].

It makes use of the pinball / shortcut / thoughtful user models I introduced in the last post, so it would probably make sense to have that page open alongside the exercise. The DwI techniques will be presented to you separately from the ‘lenses’ (Errorproofing, Cognitive, etc.), so don’t worry about those.

The free WebSort account I’m using for this only allows 10 participants, so be quick and get a chance of winning the book! Once 10 people have done it, I’ll draw one of the participants out of some kind of hat or bucket and email you to get your postal address.

The purpose here (a closed card-sort, to use Donna Spencer’s terminology) is, basically, to find out whether the pinball / shortcut / thoughtful models allow the DwI techniques to be assigned to particular ways of thinking about users, in a way that makes sense to a reasonable proportion of designers. There’s no right or wrong answer, but if 80% of you tell me that one technique seems to fit well with one model, while for another there’s no agreement at all, then that’s useful for me to know in developing the method.

Thanks for your help!

Card sorting

Cover photo from Amazon

Modelling users: Pinballs, shortcuts and thoughtfulness

The different approaches to influencing people’s behaviour outlined in the Design with Intent toolkit are pretty diverse. Working out how to apply them to your design problem, and when they might be useful, probably requires you, as a designer, to think of “the user” or “users” in a number of different ways in relation to the behaviour you’re trying to influence. I’ve thought about this a bit, and reckon there are maybe three main ways of thinking about users – models, if you like – that are relevant here. (These are distinct from the enabling / motivating / constraining idea.)

The ‘Pinball’ User

In this case, you think of users as, pretty much, very simple components of your system, to be shunted and pushed and pulled around by what you design, whether it’s physical or digital architecture. This view basically doesn’t assume that the user thinks at all, beyond basic reflex responses: the user’s a pinball (maybe a slightly spongey one) pushed and pulled this way and that, but with no requirement for understanding coming from within [1,2].

While things like deliberately uncomfortable benches or the Mosquito act against the Pinball User – effectively treating users like animals – this view need not always take such a negative approach. Lots of safety systems, even down to making sure different-shaped connectors are used on medical equipment to prevent mistaken connections, don’t mind whether the user understands what’s going on or not: it’s in everyone’s interests to influence behaviour on the most basic level possible, without requiring thought.

The ‘Shortcut’ User

Here, you think of users as being primarily interested in getting things done in the easiest way possible, with the least effort. So you assume that they’ll take shortcuts [3], or make decisions based on intuitive judgements (Is this like something I’ve used before? How does everyone else use this? I expect this does what it looks like it does), habits, and recognising simple patterns that influence how they behave.

The Shortcut User is assumed not to want to think too much about what’s going on behind the scenes, beyond getting things done. He or she isn’t always looking for the best way of doing things, just a way that seems to work [4]. If systems are designed well to accommodate this, they can feel very easy to use, intuitively usable, and influence user behaviour through these kinds of shortcut mechanisms rather than anything deeper [5]. But there’s clearly potential for manipulation, or for leading users into behaviour they wouldn’t choose for themselves if they weren’t taking the shortcuts.

The ‘Thoughtful’ User

Thoughtful Users are assumed to think about what they are doing, and why, analytically: open to being persuaded through reasoned arguments [6] about why some behaviours are better than others, maybe motivating them to change their attitudes about a subject as a precursor to changing their behaviour mindfully. If you think of your users as being Thoughtful, you will probably be presenting them with information and feedback which allows them to explore the implications of what they’re doing, and understand the world around them better.

Most of us like to model ourselves as Thoughtful Users, even though we know we don’t always fit the model. It’s probably the same with most people: so knowing when it’s appropriate to assume that users are being mindful of their behaviour, and when they’re not, will be important for the ‘success’ of a design.

_______________________________________

Of course there are many other ways you can model the user. But these seem like they might be useful ways of thinking, and of classifying the actual design techniques for influencing behaviour [PDF] according to what assumptions they make about users. I will try to test their validity / usefulness as part of my trials.

See the next post for how you can get involved with that…

Note:
From an academic psychology (or behavioural economics) point of view, the boundaries between these models of the user are maybe too blurry. Shortcut User is assumed to be pretty much like a System 1 thinker, while Thoughtful User is System 2. Straying inadvisedly into areas I know little about, Pinball User may well be assumed to be a user only using the R-complex, though I’m not sure this fits especially well. But if the distinctions are useful to designers, in the context of actually developing products and services, that (to be honest) is what matters from my point of view.

To develop the three models described above, I was inspired by this Interactions article (also here) by Hugh Dubberly, Paul Pangaro and Usman Haque, which draws on some of Kenneth Boulding’s General Systems Theory [PDF] to characterise a range of ordered system ‘combinations’ in which the user can be a part. The Pinball User corresponds pretty much to the ‘Reacting’ system; the Thoughtful User is a ‘Learning’ system; the Shortcut User is perhaps a special case of a ‘Regulating’ system (self-regulating negative feedback to damp variation, to minimise effort, boundedly rational).

I haven’t yet explored applying Leonard Talmy’s Force Dynamics, as suggested by Simon Winter, to these aspects of modelling the user / interaction. I will do so in due course.

[1] Perhaps analogous to Lawrence Lessig’s ‘pathetic dot’
[2] I’m grateful to Sebastian Deterding for the explicit concept of user-as-pinball
[3] Heuristics & biases (Kahneman & Tversky)
[4] Satisficing (Simon)
[5] Peripheral route persuasion (Petty & Cacioppo)
[6] Central route persuasion (Petty & Cacioppo)

Pinball photo by ktpupp on Flickr, CC-licensed. Shortcut photo (desire path) by Alan Stanton on Flickr, CC-licensed. Thoughtful photo by Esther Dyson on Flickr, CC-licensed.

‘Smart meters’: some thoughts from a design point of view

Here’s my (rather verbose) response to the three most design-related questions in DECC’s smart meter consultation that I mentioned earlier today. Please do get involved in the discussion that Jamie Young’s started on the Design & Behaviour group and on his blog at the RSA.

Q12 Do you agree with the Government’s position that a standalone display should be provided with a smart meter?

Meter in the cupboard

Free-standing displays (presumably wirelessly connected to the meter itself, as proposed in [7, p.16]) could be an effective way of bringing the meter ‘out of the cupboard’, making an information flow visible which was previously hidden. As Donella Meadows put it when comparing electricity meter placements [1, pp. 14-15], this provides a new feedback loop, “delivering information to a place where it wasn’t going before” and thus allowing consumers to modify their behaviour in response.

“An accessible display device connected to the meter” [2, p.8] or “series of modules connected to a meter” [3, p. 28] would be preferable to something where an extra step has to be taken for a consumer to access the data, such as only having a TV or internet interface for the information. That said, as noted [3, p.31], “flexibility for information to be provided through other formats (for example through the internet, TV) in addition to the provision of a display” via an open, publicly documented API would be the ideal situation. Interesting ‘energy dashboard’ TV interfaces have been trialled in projects such as live|work’s Low Carb Lane [6], and offer the potential for interactivity and extra information display supported by the digital television platform, but it would be a mistake to rely on this solely (even if simply because it will necessarily interfere with the primary reason that people have a television).

The question suggests that a single display unit would be provided with each meter, presumably with the householder free to position it wherever he or she likes. (A unit with interchangeable provision for a support stand, a magnet to allow positioning on a refrigerator, a sucker for use on a window and a hook to allow hanging on a wall would be ideal – the location of the display could be important, as noted [4, p. 49].) But the ability to connect multiple display units would certainly afford more possibilities for consumer engagement with the information displayed, as well as reducing the likelihood of a display unit being mislaid. For example, in shared accommodation where there are multiple residents, all of whom are expected to contribute to a communal electricity bill, each person being aware of others’ energy use (as in, for example, the Watt Watchers project [5]) could have an important social proof effect among peers.

Open APIs and data standards would permit ranges of aftermarket energy displays to be produced, ranging from simple readouts (or even pager-style alerters) to devices and kits which could allow consumers to perform more complex analysis of their data (along the lines of the user-led innovative uses of the Current Cost, for example [8]) – another route to having multiple displays per household.

Q13 Do you have any comments on what sort of data should be provided to consumers as a minimum to help them best act to save energy (e.g. information on energy use, money, CO2 etc)?

Low targets?
This really is the central question of the whole project, since the fundamental assumption throughout is that provision of this information will “empower consumers” and thereby “change our energy habits” [3, p.13]. It is assumed that feedback, including real-time feedback, on electricity usage will lead to behaviour change: “Smart metering will provide consumers with tools with which to manage their energy consumption, enabling them to take greater personal responsibility for the environmental impacts of their own behaviour” [4, p.46]; “Access to the consumption data in real time provided by smart meters will provide consumers with the information they need to take informed action to save energy and carbon” [3, p.31].

Nevertheless, with “the predicted energy saving to consumers… as low as 2.8%” [4, p.18], the actual effects of the information on consumer behaviour are clearly not considered likely to be especially significant (this figure is more conservative than the 5-15% range identified by Sarah Darby [9]). It would, of course, be interesting to know whether certain types of data or feedback, if provided in the context of a well-designed interface, could improve on this rather low figure: given the scale of the proposed roll-out of these meters (every household in the country) and the cost commitment involved, it would seem incredibly short-sighted not to take this opportunity to design and test better feedback displays which can, perhaps, improve significantly on the 2.8% figure.

(Part of the problem with a suggested figure as low as 2.8% is that it makes it much more difficult to defend the claim that the meters will offer consumers “important benefits” [3, p.27]. The benefits to electricity suppliers are clearer, but ‘selling’ the idea of smart meters to the public is, I would suggest, going to be difficult when the supposed benefits are so meagre.)

If we consider the use context of the smart meter from a consumer’s point of view, it should allow us to identify better which aspects are most important. What is a consumer going to do with the information received? How does the feedback loop actually occur in practice? How would this differ with different kinds of information?

Levels of display
Even aside from the actual ‘units’ debate (money / energy / CO2), there are many possible types and combinations of information that the display could show consumers, but for the purposes of this discussion, I’ll divide them into three levels:

(1) Simple feedback on current (& cumulative) energy use / cost (self-monitoring)
(2) Social / normative feedback on others’ energy use and costs (social proof + self-monitoring)
(3) Feedforward, giving information about the future impacts of behavioural decisions (simulation & feedforward + kairos + self-monitoring)

These are by no means mutually exclusive and I’d assume that any system providing (3) would also include (1), for example.

Nevertheless, it is likely that (1) would be the cheapest, lowest-common-denominator system to roll out to millions of homes, without (2) or (3) included – so if thought isn’t given to these other levels, it may be that (1) is all consumers get.

I’ve done mock-ups of the sort of thing each level might display (of course these are just ideas, and I’m aware that a) I’m not especially skilled in interface design, despite being very interested in it; and b) there’s no real research behind these) in order to have something to visualise / refer to when discussing them.

(1) Simple feedback on current (& cumulative) energy use and cost

I’ve tried to express some of the concerns I have over a very simple, cheap implementation of (1) in a scenario, which I’m not claiming to be representative of what will actually happen – but the narrative is intended to address some of the ways this kind of display might be useful (or not) in practice:

Jenny has just had a ‘smart meter’ installed by someone working on behalf of her electricity supplier. It comes with a little display unit that looks a bit like a digital alarm clock. There’s a button to change the display mode to ‘cumulative’ or ‘historic’ but at present it’s set on ‘realtime’: that’s the default setting.

Jenny attaches it to her kitchen fridge with the magnet on the back. It’s 4pm and it’s showing a fairly steady value of 0.5 kW, 6 pence per hour. She opens the fridge to check how much milk is left, and when she closes the door again Jenny notices the figure’s gone up to 0.7 kW, but it drops again soon after the door’s closed – first to 0.6 kW, then back down to 0.5 kW after a few minutes. Then her two teenage children, Kim and Laurie, arrive home from school – they switch on the TV in the living room and the meter reading shoots up to 0.8 kW, then suddenly to 1.1 kW. What’s happened? Jenny’s not sure why it’s changed so much. She walks into the living room and Kim tells her that Laurie’s gone upstairs to play on his computer. So it must be the computer, monitor, etc.

Two hours later, while the family’s sitting down eating dinner (with the TV on in the background), Jenny glances across at the display and sees that it’s still reading 1.1 kW, 13 pence per hour.

“Is your PC still switched on, Laurie?” she asks.
“Yeah, Mum,” he replies.
“You should switch it off when you’re not using it; it’s costing us money.”
“But it needs to be on, it’s downloading stuff.”

Jenny’s not quite sure how to respond. She can’t argue with Laurie: he knows a lot more than her about computers. The phone rings and Kim puts the TV on standby to reduce the noise while talking. Jenny notices the display reading has gone down slightly to 1.0 kW, 12 pence per hour. She walks over and switches the TV off fully, and sees the reading go down to 0.8 kW.

Later, as it gets dark and lights are switched on all over the house, along with the TV being switched on again, and Kim using a hairdryer after washing her hair, with her stereo on in the background and Laurie back at his computer, Jenny notices (as she loads the tumble dryer) that the display has shot up to 6.5 kW, 78 pence per hour. When the tumble dryer’s switched on, that goes up even further to 8.5 kW, £1.02 per hour. The sight of the £ sign shocks her slightly – can they really be using that much electricity? It seems like the kids are costing her even more than she thought!

But what can she really do about it? She switches off the TV and sees the display go down to 8.2 kW, 98 pence per hour, but the difference seems so slight that she switches it on again – it seems worth 4 pence per hour. She decides to have a cup of tea and boils the kettle that she filled earlier in the day. The display shoots up to 10.5 kW, £1.26 per hour. Jenny glances at the display with a pained expression, and settles down to watch TV with her tea. She needs a rest: paying attention to the display has stressed her out quite a lot, and she doesn’t seem to have been able to do anything obvious to save money.

Six months later, although Jenny’s replaced some light bulbs with compact fluorescents that were being given away at the supermarket, and Laurie’s new laptop has replaced the desktop PC, a new plasma TV has more than cancelled out the reductions. The display is still there on the fridge door, but when the batteries powering the display run out, and it goes blank, no-one notices.

The main point I’m trying to get across there is that with a very simple display, the possible feedback loop is very weak. It relies on the consumer experimenting with switching items on and off and seeing the effect it has on the readings, which – while it will initially have a certain degree of investigatory, exploratory interest – may well quickly pall when everyday life gets in the way. Now, without the kind of evidence that’s likely to come out of research programmes such as the CHARM project [10], it’s not possible to say whether levels (2) or (3) would fare any better, but giving a display the ability to provide more detailed levels of information – particularly if it can be updated remotely – massively increases the potential for effective use of the display to help consumers decide what to do, or even to think about what they’re doing in the first place, over the longer term.
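The arithmetic behind a level (1) display really is this trivial – a minimal sketch, assuming the flat tariff of 12p/kWh that the scenario’s figures imply (0.5 kW showing as 6 pence per hour):

```python
# Minimal sketch of the level (1) display logic: an instantaneous power
# reading converted to a running cost rate. The flat 12p/kWh tariff is an
# assumption, back-calculated from the scenario's figures.

TARIFF_PENCE_PER_KWH = 12.0  # hypothetical flat tariff

def cost_per_hour(power_kw: float) -> str:
    """Format an instantaneous power reading as a cost rate."""
    pence = power_kw * TARIFF_PENCE_PER_KWH
    if pence >= 100:
        return f"{power_kw:.1f} kW, £{pence / 100:.2f} per hour"
    return f"{power_kw:.1f} kW, {pence:.0f} pence per hour"

# Readings from the scenario:
print(cost_per_hour(0.5))   # 0.5 kW, 6 pence per hour
print(cost_per_hour(8.5))   # 8.5 kW, £1.02 per hour
print(cost_per_hour(10.5))  # 10.5 kW, £1.26 per hour
```

Which is really the point of the scenario: the display does nothing but this conversion, so all the interpretation – working out *which* device is responsible, and what to do about it – is left entirely to the consumer.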

(2) Social / normative feedback on others’ energy use and costs

A level (2) display would (in a much less cluttered form than what I’ve drawn above!) combine information about ‘what we’re doing’ (self-monitoring) with a reference, a norm – what other people are doing (social proof) – either people in the same neighbourhood (to facilitate community discussion), or a more representative comparison such as ‘other families like us’, e.g. people with the same number of children of roughly the same age, living in similar-sized houses. There are studies going back to the 1970s (e.g. [11, 12]) showing dramatic (2× or 3×) differences in the amount of energy used by similar families living in identical homes, suggesting that the behavioural component of energy use can be significant. A display allowing this kind of comparison could help make consumers aware of their own standing in this context.

However, as Wesley Schultz et al [13] showed in California, this kind of feedback can lead to a ‘boomerang effect’, where people who are told they’re doing better than average then start to care less about their energy use, leading to it increasing back up to the norm. It’s important, then, that any display using this kind of feedback treats a norm as a goal to achieve only on the way down. Schultz et al went on to show that by using a smiley face to demonstrate social approval of what people had done – affective engagement – the boomerang effect can be mitigated.
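As a sketch of how a level (2) display might pair the descriptive norm with that kind of injunctive signal – the thresholds, wording and smiley here are purely illustrative assumptions, not a tested design:

```python
# Illustrative sketch of level (2) normative feedback, following the
# Schultz et al. [13] finding: pair the descriptive norm (how your usage
# compares) with an injunctive signal (a smiley) so that below-norm
# households aren't nudged back up towards the average (the 'boomerang
# effect'). All wording and thresholds are hypothetical.

def normative_feedback(own_kwh: float, norm_kwh: float) -> str:
    ratio = own_kwh / norm_kwh
    if ratio <= 1.0:
        # At or below the norm: append social approval, so the comparison
        # isn't read as licence to use more.
        return (f"You used {own_kwh:.0f} kWh vs. {norm_kwh:.0f} kWh "
                f"for similar homes :)")
    return (f"You used {own_kwh:.0f} kWh vs. {norm_kwh:.0f} kWh "
            f"for similar homes ({(ratio - 1) * 100:.0f}% above)")

print(normative_feedback(80, 100))   # below the norm: smiley appended
print(normative_feedback(130, 100))  # above the norm: "30% above"
```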

(3) Feedforward, giving information about the future impacts of behavioural decisions

A level (3) display would give consumers feedforward [14] – effectively, simulation of what the impact of their behaviour would be (e.g. switching on this device now rather than at a time when there’s a lower tariff – Economy 7 or a successor) – and tips about how to use things more efficiently, at the right moment (kairos) and in the right kind of environment for them to be useful. Whereas ‘Tips of the Day’ in software frequently annoy users [15] because they get in the way of a user’s immediate task, with something relatively passive such as a smart meter display this could be a more useful application for them. The networked capability of the smart meter means that the display could be updated frequently with new sets of tips, perhaps based on seasonal or weather conditions (“It’s going to be especially cold tonight – make sure you close all the curtains before you go to bed, and save 20p on heating”) or even special tariff changes for particular periods of high demand (“Everyone’s going to be putting the kettle on during the next ad break in [major event on TV]. If you’re making tea, do it now instead of in 10 minutes’ time, and get a 50p discount on your next bill”).
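The ‘run it now vs. later’ part of feedforward is, at its simplest, just the same load priced under two tariff periods. A sketch – the two-rate, Economy 7-style figures are assumptions for illustration only:

```python
# Sketch of the simplest form of feedforward: estimating what a given
# appliance run would cost now vs. delayed into a cheaper tariff period.
# The two-rate (Economy 7-style) figures are purely illustrative.

DAY_RATE_P_PER_KWH = 12.0   # hypothetical daytime rate
NIGHT_RATE_P_PER_KWH = 5.0  # hypothetical off-peak rate

def feedforward(load_kw: float, hours: float) -> str:
    """Compare the cost of a load run now vs. in the off-peak period."""
    kwh = load_kw * hours
    now_p = kwh * DAY_RATE_P_PER_KWH
    later_p = kwh * NIGHT_RATE_P_PER_KWH
    return (f"Running this now: {now_p:.0f}p. "
            f"After the off-peak rate starts: {later_p:.0f}p "
            f"(save {now_p - later_p:.0f}p by waiting)")

# e.g. a 2 kW tumble dryer running for 1.5 hours:
print(feedforward(2.0, 1.5))
# Running this now: 36p. After the off-peak rate starts: 15p (save 21p by waiting)
```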

Disaggregated data: identifying devices
This level (3) display doesn’t require any ability to know what devices a consumer has, or to be able to disaggregate electricity use by device. It can make general suggestions that, if not relevant, a consumer can ignore.

But what about actually disaggregating the data for particular devices? Surely this must be an aim for a really ‘smart’ meter display. Since [4, p.52] notes – in the context of discussing privacy – that “information from smart meters could… make it possible… to determine… to a degree, the types of technology that were being used in a property”, this information should clearly be offered to consumers themselves, if the electricity suppliers are going to do the analysis (I’ve done a bit of a possible mock-up, using a more analogue dashboard style).

Disaggregated data dashboard

Whether the data are processed in the meter itself, or upstream at the supplier and then sent back down to individual displays, and whether the devices are identified from some kind of signature in their energy use patterns, or individual tags or extra plugs of some kind, are interesting technology questions, but from a consumer’s point of view (so long as privacy is respected), the mechanism perhaps doesn’t matter so much. Having the ability to see what device is using what amount of electricity, from a single display, would be very useful indeed. It removes the guesswork element.
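One naive version of that signature idea – very much an illustrative sketch of the principle, not how any real disaggregation product works – is to watch for step changes in the whole-house load and match each step against a library of known device power draws (the library here is hypothetical):

```python
# Naive sketch of edge-based disaggregation: detect step changes in the
# whole-house load and match them to a hypothetical library of device
# power signatures. Real systems use far richer signatures than a single
# steady-state figure; this just illustrates the principle.

DEVICE_SIGNATURES_KW = {  # hypothetical steady-state draws
    "kettle": 2.0,
    "tumble dryer": 2.5,
    "TV": 0.3,
}

def label_events(readings_kw, tolerance_kw=0.1):
    """Return (index, device, 'on'/'off') for each matched step change."""
    events = []
    for i in range(1, len(readings_kw)):
        step = readings_kw[i] - readings_kw[i - 1]
        for device, draw in DEVICE_SIGNATURES_KW.items():
            if abs(abs(step) - draw) <= tolerance_kw:
                events.append((i, device, "on" if step > 0 else "off"))
                break
    return events

# Total load: 0.5 kW base; kettle on, then off; TV on.
print(label_events([0.5, 2.5, 2.5, 0.5, 0.8]))
# [(1, 'kettle', 'on'), (3, 'kettle', 'off'), (4, 'TV', 'on')]
```

Even this toy version shows why disaggregation changes the interface question so much: the display can name the device, rather than leaving the consumer to deduce it by switching things on and off.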

Now, Sentec’s Coracle technology [16] is presumably ready for mainstream use, with an agreement signed with Onzo [17], and ISE’s signal-processing algorithms can identify devices down to the level of makes and models [18], so it’s quite likely that this kind of technology will be available for smart meters for consumers fairly soon. But the question is whether it will be something that all customers get – i.e. as a recommendation of the outcome of the DECC consultation – or an expensive ‘upgrade’. The fact that the consultation doesn’t mention disaggregation very much worries me slightly.

If disaggregated data by device were to be available for the mass-distributed displays, clearly this would significantly affect the interface design used: combining this with, say, a level (2) type social proof display could – even if via a website rather than on the display itself – let a consumer compare how efficient particular models of electrical goods are in use, by using the information from other customers of the supplier.

In summary, for Q13 – and I’m aware I haven’t addressed the “energy use, money, CO2 etc” aspect directly, as there are people much better qualified to do that – I feel that the more ability any display has to provide information of different kinds to consumers, the more opportunities there will be to do interesting and useful things with that information (and the data format and API must be open enough to allow this). In the absence of more definitive information about what kind of feedback has the most behaviour-influencing effect on what kind of consumer, in what context, and so on, it’s important that the display be as adaptable as possible.

Q14 Do you have comments regarding the accessibility of meters/display units for particular consumers (e.g. vulnerable consumers such as the disabled, partially sighted/blind)?

The inclusive design aspects of the meters and displays could be addressed through an exclusion audit, applying something such as the University of Cambridge’s Exclusion Calculator [19] to any proposed designs. Many solutions which would benefit particular consumers with special needs would also potentially be useful for the population as a whole – e.g. a buzzer or alarm signalling that a device has been left on overnight which isn’t normally, or (with disaggregation capability) notifying the consumer that, say, the fridge has been left open, would be pretty useful for everyone, not just the visually impaired or people with poor memory.

It seems clear that having open data formats and interfaces for any device will allow a wider range of things to be done with the data, many of which could be very useful for vulnerable users. Still, fundamental physical design questions about the device – how long the batteries last for, how easy they are to replace for someone with poor eyesight or arthritis, how heavy the unit is, whether it will break if dropped from hand height – will all have an impact on its overall accessibility (and usefulness).

Thinking of ‘particular consumers’ more generally, as the question asks, suggests a few other issues which need to be addressed:

– A website-only version of the display data (as suggested at points in the consultation document) would exclude a lot of consumers who are without internet access, without computer understanding, with only dial-up (metered) internet, or simply not motivated or interested enough to check – i.e., it would be significantly exclusionary.

– Time-of-Use (ToU) pricing will rely heavily on consumers actually understanding it and its implications, and changing their behaviour accordingly. Simply charging consumers more automatically, without them having good enough feedback to understand what’s going on, only benefits electricity suppliers. If demand- or ToU-related pricing is introduced, “the potential for customer confusion… as a result of the greater range of energy tariffs and energy related information” [4, p. 49] is going to be significant. The design of the interface, and of how the pricing structure works, is going to be extremely important here, and even so may still exclude a great many consumers who do not or cannot understand the structure.

– The ability to disable supply remotely [4, p. 12, p.20] will no doubt provoke significant reaction from consumers, quite apart from the terrible impact it will have on the most vulnerable consumers (the elderly, the very poor, and people for whom a reliable electricity supply is essential for medical reasons), regardless of whether they are at fault (i.e. non-payment) or not. There WILL inevitably be errors. Imagine the newspaper headlines when an elderly person dies from hypothermia. Disconnection may only occur in “certain well-defined circumstances” [3, p. 28], but these will need to be made very explicit.

– “Smart metering potentially offers scope for remote intervention… [which] could involve direct supplier or distribution company interface with equipment, such as refrigerators, within a property, overriding the control of the householder” [4, p. 52] – this simply offers further fuel for consumer distrust of the meter programme (rightly so, to be honest). As Darby [9] notes, “the prospect of ceding control over consumption does not appeal to all customers”. Again, this remote intervention, however well-regulated it might be supposed to be if actually implemented, will not be free from error. “Creating consumer confidence and awareness will be a key element of successfully delivering smart meters” [4, p.50] does not sit well with the realities of installing this kind of channel for remote disconnection or manipulation in consumers’ homes, and attempting to bury these issues by presenting the whole thing as entirely beneficial for consumers will be seen through by intelligent people very quickly indeed.

– Many consumers will simply not trust such new meters with any extra remote disconnection ability – it completely removes the human, the compassion, the potential to reason with a real person. Especially if the predicted energy saving to consumers is as low as 2.8% [4, p.18], many consumers will (perhaps rightly) conclude that the smart meter is being installed primarily for the benefit of the electricity company, and simply refuse to allow the contractors into their homes. Whether this will lead to a niche for a supplier which does not mandate installation of a meter – and whether this would be legal – are interesting questions.

Dan Lockton, Researcher, Design for Sustainable Behaviour
Cleaner Electronics Research Group, Brunel Design, Brunel University, London, June 2009

[1] Meadows, D. Leverage Points: Places to Intervene in a System. Sustainability Institute, 1999.

[2] DECC. Impact Assessment of smart / advanced meters roll out to small and medium businesses, May 2009.

[3] DECC. A Consultation on Smart Metering for Electricity and Gas, May 2009.

[4] DECC. Impact Assessment of a GB-wide smart meter roll out for the domestic sector, May 2009.

[5] Fischer, J. and Kestner, J. ‘Watt Watchers’, 2008.

[6] DOTT / live|work studio. ‘Low Carb Lane’, 2007.

[7] BERR. Impact Assessment of Smart Metering Roll Out for Domestic Consumers and for Small Businesses, April 2008.

[8] O’Leary, N. and Reynolds, R. ‘Current Cost: Observations and Thoughts from Interested Hackers’. Presentation at OpenTech 2008, London. July 2008.

[9] Darby S. The effectiveness of feedback on energy consumption. A review for DEFRA of the literature on metering, billing and direct displays. Environmental Change Institute, University of Oxford. April 2006.

[10] Kingston University, CHARM Project. 2009

[11] Socolow, R.H. Saving Energy in the Home: Princeton’s Experiments at Twin Rivers. Ballinger Publishing, Cambridge MA, 1978

[12] Winett, R.A., Neale, M.S., Williams, K.R., Yokley, J. and Kauder, H., 1979. ‘The effects of individual and group feedback on residential electricity consumption: three replications’. Journal of Environmental Systems, 8, pp. 217-233.

[13] Schultz, P.W., Nolan, J.M., Cialdini, R.B., Goldstein, N.J. and Griskevicius, V., 2007. ‘The Constructive, Destructive and Reconstructive Power of Social Norms’. Psychological Science, 18 (5), pp. 429-434.

[14] Djajadiningrat, T., Overbeeke, K. and Wensveen, S., 2002. ‘But how, Donald, tell us how?: on the creation of meaning in interaction design through feedforward and inherent feedback’. Proceedings of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques. ACM Press, New York, p. 285-291.

[15] Business of Software discussion community (part of ‘Joel on Software’), ‘”Tip of the Day” on startup, value to the customer’, August 2006.

[16] Sentec. ‘Coracle: a new level of information on energy consumption’, undated.

[17] Sentec. ‘Sentec and Onzo agree UK deal for home energy displays’, 28th April 2008.

[18] ISE Intelligent Sustainable Energy, ‘Technology’, undated.

[19] Engineering Design Centre, University of Cambridge. Inclusive Design Toolkit: Exclusion Calculator, 2007-8.

frog design on Design with Intent

Robert Fabricant of frog design – with whom I had a great discussion a couple of weeks ago in London – has an insightful new article up at frog’s Design Mind, titled, oddly enough, ‘Design with Intent: how designers can influence behaviour’ – which tackles the question of how, and whether, designers can and should see their work as being directed towards behaviour change, and the power that design can have in this kind of application.

It builds on a trend evident in frog’s own work in this field, most notably the Project Masiluleke initiative (which seems to have been incredibly successful in behaviour change terms), as well as a theme Robert’s identified talking to a range of practitioners as well as young designers: “We’re experiencing a sea change in the way designers engage with the world. Instead of aspiring to influence user behaviour from a distance, we increasingly want the products we design to have more immediate impact through direct social engagement.”

The recognition of this nascent trend echoes some of the themes of transformation design – a manifesto developed by Hilary Cottam’s former RED team at the Design Council – and fits well into what’s increasingly called social design, or socially conscious design: a broad, diverse movement of designers from many disciplines, from service design to architecture, applying their expertise to social problems in healthcare, environment, education and communication. With the mantra that ‘we cannot not change the world’, groups such as Design21 and Project H Design, along with alert chroniclers such as Kate Andrews, are inspiring designers to see the potential for ‘impact through direct social engagement’: taking on the mantle of Victor Papanek and Buckminster Fuller, motivated by the realisation that design can be more than ‘the high pitched scream of consumer selling’, more than simply reactive. Nevertheless, Robert’s focus on influencing people’s behaviour (much as I’ve tried to make clear with my own work on Design with Intent over the last few years) is an explicit emerging theme in itself, and is catching the interest of forward-looking organisations such as the RSA.

People

User centred design, constraint and reality

One of the issues Robert discusses is a question I’ve put to the audience in a number of presentations recently – fundamentally, is it still ‘User-Centred Design’ when the designer’s aim is to change users’ behaviour rather than accommodating it? As he puts it, “we influence behaviour and social practice from a distance through the products and services that we create based on our research and understanding of behaviour. We place users at the centre and develop products and services to support them. With UCD, designers are encouraged not to impose their own values on the experience.” Thus, “committing to direct behaviour design [my italics] would mean stepping outside the traditional frame of user-centred design (UCD), which provides the basis of most professional design today.”

Now, ‘direct behaviour design’ as a concept is redolent of determinism in architecture, or the more extreme end of behaviourism, where people (users / inhabitants / subjects) are seen as, effectively, components in a designed system which will respond to their environment / products / conditioning in a known, predictable way, and can thus be directed to behave in particular ways by changing the design of the system. It privileges the architect, the designer, the planner, the hidden persuader, the controller as a kind of director of behaviour, standing on the top floor observing what he’s wrought down below.

I’ll acknowledge that, in a less extreme form, this is often the intent (if not necessarily the result) behind much design for behaviour change (hence my definition for Design with Intent: ‘design that’s intended to influence, or result in, certain user behaviour’). But in practice, people don’t, most of the time, behave as predictably as this. Our behaviour – as Kurt Lewin, James Gibson, Albert Bandura, Don Norman, Herbert Simon, Daniel Kahneman, Amos Tversky and a whole line of psychologists from different fields have made clear – is a (vector) function of our physical environment (and how we perceive and understand it), our social environment (and how we perceive and understand it) and our cognitive decision processes about what to do in response to our perceptions and understanding, working within a bounded rationality that (most of the time) works pretty well. If we perceive that a design is trying to get us to behave in a way we don’t want, we display reactance to it. This is going to happen when you constrain people against pursuing a goal: even the concept of ‘direct behaviour design’ itself is likely to provoke some reactance from you, the reader. Go on: you felt slightly irritated by it, didn’t you?*

SIM Card poka-yoke

In some fields, of course, design’s aim really is to constrain and direct behaviour absolutely – e.g. “safety critical systems, like air traffic control or medical monitors, where the cost of failure [due to user behaviour] is never acceptable” (from Cairns & Cox, p.16). But decades of ergonomics, human factors and HCI research suggest that errorproofing works best when it helps the user achieve the goal he or she already has in mind. It constrains our behaviour, but it also makes it easier to avoid errors we don’t want. We don’t mind not being able to run the microwave oven with the door open (even though we resented seatbelt interlocks). We don’t mind only being able to put a SIM card in one way round. The design constraint doesn’t conflict with our goal: it helps us achieve it. (It would be interesting to know of cases in Japanese vs. Western manufacturing industry where employees resented the introduction of poka-yoke measures – were there any? What were the specific measures that irritated?)

Returning to UCD, then, I would argue that in cases where design with intent, or design for behaviour change, is aligned with what the user wants to achieve, it’s very much still user-centred design, whether enabling, motivating or constraining. It’s the best form of user-centred design, supporting a user’s goals while transforming his or her behaviour. Some of the most insightful current work on influencing user behaviour, from people such as Ed Elias at Bath and Tang Tang at Loughborough [PPT], starts with achieving a deeper understanding of user behaviour with existing products and systems, to identify better how to improve the design; it seems as though companies such as Onzo are also taking this approach.

Is design ever neutral?

Robert also makes the point that “every [design] decision we make exerts an influence of some kind, whether intended or not”. This argument parallels one of the defences made by Richard Thaler and Cass Sunstein to criticism of their libertarian paternalism concept: however you design a system, whatever choices you decide to give users, you inevitably frame users’ understanding and influence their behaviour. Even not making a design decision at all influences behaviour.

staggered crossing

If you put chairs round a table, people will sit down. You might see it as supporting your users’ goals – they want to be able to sit down – but by providing the chairs, you’ve influenced their behaviour. (Compare Seth Godin’s ‘no chair meetings’.) If you constrain people to three options, they will pick one of the three. If you give them 500 options, they won’t find it easy to choose well. If you give them no options, they can’t make a choice, but might not realise that they’ve been denied it. And so on. (This is sometimes referred to as ‘choice editing’, a phrase which provokes substantial reactance!) If you design a pedestrian crossing to guide pedestrians to make eye contact with drivers, you’ve privileged drivers over pedestrians and reinforced the hegemony of the motor car. If you don’t, you’ve shown contempt for pedestrians’ needs. Richard Buchanan and Johan Redström have both dealt with this aspect of ‘design as rhetoric’, while Kristina Niedderer’s ‘performative objects’ are intended to increase users’ mindfulness of the interactions occurring.

Thaler and Sunstein’s argument (heavily paraphrased, and transposed from economics to design) is that as every decision we make about designing a system will necessarily influence user behaviour, we might as well try and put some thought into influencing the behaviour that’s going to be best for users (and society)**. And that again, to me, seems to come within the scope of user-centred design. It’s certainly putting the user – and his or her behaviour – at the centre of the design process. But then to a large extent – as Robert’s argued before – all (interaction) design is about behaviour. And perhaps all design is really interaction design (or ought to be considered as such during at least part of the process).

Persuasion, catalyst and performance design

Robert identifies three broad themes in using design to influence behaviour – persuasion design, catalyst design and performance design. ‘Persuasion design’ correlates very closely with the work on persuasive technology and persuasive design which has grown over the past decade, from B.J. Fogg’s Persuasive Technology Lab at Stanford to a world-wide collaboration of researchers and practitioners – including designers and psychologists – meeting at the Persuasive conferences (2010’s will be in Copenhagen), of which I’m proud to be a very small part. Robert firmly includes behavioural economics and choice architecture in his description of Persuasion Design, which is something that (so far at least) has not received an explicit treatment in the persuasive technology literature, although individual cognitive biases and heuristics have of course been invoked. I think I’d respectfully argue that choice architecture as discussed in an economic context doesn’t really care too much about persuasion itself: it aims to influence behaviours, but doesn’t explicitly see changing attitudes as part of that, which is very much part of persuasion.

‘Catalyst design’ is a great term – I’m not sure (other than as the name of lots and lots of small consultancies) whether it has any precedent in the design literature or whether Robert coined it himself (something Fergus Bisset asked me the other day on reading the article). On first sight, catalyst design sounds as though it might be identical with Buckminster Fuller’s trimtab metaphor – a small component added to a system which initiates or enables a much larger change to happen more easily (what I’ve tried to think of as ‘enabling behaviour‘). However, Robert broadens the discussion beyond this idea to talk about participatory and open design with users (such as Jan Chipchase‘s work – or, if we’re looking further back, Christopher Alexander and his team’s groundbreaking Oregon Experiment). In this sense, the designer is the catalyst, facilitating innovation and behaviour change. User-led innovation is a massive, and growing, field, with examples of both completely ground-up development (with no ‘designer as catalyst’ involved) and programmes where a designer or external expert can, through engaging with people who use and work with a system, really help transform it (Clare Brass’s SEED Foundation’s HiRise project comes to mind here). But it isn’t often spoken about explicitly in terms of behaviour change, so it’s interesting to see Robert present it in this context.

Finally, ‘performance design’, as Robert explains it, involves designers performing in some way, becoming immersed in the lives of the people for whom they are designing. From a behaviour change perspective, empathising with users’ mental models, understanding what motivates users during a decision-making process, and why certain choices are made (or not made), must make it easier to identify where and how to intervene to influence behaviour successfully.

Implications for designers working on behaviour change

It’s fantastic to see high-profile, influential design companies such as frog explicitly recognising the opportunities and possibilities that designers have to influence user behaviour for social benefit. The more this is out in the open as a defined trend, a way of thinking, the more examples we’ll have of real-life thinking along these lines, embodied in a whole wave of products and services which (potentially) help users, and help society solve problems with a significant behavioural component. (And, more to the point, give us a degree of evidence about which techniques actually work, in which contexts, with which users, and why – there are some great examples around at present, both concepts and real products – e.g. as collated here by Debra Lilley – but as yet we just don’t have a great body of evidence to base design decisions on.) It will also allow us, as users, to become more familiar with the tactics used to influence our behaviour, so we can actively understand the thinking that’s gone into the systems around us, and choose to reject or opt out of things which aren’t working in our best interests.

The ‘behavioural layer’ (credit to James Box of Clearleft for this term) is something designers need to get to grips with – even knowing where to start when faced with a design problem involving influencing behaviour is something we don’t currently have a very good idea about. With my Design with Intent toolkit work, I’m trying to help with this part of the process, alongside a lot of people interested, on many levels, in how design influences behaviour. It will be interesting over the next few years to see how frog and other consultancies develop expertise and competence in this field, how they choose to recruit the kind of people who are already becoming experts in it – and how they sell that expertise to clients and governments.

Update: Robert responds – The ‘Ethnography Defense’

Dan Lockton, Design with Intent / Brunel University, June 2009

*TU Eindhoven’s Maaike Roubroeks used this technique to great effect in her Persuasive 2009 presentation.
**The debate comes over who decides – and how – what’s ‘best’ for users and for society. Governments don’t necessarily have a good track record on this; neither do a lot of companies.

What is demand, really?

A publicly visible electricity meter in Claremont, CA

In a lot of the debate and discussion about energy, future electricity generation and metering, improved efficiency and influencing consumer behaviour – at least from an engineering perspective – the term “demand” is used, in conjunction with “supply”, to represent the energy required to be supplied to consumers, much as in conventional “supply and demand” economics.

Now, I’m sure others have investigated this and characterised it economically much better than I can, but it seems to me that demand for energy (and sometimes water) is significantly different to, say, demand for most consumer products in that, for the most part, consumers only “demand” it indirectly. It is the products and systems around us which draw the current: they are important actors and have the agency, in a sense (at least unless we really understand the impacts of how they operate).

With a car’s fuel consumption, we experience the car’s demand for fuel, and pay for it, directly in proportion to our demand for travel. With most household electricity use, by contrast, we not only generally wait a month or more before having to confront the “demand” (via the bill), but separating the background demand (such as a refrigerator’s continuous energy use simply to operate) from conscious demand (such as our decision to use a fan heater all day) is very difficult for us to do as consumers: from a very simple consumer perspective (ignoring things like reactive power flow), electricity is interchangeable, and the feedback we get on our behaviour is only weakly linked to the specifics of that behaviour.
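To make that background/conscious distinction concrete, here’s a minimal sketch – entirely illustrative, and not how any real meter or display works – that treats the minimum load over a recent window of readings as ‘background’ demand, and attributes whatever is above it to conscious use:

```python
# Rough sketch: split meter readings (in watts) into an always-on
# "background" load and the remaining "conscious" use. The baseline is
# estimated as the minimum reading in a sliding window -- a crude
# heuristic, purely for illustration.

def split_demand(readings, window=48):
    background, conscious = [], []
    for i, w in enumerate(readings):
        lo = max(0, i - window + 1)
        base = min(readings[lo:i + 1])  # lowest recent load ~ fridge etc.
        background.append(base)
        conscious.append(w - base)
    return background, conscious

# A fridge ticking over at ~120 W, with a kettle and fan heater spiking:
readings = [120, 118, 950, 122, 2100, 119, 121]
bg, con = split_demand(readings, window=4)
```

Even this toy version shows why the separation is hard: any heuristic the display uses is a guess about which watts were ‘chosen’ and which weren’t.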

An on-off switch with a price label

Basically, then, a lot of “demand” is not conscious demand at all. Most consumers don’t make an in-the-moment decision to use more electricity if it gets cheaper (though it may happen over time, e.g. if someone decides to get electric heating because oil heating has become more expensive) or vice versa. The demand is a function of the products and systems around us, our habits, lifestyle and behaviours, but it is very difficult for us to see this and make decisions which have an impact on it. If there are major changes, such as a massively changed price, then real conscious demand changes may happen (a kind of stepped curve rather than anything smooth) but this is surely not what happens in everyday life. At least at present.

Maybe, then, part of what design could offer here is to help translate this unconscious, product-led, delayed payment demand into a visible, tangible, immediate demand which makes us consider it like any other everyday buying / consumption choice. Real-time self-monitoring feedback from clever metering technology (e.g. Onzo or Wattson) could go a long way here, but what about feedforward? Can we go as far as on-off switches with price labels on them? (Digital, updated, real-time, of course.) Would it make us more price-sensitive to energy costs? Would that influence our behaviour?
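The arithmetic behind such a price label is trivial – the design question is entirely about where and when the information is surfaced. As a sketch, with hypothetical figures (a 2 kW fan heater and a 15p/kWh tariff – illustrative numbers, not real prices):

```python
# Feedforward sketch: what a "price label on the switch" would show.
# power_watts and pence_per_kwh are illustrative values, not real figures.

def cost_label(power_watts, pence_per_kwh, hours=1.0):
    """Pence to run this appliance for the given number of hours."""
    kwh = power_watts / 1000.0 * hours
    return kwh * pence_per_kwh

# A 2 kW fan heater on a 15p/kWh tariff, left on all day:
print(f"{cost_label(2000, 15, hours=24):.0f}p")  # prints "720p"
```

A real-time version would just swap the fixed tariff for a live feed – the hard part is the display, not the sum.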

Eight design patterns for errorproofing

Go straight to the patterns

One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, much of its influence in the design profession coming from Don Norman’s seminal Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or where a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are the two best ‘inspirations’ I think, in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

          Default print quality settings  Donor card

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though this would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation.
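In software, the Defaults pattern is simply the choice of initial settings plus an easy override path. A hypothetical sketch (the setting names are my own illustration – here the designer has made ‘draft’ the default, the choice the post suggests printer installers don’t make):

```python
# Sketch of the Defaults pattern: sensible defaults the user can
# override, rather than forcing everyone to configure from scratch.
# Most users will stick with whatever DEFAULTS contains.

DEFAULTS = {"print_quality": "draft", "duplex": True, "colour": False}

def effective_settings(user_overrides=None):
    settings = dict(DEFAULTS)              # start from the defaults
    settings.update(user_overrides or {})  # changing them stays easy
    return settings

print(effective_settings({"colour": True}))
```

The behavioural leverage is all in the `DEFAULTS` dict: the code makes overriding trivial, but the designer still chose the starting point.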

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

          Interlock on microwave oven door  Interlock on ATM - card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so it’s less likely you forget it)
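In code, an interlock is a precondition that blocks the operation outright, rather than merely warning about it. A minimal sketch using the microwave example (illustrative, of course – not any real appliance’s firmware):

```python
# Interlock sketch: the operation cannot start until the precondition
# holds -- a forcing function, not a warning.

class Microwave:
    def __init__(self):
        self.door_closed = False
        self.running = False

    def start(self):
        if not self.door_closed:
            raise RuntimeError("Interlock: close the door first")
        self.running = True

oven = Microwave()
try:
    oven.start()            # blocked: door is open
except RuntimeError:
    pass
oven.door_closed = True
oven.start()                # now permitted
```

Note that the user’s attitude is irrelevant to the mechanism: the target behaviour (door closed before starting) is achieved regardless.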

[column width=”47%” padding=”6%”]

Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.

[/column][column width=”47%” padding=”0%”]

Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Train doors requiring passengers to lower the window and use the outside handle – an extra step guarding against accidental opening

[/column][column width=”47%” padding=”6%”]

Specialised affordances

 
■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round

[/column][column width=”47%” padding=”0%”]

Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction

[/column]
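The partial self-correction pattern can be sketched with Python’s standard-library difflib: the system proposes a likely correction but keeps the user’s original term available, so the suggestion can be ignored. (A hypothetical search over a toy vocabulary – not eBay’s or Google’s actual algorithm.)

```python
# Partial self-correction sketch: suggest a likely fix for a search
# term, but let the user ignore it (the "undo" the pattern requires).
import difflib

KNOWN_TERMS = ["dialysis", "defaults", "interlock", "affordance"]

def search_with_suggestion(query):
    if query in KNOWN_TERMS:
        return query, None
    close = difflib.get_close_matches(query, KNOWN_TERMS, n=1)
    # Return both the original and the suggestion, so the user can
    # choose -- unlike silent, fully automatic correction.
    return query, (close[0] if close else None)

print(search_with_suggestion("interlok"))  # ('interlok', 'interlock')
```

Returning both values, rather than silently substituting the correction, is exactly what distinguishes partial from full self-correction.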
[column width=”47%” padding=”6%”]

Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go

[/column][column width=”47%” padding=”0%”]

Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.

[/column][end_columns]
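Seen side by side with the interlock above, the conditional warning’s defining feature is easy to express in code: it checks the same kind of condition but never blocks the action. A sketch (illustrative, not any real vehicle’s logic):

```python
# Conditional warning sketch: feedback if the condition isn't met,
# but the user can proceed regardless (cf. an interlock, which blocks).
import warnings

def drive(seatbelt_on):
    if not seatbelt_on:
        warnings.warn("Seatbelt not fastened")  # warn, don't prevent
    return "driving"  # the action proceeds either way

drive(seatbelt_on=False)  # warns, but driving still starts
```

Swap the `warnings.warn` for a `raise` and the same check becomes an interlock – the ‘strength’ of the errorproofing is one line of code, but a very different user experience.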

Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.