Coming up for air, briefly

Thanks for all the responses to the Design with Intent Toolkit – it’s had a heartening reception from lots of very interesting people, and has brought some great opportunities. I hope to be able to deal with all this effectively!

Thanks too to all the people who’ve blogged about it, included it in a podcast, and spread it via Twitter. Your attention’s much appreciated and if anyone does try it out on some problems, please do let me know how you get on, what would improve it, and so on. And more examples for each of the patterns are, of course, always welcome!

Printed copies (A2 poster, 135gsm silk finish) are available – the nominal listing on Amazon is £15 including postage, but if you’d like one for much less than that, let me know! (In fact, if you’re willing to try it out on a design problem, fill in a survey about how you did it, and let me use it as a brief case study, you can have it free.)

Persuasive 2009

I say I’m just coming up for air briefly, as for the last couple of weeks, among some other major work (which could possibly bear some very nice fruit), I’ve been putting together my presentation* for Persuasive 2009, the Fourth International Conference on Persuasive Technology in Claremont, California, next week, and at present am desperately trying to finish a lot of other things before flying out on Saturday. It’ll be my first time across the Atlantic and my girlfriend and I will be having a bit of a holiday afterwards, so I hope a lack of updates and replies, while little different to my usual pattern, will be excusable. But while the conference is on, if there’s time and no hoo-hah with the wireless and it seems appropriate, I’ll try and do a bit of blogging, or more likely, Twittering about it (#persuasive2009 ?). There are some very interesting people presenting their work.

Anyway, if you missed the update to my earlier post, a preprint version of my paper (with David Harrison, Tim Holley and Neville A. Stanton), Influencing Interaction: Development of the Design with Intent Method [PDF, 1.6MB] is available. At some point soon this version of the paper will be downloadable from Brunel’s research archive, while the ‘proper’ version will be available in the ACM Digital Library. ACM requires me to state the following alongside the link to the preprint:

© ACM, 2009. This is the authors’ version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version will be published in Proceedings of Persuasive 2009: Fourth International Conference on Persuasive Technology, Claremont, CA, 26-29 April 2009, ACM Digital Library. ISBN 978-1-60558-376-1.

The presentation will include many parts of the paper, but the nature of academic papers like this (submitted in December) is that they are out of date before anyone reads them. So, much of the presentation will be about the DwI toolkit and the reasoning behind bits of it, rather than just sticking to the state of the research six months ago – I hope that’s reasonable. Last year, presenting on the last day of the conference meant that I was able to spend many hours in a hotel room in Oulu editing and re-editing the presentation (mostly listening to the Incredible Bongo Band’s version of In-a-Gadda-da-Vida on repeat) to match what I thought the audience would like, and incorporate things I’d learned during the conference, but this time I’m on the first day so there isn’t that opportunity…

Interfaces article

Also this month, I have a brief article about my research in Interfaces, the magazine of Interaction, the British Computer Society’s HCI Group, in its ‘My PhD’ series (p. 20-21). Interfaces no. 78 is available to download here (make sure to click on the link below the cover image, as – at time of writing – the cover’s linked to the previous issue). It’s a great magazine – redesigned for this issue – with some really interesting features about aspects of HCI by some well-known names in the field. Thanks to Eduardo Calvillo and Stephen Hassard for making the article possible.

The table in the article was unfortunately truncated during editing so (if I get it in in time) there’ll be a brief addendum in the next issue with the full table, but I might as well make it available here too [PDF, 8kb] – it’s a brief, not especially exciting summary of some concepts for influencing householders to close curtains at night to save energy. (At some point I’ll do a full case study on this as there are some interesting ideas as well as some very impractical ones.)

*Taking Parkinson’s Law as an instruction manual seems to be a perpetual habit of mine, so the maximum time allocated to get the presentation done has been more than entirely taken up by getting the presentation done… it’s still not quite there, and I’m not sure whether the format of the auditorium’s going to allow an interactive element which I would very much like to include but probably won’t be able to. Also – while Prezi looks like it might be everything I’ve ever wanted in presentation software – the workflow of “doing a PowerPoint” for me has evolved into a long chain of “Photoshop – Illustrator – export – Photoshop – Save for Web – insert into PowerPoint” which I’m sure I could do more quickly, but lots of conferences and seminars want PPTs rather than PDFs, and the only Mac I have (which once – kind of – belonged to the Duke of Edinburgh [interesting story]) is too slow and old to run anything better.

Security Lens: The Patterns

Bonjour / Goeiendag to visitors from Design for Persuasion: while you’re here, you might also like to download a free poster [PDF] which has 12 of the Design with Intent patterns on it in a handy reference form. Thanks for stopping by!

The Security Lens represents a ‘security’ worldview, i.e. that undesired user behaviour is something to deter and/or prevent through ‘countermeasures’ designed into products, systems and environments, both physically and online, with examples such as digital rights management.

From a designer’s point of view, this can be an ‘unfriendly’ – and in some circumstances unethical – view to take, effectively treating users as ‘guilty until proven innocent’. However, thinking more closely about the patterns, it’s possible to think of ways that they could be applied to help users control their own habits or behaviour for their own benefit – encouraging exercise, reducing energy use, and so on.

Surveillance

“What do I do when other people might be watching?”

■ If people think others can see what they’re doing, they often change their behaviour in response, through guilt, fear of censure, embarrassment or another mechanism

■ Techniques range from monitoring users’ actions with reporting to authorities, to simpler ‘natural surveillance’, where the layout of an area allows everyone to see what each other is doing. Statistics making public details about users’ contributions to a fund might fit in here too. Surveillance can benefit the user where monitoring allows a desired intervention, e.g. a fall alarm for the elderly

CCTV warning sign / Security lighting

Examples: The ubiquitous CCTV (or the threat of it) and security lighting are both intended to influence user behaviour, acting as deterrents to crime in the first place

Constraining behaviour: This pattern is about constraining user behaviour.

Atmospherics

“I can’t hang around here with that racket going on”

■ Use (or removal) of ambient sensory effects (sound, light, smell, taste, etc) to influence user behaviour

■ Atmospherics can be ‘discriminatory’, i.e. targeted at particular classes of users, based on some characteristic enabling them to be singled out – such as the pink lights supposed to make teenagers with acne too embarrassed to hang around – or ‘blanket’, i.e. targeted at all users, e.g. Bitrex, a bitter substance, used to discourage drinking weedkiller or biting your nails.

The Mosquito anti-teenager sound weapon / Blue lighting

Examples: Two examples of ‘discriminatory’ atmospherics: the Mosquito emits a 17.4 kHz tone to drive away young people from public places; blue lighting is used in some public toilets to discourage drug injection by making veins difficult to see
Constraining behaviour: This pattern is mainly about constraining user behaviour…
Motivating behaviour: …but can also motivate a user, e.g. pleasant sensations such as the fresh bread smell used in supermarkets can encourage purchases.

[column width=”47%” padding=”6%”]

Threat of damage

“That’s going to hurt”

■ It’s not nice, but the threat of damage (or injury) lies behind many measures designed to influence behaviour, from tyre damage spikes to barbed wire, electric fences, shards of glass cemented into the top of walls, and so on.

■ In some cases the threat alone is hoped to be enough to dissuade particular behaviours; in others, it’s expected that some mild injury or discomfort will occur but put people off doing it again. Warnings are often used (and may be legally required), but this is not always the case.

Pig ear skate stopper

Example: Various kinds of ‘skate stopper’ in public places, such as this so-called pig ear, are designed to cause damage to skateboards (and injury to skateboarders) to dissuade them from skating an area.

[/column][column width=”47%” padding=”0%”]

What you have

“Insert passcard now”

■ ‘What you have’ relies on a user possessing a certain tool or device to enable functionality or gain access.

■ Aside from the obvious (keys, passcards, dongles and so on), there are, for example, specialised screwdrivers for security screws, which rely (largely unsuccessfully) on the distribution channels being kept private. Money itself could be seen as an example of this, especially where it’s intentionally restricted to influence behaviour (e.g. giving children a certain amount of pocket money to limit what they can buy.)

Train tickets

Example: When they’re actually checked, rail or other travel tickets restrict journeys to people who have the right ticket

[/column][column width=”47%” padding=”6%”]

What you know or can do

“Enter password”

■ ‘What you know or can do’ relies on the capabilities of users – some information or ability which only a subset of users can provide. The most obvious examples are passwords and exams (e.g. driving tests) – testing users’ knowledge / understanding before ‘allowing’ them to perform some behaviour. Often one capability stands as a proxy for another, e.g. CAPTCHAs separating humans from automated bots.

■ These are often interlocks – e.g. breathalyser interlocks on car ignitions, or, one stage further, the ‘puzzle’ interlocks tested during the 1970s, where a driver had to complete an electronic puzzle before the car would start, thus (potentially) catching tiredness or drug use as well as intoxication.
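As a software sketch of that kind of ‘puzzle’ interlock (the challenge, function names and return values here are invented for illustration, not from any real system), the gate only enables the behaviour once the user demonstrates the capability:

```python
import random

def puzzle_interlock(answer_fn, attempts=3):
    """Sketch of a 1970s-style 'puzzle' ignition interlock: the engine
    only starts if the driver can solve a simple challenge, on the theory
    that intoxication or tiredness would make this harder."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    for _ in range(attempts):
        if answer_fn(a, b) == a + b:  # driver's answer to 'what is a + b?'
            return "engine started"
    return "engine locked"
```

Here `answer_fn` stands in for the driver’s input; the point of the pattern is that one capability (mental arithmetic under time pressure) is acting as a proxy for another (sobriety and alertness).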

Childproof lid

Example: Childproof lids on bottles of potentially dangerous substances – such as this weedkiller – help prevent access by children, but can also make it difficult for adults with limited dexterity.

[/column][column width=”47%” padding=”0%”]

Who you are

“If the glove fits…”

■ Design based on ‘who you are’ intends to allow or prevent behaviour based on some criteria innate to each individual or group – usually biometric – which can’t be acquired by others.

■ The aim is usually strong denial of access to anyone not authenticated, but there are also cases of primarily self-imposed ‘who you are’ security, such as the Mukurtu system, stamping ‘Confidential’ on documents, and so on.

Fingerprint scanner - photo by Josh Bancroft on Flickr

Example: Fingerprint scanners are becoming increasingly common on computer hardware.

[/column][column width=”47%” padding=”6%”]

What you’ve done

“Do 10 minutes more exercise to watch this show”

■ Systems which alter the options available to users based on their current / past behaviour are increasingly easy to imagine as the technology for logging and tracking actions becomes easier to include in products (see also Surveillance). Products which ration people’s use, or require some ‘work’ to achieve a goal, fit in here.

■ These could simply ‘lock out’ someone who has abused/misused a system (as happens with various anti-spam systems), or, more subtly, could divide users into classes based on their previous behaviour and provide different levels of functionality in the future.
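A minimal sketch of the ‘lock out on abuse’ variant, assuming a hypothetical service that counts misuses per user (the class and method names are mine, not from any particular system):

```python
class AbuseLockout:
    """'What you've done' sketch: a user's past behaviour (here, repeated
    recorded misuses) changes the options available to them in future."""
    def __init__(self, max_strikes=3):
        self.max_strikes = max_strikes
        self.strikes = {}  # user -> count of recorded misuses

    def record_misuse(self, user):
        self.strikes[user] = self.strikes.get(user, 0) + 1

    def is_allowed(self, user):
        # Access depends entirely on the logged history of behaviour
        return self.strikes.get(user, 0) < self.max_strikes
```

The subtler class-based version would replace the boolean with tiers of functionality derived from the same history.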

Square Eyes by Gillian Swan

Example: Gillian Swan’s Square Eyes restricts children’s TV viewing time based on the amount of exercise they do (measured by these special insoles)

[/column][column width=”47%” padding=”0%”]

Where you are

“This function is disabled for your current location”

■ ‘Where you are’ security selectively restricts or allows functions based on a user’s location

■ Examples include buildings intended to have no mobile phone reception (perhaps ‘for security reasons’, or maybe for the benefit of other users, e.g. in a cinema), and IP address geographic filtering, where website users identified as being in different countries are given access to different content.
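Once a location has been inferred, geographic filtering reduces to a simple check; a sketch (the country codes and catalogue names are invented for illustration):

```python
ALLOWED_COUNTRIES = {"GB", "IE"}  # hypothetical licensing region

def content_for(country_code):
    """'Where you are' sketch: users identified as being in different
    countries are given access to different content."""
    if country_code in ALLOWED_COUNTRIES:
        return "full catalogue"
    return "restricted catalogue"
```

The hard part in practice is the inference itself: geolocating an IP address is approximate, and easily circumvented by proxies.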

Trolley wheels lock when taken outside car park

Example: Some supermarket trolleys have devices fitted to lock the wheels, mechanically or electronically, when taken outside a defined area. Less high-tech versions have also been used!

[/column][end_columns]

Photos/screenshots by Dan Lockton except fingerprint scanner by Josh Bancroft and Square Eyes photo from Brunel University press office.

____________________
The Design with Intent Toolkit v0.9 by Dan Lockton, David Harrison and Neville A. Stanton
Introduction | Behaviour | Architectural lens | Errorproofing lens | Persuasive lens | Visual lens | Cognitive lens | Security lens

dan@danlockton.co.uk

Visual Lens: The Patterns


The Visual Lens combines ideas from product semantics, semiotics, ecological psychology and Gestalt psychology about how users perceive patterns and meanings as they interact with the systems around them.

These techniques are often applied by interaction designers without necessarily considering how they can influence user behaviour.

Prominence & visibility

“You can’t miss it”

■ Design certain elements so they’re more prominent, obvious, memorable or visible than others, to direct users’ attention towards them, making it easier for users to pick up the message intended, or pick the ‘best’ options from a set of choices

■ Simple prominence is one of the most basic design principles for influencing user behaviour, but visibility can also include using transparency strategically as part of a system—drawing users’ attention to elements which would otherwise be hidden

Rising bollard sign / Warning sign

Examples: The most important warning signs should be the most prominent—if a user only has time to take in one message, it should be the one that matters the most (above)

A Dyson vacuum’s transparent chamber makes forgetting to empty it unlikely, thus keeping the effectiveness of the cleaner high and improving user satisfaction (below)

Dyson transparent chamber - photo by Skylar Primm

Enabling behaviour: This pattern is about enabling user behaviour: making it easier to make certain choices

Metaphors

“This reminds me of one of those, so I expect it works that way too”

■ Use design elements from a context the user understands in a new system, to imply how it should be used; make it easy for users to understand a new system in terms they already understand

■ There’s a danger of oversimplification, or misleading users about the consequences of actions, if metaphor use is taken to extremes; it can also trap users in old behaviour patterns

Mac desktop

Examples: Everyday software interfaces (above and below left) combine hundreds of metaphors, from the ‘desktop’, ‘folders’ and ‘trash/recycle bin’ themselves to the icons used for graphics functions such as zoom (magnifying glass), eyedropper and so on. Ford’s SmartGauge (below right) uses ‘leaves’ to represent efficiency of a user’s driving style

Adobe palette / Ford Smartgauge

Enabling behaviour: Metaphors are mainly about enabling user behaviour…
Motivating behaviour: …but can also motivate a user to ‘know’ by increasing mindful understanding of how best to use a system.

[column width=”47%” padding=”6%”]

Perceived affordances

“Looks like you use it this way…”

■ Perceived affordances are what it looks like we can do with something. A button looks like we should push it; a door with a handle looks like we should pull it, whereas a door with a plate looks like we should push it. This is fundamental to interaction design, and in influencing user behaviour, since the actions a design ‘suggests’ to a user will probably be carried out. (There may be hidden affordances too.)

■ Related ideas include mappings – laying out controls so they relate intuitively to the functions they control – and perceived constraints, what users perceive they can’t do with something.

Door handle suggests it should be pulled

Example: Where a door has a handle, we assume we should pull it. When this isn’t the case, usability suffers!

[/column][column width=”47%” padding=”0%”]

Implied sequences

“Easy as 1,2,3…”

■ Presenting items in an implied sequence suggests to users that they should be used / experienced in order. Remember that while in Western countries our reading direction leads us to assume sequences go left-to-right, in other cultures right-to-left may be the norm, e.g. this Hebrew version of the Mozilla browser with right-facing “back” arrow and left-facing “forward” arrow.

■ The sequence of choices can also suggest levels of priority / hierarchy – there’s a small advantage for candidates who are listed first on a ballot paper [PDF]. The order in which options are revealed can also be important, both in terms of what people remember and how they make comparisons

Toggle switches by trancedmoogle

Example: Rows of switches such as these can suggest a sequential form of operation

[/column][column width=”47%” padding=”6%”]

Possibility trees

“What route should I take?”

■ Possibility trees show users what routes they can follow to achieve a goal, or what results different behaviours can lead to. The way these are presented, via instructions, an interface, or even signage or maps – wayfinding (e.g. these Transport for London studies) – can influence the choices users make.

■ They can be used strategically: showing users the routes that planners would prefer them to take, or the actions that designers would like users to take.

The London Underground map

Example: Once people have become used to using a highly stylised map to plan journeys, such as the London Underground map here, it can affect perceptions of places’ location in real life. For example, Willesden Junction and North Acton stations are a 10-15 minute walk apart, but the distortion introduced by the map suggests that the distance is much further, which in turn can influence the transport choices people make.

[/column][column width=”47%” padding=”0%”]

Watermarking

“Taking (or showing) ownership”

■ In this context, watermarking means making the ownership (or background) of something evident to users. If people feel they own a device, through some kind of personalisation or acknowledgement that it’s theirs, they will often use it differently to when it seems like it belongs to someone else.

■ One application of this to influencing behaviour is to make it clear or obvious that some shared resources belong to everyone, or to the community, rather than no-one in particular.

Writing on packaging to 'watermark' it with the purchaser's name

Example: A Gloucestershire shopkeeper has taken to writing customers’ names on the packaging of snacks they buy, to encourage them not to litter by ‘taking ownership’ – it has apparently been especially successful with children.

[/column][column width=”47%” padding=”6%”]

Proximity & similarity

“Those look like they go together”

■ Users will tend to perceive that design elements (buttons, controls etc) which look similar, or are arranged together, will have similar functions or work together as a group (Gestalt proximity and similarity).

■ This can be used strategically to influence user behaviour as a kind of framing technique: group functions that you want users to perceive as going together, or give the controls similar shapes or colours. Likewise, introducing deliberate discontinuity or separation between elements can lead users to treat them very differently.

Group of 6 switches

Example: Bringing light switches together like this allows them all to be switched off at once more easily when leaving a room, but can work against the intuitive mapping linking each switch to the lights it controls.

[/column][column width=”47%” padding=”0%”]

Colour & contrast

“I simply chose the one that stood out the most”

■ Use colour and visual contrast to influence users’ perceptions and moods, suggest associations between particular behaviours and outcomes, and cause users to notice important elements or information (remembering that colour-blindness affects many millions of users, and so has implications for designers)

■ While some research shows that certain colours can have direct effects on behaviour in certain situations (e.g. the colour of pills), the evidence in general is weaker than is sometimes implied. Nevertheless, clever use of colour can help, support and guide user decision-making and so influence behaviour.

Baker-Miller pink

Example: Baker-Miller Pink, or “drunk-tank pink”, was developed through trials in prisons where painting a cell this colour was found, in certain circumstances, to reduce inmates’ aggression.

[/column][end_columns]

Photos/screenshots by Dan Lockton except Dyson by Skylarprimm, toggle switches by trancedmoogle, Ford Smartgauge from Ford promotional material on Jalopnik, shopkeeper writing on packet from BBC News story; London Underground map screenshot from Transport for London website.


Errorproofing Lens: The Patterns


The Errorproofing Lens represents a worldview treating deviations from the target behaviour as ‘errors’ which design can help avoid, either by making it easier for users to work without making errors, or by making errors impossible in the first place.

This view on influencing behaviour is often found in health & safety-related design, medical device design and manufacturing engineering. More commentary…

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, Thaler & Sunstein and Goldstein et al. [PDF article preview] for more detailed examinations of defaults and their impacts)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here
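In software, this pattern amounts to choosing the defaults deliberately and making overrides easy to apply and undo; a minimal sketch, with invented setting names:

```python
# 'Good' defaults, chosen knowing that most users will never change them
DEFAULTS = {
    "print_quality": "draft",  # saves ink and money by default
    "double_sided": True,      # saves paper by default
}

def effective_settings(user_overrides=None):
    """Layer the user's explicit choices over the defaults, so changing
    a setting (and changing it back) stays cheap and reversible."""
    settings = dict(DEFAULTS)
    settings.update(user_overrides or {})
    return settings
```

The defaults do the behaviour-influencing work; the merge just ensures users who do care are never trapped by them.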

Default print quality settings Donor card

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though this would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation

Constraining behaviour: This pattern is mainly about constraining user behaviour…
Enabling behaviour: …but can also enable a user to make the ‘right’ choice.

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers
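The microwave-oven case can be sketched as a tiny state machine, where the forcing function is simply a guard on the start action (the class is illustrative, not a real device model):

```python
class MicrowaveOven:
    """Interlock sketch: heating cannot start until the door is closed,
    and opening the door stops heating immediately."""
    def __init__(self):
        self.door_closed = False
        self.heating = False

    def close_door(self):
        self.door_closed = True

    def open_door(self):
        self.door_closed = False
        self.heating = False  # the same interlock also cuts the power

    def start(self):
        if not self.door_closed:  # the forcing function
            raise RuntimeError("close the door first")
        self.heating = True
```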

Interlock on microwave oven door Interlock on ATM - card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so it’s less likely you forget it)

Constraining behaviour: This pattern is mainly about constraining user behaviour.

[column width=”47%” padding=”6%”]

Lock-in & Lock-out

“This operation cannot be stopped right now”

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.

[/column][column width=”47%” padding=”0%”]

Extra step

“Are you sure?”

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most everyday poka-yokes (“useful landmines”) are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”
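A sketch of the confirmation variant, with the prompt passed in as a function so the ‘extra step’ is explicit (names invented for illustration):

```python
def delete_everything(confirm):
    """Extra-step sketch: a destructive action guarded by an explicit
    confirmation. `confirm` is any function that asks the user the
    given question and returns True or False."""
    if not confirm("Are you sure you want to delete everything?"):
        return "cancelled"
    return "deleted"  # the real work would happen here
```

The caveat above applies directly: if every action demands a confirmation, users learn to answer yes without reading.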

British Rail train door extra step

Example: Train door handles requiring passengers to lower the window

[/column][column width=”47%” padding=”6%”]

Specialised affordances

“It only fits one way round”

■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round

[/column][column width=”47%” padding=”0%”]

Partial self-correction

“Did you mean…?”

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)
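A “Did you mean…?” feature can be sketched with Python’s standard difflib; the vocabulary is invented, and crucially the function only suggests, leaving the user free to ignore the correction:

```python
import difflib

KNOWN_TERMS = ["bicycle", "binoculars", "birdhouse"]  # hypothetical search index

def did_you_mean(query, known=KNOWN_TERMS):
    """Suggest the closest known term for a likely misspelling,
    or None if the query is fine (or nothing is close enough)."""
    q = query.lower()
    if q in known:
        return None  # already a known term; no correction needed
    matches = difflib.get_close_matches(q, known, n=1, cutoff=0.6)
    return matches[0] if matches else None
```

Returning the suggestion rather than silently substituting it is what keeps this ‘partial’ self-correction, and so something that can actually influence the user’s behaviour.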

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction

[/column][column width=”47%” padding=”6%”]

Portions

“That’s the size it comes in”

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go

[/column][column width=”47%” padding=”0%”]

Conditional warnings

“It’s warning me I haven’t put my seatbelt on”

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.
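The difference from an interlock is clear in code: the condition is detected and reported, but nothing is blocked. A sketch (function and message are invented for illustration):

```python
def seatbelt_check(engine_running, seatbelt_fastened):
    """Conditional-warning sketch: returns a warning message when the
    condition holds, but never prevents the car from being driven."""
    if engine_running and not seatbelt_fastened:
        return "Warning: seatbelt not fastened"
    return None  # no warning needed
```

An interlock would instead refuse to start the engine at the same check; here the user is left free to ignore the feedback.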

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.

[/column][end_columns]

Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.


Persuasive Lens: The Patterns


The Persuasive Lens represents the emerging field of persuasive technology, where computers, mobile phones and other systems with interfaces are used to persuade users: changing attitudes and so changing behaviour through contextual information, advice and guidance. The patterns here are based mainly on ideas from BJ Fogg’s Persuasive Technology: Using Computers to Change What We Think and Do and related work.

The major applications so far have been in influencing behaviour for social benefit, e.g. persuading people to give up bad habits, adopt healthier lifestyles or reduce their energy use.

Self-monitoring

“How is my behaviour affecting the system?”

■ Give the user feedback on the impact of the way a product is being used, or how well he or she is doing relative to a target or goal

■ Self-monitoring can involve real-time feedback on the consequences of different behaviours, so that the ‘correct’ next step can immediately be taken, but in other contexts, ‘summary’ monitoring may also be useful, such as giving the user a report of behaviour and its efficacy over a certain period. Over time, this can effectively ‘train’ the user into a better understanding of the system

Energy meters

Examples: Energy meters (above) of many kinds allow householders to see which appliances use the most electricity, and how much this is costing, whether or not they choose to act.

GreenPrint, a ‘better print preview’, provides users (and, in an office context, their bosses!) with a summary of the resources it’s helped save, environmentally and financially (below)

Greenprint report

Enabling behaviour: This pattern is about enabling user behaviour: making it easier to make certain choices

Kairos

“What’s the best action for me to take right now?”

■ Suggest a behaviour to a user at the ‘opportune’ moment, i.e. when it would be most efficient or the most desirable next step to take

■ Often a system can ‘cue’ the suggested behaviour by reminding the user; suggestions can also help steer users away from incorrect behaviour next time they use the system, even if it’s too late this time

Automatic speed display

Examples: Automatic warning signs (above) can alert drivers to upcoming dangers at the right point for them to respond and slow down accordingly

Volvo once offered a gearchange suggestion light (below), helping drivers drive more efficiently and save fuel

Volvo gearchange suggestion light

Enabling behaviour: Kairos can be about enabling user behaviour at exactly the right moment…
Motivating behaviour: …but can also motivate a user by increasing mindfulness right before action is taken.

[column width=”47%” padding=”6%”]

Reduction

“Just one click away…”

■ Simplification of tasks – thoughtful reduction in John Maeda’s terminology – makes it easier for users to follow the intended behaviour.

■ Using ‘shortcuts’ to remove cognitive load from the user (e.g. energy labels) can be very powerful, but be aware of the manipulation potential (see also framing). By removing stages where the user has to think about what he or she’s doing, you may also risk creating exactly the kind of mindless interaction that lies behind many of the problems you may be trying to solve!

Eco Button

Example: The Eco Button reduces the steps needed to put a computer into a low-power state, thus making it much easier for users to save energy.

[/column][column width=”47%” padding=”0%”]

Tailoring

“It’s like it knows me”

■ Tailor / personalise the information displayed, or the way a system responds, to individual users’ needs / abilities / situations, encouraging users to interact in the intended way

■ Adaptive systems can learn about their users’ habits, preferences, etc., and respond accordingly; simpler systems which can ‘detect’ some salient criteria and offer behavioural suggestions could also be effective

PAM Personal Activity Monitor

Example: The Pam personal activity monitor, by measuring acceleration rather than simply numbers of steps, allows the feedback it gives and exercise régimes it suggests to be tailored to the user, which allows it to be much more like a ‘personal trainer’ than a conventional pedometer.
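The adaptive idea behind such ‘personal trainer’ feedback can be sketched simply: set each user’s next target from their own recent measured activity rather than from a one-size-fits-all figure. The growth factor and floor below are arbitrary illustrative values, not Pam’s actual algorithm:

```python
def tailored_target(recent_activity, step=1.05, floor=1.0):
    """Tailoring: tomorrow's activity target is set slightly above the
    user's own recent average, so the goal tracks the individual.
    `step` (5% stretch) and `floor` are illustrative values."""
    if not recent_activity:
        return floor                      # no history yet: start gently
    avg = sum(recent_activity) / len(recent_activity)
    return max(floor, avg * step)         # stretch the user's own baseline
```

Because the target is derived from the user’s own data, a sedentary user and an athlete each get a goal that is challenging but reachable for them, which is the essence of the pattern.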

[/column][column width=”47%” padding=”6%”]

Tunnelling

“Guide me, O thou great persuader”

■ Guided persuasion: a user ‘agrees’ to be routed through a sequence of pre-specified actions or events; commitment to following the process motivates the user to accept the outcome

■ B.J. Fogg uses the example of people voluntarily hiring personal trainers to guide them through fitness programmes (which also involves tailoring). Many software wizards which go beyond merely simplifying a process, into the area of shaping users’ choices, would also fit in here; there is the potential to lead users into taking actions they wouldn’t do in circumstances outside the tunnel, which must be carefully considered ethically.

Tunnelling in the Foxit Reader installer

Example: The installation wizard for the Foxit PDF Reader tries to get the user to ‘choose’ extra bundled installation options such as making ask.com the default search engine, by presenting them as default parts of the process. By this stage the user cannot exit the tunnel.

[/column][column width=”47%” padding=”0%”]

Feedback through form

“Look and feel”

■ Use the form of an object itself as a kind of interface, providing the user with feedback on the state of the system, or cues/suggestions of what to do next. It could be visual changes to the form, or haptic (i.e. sensed through touch)

■ This technique is often overlooked in the rush towards high-tech display solutions; it can be as simple as something which intentionally deforms when used in a particular manner to give the user feedback, or changes shape to draw attention to the state it’s in

AWARE puzzle switch

Example: The AWARE Puzzle Switch – designed by Loove Broms and Karin Ehrnberger – gives more obvious feedback that a light switch has been left on, through obvious ‘disorder’.

[/column][column width=”47%” padding=”6%”]

Simulation & feedforward

“What would happen if I did this?”

■ Provide a simulation or ‘feedforward’ showing users what consequences particular behaviours will have, compared with others: make cause and effect clearer to users

■ Showing users what will happen if they click ‘here’, or how many miles’ worth of fuel they have left if they continue driving as they are; tooltips; and even the ‘Preview’ and ‘Undo’ functions of common software, where changes can be easily tried out and then reversed or not applied, can all be considered kinds of feedforward or simulation

Loan repayment simulator

Example: Jakob Nielsen suggests that “a financial website could…encourage users to save more for retirement [by showing] a curve of the user’s growing nest egg and a photo of ‘the hotel you can afford to stay at when travelling to Hawaii after you retire’ for different levels of monthly investment”; interactive savings / loan simulators such as this from Yahoo! are increasingly common, and have the potential to influence user behaviour.
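A minimal version of such a savings simulator is easy to sketch. This assumes monthly compounding of the monthly investment (the method behind Yahoo!’s actual tool isn’t specified here), and the point is the feedforward: letting users compare the consequences of different choices before committing to one:

```python
def nest_egg(monthly, annual_rate, years):
    """Feedforward: project the final balance for a given monthly
    investment, compounding monthly at annual_rate / 12."""
    r = annual_rate / 12
    balance = 0.0
    for _ in range(years * 12):
        balance = (balance + monthly) * (1 + r)   # deposit, then grow
    return balance

# Simulate several 'what if' levels of monthly investment at 5% a year
projections = {m: round(nest_egg(m, 0.05, 30)) for m in (50, 100, 200)}
```

Showing the three projections side by side is what turns a calculator into a persuasive simulation: the cause (monthly amount) and effect (retirement nest egg) are made directly comparable.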

[/column][column width=”47%” padding=”0%”]

Operant conditioning

“Rewards for good behaviour”

■ Operant conditioning means reinforcing or ‘training’ particular user behaviour by rewarding it (or, indeed, punishing it). This could be a system where a user chooses to work towards a target behaviour, being rewarded for every bit of progress towards it, or something which periodically (perhaps unpredictably) rewards continued engagement, thus keeping users interacting (e.g. a fruit machine)

■ Sometimes the reward is a function of the system itself: saving energy naturally results in lower electricity bills. The system must make the user aware of this, though, otherwise a reinforcing effect is less likely to occur.

KPT 5 Shapeshifter

Example: Kai’s Power Tools (pioneering visual effects software) revealed ‘bonus functions’ to reward users who developed their skills with particular tools.
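The fruit-machine style of unpredictable reward mentioned above is what psychologists call a variable-ratio schedule, and it is one of the most compulsive reinforcers known. A minimal sketch (the mean ratio is an arbitrary illustrative value):

```python
import random

def variable_ratio_reward(mean_ratio=4, rng=random.random):
    """Operant conditioning: reward on average once every `mean_ratio`
    responses, but unpredictably, like a fruit machine. Each response
    independently has a 1/mean_ratio chance of being rewarded."""
    return rng() < 1.0 / mean_ratio
```

Because the user can never tell which response will pay off, every response feels like it might, which keeps engagement high; this unpredictability is exactly why the technique needs ethical care.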

[/column][column width=”47%” padding=”6%”]

Respondent conditioning

“Force of habit”

■ Respondent conditioning, also known as classical conditioning, can be applied to influence behaviour through helping users subconsciously associate particular actions with particular stimuli or settings, and responding accordingly: basically, developing habits which become reflexes. If you automatically feel for the light switch when you enter or leave a room, or brake when something appears in front of you on the road, this has effectively become a reflex action.

■ Using design, we could try to associate existing routines with new behaviours we would like – e.g. checking the house’s energy use when we look out of the window to see what the weather’s like outside, by fixing an energy display to the window (a concept by More Associates / Onzo used this idea). Or we could try to undo these conditioned reflexes where they are damaging in some way to the user, by putting something else in the way.

Nicostopper

Example: Smoking is often a conditioned reflex; many devices have been designed to try and undo or thwart this reflex when the user wants to quit, such as the Nicostopper, which stores 10 cigarettes and releases them only at pre-determined intervals.

[/column][column width=”47%” padding=”0%”]

Computers as social actors

“I like my Mac because it’s so friendly”

■ The media equation is the idea that “media equals real life”, i.e. that many people treat media (computers, TV, other systems) as if they were real people in terms of social interaction. If users believe that a computer (/system) is ‘on their side’, and helping them achieve their goals, it’s probably more likely they’ll follow advice given by the system: you can design systems to use ‘persuasive agents’, whether explicitly using simulated characters (e.g. in games) or by somehow giving the interface a personality.

■ If the system frustrates the user, advice is more likely to be ignored; equally, beware of the uncanny valley. As pervasive computing and artificial intelligence develop, establishing computers as ‘social actors’ in everyday life offers a lot of potential for more ‘persuasive products’.

Microsoft Office Clippit

Example: Microsoft’s Office Assistants, including Clippit / Clippy here, were an attempt to give a helpful personality to Office, but proved unpopular enough with many users that Microsoft phased them out.

[/column][end_columns]

Photos/screenshots by Dan Lockton except Volvo 340/360 dashboard courtesy Volvo 300 Mania forum, Eco Button from Eco Button website, Pam personal activity monitor from About.com, AWARE Puzzle Switch from Interactive Institute website, loan simulator screenshot from Yahoo! 7 Finance, and Nicostopper from Nicostopper website.

____________________
The Design with Intent Toolkit v0.9 by Dan Lockton, David Harrison and Neville A. Stanton
Introduction | Behaviour | Architectural lens | Errorproofing lens | Persuasive lens | Visual lens | Cognitive lens | Security lens

dan@danlockton.co.uk

Cognitive Lens: The Patterns

Bonjour / Goeiendag to visitors from Design for Persuasion: while you’re here, you might also like to download a free poster [PDF] which has 12 of the Design with Intent patterns on it in a handy reference form. Thanks for stopping by!

The Cognitive Lens draws on research in behavioural economics and cognitive psychology looking at how people make decisions, and how this is affected by ‘heuristics’ and ‘biases’. If designers understand how users make interaction decisions, that knowledge can be used to influence interaction behaviour.

Equally, where users often make poor decisions, design can help counter this, although this risks the accusation of design becoming a tool of the ‘nanny state’ which ‘knows what’s best’.

Many dozens of cognitive biases and heuristics have been identified by psychologists and behavioural economists, a lot of which could potentially be applied to the design of products and services. The seven detailed below are some of the most commonly used; this selection draws heavily on the work of Robert Cialdini.

Social proof

“What do other users like me do in this situation?”

■ Users will often decide what to do based on what those around them do (the conformity bias), or how popular an option is; make use of this strategically to influence behaviours

■ Social proof works especially well when there is a peer group users identify with (or aspire to join), against whose behaviour theirs is being compared; an element of competition can be intentionally introduced

Facebook application demonstrating social proof

Examples: Facebook’s ‘n of your friends added x application’ (above), Amazon’s various recommendation features, and statistics announcing the popularity of a particular website or product, such as the Feedburner ‘chicklet’ here, all imply that ‘people like you are doing this, therefore you might want to as well’.

Feedburner's chicklet demonstrates social proof

Amazon's recommendation features demonstrate social proof

Motivating behaviour: Social proof is mainly about motivating user behaviour…
Enabling behaviour: …but it can also enable a user to ‘know’ what to do, by making it easier to see how others are doing it.
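A Facebook-style social proof message reduces to a simple rendering rule. This hypothetical sketch also illustrates a real design choice: say nothing when the count is zero, since ‘none of your friends use this’ is negative social proof that would discourage the behaviour:

```python
def social_proof_message(friends_using, item):
    """Social proof: render an 'n of your friends added x' message.
    Returns None when there is no proof to show, because announcing
    zero adopters would work against the intended behaviour."""
    n = len(friends_using)
    if n == 0:
        return None
    noun = "friend" if n == 1 else "friends"
    return f"{n} of your {noun} added {item}"
```

The persuasive work is done entirely by the count: the same feature, described without the peer-group statistic, carries no social proof at all.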

Framing

“Well, if you put it that way…”

■ Present choices to a user in a way that ‘frames’ perceptions and so influences behaviour, e.g. framing energy saving as ‘saving you money’ rather than ‘saving the environment’; categorise functions strategically so that users perceive them as being related

■ An obvious principle to many designers (and politicians, and estate agents); there are many possible framing tactics, such as the use of language to give positive / negative associations to options (e.g. ‘sports suspension’ sounds better than ‘hard suspension’). Framing is often used to deceive customers.

Starbucks' menu demonstrating framing - image by Miss Shari

Examples: Starbucks’ drink sizes—at least on the menu (above)—start with ‘tall’, framing the implied range of sizes much further up the scale, to avoid any negative or mediocre implications that ‘small’ or ‘medium’ might have.

The ‘Knock-off Nigel’ anti-DVD-copying campaign (below) frames crimes against another person, such as theft of money, in the same bracket as downloading a movie, to imply that people who engage in one also engage in the other.

The 'Knock-off Nigel' campaign equates theft of money with downloading a movie

Motivating behaviour: Framing is about motivating people to behave in particular ways.

[column width=”47%” padding=”6%”]

Reciprocation

“Return the favour”

■ Users often feel obliged to return ‘favours’: design systems which encourage users to trade or share information or resources

■ Can involve ‘guilting’ the user, but best if the user genuinely wants to return a favour

Azureus message encouraging users to reciprocate for having downloaded a file by continuing to seed it

Example: This message from the BitTorrent client Azureus (now Vuze) encourages users to ‘reciprocate’ for having downloaded a file by continuing to seed it

[/column][column width=”47%” padding=”0%”]

Commitment & consistency

“Stick to the plan”

■ Get users to commit in some way to an idea or goal; they’re then more likely to behave in accordance with this to appear or feel ‘consistent’

■ Can be used less ethically (e.g. the ‘irrational escalation of commitment’ involved in Swoopo)

Choosing to have a water meter installed demonstrates some commitment to saving water. Photo by Phatcontroller

Example: Voluntarily choosing to have a water meter installed can demonstrate some commitment to reducing water use, which may persist as the household tries to remain consistent with that commitment.

[/column][column width=”47%” padding=”6%”]

Affective engagement

“Getting emotionally involved”

■ Design ‘affective’ products and systems to evoke emotional response as a way of engaging users and influencing attitudes and behaviours

■ Designers have traditionally been very good at manipulating aesthetics to inspire emotional response, but new technologies allow new opportunities, especially with gaming.

Smiling and frowning faces on electricity bills engage consumers affectively

Example: Using smiling (or frowning) faces on customers’ electricity bills can increase the emotional response associated with the bill, and lead to (slight) reductions in electricity use. By comparing customers’ use with their neighbours’, this strategy also makes use of social proof.

[/column][column width=”47%” padding=”0%”]

Authority

“She knows what she’s doing”

■ Many users will behave as suggested by an ‘authority figure’ or expert even if that behaviour is outside what they would consider normal; systems can be designed to make use of this effect

■ There are at least three mechanisms at work here: ‘appeal to authority’ in terms of attitude / behaviour guidance, perceived threat to users who ‘disobey’ authoritative messages, and the desire to become more like the ‘pros’ through imitation

Barack Obama on Twitter
Stephen Fry on Twitter

Example: How much of Twitter’s success at engaging users to join and participate has been due to well-publicised ‘authority’ figures embracing it?

[/column][column width=”47%” padding=”6%”]

Scarcity

“Not much left, better use it wisely”

■ Whether scarcity is real or not in a situation, if it’s perceived to be, users may value something more, and so alter their behaviour in response: design systems strategically to emphasise the scarcity of a resource

■ Can be down to loss aversion; artificial scarcity can also be introduced (e.g. digital rights management)

Miles left on this tankful of fuel

Example: Digital fuel gauges showing the remaining range on the current tank can help concentrate drivers’ minds on the scarcity value of the fuel. See also self-monitoring.
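Such a range read-out is simply fuel on board multiplied by recent economy. A sketch using UK units (the litres-per-gallon constant is standard; the function name and interface are ours):

```python
def miles_remaining(fuel_litres, avg_mpg, litres_per_gallon=4.546):
    """Scarcity: estimate remaining range from fuel on board and recent
    economy, as a digital fuel gauge might. Expressing the resource as
    'miles left' makes its scarcity concrete to the driver."""
    return (fuel_litres / litres_per_gallon) * avg_mpg
```

The same quantity of fuel is more persuasive framed as ‘82 miles left’ than as ‘9 litres’: the scarcity becomes legible in terms of what the driver can still do with it, which is also why this pattern overlaps with self-monitoring.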

[/column][end_columns]

Photos and screenshots by Dan Lockton, except Starbucks menu by Miss Shari on Flickr and water meter by Phatcontroller.
