Category Archives: Do artifacts have politics?

Anti-teenager “pink lights to show up acne”

Pink lights in Mansfield. Photo from BBC

In a similar vein to the Mosquito, intentionally shallow steps (and, superficially at least – though not really – blue lighting in toilets, which Raph d’Amico dissects well here), we now have residents’ associations installing pink lighting to highlight teenagers’ acne and so drive them away from an area:

Residents of a Nottinghamshire housing estate have installed pink lights which show up teenagers’ spots in a bid to stop them gathering in the area.

Members of Layton Burroughs Residents’ Association, Mansfield say they have bought the lights in a bid to curb anti-social behaviour. The lights are said to have a calming influence, but they also highlight skin blemishes.

The National Youth Agency said it would just move the problem somewhere else. Peta Halls, development officer for the NYA, said: “Anything that aims to embarrass people out of an area is not on. The pink lights are indiscriminate in that they will impact on all young people and older people who do not, perhaps, have perfect skin.”

I had heard about this before (thanks, Ed!) but overlooked posting it on the blog – other places the pink lights have been used include Preston and Scunthorpe, to which this quote refers (note the youths=yobs equation):

Yobs are being shamed out of anti-social behaviour by bright pink lights which show up their acne.

The lights are so strong they highlight skin blemishes and have been successful in moving on youths from troublespots who view pink as being “uncool.”

Manager Dave Hey said: “With the fluorescent pink light we are trying to embarrass young people out of the area. The pink is not seen as particularly macho among young men and apparently it highlights acne and blemishes in the skin.”

A North Lincolnshire Council spokesman said: “[...] On the face of it this sounds barmy. But do young people really want to hang around in an area with a pink glow that makes any spots they have on their face stand out?”

With the Mansfield example making the news, it’s good to see that there is, at least, quite a lot of comment pointing out the idiocy of the hard-of-thinking who believe that this sort of measure will actually ‘solve the problem of young people’, whatever that might mean, as well as the deeply discriminatory nature of the plan. For example, this rather dim (if perhaps tongue-in-cheek) light in the Nottingham Evening Post has been comprehensively rebutted by a commenter:

Trying to use someone’s personal looks against them simply because they meet up with friends and have a social life…

If this is the case then I would personally love to see adults banned from meeting up in pubs, parties and generally getting drunk. I would also love to see something making fun of their elderlyness and wrinkle problems.

I don’t understand why Britain hates its young people so much. But I can see it storing up a great deal of problems for the future.

Photo from this BBC story

Eight design patterns for errorproofing

Go straight to the patterns

One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, much of its influence in the design profession coming from Don Norman’s seminal Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are the two best ‘inspirations’ I think, in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

          Default print quality settings  Donor card

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though this would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation.
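In software, the same pattern operates wherever a default parameter quietly steers what most users end up doing. A minimal sketch in Python – the function name and ink figures are entirely hypothetical:

```python
# Defaults pattern sketch: the pre-selected option is the one most users get.
# 'draft' vs 'normal' ink use per page is a made-up illustration.

def print_document(pages, quality="draft"):
    """Print a number of pages; returns units of ink used."""
    ink_per_page = {"draft": 1, "normal": 3}
    return pages * ink_per_page[quality]

# A user who never opens the settings dialogue gets the default:
print_document(10)                     # draft: 10 units of ink
print_document(10, quality="normal")   # only if deliberately changed: 30 units
```

Here the designer’s choice of `quality="draft"` does the behaviour-steering work; most real printer drivers, as noted above, make the opposite choice.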

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

          Interlock on microwave oven door  Interlock on ATM - card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so you’re less likely to forget it)
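The same forcing function is easy to express in software terms. A sketch of the microwave door interlock, with hypothetical class and method names:

```python
class Microwave:
    """Interlock sketch: 'start' is unavailable until the door is closed."""

    def __init__(self):
        self.door_closed = False
        self.running = False

    def close_door(self):
        self.door_closed = True

    def open_door(self):
        # The interlock works both ways: opening the door stops the oven
        self.door_closed = False
        self.running = False

    def start(self):
        if not self.door_closed:
            # The error is made impossible, not merely warned about
            raise RuntimeError("Interlock: close the door first")
        self.running = True
```

The point of the pattern is that the guard condition lives inside `start()` itself, so no sequence of user actions can reach the unsafe state.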

[column width="47%" padding="6%"]

Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.

[/column][column width="47%" padding="0%"]

Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Train door handles requiring passengers to lower the window
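In software, this pattern is the confirmation dialogue. A sketch, with invented names, where the extra step turns a one-click slip into a two-step deliberate choice:

```python
def delete_file(filename, confirm):
    """Extra step sketch: 'confirm' is a callable that asks the user
    "Are you sure?" and returns True or False."""
    if not confirm(f"Really delete {filename}?"):
        return "cancelled"
    return "deleted"

# In a real UI, 'confirm' would pop up a dialogue; here it's stubbed:
delete_file("report.doc", confirm=lambda prompt: False)  # -> "cancelled"
```

The pattern’s weakness, noted above, is that a `confirm` the user sees too often trains them to answer yes without reading.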

[/column][column width="47%" padding="6%"]

Specialised affordances

■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc.) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round

[/column][column width="47%" padding="0%"]

Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction
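A ‘Did you mean…?’ feature of this kind can be sketched in a few lines using Python’s standard difflib; the search terms here are invented:

```python
import difflib

KNOWN_TERMS = ["bicycle", "battery", "bracket"]

def search(term):
    """Partial self-correction sketch: suggest a likely fix for a
    misspelled query, but leave the user's original term in charge."""
    suggestion = None
    if term not in KNOWN_TERMS:
        close = difflib.get_close_matches(term, KNOWN_TERMS, n=1)
        if close:
            suggestion = close[0]
    return {"results_for": term, "suggestion": suggestion}

search("bycicle")  # still searches "bycicle", but suggests "bicycle"
```

The key design choice is that the correction is offered, not imposed – full automatic correction would remove the user’s chance to notice the slip and learn from it.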

[/column]
[column width="47%" padding="6%"]

Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go

[/column][column width="47%" padding="0%"]

Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
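As code, the distinction from an interlock is simply that the check reports rather than blocks. A sketch with invented names:

```python
def dashboard_warnings(seatbelt_fastened, speed_mph):
    """Conditional warning sketch: detect the condition and feed back,
    but never prevent the journey (unlike a seatbelt-ignition interlock)."""
    warnings = []
    if speed_mph > 0 and not seatbelt_fastened:
        warnings.append("Seatbelt not fastened")
    return warnings

dashboard_warnings(seatbelt_fastened=False, speed_mph=30)
# -> ["Seatbelt not fastened"] ... but the car keeps moving
```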

[/column][end_columns]

Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.

The Hacker’s Amendment

Screwdrivers

Congress shall pass no law limiting the rights of persons to manipulate, operate, or otherwise utilize as they see fit any of their possessions or effects, nor the sale or trade of tools to be used for such purposes.

From Artraze commenting on this Slashdot story about the levels of DRM in Windows 7.

I think it maybe needs some qualification about not using your things to cause harm to other people, but it’s an interesting idea. See also Mister Jalopy’s Maker’s Bill of Rights from Make magazine a couple of years ago.

Designed environments as learning systems

West London from Richmond Park - Trellick Tower in the centre

How much of designing an environment is consciously about influencing how people use it? And how much of that influence is down to users learning what the environment affords them, and acting accordingly?

The first question is central to what this blog’s been about over the last four years (with ‘products’, ‘systems’, ‘interfaces’ and so on variously standing in for ‘environment’), but many of the examples I’ve used, from anti-sit features to bathrooms and cafés designed to speed up user throughput, only reveal the architect’s (presumed) behaviour-influencing intent in hindsight, i.e. by reviewing them and trying to understand, if it isn’t obvious, what the motivation is behind a particular design feature. While there are examples where the intent is explicitly acknowledged, such as crime prevention through environmental design, and traffic management, it can still cause surprise when a behaviour-influencing agenda is revealed.

Investigating what environmental and ecological psychology have to say about this, a few months ago I came across The Organization of Spatial Stimuli, an article by Raymond G. Studer, published in 1970 [1] – it’s one of the few explicit calls for a theory of designing environments to influence user behaviour, and it raises some interesting issues:

“The nature of the environmental designer’s problem is this: A behavioral system has been specified (within the constraints imposed by the particular human participants and by the goals of the organization of which they are members.) The participants are not presently emitting the specified behaviors, otherwise there would be no problem. It is necessary that they do emit these behaviors if their individual and collective goals are to be realized. The problem then is to bring about the acquisition or modification of behaviors towards the specified states (without in any way jeopardizing their general well-being in the process). Such a change in state we call learning. Designed environments are basically learning systems, arranged to bring about and maintain specified behavioral topologies. Viewed as such, stimulus organization becomes a more clearly directed task. The question then becomes not how can stimuli be arranged to stimulate, but how can stimuli be arranged to bring about a requisite state of behavioral affairs.

[E]vents which have traditionally been regarded as the ends in the design process, e.g. pleasant, exciting, stimulating, comfortable, the participant’s likes and dislikes, should be reclassified. They are not ends at all, but valuable means which should be skilfully ordered to direct a more appropriate over-all behavioral texture. They are members of a class of (designed environmental) reinforcers. These aspects must be identified before behavioral effects of the designed environment can be fully understood.”

Now, I think it’s probably rare nowadays for architects or designers to talk of design features as ‘stimuli’, even if they are intended to influence behaviour. Operant conditioning and B.F. Skinner’s behaviourism are less fashionable than they once were. But the “designed environments are learning systems” point Studer makes can well be applied beyond simply ‘reinforcing’ particular behaviours.

Think how powerful social norms and even framing can be at influencing our behaviour in environments – the sober environment of a law court gives (most of) us a different range of perceived affordances to our own living room (social norms, mediated by architecture) – and that’s surely something we learn. Frank Lloyd Wright intentionally designed dark, narrow corridors leading to large, bright open rooms (e.g. in the Yamamura House) so that the contrast – and people’s experience – was heightened (framing, of a sort), though this effect would probably be lessened by repeated exposure. It still influenced user behaviour, even if only the first few times – and the memory of the effect such a room had probably lasted a lifetime. Clearly, the process of forming a mental model about how to use a product, or how to behave in an environment, or how to behave socially, is about learning, and the design of the systems around us does educate us, in one way or another.

Stewart Brand’s classic How Buildings Learn (watch the series too) perhaps suggests (among other insights) an extension of the concept: if, when we learn what our environment affords us, this no longer suits our needs, the best architecture may be that which we can adapt, rather than being constrained by the behavioural assumptions designed into our environments by history.

I’m not an architect, though, or a planner, and – as I’ve mentioned a few times on the blog – it would be very interesting to know, from people who are: to what extent are notions of influencing behaviour taught as part of architectural training? This series of discussion board posts suggests that the issue is definitely there for architecture students, but is it framed as a conscious, positive process (e.g. “funnel pedestrians past the shops”), a reactionary one (e.g. “use pebbled paving to make it painful for hippies to congregate”), one of educating users through architectural features (as in Studer’s suggestion), or as something else entirely?

[1] Studer, R.G. ‘The Organization of Spatial Stimuli.’ In Pastalan, L.A. and Carson, D.H. (eds.), Spatial Behavior of Older People. Ann Arbor: University of Michigan, 1970.

Dan Lockton

What’s the deal with angled steps?

Angled Steps

It’s a simple question, really, to any readers with experience in urban planning and specifying architectural features: what is the reasoning behind positioning steps at an angle such as this set (left and below) leading down to the Queen’s Walk near London Bridge station?

Obviously one reason is to connect two walkways that are offset slightly where there is no space for a perpendicular set of steps, but are they ever used strategically? They’re much more difficult to run down or up than conventional perpendicular steps, which might help constrain escaping thieves, or make it less likely that people will be able to run from one walkway to another without slowing down and watching their step.

Like the configuration of spiral staircases in mediaeval castles to favour a defender running down the steps anticlockwise, holding a sword in his right hand, over the attacker running up to meet him (e.g. as described here), the way that town marketplaces were often built with pinch points at each end to make it more difficult for animals (or thieves) to escape, or even the ‘enforced reverence’ effect of the very steep steps at Ta Keo in Cambodia, are angled steps and staircases ever specified deliberately with this intent?

Angled Steps

The first time I thought of this was confronting these steps (below) leading from the shopping centre next to Waverley Station in Edinburgh a couple of years ago: they seemed purpose-built to slow fleeing shoplifters, but I did consider that it might just be my tendency to see everything with a ‘Design with Intent’ bias – a kind of conspiracy bias, ascribing to design intent that which is perhaps more likely to be due to situational factors (a kind of fundamental attribution error for design), or inferring the intention behind a design by looking at its results!

What’s your angle on the steps?

Angled Steps

Stuff that matters: Unpicking the pyramid

Most things are unnecessary. Most products, most consumption, most politics, most writing, most research, most jobs, most beliefs even, just aren’t useful, for some scope of ‘useful’.

I’m sure I’m not the first person to point this out, but most of our civilisation seems to rely on the idea that “someone else will sort it out”, whether that’s providing us with food or energy or money or justice or a sense of pride or a world for our grandchildren to live in. We pay the politicians who are best at lying to us because we don’t want to have to think about problems. We bail out banks in one enormous spasm of cognitive dissonance. We pay ‘those scientists’ to solve things for us and then hate them when they tell us we need to change what we’re doing. We pay for new things because we can’t fix the old ones and then our children pay for the waste.

Economically, ecologically, ethically, we have mortgaged the planet. We’ve mortgaged our future in order to get what we have now, but the debt doesn’t die with us. On this model, the future is one vast pyramid scheme stretching out of sight. We’ve outsourced functions we don’t even realise we don’t need to people and organisations of whom we have no understanding. Worse, we’ve outsourced the functions we do need too, and we can’t tell the difference.

Maybe that’s just being human. But so is learning and tool-making. We must be able to do better than we are. John R. Ehrenfeld’s Sustainability by Design, which I’m reading at present, explores the idea that reducing unsustainability will not create sustainability, which ought to be pretty fundamental to how we think about these issues: going more slowly towards the cliff edge does not mean changing direction.

I’m especially inspired by Tim O’Reilly’s “Work on stuff that matters” advice. If we go back to the ‘most things are unnecessary’ idea, the plan must be to work on things that are really useful, that will really advance things. There is little excuse for not trying to do something useful. It sounds ruthless, and it does have the risk of immediately putting us on the defensive (“I am doing something that matters…”).

The idea I can’t get out of my head is that if we took more responsibility for things (i.e. progressively stopped outsourcing everything to others as in paragraphs 2 and 3 above, and actively learned how to do them ourselves), this would make a massive difference in the long run. We’d be independent from those future generations we’re currently recruiting into our pyramid scheme before they even know about it. We’d all of us be empowered to understand and participate and create and make and generate a world where we have perspicacity, where we can perceive the affordances that different options will give us in future and make useful decisions based on an appreciation of the longer term impacts.

A large part of it is being able to understand the consequences and implications of our actions, and how we are affected by, and in turn affect, the situations we’re in – the people around us, the environment, the wider world. Where does this water I’m wasting come from? Where does it go? How much does Google know about me? Why? How does a bank make its money? How can I influence a new law? What do all those civil servants do? How was my food produced? Why is public transport so expensive? Would I be able to survive if X or Y happened? Why not? What things that I do every day are wasteful of my time and money? How much is the purchase of item Z going to cost me over the next year? What will happen when it breaks? Can I fix it? Why not? And so on.

You might think we need more transparency of the power structures and infrastructures around us – and we do – but I prefer to think of the solution as being tooling us up in parallel: we need to have the ability to understand what we can see inside, and focus on what’s actually useful/necessary and what isn’t. Our attention is valuable and we mustn’t waste it.

How can all that be taught?

I remember writing down as a teenager, in some lesson or other, “What we need is a school subject called How and why things are, and how they operate.” Now, that’s broad enough that probably all existing academic subjects would lay claim to part of it. So maybe I’m really calling for a higher overall standard of education.

But the devices and systems we encounter in everyday life, the structures around us, can also help, by being designed to show us (and each other) what they’re doing, whether that’s ‘good’ or ‘bad’ (or perhaps ‘useful’ or not), and what we can do to improve their performance. And by influencing the way we use them, whether nudging, persuading or preventing us getting it wrong in the first place, we can learn as we use. Everyday life can be a constructionist learning process.

This all feeds into the idea of ‘Design for Independence’:

Reducing society’s resource dependence
Reducing vulnerable users’ dependence on other people
Reducing users’ dependence on ‘experts’ to understand and modify the technology they own.

One day I’ll develop this further as an idea – it’s along the lines of Victor Papanek and Buckminster Fuller – but there’s a lot of other work to do first. I hope it’s stuff that matters.

Dan Lockton