Category Archives: Do artifacts have politics?

What is demand, really?

A publicly visible electricity meter in Claremont, CA

In a lot of the debate and discussion about energy, future electricity generation and metering, improved efficiency and influencing consumer behaviour – at least from an engineering perspective – the term “demand” is used, in conjunction with “supply”, to represent the energy required to be supplied to consumers, much as in conventional “supply and demand” economics.

Now, I’m sure others have investigated this and characterised it economically much better than I can, but it seems to me that demand for energy (and sometimes water) is significantly different to, say, demand for most consumer products in that, for the most part, consumers only “demand” it indirectly. It is the products and systems around us which draw the current: they are important actors and have the agency, in a sense (at least unless we really understand the impacts of how they operate).

While with, say, a car’s fuel consumption, we experience the car’s demand for fuel, and pay for it, directly in proportion to our demand for travel, with most household electricity use we generally wait a month or more before having to confront the “demand” (via the bill). Separating the background demand (such as a refrigerator’s continuous energy use simply to operate) from conscious demand (such as our decision to use a fan heater all day) is also very difficult for us to do as consumers: from a very simple consumer perspective (ignoring things like reactive power flow), electricity is interchangeable, and the feedback we get on our behaviour is only very weakly linked to the specifics of that behaviour.

An on-off switch with a price label

Basically, then, a lot of “demand” is not conscious demand at all. Most consumers don’t make an in-the-moment decision to use more electricity if it gets cheaper (though it may happen over time, e.g. if someone decides to get electric heating because oil heating has become more expensive) or vice versa. The demand is a function of the products and systems around us, and of our habits, lifestyle and behaviours, but it is very difficult for us to see this, let alone make decisions which affect it. If there are major changes, such as a massively changed price, then real conscious changes in demand may happen (a kind of stepped curve rather than anything smooth), but this is surely not what happens in everyday life. At least at present.

Maybe, then, part of what design could offer here is to help translate this unconscious, product-led, delayed payment demand into a visible, tangible, immediate demand which makes us consider it like any other everyday buying / consumption choice. Real-time self-monitoring feedback from clever metering technology (e.g. Onzo or Wattson) could go a long way here, but what about feedforward? Can we go as far as on-off switches with price labels on them? (Digital, updated, real-time, of course.) Would it make us more price-sensitive to energy costs? Would that influence our behaviour?
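As a rough sketch of the kind of feedforward such a price-labelled switch might provide (the tariff and appliance wattages below are invented for illustration, not real figures), the calculation itself is trivial:

```python
# Sketch only: what a real-time "price label" on a switch might compute.
# The tariff and wattages are invented assumptions, not real data.

TARIFF_PER_KWH = 0.15  # hypothetical real-time price per kWh

APPLIANCES_WATTS = {
    "fan heater": 2000,
    "refrigerator": 150,  # rough average draw while running
    "LED lamp": 9,
}

def cost_per_hour(watts: float, tariff: float = TARIFF_PER_KWH) -> float:
    """Cost of running an appliance for one hour at the current tariff."""
    return (watts / 1000.0) * tariff

for name, watts in APPLIANCES_WATTS.items():
    print(f"{name}: about {cost_per_hour(watts):.3f} per hour at the current price")
```

Whether seeing that figure at the moment of switching something on would actually change behaviour is, of course, exactly the open question above.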

Anti-teenager “pink lights to show up acne”

Pink lights in Mansfield. Photo from BBC

In a similar vein to the Mosquito, intentionally shallow steps and (superficially at least, though not really) blue lighting in toilets, which Raph d’Amico dissects well here, we now have residents’ associations installing pink lighting to highlight teenagers’ acne and so drive them away from an area:

Residents of a Nottinghamshire housing estate have installed pink lights which show up teenagers’ spots in a bid to stop them gathering in the area.

Members of Layton Burroughs Residents’ Association, Mansfield say they have bought the lights in a bid to curb anti-social behaviour. The lights are said to have a calming influence, but they also highlight skin blemishes.

The National Youth Agency said it would just move the problem somewhere else. Peta Halls, development officer for the NYA, said: “Anything that aims to embarrass people out of an area is not on. The pink lights are indiscriminate in that they will impact on all young people and older people who do not, perhaps, have perfect skin.”

I had heard about this before (thanks, Ed!) but overlooked posting it on the blog – other places the pink lights have been used include Preston and Scunthorpe, to which this quote refers (note the youths=yobs equation):

Yobs are being shamed out of anti-social behaviour by bright pink lights which show up their acne.

The lights are so strong they highlight skin blemishes and have been successful in moving on youths from troublespots who view pink as being “uncool.”

Manager Dave Hey said: “With the fluorescent pink light we are trying to embarrass young people out of the area. The pink is not seen as particularly macho among young men and apparently it highlights acne and blemishes in the skin.”

A North Lincolnshire Council spokesman said: “[…] On the face of it this sounds barmy. But do young people really want to hang around in an area with a pink glow that makes any spots they have on their face stand out?”

With the Mansfield example making the news, it’s good to see that there is, at least, quite a lot of comment pointing out the idiocy of the hard-of-thinking who believe that this sort of measure will actually ‘solve the problem of young people’, whatever that might mean, as well as the deeply discriminatory nature of the plan. For example, this rather dim (if perhaps tongue-in-cheek) light in the Nottingham Evening Post has been comprehensively rebutted by a commenter:

Trying to use someone’s personal looks against them simply because they meet up with friends and have a social life…

If this is the case then I would personally love to see adults banned from meeting up in pubs, parties and generally getting drunk. I would also love to see something making fun of their elderlyness and wrinkle problems.

I don’t understand why Britain hates its young people so much. But I can see it storing up a great deal of problems for the future.

Photo from this BBC story

Eight design patterns for errorproofing

Go straight to the patterns

One view of influencing user behaviour – what I’ve called the ‘errorproofing lens’ – treats a user’s interaction with a system as a set of defined target behaviour routes which the designer wants the user to follow, with deviations from those routes being treated as ‘errors’. Design can help avoid the errors, either by making it easier for users to work without making errors, or by making the errors impossible in the first place (a defensive design approach).

That’s fairly obvious, and it’s a key part of interaction design, usability and human factors practice, much of its influence in the design profession coming from Don Norman’s seminal Design of Everyday Things. It’s often the view on influencing user behaviour found in health & safety-related design, medical device design and manufacturing engineering (as poka-yoke): where, as far as possible, one really doesn’t want errors to occur at all (Shingo’s zero defects). Learning through trial-and-error exploration of the interface might be great for, say, Kai’s Power Tools, but a bad idea for a dialysis machine or the control room of a nuclear power station.

It’s worth noting a (the?) key difference between an errorproofing approach and some other views of influencing user behaviour, such as Persuasive Technology: persuasion implies attitude change leading to the target behaviour, while errorproofing doesn’t care whether or not the user’s attitude changes, as long as the target behaviour is met. Attitude change might be an effect of the errorproofing, but it doesn’t have to be. If I find I can’t start a milling machine until the guard is in place, the target behaviour (I put the guard in place before pressing the switch) is achieved regardless of whether my attitude to safety changes. It might do, though: the act of realising that the guard needs to be in place, and why, may well cause safety to be on my mind consciously. Then again, it might do the opposite: e.g. the steering wheel spike argument. The distinction between whether the behaviour change is mindful or not is something I tried to capture with the behaviour change barometer.

Making it easier for users to avoid errors – whether through warnings, choice of defaults, confirmation dialogues and so on – is slightly ‘softer’ than actually forcing the user to conform, and does perhaps offer the chance to relay some information about the reasoning behind the measure. But the philosophy behind all of these is, inevitably, “we know what’s best”: a dose of paternalism, with the degree of constraint determining the ‘libertarian’ prefix. The fact that all of us can probably think of everyday examples where we constantly have to change a setting from its default, or where a confirmation dialogue slows us down (process friction), suggests that simple errorproofing cannot stand in for an intelligent process of understanding the user.

On with the patterns, then: there’s nothing new here, but hopefully seeing the patterns side by side allows an interesting and useful comparison. Defaults and Interlock are the two best ‘inspirations’ I think, in terms of using these errorproofing patterns to innovate concepts for influencing user behaviour in other fields. There will be a lot more to say about each pattern (further classification, and what kinds of behaviour change each is especially applicable to) in the near future as I gradually progress with this project.

 

Defaults

“What happens if I leave the settings how they are?”

■ Choose ‘good’ default settings and options, since many users will stick with them, and only change them if they feel they really need to (see Rajiv Shah’s work, and Thaler & Sunstein)

■ How easy or hard it is to change settings, find other options, and undo mistakes also contributes to user behaviour here

Default print quality settings / Donor card

Examples: With most printer installations, the default print quality is usually not ‘Draft’, even though this would save users time, ink and money.
In the UK, organ donation is ‘opt-in’: the default is that your organs will not be donated. In some countries, an ‘opt-out’ system is used, which can lead to higher rates of donation
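The same pattern appears constantly in software. As a minimal sketch (the setting names are hypothetical, not from any real printer driver), the point is simply that whatever ships as the default is what most users will end up with:

```python
# Sketch: a print function whose defaults are the resource-saving options.
# Setting names are hypothetical, chosen only to illustrate the Defaults pattern.

DEFAULT_SETTINGS = {"quality": "draft", "colour": False, "duplex": True}

def print_job(document: str, **overrides):
    # Users *can* override any setting, but most will stick with the defaults.
    settings = {**DEFAULT_SETTINGS, **overrides}
    print(f"Printing {document!r} with {settings}")

print_job("report.pdf")                                # the 'good' defaults apply
print_job("photo.jpg", quality="high", colour=True)    # a deliberate override
```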

Interlock

“That doesn’t work unless you do this first”

■ Design the system so users have to perform actions in a certain order, by preventing the next operation until the first is complete: a forcing function

■ Can be irritating or helpful depending on how much it interferes with normal user activity—e.g. seatbelt-ignition interlocks have historically been very unpopular with drivers

Interlock on microwave oven door / Interlock on ATM – card returned before cash dispensed

Examples: Microwave ovens don’t work until the door is closed (for safety).
Most cash machines don’t dispense cash until you remove your card (so you’re less likely to forget it).
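In software, an interlock is simply a precondition check that makes the ‘wrong’ order of actions impossible. A minimal sketch, using an invented microwave model as the example:

```python
# Sketch of an interlock: the oven cannot be started while the door is open.
# The class is invented purely to illustrate the pattern.

class Microwave:
    def __init__(self):
        self.door_closed = False

    def close_door(self):
        self.door_closed = True

    def start(self, seconds: int):
        if not self.door_closed:
            # The forcing function: the next operation is blocked until the
            # required one has been completed.
            raise RuntimeError("Close the door before starting the oven.")
        print(f"Cooking for {seconds} seconds...")

oven = Microwave()
oven.close_door()   # required first step
oven.start(30)      # only now does this succeed
```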


Lock-in & Lock-out

■ Keep an operation going (lock-in) or prevent one being started (lock-out) – a forcing function

■ Can be helpful (e.g. for safety or improving productivity, such as preventing accidentally cancelling something) or irritating for users (e.g. diverting the user’s attention away from a task, such as unskippable DVD adverts before the movie)

Right-click disabled

Example: Some websites ‘disable’ right-clicking to try (misguidedly) to prevent visitors saving images.
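A software lock-in can be sketched in the same spirit (class and method names invented): a cancel request is simply refused while a critical operation is in progress, so it cannot be aborted accidentally:

```python
# Sketch of a lock-in: cancelling is refused while a critical write is under way.
# FirmwareUpdater is an invented example, not a real API.

class FirmwareUpdater:
    def __init__(self):
        self.writing = False

    def request_cancel(self) -> bool:
        if self.writing:
            # Lock-in: the operation is kept going regardless of the request.
            print("Cannot cancel while the update is being written.")
            return False
        print("Update cancelled.")
        return True

updater = FirmwareUpdater()
updater.writing = True      # imagine the flashing step has just started
updater.request_cancel()    # -> refused until writing finishes
```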


Extra step

■ Introduce an extra step, either as a confirmation (e.g. an “Are you sure?” dialogue) or a ‘speed-hump’ to slow a process down or prevent accidental errors – another forcing function. Most of the everyday poka-yokes (“useful landmines”) we looked at last year are examples of this pattern

■ Can be helpful, but if used excessively, users may learn “always click OK”

British Rail train door extra step

Example: Train door handles requiring passengers to lower the window
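As a sketch of the extra-step idea in software (a made-up command-line example): asking the user to retype the name of what they are about to delete is a stronger speed-hump than a plain OK/Cancel dialogue, which people quickly learn to click through:

```python
# Sketch of an 'extra step': a destructive action requires deliberate confirmation.
# The account-deletion scenario is invented for illustration.

def delete_account(username: str) -> bool:
    answer = input(f"Really delete account '{username}'? Type the username to confirm: ")
    if answer != username:
        print("Deletion cancelled.")
        return False
    print(f"Account '{username}' deleted.")
    return True
```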


Specialised affordances

 
■ Design elements so that they can only be used in particular contexts or arrangements

■ Format lock-in is a subset of this: making elements (parts, files, etc) intentionally incompatible with those from other manufacturers; rarely user-friendly design

Bevel corners on various media cards and disks

Example: The bevelled corner on SIM cards, memory cards and floppy disks ensures that they cannot be inserted the wrong way round


Partial self-correction

■ Design systems which partially correct errors made by the user, or suggest a different action, but allow the user to undo or ignore the self-correction – e.g. Google’s “Did you mean…?” feature

■ An alternative to full, automatic self-correction (which does not actually influence the user’s behaviour)

Partial self-correction (with an undo) on eBay

Example: eBay self-corrects search terms identified as likely misspellings or typos, but allows users the option to ignore the correction
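A minimal sketch of the same idea, using Python’s standard-library difflib for the fuzzy match (the term list and search function are invented for illustration):

```python
# Sketch of partial self-correction: suggest a likely spelling, but leave the
# user's original query available. The term list is an invented example.
import difflib

KNOWN_TERMS = ["nintendo", "nikon", "netgear", "nokia"]

def search(query: str) -> str:
    matches = difflib.get_close_matches(query.lower(), KNOWN_TERMS, n=1, cutoff=0.8)
    if matches and matches[0] != query.lower():
        print(f"Did you mean '{matches[0]}'? Showing results for it; "
              f"search for '{query}' instead if that is what you meant.")
        return matches[0]
    return query

search("nintedo")   # suggests 'nintendo' but keeps the original query available
```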


Portions

■ Use the size of ‘portion’ to influence how much users consume: unit bias means that people will often perceive what they’re provided with as the ‘correct’ amount

■ Can also be used explicitly to control the amount users consume, by only releasing one portion at a time, e.g. with soap dispensers

Snack portion packs

Example: ‘Portion packs’ for snacks aim to provide customers with the ‘right’ amount of food to eat in one go


Conditional warnings

■ Detect and provide warning feedback (audible, visual, tactile) if a condition occurs which the user would benefit from fixing (e.g. upgrading a web browser), or if the user has performed actions in a non-ideal order

■ Doesn’t force the user to take action before proceeding, so not as ‘strong’ an errorproofing method as an interlock.

Seatbelt warning light

Example: A seatbelt warning light does not force the user to buckle up, unlike a seatbelt-ignition interlock.
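In code the pattern is just a check that warns without blocking – a minimal sketch, with an invented version threshold:

```python
# Sketch of a conditional warning: feedback is given, but nothing is forced.
# The browser-version threshold is a hypothetical figure.

MIN_SUPPORTED_BROWSER_VERSION = 90

def load_page(browser_version: int):
    if browser_version < MIN_SUPPORTED_BROWSER_VERSION:
        # Warn, but let the user carry on regardless (unlike an interlock).
        print("Warning: your browser is out of date; some features may not work.")
    print("Loading page...")

load_page(85)   # warns, then proceeds anyway
```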


Photos/screenshots by Dan Lockton except seatbelt warning image (composite of photos by Zoom Zoom and Reiver) and donor card photo by Adrienne Hart-Davis.

The Hacker’s Amendment

Screwdrivers

Congress shall pass no law limiting the rights of persons to manipulate, operate, or otherwise utilize as they see fit any of their possessions or effects, nor the sale or trade of tools to be used for such purposes.

From Artraze commenting on this Slashdot story about the levels of DRM in Windows 7.

I think it maybe needs some qualification about not using your things to cause harm to other people, but it’s an interesting idea. See also Mister Jalopy’s Maker’s Bill of Rights from Make magazine a couple of years ago.

Designed environments as learning systems

West London from Richmond Park - Trellick Tower in the centre

How much of designing an environment is consciously about influencing how people use it? And how much of that influence is down to users learning what the environment affords them, and acting accordingly?

The first question is central to what this blog’s been about over the last four years (with ‘products’, ‘systems’, ‘interfaces’ and so on variously standing in for ‘environment’), but many of the examples I’ve used, from anti-sit features to bathrooms and cafés designed to speed up user throughput, only reveal the architect’s (presumed) behaviour-influencing intent in hindsight, i.e. by reviewing them and trying to understand, if it isn’t obvious, what the motivation is behind a particular design feature. While there are examples where the intent is explicitly acknowledged, such as crime prevention through environmental design, and traffic management, it can still cause surprise when a behaviour-influencing agenda is revealed.

Investigating what environmental and ecological psychology have to say about this, a few months ago I came across The Organization of Spatial Stimuli, an article by Raymond G. Studer, published in 1970 [1] – it’s one of the few explicit calls for a theory of designing environments to influence user behaviour, and it raises some interesting issues:

“The nature of the environmental designer’s problem is this: A behavioral system has been specified (within the constraints imposed by the particular human participants and by the goals of the organization of which they are members.) The participants are not presently emitting the specified behaviors, otherwise there would be no problem. It is necessary that they do emit these behaviors if their individual and collective goals are to be realized. The problem then is to bring about the acquisition or modification of behaviors towards the specified states (without in any way jeopardizing their general well-being in the process). Such a change in state we call learning. Designed environments are basically learning systems, arranged to bring about and maintain specified behavioral topologies. Viewed as such, stimulus organization becomes a more clearly directed task. The question then becomes not how can stimuli be arranged to stimulate, but how can stimuli be arranged to bring about a requisite state of behavioral affairs.

[E]vents which have traditionally been regarded as the ends in the design process, e.g. pleasant, exciting, stimulating, comfortable, the participant’s likes and dislikes, should be reclassified. They are not ends at all, but valuable means which should be skilfully ordered to direct a more appropriate over-all behavioral texture. They are members of a class of (designed environmental) reinforcers. These aspects must be identified before behavioral effects of the designed environment can be fully understood.”

Now, I think it’s probably rare nowadays for architects or designers to talk of design features as ‘stimuli’, even if they are intended to influence behaviour. Operant conditioning and B.F. Skinner’s behaviourism are less fashionable than they once were. But the “designed environments are learning systems” point Studer makes can well be applied beyond simply ‘reinforcing’ particular behaviours.

Think how powerful social norms and even framing can be at influencing our behaviour in environments – the sober environment of a law court gives (most of) us a different range of perceived affordances to our own living room (social norms, mediated by architecture) – and that’s surely something we learn. Frank Lloyd Wright intentionally designed dark, narrow corridors leading to large, bright open rooms (e.g. in the Yamamura House) so that the contrast – and people’s experience – was heightened (framing, of a sort), though this effect would probably be lessened by repeated exposure. It still influenced user behaviour, even if only the first few times, and the memory of the effect that such a room had on those first few visits probably lasted a lifetime. Clearly, the process of forming a mental model about how to use a product, or how to behave in an environment, or how to behave socially, is about learning, and the design of the systems around us does educate us, in one way or another.

Stewart Brand’s classic How Buildings Learn (watch the series too) perhaps suggests (among other insights) an extension of the concept: if, when we learn what our environment affords us, this no longer suits our needs, the best architecture may be that which we can adapt, rather than being constrained by the behavioural assumptions designed into our environments by history.

I’m not an architect, though, or a planner, and – as I’ve mentioned a few times on the blog – it would be very interesting to know, from people who are: to what extent are notions of influencing behaviour taught as part of architectural training? This series of discussion board posts suggests that the issue is definitely there for architecture students, but is it framed as a conscious, positive process (e.g. “funnel pedestrians past the shops”), a reactionary one (e.g. “use pebbled paving to make it painful for hippies to congregate”), one of educating users through architectural features (as in Studer’s suggestion), or as something else entirely?

[1] Studer, R.G. ‘The Organization of Spatial Stimuli.’ In Pastalan, L.A. and Carson, D.H. (eds.), Spatial Behavior of Older People. Ann Arbor: University of Michigan, 1970.

Dan Lockton

What’s the deal with angled steps?

Angled Steps

It’s a simple question, really, to any readers with experience in urban planning and specifying architectural features: what is the reasoning behind positioning steps at an angle such as this set (left and below) leading down to the Queen’s Walk near London Bridge station?

Obviously one reason is to connect two walkways that are offset slightly where there is no space for a perpendicular set of steps, but are they ever used strategically? They’re much more difficult to run down or up than conventional perpendicular steps, which suggests they might help constrain escaping thieves, or make it less likely that people can run from one walkway to another without slowing down and watching their step.

Like the configuration of spiral staircases in mediaeval castles to favour a defender running down the steps anticlockwise, holding a sword in his right hand, over the attacker running up to meet him (e.g. as described here), the way that town marketplaces were often built with pinch points at each end to make it more difficult for animals (or thieves) to escape, or even the ‘enforced reverence’ effect of the very steep steps at Ta Keo in Cambodia, are angled steps and staircases ever specified deliberately with this intent?

Angled Steps

The first time I thought of this was confronting these steps (below) leading from the shopping centre next to Waverley Station in Edinburgh a couple of years ago: they seemed purpose-built to slow fleeing shoplifters, but I did consider that it might just be my tendency to see everything with a ‘Design with Intent’ bias – a kind of conspiracy bias, ascribing to design intent that which is perhaps more likely to be due to situational factors (a kind of fundamental attribution error for design), or inferring the intention behind a design by looking at its results!

What’s your angle on the steps?

Angled Steps