All posts filed under “Control

Stuff that matters: Unpicking the pyramid

Most things are unnecessary. Most products, most consumption, most politics, most writing, most research, most jobs, most beliefs even, just aren’t useful, for some scope of ‘useful’.

I’m sure I’m not the first person to point this out, but most of our civilisation seems to rely on the idea that “someone else will sort it out”, whether that’s providing us with food or energy or money or justice or a sense of pride or a world for our grandchildren to live in. We pay the politicians who are best at lying to us because we don’t want to have to think about problems. We bail out banks in one enormous spasm of cognitive dissonance. We pay ‘those scientists’ to solve things for us and then hate them when they tell us we need to change what we’re doing. We pay for new things because we can’t fix the old ones and then our children pay for the waste.

Economically, ecologically, ethically, we have mortgaged the planet. We’ve mortgaged our future in order to get what we have now, but the debt doesn’t die with us. On this model, the future is one vast pyramid scheme stretching out of sight. We’ve outsourced functions we don’t even realise we don’t need to people and organisations of whom we have no understanding. Worse, we’ve outsourced the functions we do need too, and we can’t tell the difference.

Maybe that’s just being human. But so is learning and tool-making. We must be able to do better than we are. John R. Ehrenfeld’s Sustainability by Design, which I’m reading at present, explores the idea that reducing unsustainability will not create sustainability, which ought to be pretty fundamental to how we think about these issues: going more slowly towards the cliff edge does not mean changing direction.

I’m especially inspired by Tim O’Reilly’s “Work on stuff that matters” advice. If we go back to the ‘most things are unnecessary’ idea, the plan must be to work on things that are really useful, that will really advance things. There is little excuse for not trying to do something useful. It sounds ruthless, and it does have the risk of immediately putting us on the defensive (“I am doing something that matters…”).

The idea I can’t get out of my head is that if we took more responsibility for things (i.e. progressively stopped outsourcing everything to others, as in paragraphs 2 and 3 above, and actively learned how to do them ourselves), this would make a massive difference in the long run. We’d be independent of those future generations we’re currently recruiting into our pyramid scheme before they even know about it. We’d all be empowered to understand, participate, create and make, and to generate a world where we have perspicacity – where we can perceive the affordances that different options will give us in future, and make useful decisions based on an appreciation of their longer-term impacts.

A large part of it is being able to understand the consequences and implications of our actions, and how we are affected by, and in turn affect, the situations we’re in – people around us, the environment, the wider world. Where does this water I’m wasting come from? Where does it go? How much does Google know about me? Why? How does a bank make its money? How can I influence a new law? What do all those civil servants do? How was my food produced? Why is public transport so expensive? Would I be able to survive if X or Y happened? Why not? What things that I do everyday are wasteful of my time and money? How much is the purchase of item Z going to cost me over the next year? What will happen when it breaks? Can I fix it? Why not? And so on.

You might think we need more transparency of the power structures and infrastructures around us – and we do – but I prefer to think of the solution as being tooling us up in parallel: we need to have the ability to understand what we can see inside, and focus on what’s actually useful/necessary and what isn’t. Our attention is valuable and we mustn’t waste it.

How can all that be taught?

I remember writing down as a teenager, in some lesson or other, “What we need is a school subject called How and why things are, and how they operate.” Now, that’s broad enough that probably all existing academic subjects would lay claim to part of it. So maybe I’m really calling for a higher overall standard of education.

But the devices and systems we encounter in everyday life, the structures around us, can also help, by being designed to show us (and each other) what they’re doing, whether that’s ‘good’ or ‘bad’ (or perhaps ‘useful’ or not), and what we can do to improve their performance. And by influencing the way we use them, whether nudging, persuading or preventing us getting it wrong in the first place, we can learn as we use. Everyday life can be a constructionist learning process.

This all feeds into the idea of ‘Design for Independence’:

Reducing society’s resource dependence
Reducing vulnerable users’ dependence on other people
Reducing users’ dependence on ‘experts’ to understand and modify the technology they own.

One day I’ll develop this further as an idea – it’s along the lines of Victor Papanek and Buckminster Fuller – but there’s a lot of other work to do first. I hope it’s stuff that matters.

Dan Lockton

On ‘Design and Behaviour’ this week: Do you own your stuff? And a strange council-run ‘Virtual World for young people’

GPS-aided repo and product-service systems

GPS tracking - image by cmpalmer

Ryan Calo of Stanford’s Center for Internet and Society brought up the new phenomenon of GPS-aided car repossession and the implications for the concepts of property and privacy:

A group of car dealers in Oregon apparently attached GPS devices to cars sold to customers with poor credit so as to be able to track them down more easily in the event of repossession.

…this practice also relates to an emerging phenomenon wherein sold property remains oddly connected to the seller as though it were merely leased. Whereas once we purchased an album and did with it as we please, today we need to register (up to five) devices in order to play our songs.

…and Kingston University’s Rosie Hornbuckle linked this to the concept of product-service systems:

This puts a whole new slant on product-service-systems, a current (and popular) sustainability methodology whereby people are weaned off the concept of owning products, instead they lease them off the manufacturer who is then responsible for take-back, repair, recycling or disposal. So in that scenario it’s quite likely that a manufacturer will want to keep tabs on their equipment/material, will this bring up privacy issues or is it simply the case that if it’s done overtly (and not in the negative frame of potential repossession), the customer knows about it and agrees, it’s ok? Or will it be a long time before people can overcome the perceived encroachment on their liberty that not owning might bring?

It reminds me of something Bill Thompson suggested to me once, that (paraphrasing) the idea that we ‘own’ the technology we use might well turn out to be a short phase in overall human history. That could perhaps be ‘good’ in contexts where sharing/renting/pooling things allows much greater efficiency and brings benefits for users. Nevertheless, as the repossession example (and DRM, etc, in general) show, the tendency in practice is often to use these methods to exert increasing dominance over users, erode assumed rights, and extract more value from people who no longer have control of the things they use.

See the whole thread so far (and join in!)

Above image of GPS trails (unrelated to the story, but a cool picture) from cmpalmer’s Flickr

The Mosquito, and plans for an odd ‘walk-in virtual world’

McDonald's Restaurant, Windsor, Berkshire

Rosie discussed the Mosquito (above image: an example outside a McDonald’s opposite Windsor Castle*) and asked “could we use our design skills and knowledge to influence these sorts of behaviours with a less aggressive and longer-term approach?” while Adrian Short summed up the issue pretty well:

There are a lot of problems in principle and in practice with these devices, but the core problem for me is that they tend to be directed at users rather than uses (i.e. people by identity, not behaviour) and are entirely arbitrary. The street outside a shop is public space and the shop owners have no more right than anyone else to dictate who goes there.

In as much as these things work (which is highly disputed), they are never going to encourage a meaningful debate about norms of behaviour among users of a space. This approach is not so much negotiation as warfare.

Sutton’s Rosehill steps (which Adrian let me know about originally) were also discussed and Adrian brought us the story of something very odd: a ‘virtual world to teach good behaviour to young people’:

Half a mile away, the same council is proposing to spend at least £4 million on a facility that will include a high-tech virtual street environment, a “street simulator” if you like, to teach safety and good behaviour to some of the same young people.

“Part movie-set, part theme park, the learning complex will be the first of its kind in the UK and will also house an indoor street with shop fronts, pavements and a road. The idea is to give young people the confidence to make the best of their lives and have a positive impact on their peers and their local community.”

I don’t really know what to make of that. I actually woke up this morning thinking about it assuming that it was a dream I’d been having, then realised where I’d read about it. It sounds like a mish-mash of Scaramanga’s Fun House from The Man With The Golden Gun and the Ludovico Centre** from A Clockwork Orange.

Scaramanga's FunhouseLudovico Centre

See the whole thread here.

*This particular McDonald’s, with the Mosquito going every evening and clearly audible to me and my girlfriend (both mid-20s) also features a vicious array of anti-sit spikes (below) which rather negate the ‘welcoming’ efforts made with the flowerbed.

**I actually gave a talk about my research to Environmentally Sensitive Design students in this building a couple of weeks ago: it’s Brunel’s main Lecture Centre.

McDonald’s Restaurant, Windsor, Berkshire

Placebo buttons, false affordances and habit-forming

Elevator graph


This is a great graph from GraphJam, by ‘Bloobeard’. It raises the question, of course, of whether the ‘door close’ buttons on lifts/elevators actually do anything, or are simply there to ‘manage expectations‘ or act as a placebo.

The Straight Dope has quite a detailed answer from 1986:

The grim truth is that a significant percentage of the close-door buttons [CDB] in this world, for reasons that we will discuss anon, don’t do anything at all.

In the meantime, having consulted with various elevator repairmen, I would say that apparent CDB nonfunctionality may be explained by one of the following:

(1) The button really does work, it’s just set on time delay.
Suppose the elevator is set so that the doors close automatically after five seconds. The close-door button can be set to close the doors after two or three seconds. The button may be operating properly when you push it, but because there’s still a delay, you don’t realize it.

(2) The button is broken. Since a broken close-door button will not render the elevator inoperable and thus does not necessitate an emergency service call, it may remain unrepaired for weeks.

(3) The button has been disconnected, usually because the building owner received too many complaints from passengers who had somebody slam the doors on them.

(4) The button was never wired up in the first place. One repair type alleges that this accounts for the majority of cases.

Gizmodo, more recently, contends that:

…the Door Close button is there mostly to give passengers the illusion of control. In elevators built since the early ’90s, the button is only enabled in emergency situations, with a key held by an authority.

Door close button

This is clearly not always true; I’ve just tested the button in the lift down the corridor here at Brunel (installed around a year ago) and it works fine. So it would seem that enabling the functionality (or not) or modifying it (e.g. time delays) is a decision that can be made for each installation, along the lines of the Straight Dope information.

If there’s a likelihood (e.g. in a busy location) that people running towards a lift will become antagonised by those already inside pressing the button (deliberately or otherwise) and closing the door on them, maybe it’s sensible to disable it, or introduce a delay. If the installation’s in a sparsely populated corner of a building where there’s only likely to be one lift user at a time, it makes sense for the button to be functional. Or maybe for the doors to close more quickly, automatically.
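The per-installation choices described above – an automatic close delay, a shorter delay triggered by the button, or a disabled button – can be sketched as a tiny model. Everything here (the function name, the delay values) is hypothetical, purely to illustrate why a button that genuinely works can still feel like a placebo:

```python
# Toy model of the lift-door logic described above: doors close
# automatically after `auto_delay` seconds, and a *functional* ‘door
# close’ button merely starts a shorter countdown. `button_delay=None`
# models a disconnected or never-wired button (cases 3 and 4 above).
# All names and numbers are invented for illustration.

def seconds_until_close(auto_delay=5.0, button_delay=3.0,
                        button_pressed_at=None):
    """Time after the doors open at which they start to close."""
    if button_pressed_at is None or button_delay is None:
        return auto_delay              # no press, or button disabled
    # The press starts its own, shorter countdown...
    from_button = button_pressed_at + button_delay
    # ...but the automatic close still wins if it comes first.
    return min(auto_delay, from_button)
```

Pressing one second after the doors open closes them at four seconds instead of five: the button ‘worked’, but the remaining delay means the passenger may never realise it (case 1 in the Straight Dope list).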

But thinking about this more generally: how often are deceptive buttons/controls/options – deliberate false affordances – used strategically in interaction design? What other examples are there? Can it work when a majority of users ‘know’ that the affordance is false, or don’t believe it any more? Do people just give up believing after a while – the product has “cried Wolf” too many times?

Matt Webb (Mind Hacks, Schulze & Webb) has an extremely interesting discussion of the extinction burst in conditioning, which seems relevant here:

There’s a nice example I read, I don’t recall where, about elevators. Imagine you live on the 10th floor and you take the elevator up there. One day it stops working, but for a couple of weeks you enter the elevator, hit the button, wait a minute, and only then take the stairs. After a while, you’ll stop bothering to check whether the elevator’s working again–you’ll go straight for the stairs. That’s called extinction.

Here’s the thing. Just before you give up entirely, you’ll go through an extinction burst. You’ll walk into the elevator and mash all the buttons, hold them down, press them harder or repeatedly, just anything to see whether it works. If it doesn’t work, hey, you’re not going to try the elevator again.

But if it does work! If it does work then bang, you’re conditioned for life. That behaviour is burnt in.
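Matt Webb’s elevator anecdote can be caricatured as a toy model of extinction. This is purely illustrative – the press counts, the decay rule and the burst size are invented, not drawn from the conditioning literature:

```python
# Toy model of extinction and the extinction burst, following the
# elevator story above. Daily presses decay while they go unrewarded,
# spike once in a final ‘burst’, then stop entirely; a success resets
# the habit. All numbers are invented for illustration.

def daily_presses(works_on_day=None, days=8):
    """Return presses per day; 0 means heading straight for the stairs."""
    history, rate, bursted = [], 4, False
    for day in range(days):
        if rate == 0:
            history.append(0)           # extinguished: no more pressing
            continue
        history.append(rate)
        if works_on_day == day:
            rate, bursted = 4, False    # reinforced: the habit is burnt in
        elif bursted:
            rate = 0                    # the burst failed too: give up
        elif rate > 1:
            rate //= 2                  # unrewarded: responding decays
        else:
            rate, bursted = 8, True     # extinction burst: mash everything
    return history
```

With no reward the sequence runs 4, 2, 1, then a burst of 8, then nothing. If the lift happens to work on the burst day, pressing resumes at full strength the next day – Webb’s “conditioned for life” case.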

I think this effect has a lot more importance in everyday interaction with products/systems/environments than we might realise at first – a kind of mild Cargo Cult effect – and designers ought to be aware of it. (There’s a lot more I’d like to investigate about this effect, and how it might be applied intentionally…)

We’ve looked before at the thermostat wars and the illusion of control in this kind of context. This relates to the illusion of control, the psychological effect studied by Ellen Langer and others, in which people are shown to believe they have some control over things they clearly don’t. In most cases a button does afford us control, and we rationally expect it to; an expectation presumably builds up that similar buttons will do similar things in every lift we step into. If we’re used to a button not doing anything, we either stop bothering to press it, or we keep pressing it every time “on the off-chance that one of these days it’ll work”.

How those habits form can have a large effect on how the products are, ultimately, used, since they often shake out into something binary (you either do something or you don’t): if you got a bad result the first time you used the 30 degree ‘eco’ mode on your washing machine, you may not bother ever trying it again, on that machine or on any others. If pressing the door close button seems to work, that behaviour gets transferred to all lifts you use (and it takes some conscious ‘extinction’ to change it).

There’s no real conclusion to this post, other than that it’s worth investigating this subject further.

Donella Meadows’ Leverage Points

Scott Wilson first pointed me in the direction of Donella Meadows’ ‘Leverage Points – Places to Intervene in a System‘ [PDF, 93 kB], and it’s been very useful in thinking about the ‘Design with Intent’ idea at a system level rather than just the myopic preoccupation with armrests on park benches and interface design which it could have become.

Read More

Designing Safe Living

New Sciences of Protection logo Lancaster University’s interdisciplinary Institute for Advanced Studies (no, not that one) has been running a research programme, New Sciences of Protection, culminating in a conference, Designing Safe Living, on 10-12 July, “investigat[ing] ‘protection’ at the intersections of security, sciences, technologies, markets and design.”

The keynote speakers include the RCA’s Fiona Raby, Yahoo!’s Benjamin Bratton and Virginia Tech’s Timothy Luke, and the conference programme [PDF, 134 kB] includes some intriguing sessions on subjects such as ‘The Art/Design/Politics of Public Engagement’, ‘Designing Safe Citizens’, ‘Images of Safety’ and even ‘Aboriginal Terraformation (performance panel)’.

I’ll be giving a presentation called ‘Design with Intent: Behaviour-Shaping through Design’ on the morning of Saturday 12 July in a session called ‘Control, Design and Resistance’. There isn’t a paper to accompany the presentation, but here’s the abstract I sent in response to being invited by Mark Lacy:

Design with Intent: Behaviour-Shaping through Design
Dan Lockton, Brunel Design, Brunel University, Uxbridge, Middlesex UB8 3PH

“Design can be used to shape user behaviour. Examples from a range of fields – including product design, architecture, software and manufacturing engineering – show a diverse set of approaches to shaping, guiding and forcing users’ behaviour, often for intended socially beneficial reasons of ‘protection’ (protecting users from their own errors, protecting society from ‘undesirable’ behaviour, and so on). Artefacts can have politics. Commercial benefit – finding new ways to extract value from users – is also a significant motivation behind many behaviour-shaping strategies in design; social and commercial benefit are not mutually exclusive, and techniques developed in one context may be applied usefully in others, all the while treading the ethical line of persuasion-vs-coercion.

Overall, a field of ‘Design with Intent’ can be identified, synthesising approaches from different fields and mapping them to a range of intended target user behaviours. My research involves developing a ‘suggestion tool’ for designers working on social behaviour-shaping, and testing it by application to sustainable/ecodesign product use problems in particular, balancing the solutions’ effectiveness at protecting the environment, with the ability to cope with emergent behaviours.”

The programme’s rapporteur, Jessica Charlesworth, has been keeping a very interesting blog, Safe Living throughout the year.

I’m not sure what my position on the idea of ‘designing safe living’ is, really – whether that’s the right question to ask, or whether ‘we’ should be trying to protect ‘them’, whoever they are. But it strikes me that any behaviour, accidental or deliberate, however it’s classified, can be treated/defined as an ‘error’ by someone, and design can be used to respond accordingly, whether viewed through an explicit mistake-proofing lens or simply designing choice architecture to suggest the ‘right’ actions over the ‘wrong’ ones.

User intent and emergence

Something which came out of the seminar at Brunel earlier this week (thanks to everyone who came along) was the idea that any method of selecting ways to design products that aim to shape or guide users’ behaviour really must incorporate some evaluation of users’ actual goals in using the product – users’ intent – alongside that of the designer/planner. This seems obvious, but I hadn’t explicitly thought of it before as a prerequisite for the actual selection method (instead, I’d assumed these kinds of issues could be shaken out during the design process, based on designers’ experience and judgement, and then in user testing). In retrospect it really does need to be considered much earlier in the process, while actually choosing which approaches are going to be explored. (Given how long I’ve spent reading about bad design and poor usability, you’d think I’d have twigged this earlier.)
Read More