Category Archives: Foucault

Shaping behaviour: Part 1

A couple of months ago I posted about the ‘shaping behaviour’ research of RED, part of the UK Design Council. At the time I noted in passing a classification of design approaches for shaping behaviour, mentioned by RED’s Chris Vanstone: “stick*, carrot or speedometer.” It’s worth looking further at this classification and how it relates to the spectrum of control, especially in a technology context:

Yes, it's a stick (well, a branch), next to a PCB

Stick

If we define ‘stick’ as ‘punishing the user for attempted deviation from prescribed behaviour’, then many of the architectures of control we’ve examined on this site demonstrate the stick approach. They’re not explicitly ‘technologies of punishment’ in Foucault‘s phrase, but rather a form of structural punishment. The thinking seems to be (for example):

  • If you try to sleep on this bench, you will be uncomfortable (and hence won’t do it again)
  • If you try to copy a DVD, your copy will be degraded and your time and blank DVD wasted (and hence you won’t do it again, or will buy another authorised original)
  • If you try to view our website using a competitor’s browser, your experience will be broken (and hence you’ll switch to our browser)
  • If you try to skateboard here, your board will be damaged and you will be maimed (and hence you won’t do it again)
  • …and so on. There are numerous other examples from software and urban planning, especially.

    The thing is, though, for each of those ‘sticks’, a large percentage of people will not be obedient in the face of the ‘punishment’. They’ll try to find a way round it: a way of achieving their original objective but avoiding the punishment. They’ll search for what others in similar situations have done (e.g. DeCSS in the DVD example) or ask among friends until they find someone with the required expertise or who knows about an alternative. They may even actively destroy the ‘stick’ that punishes them. In some cases they might not even understand that they’re being punished, simply seeing ‘the system’ as beyond their comprehension or stacked against them.

    Equally, there isn’t always a rational strategy behind the ‘stick’ in the first place. The anti-homeless bench doesn’t ‘solve’ the ‘problem of homelessness’. It just punishes those who try to lie down on it without offering an alternative. It’s punishment with no attempt at resolving the problem.

    If a stick does get people to change their behaviour in the intended way, it will be accompanied by resentment, anger and dissatisfaction. It may only be fear of the consequences which prevents actual rebellion. In short: using sticks to change people’s behaviour is not a good idea.

    Carrots: image from image.frame

    Carrot

    A ‘carrot’ means offering users an incentive to change their behaviour. This moves away from actual control to something closer to some aspects of captology – making a persuasive case for behaviour change through demonstrating its benefits rather than punishing those who disobey.

    To some extent, control and incentives may be incompatible. Taking away functionality from users then showing them how they can get it back (usually by paying something) might be a classic combined “carrot and stick” technique, but it’s also bordering on a protection racket, and it doesn’t fool many people.

    However, can control be used in conjunction with genuine incentives to serve the agendas of both sides? Electric lights that turn off automatically if no-one’s in the room take some control away from the user, but also offer benefits to both the user (lower electricity bills) and society as a whole (less energy used). But if they turn off automatically, is there actually any incentive for the user to change his or her behaviour? If we’re always spoon-fed, will we ever learn?

    Perhaps mistake-proofing measures or forcing functions which allow a user to increase his or her productivity or safety, in return for giving up some ‘control’ – which may not be highly valued anyway – fit the definition best. If I’m working in a factory painting coachlines on hand-built bicycles, a steady guide arm that damps my arm vibrations – but only if I also take care – takes some control away from me, but also prevents me making mistakes, allowing me to paint more coachlines per hour, more accurately. It also helps my employer.

    But that’s a very weak degree of control. Unless anyone can come up with counter-examples, I would suggest that providing real incentives for users to change their behaviour is fundamentally a very different approach from the ‘control mindset’ (unless you are trying to trick people by offering false incentives, or by understating what they could lose by changing their behaviour).

    I’ll get round to speedometers in a future post, since this approach is worthy of a deeper treatment.

    *The phrase “carrot and stick” seems now universally to imply “offering incentives with one hand and punishment with the other” (though not necessarily at the same time), rather than the “carrot dangling from a stick, just out of reach” meaning (i.e. “motivating people to perform with incentives which will never be fulfilled”) which I first assumed it to have when I heard the phrase as a kid (I’m not the only one with this issue). In this post, I’ll use “stick” to mean “punishment”.

    A vein attempt?

    Blue lighting makes it more difficult to see veins

    Blue lighting is sometimes used in public toilets (restrooms) to make it more difficult for drug users to inject themselves (veins are harder to see). The above implementation is in Edinburgh, next to the Tron Kirk.

    It was indeed more difficult to see my veins through my skin, but there was normal-coloured lighting in the street outside, so one would assume that users would just go outside instead, though the risk of detection there is greater. (An additional result of the blue lighting is that, on going outside after spending more than a few seconds in the toilets, the daytime world appears much brighter and more optimistic, even on an overcast day: could retail designers or others make use of this effect? Do they already?)

    So the blue lighting ‘works’, but is it really a good idea to increase the risk that an injection will be done wrongly – maybe multiple times? This is perhaps a similar argument to that surrounding deliberately reduced visibility at junctions: the architecture of control makes the situation more dangerous for the few users (and those their actions affect) who ignore or bypass the control. This seems to be an architecture of control with the potential to endanger life, even though the actual stated intention behind it probably includes ‘saving lives’.

    Without knowing more about addiction, however, I can’t say whether making it difficult for people to inject will really help stop them doing it; it would seem more likely that (as in the linked Argus story), the aim of the blue lighting is to move the ‘problem’ somewhere else rather than actually ‘solve’ it – as with the anti-homeless benches, in fact.

    Another example in this kind of area is the use of smoke alarms specifically to prevent people smoking in toilets, e.g. on aeroplanes (the noise, and embarrassment, is a sufficient deterrent). There’s even been the suggestion of using the Mosquito high-pitched alarm coupled to a smoke detector to ‘prevent’ children smoking in school toilets (I’d expect that quite a few would deliberately try to set them off; I know I would have as a kid). A friend mentioned the practice of siting smoking shelters a long way from office buildings so that smokers are discouraged from going so often; this backfired for the company concerned, as smokers just took increasingly long breaks to make it ‘worth their while’ to walk the extra distance.

    Bruce Schneier: Architecture & Security

    The criminology students at Cambridge have an excellent view of dystopian architecture

    Bruce Schneier talks about ‘Architecture and Security’: architectural decisions based on the immediate fear of certain threats (e.g. car bombs, rioters) continuing to affect users of the buildings long afterwards. And he makes the connexion to architectures of control outside of the built environment, too:

    “The same thing can be seen in cyberspace as well. In his book, Code and Other Laws of Cyberspace, Lawrence Lessig describes how decisions about technological infrastructure — the architecture of the internet — become embedded and then impracticable to change. Whether it’s technologies to prevent file copying, limit anonymity, record our digital habits for later investigation or reduce interoperability and strengthen monopoly positions, once technologies based on these security concerns become standard it will take decades to undo them.

    It’s dangerously shortsighted to make architectural decisions based on the threat of the moment without regard to the long-term consequences of those decisions.”

    Indeed.

    The commenters detail a fantastic array of ‘disciplinary architecture‘ examples, including:

  • Pierce Hall, University of Chicago, “built to be ‘riotproof’ by elevating the residence part of the dorm on large concrete pillars and developing chokepoints in the entranceways so that rioting mobs couldn’t force their way through.” (There must be lots of university buildings like this)
  • “The Atlanta Fed building has a beautiful lawn which surrounds the building, and is raised 4 or 5 feet from the surrounding street, with a granite restraining wall. It’s a very effective protection against truck bombs.”
  • The wide boulevards of Baron Haussmann’s Paris, intended to prevent barricading (a frequently invoked example on this blog)
  • The UK Ministry of Defence’s Defence Procurement Agency site at Abbey Wood, Bristol, “is split into car-side and buildings; all parking is as far away from the buildings (car bomb defence), especially the visitor section. you have to walk over a narrow footbridge to get in.

    Between the buildings and the (no parking enforced by armed police) road is ‘lake’. This stops suicide bomber raids without the ugliness of the concrete barriers.

    What we effectively have is a modern variant of an old castle. The lake supplants the moat, but it and the narrow choke point/drawbridge.”

  • SUNY Binghamton’s “College in the Woods, a dorm community… features concrete ‘quads’ with steps breaking them into multiple levels to prevent charges; extremely steep, but very wide, stairs, to make it difficult to defend the central quad”
  • University of Texas at Austin: “The west mall (next to the Union) used to be open and grassy. They paved it over with pebble-y pavement to make it painful for hippies to walk barefoot and installed giant planters to break up the space. They also installed those concrete walls along Guadalupe (the drag) to create a barrier between town and gown, and many other ‘improvements’.”
    I’m especially amused by the “making it painful for hippies to walk barefoot” comment! This is not too far from the anti-skateboarding corrugation sometimes used (e.g. the third photo here), though it seems that in our current era, there is a more obvious disconnect between ‘security’ architecture (which may also involve vast surveillance or everyware networks, such as the City of London’s Ring of Steel) and architecture aimed at stopping ‘anti-social’ behaviour, such as homeless people sleeping, skateboarders, or just young people congregating.

    BBC: Surveillance drones in Merseyside

    From the BBC: ‘Police play down spy planes idea’:

    “Merseyside Police’s new anti-social behaviour (ASB) task force is exploring a number of technology-driven ideas.

    But while the use of surveillance drones is among them, they would be a “long way off”, police said.

    “The idea of the drone is a long way off, but it is about exploring all technological possibilities to support our war on crime and anti-social behaviour.”

    Note that “anti-social behaviour” is mentioned separately from “crime”. Why? Also, nice appropriation of the “war on xxx” phrasing.

    “It plans to utilise the latest law enforcement technology, including automatic number plate recognition (ANPR), CCTV “head-cams” and metal-detecting gloves.”

    This country’s had it.

    We’ve got Avon & Somerset Police using helicopters with high-intensity floodlights to “blind groups of teenagers temporarily” and councils using tax-payers’ money to install devices to cause deliberate auditory pain to a percentage of the population, again, whether or not they have committed a crime. Anyone would think that those in power despised their public. Perhaps they do.

    Has it ever occurred to the police that tackling the causes of the problem might be a better solution than attacking the symptoms with a ridiculous battery of ‘technology’?

    Reversing the emphasis of a control environment

    Image from Monkeys & Kiwis (Flickr)

    Chris Weightman let me know how it felt to watch last Thursday’s iPod Flashmob at London’s Liverpool Street station: the dominant sense was of a mass of people overturning the ‘prescribed’ behaviour designed into an environment, turning the area into their own canvas and overlaying individualised, externally silent experiences on the usual commuter traffic.

    Probably wouldn’t get away with that sort of thing at an airport any more anyway, but what will happen to this kind of informal gathering in the era of the societies of control? When everyware monitors exactly who’s where and forces the barriers closed for anyone hoping to use the space for something other than that for which it was intended?

    Casino programmable*

    Part of the cover of a late-60s Pan edition of Casino Royale

    Signal vs Noise talks about the casino experience – a world awash with designed-in architectures of control, both physical and psychological (and physiological, perhaps), truly environments designed specifically to manipulate and reinforce certain behaviour, from maze-like layouts (intentional route obfuscation – perhaps even more so than in supermarkets) to the deliberate funnelling of winners past many other places to spend their chips on the way to the cashier’s window.

    While the commenters (including ‘Hunter’ who runs a blog on casino design) attempt to clarify/debunk some of the more legendary ‘casino tricks’ including restricting daylight and pumping extra oxygen onto the floor, it’s clear that an enormous wealth of expertise has developed over the years to maximise the control of players and thus maximise casinos’ takings.

    A couple of months ago, Scott Craver mentioned another interesting casino trick:

    “This casino had a cell-phone blocker, and of course our conference room would have no wi-fi. Apparently the goal is to attract people to machines and disconnect them from everything else in the world. From the gambling areas you cannot tell if it is day or night. And the way everything was designed to suck people in had all the subtlety of a mousetrap.”

    (Despite spending most of my formative years reading the James Bond books over and over again, and being fascinated by Thomas Bass’s The Newtonian Casino, I’ve only ever actually been in one ‘proper’ casino, in London, and I spent most of that time watching a friend play blackjack and trying to apply what I could remember from Bringing Down The House, so I’m not really very familiar with the subject. But it’s extremely interesting, and worthy of more research – and comparison with other ‘public’ environments.)

    *Yeah, it’s a calculated pun!