In the earlier days of this blog, many of the posts were about code, in the Lawrence Lessig sense: the idea that the structure of software and the internet and the rules designed into these systems don’t just parallel the law (in a legal sense) in influencing and restricting public behaviour, but are qualitatively different, enabling distinct forms of affordance and constraint. Designers (and developers) — or in many cases those overseeing the process — in this sense potentially wield a lot of (political) power.
EDIT (April 2013): An article based on the ideas in this post has now been published in the International Journal of Design – which is open-access, so it’s free to read/share. The article refines some of the ideas in this post, using elements from CarbonCulture as examples, and linking it all to concepts from human factors, cybernetics and other fields.
There are lots of models of human behaviour, and as the design of systems becomes increasingly focused on people, modelling behaviour has become more important for designers. As Jon Froehlich, Leah Findlater and James Landay note, “even if it is not explicitly recognised, designers [necessarily] approach a problem with some model of human behaviour”, and, of course, “all models are wrong, but some are useful”. One of the points of the DwI toolkit (post-rationalised) was to try to give designers a few different models of human behaviour relevant to different situations, via pattern-like examples.
I’m not going to get into which models are ‘best’ / right / most predictive for designers’ use here. There are people doing that more clearly than I can, and there’s more to say than I have time for at present. What I am going to talk about is an approach which has emerged from some of the ethnographic work I’ve been doing for the Empower project, working on CarbonCulture with More Associates. Asking users how and why they behaved in certain ways with technology (in particular around energy-using systems) led to answers which were resolvable into something like rules: I’m talking about behavioural heuristics.
A couple of weeks ago, at dConstruct 2011 in Brighton, 15 brave participants took part in my full-day workshop ‘Influencing behaviour: people, products, services and systems’, in which I was very kindly assisted by Sadhna Jain from Central Saint Martins. As a reference for the people who took part, for me, and for anyone else who might be intrigued, I thought I would write up what we did. The conference itself was extremely interesting, as usual, with a few talks which provoked more discussion than others – as much about presentation style as content, I think (others have covered the conference better than I can). And, of course, I met (and re-connected with) some brilliant people.
I’ve run quite a few workshops in both corporate and educational settings using the Design with Intent cards or worksheets (now also available as a free iPad app from James Christie) but this workshop aimed to look more broadly at how designers can understand and influence people’s behaviour. This is also the first ‘public’ workshop that I’ve done under the Requisite Variety name, which doesn’t mean much different in practice, but is something of a milestone for me as a freelancer.
In the previous post I outlined what I had planned, and while in the event the programme deviated somewhat from this, I think overall it was reasonably successful. Rather than using a case study (when people are paying to come to a workshop, I feel uneasy about effectively asking them to do work for someone else), we ran through a series of exercises intended to explore different aspects of how design and people’s behaviour relate to each other, and perhaps uncover some insights which would make it easier to incorporate a consideration of this into a design process.
Whether we choose to do it or not, what we design is going to affect how users behave, so we might as well think about it, and—if we can—actually get good at it. Bridging the gap between physical and digital product design, a systems approach can help us understand how people interact with the different touchpoints they experience, how mental models and cognitive biases and heuristics influence the way people make decisions about what to do, and hence how we might apply that knowledge (for good).
In this full-day practical workshop, we’ll try a novel approach to design and behaviour, using ourselves as both designers and cybernetic guinea pigs in exploring and developing a combination of physical and digital experiences. You’ll learn how to improve your own decision-making and understanding of how your behaviour is influenced by the systems around you, as well as ways to influence others’ behaviour, through a new approach to designing at the intersection of people, products, services and systems.
So what will the day actually involve? (You’re entitled to ask: the above is admittedly vague.) I’ve run quite a lot of workshops in the last couple of years, mainly using the Design with Intent toolkit in one form or another to help groups generate concepts for specific behaviour change contexts, but this one is slightly different, taking advantage of a full day to explore more areas of how design and behaviour interact, in a way which I hope complements dConstruct’s overall theme this year of “bridging the gap between physical and digital product design” usefully and interestingly. Also, the concept of ‘design for behaviour change’ is probably no longer new and exciting (at least to the dConstruct audience) in quite the way it might have been a few years ago: a more nuanced, developed, thoughtful exploration is needed. We’ll be using some of the Design with Intent cards throughout the workshop, but they’re not the main focus.
My plan is for the workshop to have four stages (three shorter ones in the morning, and one longer one for the afternoon):
Central heating systems have interfaces, and many of us interact with them every day, even if only by experiencing their effects.
But there’s a lot of room for improvement. They’re systems where (unlike, say, a car) we don’t generally get instantaneous feedback on the changes we make to settings or the interactions we have with the interface. It’s a slow feedback loop. We don’t necessarily have correct mental models of how they work, yet the systems cost us (a lot of) money. How effectively do we use them? Around 60% of UK domestic energy use goes on space heating, and 24% on water heating. (See this Building Research Establishment report [PDF] for more detailed breakdowns.) That 84% cost me and my girlfriend £430 last year. It’s worth thinking about from a financial point of view, regardless of the environmental aspects.
Heating systems are something we all interact with, especially in the depths of winter when we depend on them, and yet there seems to have been very little evolution in the design of their interfaces. What’s more, with an ever-increasing focus on energy efficiency, from both an environmental and an economic standpoint, there’s a need for heating systems and their interfaces to be smarter, more efficient and more transparent.
The Rattle team think through existing systems and consider a number of possible revisions to improve the way that information is presented to users, and the level of control that it might be useful for users to have. This is a great piece of work, impressive and very thorough, and it’s interesting to see how their thinking evolved: I get the impression that (as service designers) they’re a lot more focused on users’ needs than the designers of many heating systems are. It’s also an exciting thing for a design company to be able to take time to address problems outside their immediate sphere, since they’re bringing a whole new level of domain expertise to it.
The ‘I’m working’ indicator is a really good idea – it reminds me of some higher-end car tyre air pumps at petrol stations where you can just set the pressure you want to achieve, and the pump cuts out (and alerts you) when it reaches it. But the idea of doing away with the ‘desired temperature’ setting and just having warmer/colder is also interesting – “forc[ing] people to always make decisions based upon how they’re feeling right now”.
Equally the ‘shift to service’ approach of having an API and making clever use of it has a big potential to help in energy saving (and cost saving for the user), especially if the usage data were (anonymised or otherwise) available for analysis. Just being able to tell users “it’s costing you £X more to heat your home than it does for a similar family in a similar house down the road – if you insulated better you could save £X every month” would be an interesting mechanism for persuasion. As with so many things, it relies on having that API or other interface available in the first place…
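As a thought experiment, the kind of comparative message suggested above could be sketched in a few lines. This is purely hypothetical: it assumes anonymised heating-cost data for similar nearby homes were somehow available via such an API, and the function name and figures are made up for illustration.

```python
# Hypothetical sketch of a comparative-feedback message, assuming
# anonymised monthly heating costs for similar homes were available.

def comparison_message(your_monthly_cost, similar_homes_costs):
    """Compare a household's heating cost with similar nearby homes."""
    average = sum(similar_homes_costs) / len(similar_homes_costs)
    difference = your_monthly_cost - average
    if difference > 0:
        return (f"It's costing you £{difference:.0f} more per month to heat "
                f"your home than similar households nearby - better "
                f"insulation could save you up to £{difference:.0f} a month.")
    return "Your heating costs are already below the local average."

print(comparison_message(95, [70, 80, 90]))
```

The interesting design question is less the arithmetic than the framing: social comparison with ‘a similar family in a similar house down the road’ is likely to be more persuasive than an abstract kWh figure.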
There are two commonly held folk theories about thermostats: the timer theory and the valve theory. The timer theory proposes that the thermostat simply controls the relative proportion of time that the device stays on. Set the thermostat midway, and the device is on about half the time; set it all the way up and the device is on all the time. Hence, to heat or cool something most quickly, set the thermostat so that the device is on all the time. The valve theory proposes that the thermostat controls how much heat (or cold) comes out of the device. Turn the thermostat all the way up, and you get the maximum heating or cooling. The correct story is that the thermostat is just an on-off switch. Setting the thermostat at one extreme cannot affect how long it takes to reach the desired temperature.
Say you come in from outdoors, and are cold. Because of the delay in your exposed skin warming up to room temperature, it surely does warm you more quickly if you stand near something that’s hotter than you actually want to be, e.g. a log fire or stove. So the heuristic of ‘turning up the heat to more than you need, in order to feel warmer more quickly’ is pretty understandable, especially when the temperature the thermostat responds to is that of its thermocouple or probe, and not the body temperature of the users themselves. (Sensing that would be a good innovation in itself, of course!) Am I wrong?
Given that a lot of people do try to control heating systems as if they worked on the valve model, would it be sensible to develop one which did? Do they already exist?
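The on-off behaviour described above can be demonstrated with a toy simulation. This is a minimal sketch with made-up room parameters (heater power, heat-loss coefficient), not a model of any real system: under bang-bang control the heater runs flat-out until the setpoint is reached, so cranking the thermostat up cannot get the room to 21°C any faster.

```python
# Toy room model: on-off (bang-bang) thermostat control with
# Newton-style heat loss. All parameters are illustrative.

def simulate(setpoint_c, minutes, start_c=10.0,
             heater_w=0.5, loss_k=0.02, outside_c=5.0):
    """Return the room temperature trace, one value per minute."""
    temp = start_c
    history = []
    for _ in range(minutes):
        heater_on = temp < setpoint_c          # on-off control: no 'valve'
        temp += heater_w if heater_on else 0.0
        temp -= loss_k * (temp - outside_c)    # heat loss to outside
        history.append(temp)
    return history

def minutes_to_reach(target_c, history):
    for minute, temp in enumerate(history, start=1):
        if temp >= target_c:
            return minute
    return None

normal = simulate(setpoint_c=21, minutes=120)
cranked = simulate(setpoint_c=30, minutes=120)

# Both runs cross 21 °C at exactly the same minute: the heater was
# already full-on in both cases until that point.
print(minutes_to_reach(21, normal), minutes_to_reach(21, cranked))
```

A ‘valve model’ heating system, by contrast, would be something like proportional control (heat output scaled to the gap between current and desired temperature), and modulating boilers and TRVs do move in that direction.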
The liquid detection stickers in mobile phones, which allow manufacturers and retailers to ascertain if a phone has got wet, and thus reject warranty claims (whether judiciously/appropriately or not), seem to be concerning a lot of people worldwide. Around a quarter of this site’s visitors are searching for information on this subject, and the comments on last October’s post on the subject contain a wealth of useful experience and advice.
This current thread on uk.legal.moderated goes into more depth on the issue, and how the burden of proof works in this case (at least in the UK). While informed opinion seems to be that the stickers will only change colour when actual liquid is present within the phone, rather than mere moisture or damp, this may well include condensation forming within the casing, as well as the more obvious dropping-of-phone-into-puddle and so on. The main point of contention seems to be that the sticker may change colour (perhaps gradually) and the phone continue working perfectly, but when an unrelated problem occurs and the phone is taken in for repairs under warranty, the presence of the ‘voided’ sticker may be used as a universal warranty get-out even if the actual problem is something different.
Along these lines, one of the posts tells of a similarly interesting design tactic – tilt-detectors on larger hardware:
30 years in the IT industry and associated customer service tells me they are trying it on and most people buy it. In the olden days, hardware used to come with a similar red dot system indicating the kit had been tilted more than 45 degrees and the manufacturers claimed the kit could not be installed and had to be written off.
Of course, 99.9% of the time the kit was fine, but they had a get-out from a warranty claim or so they thought. When the buyers tried to claim on their insurance or against the transport companies insurers the loss adjusters got involved and invariably the kit was installed and worked fine for years rather than the insurers paying out.
In some cases, of course, tilt-detectors were (are still?) necessary in this role. A piece of equipment with multiple vertically cantilevered PCBs laden with heavy components – relays, for example – might well be damaged if the PCBs were tilted away from the vertical. It certainly seems that some devices with small moving-coil components could be damaged by being turned upside down, for example. (Do the ultra-fine damper wires on an aperture-grille CRT monitor such as a Trinitron need to be kept in a particular orientation when handling the monitor?)
This patent, published in 1984, from which the above images were extracted, describes an especially clever ‘interlock’ system using two liquid-based detectors arranged so that if the device/package is tilted and then tilted back again, the second detector will then be triggered:
…it is desirable that the tilt detectors not be resettable. In particular, it must be possible to combine a package with at least a pair of the tilt detectors such that attempting to reset one would cause the other to be tilted beyond its pre-determined maximum angle so that the total combination would always afford an indication that the tilt beyond that allowed had been effected.
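The logic of the interlock can be sketched abstractly. This is my reading of the idea, not the patent’s actual mechanism: each detector latches irreversibly once tilted past its limit, and mounting a pair in opposed orientations means that over-tilting the package the other way to ‘reset’ the first detector trips the second, so at least one always records the mishandling.

```python
# Sketch of the two-detector interlock idea: latching detectors
# mounted in opposed orientations. Angles/limits are illustrative.

class TiltDetector:
    def __init__(self, limit_deg, orientation):
        self.limit = limit_deg
        self.orientation = orientation   # +1 or -1: which way it faces
        self.tripped = False             # latches: can never be unset

    def observe(self, package_tilt_deg):
        if self.orientation * package_tilt_deg > self.limit:
            self.tripped = True

# A package carrying two opposed 45-degree detectors
package = [TiltDetector(45, +1), TiltDetector(45, -1)]

def tilt_package(angle_deg):
    for detector in package:
        detector.observe(angle_deg)

tilt_package(60)    # mishandled: the first detector trips
tilt_package(-60)   # attempt to 'reset' by over-tilting back: second trips
print([d.tripped for d in package])
```

Whatever the exact geometry, the design property is the same: the pair forms a one-way trap, so the evidence of mishandling cannot be erased by further handling.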
This is something of a poka-yoke – but as with the phone liquid-detection stickers, it’s being used to detect undesirable customer/handler behaviour rather than actually to prevent it happening. Other than making a package too heavy to tilt, I’m not sure exactly how we might design something which actually prevents tilting, aside from rectifying the underlying design weakness that makes tilting damaging in the first place (even filling the airspace in the case with non-conductive, low-density foam might help here).
But there’s certainly a way the tilt-detector could be improved to help and inform the handler rather than simply ‘condemn’ the device. For example, it could let out an audible alarm if the package or device is tilted to, say, 20 degrees, allowing the handler to rectify his or her mistake before reaching the damaging 45 degrees, whilst still permanently changing colour if 45 degrees is reached. In the long run, this would probably help educate users about how to handle the device rather than just ‘punishing’ them for an infraction. I’m sure that mercury-switch (or whatever the current non-toxic equivalent is) alarms have been used in this way (e.g. on a vending machine), but how often are they used to help the user rather than to alert security?
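The two-threshold behaviour suggested above is simple enough to sketch. The 20°/45° thresholds and the alert mechanism are illustrative, not from any real device: the point is that a recoverable warning zone sits below an unrecoverable latch.

```python
# Sketch of a 'helpful' tilt indicator: audible warning at 20 degrees
# (recoverable), permanent colour change at 45 degrees (latched).
# Thresholds and the alert callback are illustrative assumptions.

WARN_DEG = 20
TRIP_DEG = 45

class HelpfulTiltIndicator:
    def __init__(self):
        self.tripped = False   # permanent colour change, never resets

    def observe(self, tilt_deg, alert=print):
        angle = abs(tilt_deg)
        if angle >= TRIP_DEG:
            self.tripped = True            # 'condemned' - latches forever
        elif angle >= WARN_DEG:
            alert(f"Warning: tilted {angle} degrees - level the package!")
        return self.tripped

indicator = HelpfulTiltIndicator()
indicator.observe(25)      # audible warning only; handler can recover
indicator.observe(10)      # back within limits: no permanent mark
print(indicator.tripped)   # False - the handler corrected in time
```

The feedback loop is the design change: the handler learns about the danger zone while there is still time to act, instead of only finding out at warranty-claim time.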
The patent description goes on to mention using tamper-evident methods of attaching the detectors to the device or packaging – this is another interesting area, which I am sure we will cover at some point on the blog.