Category Archives: Democracy of innovation

Designers, literature, abstracts and Concretes

Trinity College Dublin Library, by A little coffee with my cream and sugar on Flickr

Last week, I put a quick survey online asking how actual designers make use of academic literature.

It provoked some interesting discussion on Twitter as well as two great blog posts from Dr Nicola Combe and Clearleft’s Andy Budd exploring different aspects of the question: ways to get access to academic research, and the frustrations of the relationship between design practice and academia. Comments on Andy’s article from Vicky Teinaki and Sebastian Deterding helped draw out some of the issues in more detail (and highlighted some of the differences between fields). Kevin Couling has also blogged from the perspective of an engineer, drawing on Nicola’s post. Steven Shorrock pointed to his work with Amy Chung and Ann Williamson addressing similar issues, much more rigorously, within human factors and ergonomics [PDF]. Someone also reminded me that I’d already blogged about related issues back in 2007.

As of now, about 50 people have filled in the survey, a mixture of digital, physical and service design practitioners: thank you everyone, and thanks too to people who emailed comments in addition.

Here’s the full spreadsheet of survey responses (Google Docs) so far. I’ve had some good suggestions for other places to publicise it, so I’ll do this in due course to get a wider scope of practitioners’ opinions.

How do actual designers use academic literature?

The whole point of doing research is to extract reliable knowledge from either the natural or artificial world, and to make that knowledge available to others in re-usable form.

Nigel Cross, ‘Design Research: A Disciplined Conversation’, Design Issues 15(2), 1999, p.9 [PDF link]

Link to a very quick survey

It’s incredibly sad that it took Aaron Swartz’s death, but the issue of open access to academic literature has been dramatically brought to the fore again, coincident with interesting practical developments, some ‘official’ and some less so. The movement towards open access is not going to stop, and in some academic disciplines will leave the ‘landscape’ of journals and publication methods very different.

frog design on Design with Intent

Robert Fabricant of frog design – with whom I had a great discussion a couple of weeks ago in London – has an insightful new article up at frog’s Design Mind, titled, oddly enough, ‘Design with Intent: how designers can influence behaviour’ – which tackles the question of how, and whether, designers can and should see their work as being directed towards behaviour change, and the power that design can have in this kind of application.

It builds on a trend evident in frog’s own work in this field, most notably the Project Masiluleke initiative (which seems to have been incredibly successful in behaviour change terms), as well as a theme Robert’s identified talking to a range of practitioners as well as young designers: “We’re experiencing a sea change in the way designers engage with the world. Instead of aspiring to influence user behaviour from a distance, we increasingly want the products we design to have more immediate impact through direct social engagement.”

The recognition of this nascent trend echoes some of the themes of transformation design – a manifesto developed by Hilary Cottam’s former RED team at the Design Council – and also fits well into what’s increasingly called social design, or socially conscious design – a broad, diverse movement of designers from many disciplines, from service design to architecture, who are applying their expertise to social problems from healthcare to environment to education to communication. With the mantra that ‘we cannot not change the world’, groups such as Design21 and Project H Design, along with alert chroniclers such as Kate Andrews, are inspiring designers to see the potential that there is for ‘impact through direct social engagement’: taking on the mantle of Victor Papanek and Buckminster Fuller, motivated by the realisation that design can be more than ‘the high pitched scream of consumer selling‘, more than simply reactive. Nevertheless, Robert’s focus on influencing people’s behaviour (much as I’ve tried to make clear with my own work on Design with Intent over the last few years), is an explicit emerging theme in itself, and catching the interest of forward-looking organisations such as the RSA.

People

User centred design, constraint and reality

One of the issues Robert discusses is a question I’ve put to the audience in a number of presentations recently – fundamentally, is it still ‘User-Centred Design’ when the designer’s aim is to change users’ behaviour rather than accommodating it? As he puts it, “we influence behaviour and social practice from a distance through the products and services that we create based on our research and understanding of behaviour. We place users at the centre and develop products and services to support them. With UCD, designers are encouraged not to impose their own values on the experience.” Thus, “committing to direct behaviour design [my italics] would mean stepping outside the traditional frame of user-centred design (UCD), which provides the basis of most professional design today.”

Now, ‘direct behaviour design’ as a concept is redolent of determinism in architecture, or the more extreme end of behaviourism, where people (users / inhabitants / subjects) are seen as, effectively, components in a designed system which will respond to their environment / products / conditioning in a known, predictable way, and can thus be directed to behave in particular ways by changing the design of the system. It privileges the architect, the designer, the planner, the hidden persuader, the controller as a kind of director of behaviour, standing on the top floor observing what he’s wrought down below.

I’ll acknowledge that, in a less extreme form, this is often the intent (if not necessarily the result) behind much design for behaviour change (hence my definition for Design with Intent: ‘design that’s intended to influence, or result in, certain user behaviour’). But in practice, people don’t, most of the time, behave as predictably as this. Our behaviour – as Kurt Lewin, James Gibson, Albert Bandura, Don Norman, Herbert Simon, Daniel Kahneman, Amos Tversky and a whole line of psychologists from different fields have made clear – is a (vector) function of our physical environment (and how we perceive and understand it), our social environment (and how we perceive and understand it) and our cognitive decision processes about what to do in response to our perceptions and understanding, working within a bounded rationality that (most of the time) works pretty well. If we perceive that a design is trying to get us to behave in a way we don’t want, we display reactance to it. This is going to happen when you constrain people against pursuing a goal: even the concept of ‘direct behaviour design’ itself is likely to provoke some reactance from you, the reader. Go on: you felt slightly irritated by it, didn’t you?*
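Lewin's own shorthand for this is the well-known heuristic equation

B = f(P, E)

behaviour (B) as a function of the person (P) and the environment (E) – a summary rather than a computable formula, but it captures why 'direct behaviour design' over-reaches: the designer shapes only part of E, while P (perception, goals, reactance) remains outside the designer's control.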

SIM Card poka-yoke

In some fields, of course, design’s aim really is to constrain and direct behaviour absolutely – e.g. “safety critical systems, like air traffic control or medical monitors, where the cost of failure [due to user behaviour] is never acceptable” (from Cairns & Cox, p.16). But decades of ergonomics, human factors and HCI research suggest that errorproofing works best when it helps the user achieve the goal he or she already has in mind. It constrains our behaviour, but it also makes it easier to avoid errors we don’t want. We don’t mind not being able to run the microwave oven with the door open (even though we resented seatbelt interlocks). We don’t mind only being able to put a SIM card in one way round. The design constraint doesn’t conflict with our goal: it helps us achieve it. (It would be interesting to know of cases in Japanese vs. Western manufacturing industry where employees resented the introduction of poka-yoke measures – were there any? What were the specific measures that irritated?)

Returning to UCD, then, I would argue that in cases where design with intent, or design for behaviour change, is aligned with what the user wants to achieve, it’s very much still user-centred design, whether enabling, motivating or constraining. It’s the best form of user-centred design, supporting a user’s goals while transforming his or her behaviour. Some of the most insightful current work on influencing user behaviour, from people such as Ed Elias at Bath and Tang Tang at Loughborough [PPT], starts with achieving a deeper understanding of user behaviour with existing products and systems, to identify better how to improve the design; it seems as though companies such as Onzo are also taking this approach.

Is design ever neutral?

Robert also makes the point that “every [design] decision we make exerts an influence of some kind, whether intended or not”. This argument parallels one of the defences made by Richard Thaler and Cass Sunstein to criticism of their libertarian paternalism concept: however you design a system, whatever choices you decide to give users, you inevitably frame understanding and influence behaviour. Even not making a design decision at all influences behaviour.

staggered crossing

If you put chairs round a table, people will sit down. You might see it as supporting your users’ goals – they want to be able to sit down – but by providing the chairs, you’ve influenced their behaviour. (Compare Seth Godin’s ‘no chair meetings’.) If you constrain people to three options, they will pick one of the three. If you give them 500 options, they won’t find it easy to choose well. If you give them no options, they can’t make a choice, but might not realise that they’ve been denied it. And so on. (This is sometimes referred to as ‘choice editing’, a phrase which provokes substantial reactance!) If you design a pedestrian crossing to guide pedestrians to make eye contact with drivers, you’ve privileged drivers over pedestrians and reinforced the hegemony of the motor car. If you don’t, you’ve shown contempt for pedestrians’ needs. Richard Buchanan and Johan Redström have both also dealt with this aspect of ‘design as rhetoric’, while Kristina Niedderer’s ‘performative objects’ are intended to increase users’ mindfulness of the interactions occurring.

Thaler and Sunstein’s argument (heavily paraphrased, and transposed from economics to design) is that as every decision we make about designing a system will necessarily influence user behaviour, we might as well try and put some thought into influencing the behaviour that’s going to be best for users (and society)**. And that again, to me, seems to come within the scope of user-centred design. It’s certainly putting the user – and his or her behaviour – at the centre of the design process. But then to a large extent – as Robert’s argued before – all (interaction) design is about behaviour. And perhaps all design is really interaction design (or ought to be considered as such during at least part of the process).

Persuasion, catalyst and performance design

Robert identifies three broad themes in using design to influence behaviour – persuasion design, catalyst design and performance design. ‘Persuasion design’ correlates very closely with the work on persuasive technology and persuasive design which has grown over the past decade, from B.J. Fogg’s Persuasive Technology Lab at Stanford to a world-wide collaboration of researchers and practitioners – including designers and psychologists – meeting at the Persuasive conferences (2010’s will be in Copenhagen), of which I’m proud to be a very small part. Robert firmly includes behavioural economics and choice architecture in his description of Persuasion Design, which is something that (so far at least) has not received an explicit treatment in the persuasive technology literature, although individual cognitive biases and heuristics have of course been invoked. I think I’d respectfully argue that choice architecture as discussed in an economic context doesn’t really care too much about persuasion itself: it aims to influence behaviours, but doesn’t explicitly see changing attitudes as part of that, which is very much part of persuasion.

‘Catalyst design’ is a great term – I’m not sure (other than as the name of lots and lots of small consultancies) whether it has any precedent in the design literature or whether Robert coined it himself (something Fergus Bisset asked me the other day on reading the article). On first sight, catalyst design sounds as though it might be identical with Buckminster Fuller’s trimtab metaphor – a small component added to a system which initiates or enables a much larger change to happen more easily (what I’ve tried to think of as ‘enabling behaviour‘). However, Robert broadens the discussion beyond this idea to talk about participatory and open design with users (such as Jan Chipchase‘s work – or, if we’re looking further back, Christopher Alexander and his team’s groundbreaking Oregon Experiment). In this sense, the designer is the catalyst, facilitating innovation and behaviour change. User-led innovation is a massive, and growing, field, with examples of both completely ground-up development (with no ‘designer as catalyst’ involved) and programmes where a designer or external expert can, through engaging with people who use and work with a system, really help transform it (Clare Brass’s SEED Foundation’s HiRise project comes to mind here). But it isn’t often spoken about explicitly in terms of behaviour change, so it’s interesting to see Robert present it in this context.

Finally, ‘performance design’, as Robert explains it, involves designers performing in some way, becoming immersed in the lives of the people for whom they are designing. From a behaviour change perspective, empathising with users’ mental models, understanding what motivates users during a decision-making process, and why certain choices are made (or not made), must make it easier to identify where and how to intervene to influence behaviour successfully.

Implications for designers working on behaviour change

It’s fantastic to see high-profile, influential design companies such as frog explicitly recognising the opportunities and possibilities that designers have to influence user behaviour for social benefit. The more this is out in the open as a defined trend, a way of thinking, the more examples we’ll have of real-life thinking along these lines, embodied in a whole wave of products and services which (potentially) help users, and help society solve problems with a significant behavioural component. (And, more to the point, give us a degree of evidence about which techniques actually work, in which contexts, with which users, and why – there are some great examples around at present, both concepts and real products – e.g. as collated here by Debra Lilley – but as yet we just don’t have a great body of evidence to base design decisions on.) It will also allow us, as users, to become more familiar with the tactics used to influence our behaviour, so we can actively understand the thinking that’s gone into the systems around us, and choose to reject or opt out of things which aren’t working in our best interests.

The ‘behavioural layer’ (credit to James Box of Clearleft for this term) is something designers need to get to grips with – even knowing where to start when you’re faced with a design problem involving influencing behaviour is something we don’t currently have a very good idea about. With my Design with Intent toolkit work, I’m trying to help with this part of the process, alongside a lot of people interested, on many levels, in how design influences behaviour. It will be interesting over the next few years to see how frog and other consultancies develop expertise and competence in this field, how they choose to recruit the kind of people who are already becoming experts in it – and how they sell that expertise to clients and governments.

Update: Robert responds – The ‘Ethnography Defense’

Dan Lockton, Design with Intent / Brunel University, June 2009

*TU Eindhoven’s Maaike Roubroeks used this technique to great effect in her Persuasive 2009 presentation.
**The debate comes over who decides – and how – what’s ‘best’ for users and for society. Governments don’t necessarily have a good track record on this; neither do a lot of companies.

The Hacker’s Amendment

Screwdrivers

Congress shall pass no law limiting the rights of persons to manipulate, operate, or otherwise utilize as they see fit any of their possessions or effects, nor the sale or trade of tools to be used for such purposes.

From Artraze commenting on this Slashdot story about the levels of DRM in Windows 7.

I think it maybe needs some qualification about not using your things to cause harm to other people, but it’s an interesting idea. See also Mister Jalopy’s Maker’s Bill of Rights from Make magazine a couple of years ago.

Stuff that matters: Unpicking the pyramid

Most things are unnecessary. Most products, most consumption, most politics, most writing, most research, most jobs, most beliefs even, just aren’t useful, for some scope of ‘useful’.

I’m sure I’m not the first person to point this out, but most of our civilisation seems to rely on the idea that “someone else will sort it out”, whether that’s providing us with food or energy or money or justice or a sense of pride or a world for our grandchildren to live in. We pay the politicians who are best at lying to us because we don’t want to have to think about problems. We bail out banks in one enormous spasm of cognitive dissonance. We pay ‘those scientists’ to solve things for us and then hate them when they tell us we need to change what we’re doing. We pay for new things because we can’t fix the old ones and then our children pay for the waste.

Economically, ecologically, ethically, we have mortgaged the planet. We’ve mortgaged our future in order to get what we have now, but the debt doesn’t die with us. On this model, the future is one vast pyramid scheme stretching out of sight. We’ve outsourced functions we don’t even realise we don’t need to people and organisations of whom we have no understanding. Worse, we’ve outsourced the functions we do need too, and we can’t tell the difference.

Maybe that’s just being human. But so is learning and tool-making. We must be able to do better than we are. John R. Ehrenfeld’s Sustainability by Design, which I’m reading at present, explores the idea that reducing unsustainability will not create sustainability, which ought to be pretty fundamental to how we think about these issues: going more slowly towards the cliff edge does not mean changing direction.

I’m especially inspired by Tim O’Reilly’s “Work on stuff that matters” advice. If we go back to the ‘most things are unnecessary’ idea, the plan must be to work on things that are really useful, that will really advance things. There is little excuse for not trying to do something useful. It sounds ruthless, and it does have the risk of immediately putting us on the defensive (“I am doing something that matters…”).

The idea I can’t get out of my head is that if we took more responsibility for things (i.e. progressively stopped outsourcing everything to others as in paragraphs 2 and 3 above, and actively learned how to do them ourselves), this would make a massive difference in the long run. We’d be independent from those future generations we’re currently recruiting into our pyramid scheme before they even know about it. We’d all of us be empowered to understand and participate and create and make and generate a world where we have perspicacity, where we can perceive the affordances that different options will give us in future and make useful decisions based on an appreciation of the longer term impacts.

A large part of it is being able to understand the consequences and implications of our actions, and how we are affected by, and in turn affect, the situations we’re in – people around us, the environment, the wider world. Where does this water I’m wasting come from? Where does it go? How much does Google know about me? Why? How does a bank make its money? How can I influence a new law? What do all those civil servants do? How was my food produced? Why is public transport so expensive? Would I be able to survive if X or Y happened? Why not? What things that I do everyday are wasteful of my time and money? How much is the purchase of item Z going to cost me over the next year? What will happen when it breaks? Can I fix it? Why not? And so on.

You might think we need more transparency of the power structures and infrastructures around us – and we do – but I prefer to think of the solution as being tooling us up in parallel: we need to have the ability to understand what we can see inside, and focus on what’s actually useful/necessary and what isn’t. Our attention is valuable and we mustn’t waste it.

How can all that be taught?

I remember writing down as a teenager, in some lesson or other, “What we need is a school subject called How and why things are, and how they operate.” Now, that’s broad enough that probably all existing academic subjects would lay claim to part of it. So maybe I’m really calling for a higher overall standard of education.

But the devices and systems we encounter in everyday life, the structures around us, can also help, by being designed to show us (and each other) what they’re doing, whether that’s ‘good’ or ‘bad’ (or perhaps ‘useful’ or not), and what we can do to improve their performance. And by influencing the way we use them, whether nudging, persuading or preventing us getting it wrong in the first place, we can learn as we use. Everyday life can be a constructionist learning process.

This all feeds into the idea of ‘Design for Independence’:

Reducing society’s resource dependence
Reducing vulnerable users’ dependence on other people
Reducing users’ dependence on ‘experts’ to understand and modify the technology they own.

One day I’ll develop this further as an idea – it’s along the lines of Victor Papanek and Buckminster Fuller – but there’s a lot of other work to do first. I hope it’s stuff that matters.

Dan Lockton

Another charging opportunity?

A knife blade cutting the cable of a generic charger/adaptor

Last month, an Apple patent application was published describing a method of “Protecting electronic devices from extended unauthorized use” – effectively a ‘charging rights management’ system.

New Scientist and OhGizmo have stories explaining the system; while the stated intention is to make stolen devices less useful/valuable (by preventing a thief charging them with unauthorised chargers), readers’ comments on both stories are as cynical as one would expect: depending on how the system is implemented, it could also prevent the owner of a device from buying a non-Apple-authorised replacement (or spare) charger, or from borrowing a friend’s charger, and in this sense it could simply be another way of creating a proprietary lock-in, another way to ‘charge’ the customer, as it were.

It also looks as though it would play havoc with clever homebrew charging systems such as Limor Fried‘s Minty Boost (incidentally the subject of a recent airline security débâcle) and similar commercial alternatives such as Mayhem‘s Anycharge, although these are already defeated by a few devices which require special drivers to allow charging.

Reading Apple’s patent application, what is claimed is fairly broad with regard to the criteria for deciding whether or not re-charging should be allowed – in addition to charger-identification-based methods (i.e. the device queries the charger for a unique ID, or the charger provides it, perhaps modulated with the charging waveform) there are methods involving authentication based on a code provided to the original purchaser (when you plug in a charger the device has never ‘seen’ before, it asks you for a security code to prove that you are a legitimate user), remote disabling via connection to a server, or even geographically-based disabling (using GPS: if the device goes outside of a certain area, the charging function will be disabled).
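Purely to make the patent’s claimed criteria concrete, here is a minimal sketch of how such a charging-authorisation decision might be structured. Everything here – the function name, the IDs, the code, the crude geographic check – is an illustrative assumption of mine, not Apple’s actual implementation:

```python
# Hypothetical sketch of the decision logic described in the patent
# application. All names, codes and thresholds are illustrative
# assumptions, not anything from a real device.

AUTHORISED_CHARGER_IDS = {"CHG-001", "CHG-002"}  # chargers the device has 'seen'
OWNER_CODE = "1234"                              # code provided to the purchaser
HOME_REGION = (51.5, -0.1)                       # lat/lon of the permitted area
MAX_DISTANCE_DEG = 1.0                           # crude geographic limit

def may_charge(charger_id, entered_code=None, location=None,
               remote_disabled=False):
    """Return True if charging should be allowed, per the claimed criteria."""
    if remote_disabled:
        # Remote disabling via a connection to a server
        return False
    if location is not None:
        # GPS-based disabling: outside a certain area, charging is refused
        lat, lon = location
        if (abs(lat - HOME_REGION[0]) > MAX_DISTANCE_DEG or
                abs(lon - HOME_REGION[1]) > MAX_DISTANCE_DEG):
            return False
    if charger_id in AUTHORISED_CHARGER_IDS:
        # Charger-identification method: a known charger ID is enough
        return True
    # Unknown charger: fall back to asking the user for the security code
    return entered_code == OWNER_CODE
```

Even in this toy form, the commenters’ objections are visible: a borrowed or replacement charger falls straight through to the code prompt, and a second-hand buyer without the code is simply refused.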

All in all, this seems an odd patent. Apple’s (patent attorneys’) rather hyperbolic statement (Description, 0018) that:

These devices (e.g., portable electronic devices, mechanical toys) are generally valuable and/or may contain valuable data. Unfortunately, theft of more popular electronic devices such as the Apple iPod music-player has become a serious problem. In a few reported cases, owners of the Apple iPod themselves have been seriously injured or even murdered.

…is no doubt true to some extent, but if the desire is really to make a stolen iPod worthless, then I would have expected Apple to lock each device in total to a single user – not even allowing it to be powered up without authentication. Just applying the authentication to the charging method seems rather arbitrary. (It’s also interesting to see the description of “valuable data”: surely in the case that Apple is aware that a device has been stolen, it could provide the legitimate owner of the device with all his or her iTunes music again, since the marginal copying cost is zero. And if the stolen device no longer functions, the RIAA need not panic about ‘unauthorised’ copies existing! But I doubt that’s even entered into any of the thinking around this.)

Whether or not the motives of discouraging theft are honourable or worthwhile, there is the potential for this sort of measure to cause significant inconvenience and frustration for users (and second-hand buyers, for example – if the device doesn’t come with the original charger or the authentication code) along with incurring extra costs, for little real ‘theft deterrent’ benefit. How long before the ‘security’ system is cracked? A couple of months after the device is released? At that point it will be worth stealing new iPods again.

(Many thanks to Michael O’Donnell of PDD for letting me know about this!)

Previously on the blog: Friend or foe? Battery authentication ICs

UPDATE: Freedom to Tinker has now picked up this story too, with some interesting commentary.