All posts filed under “Vague rhetoric”

Sort some cards and win a copy of The Hidden Dimension

UPDATE: Thanks everyone – 10 participants in just a few hours! The study’s closed now – congratulations to Ville Hjelm whose book is now on its way…

If you’ve got a few minutes spare, are interested in the Design with Intent techniques, and fancy having a 1/10 chance of winning a brand-new copy of The Hidden Dimension, Edward T Hall’s classic 1966 work on proxemics (very worthwhile reading if you’re involved in any way with the design of environments, either architecturally or in an interaction design sense), then please do have a go at this quick card-sorting exercise [now closed].

It makes use of the pinball / shortcut / thoughtful user models I introduced in the last post, so it would probably make sense to have that page open alongside the exercise. The DwI techniques will be presented separately from the ‘lenses’ (Errorproofing, Cognitive and so on), so don’t worry about those.

The free WebSort account I’m using for this only allows 10 participants, so be quick to get a chance of winning the book! Once 10 people have done it, I’ll draw one participant out of some kind of hat or bucket and email the winner for a postal address.

The purpose here (a closed card-sort, to use Donna Spencer’s terminology) is, basically, to find out whether the pinball / shortcut / thoughtful models allow the DwI techniques to be assigned to particular ways of thinking about users in a way that makes sense to a reasonable proportion of designers. There’s no right or wrong answer, but if 80% of you tell me that one technique seems to fit well with one model, while for another there’s no agreement at all, then that’s useful for me to know in developing the method.
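
The analysis this feeds into is simple enough to sketch. Below is a minimal Python example (the participants, techniques and assignments are invented purely for illustration; the real data would come out of the WebSort exercise) that tallies, for each technique, how many participants placed it with each user model, which is all that’s needed to spot the strong-agreement cases versus the no-agreement ones.

```python
from collections import Counter

# Hypothetical closed-card-sort results: each participant assigns every
# DwI technique card to one of the three user models (pinball / shortcut /
# thoughtful). All names and assignments below are made up for illustration.
results = {
    "P1": {"Defaults": "shortcut", "Interlock": "pinball"},
    "P2": {"Defaults": "shortcut", "Interlock": "thoughtful"},
    "P3": {"Defaults": "shortcut", "Interlock": "pinball"},
}

# Tally, for each technique, how many participants put it in each model
tallies = {}
for assignments in results.values():
    for technique, model in assignments.items():
        tallies.setdefault(technique, Counter())[model] += 1

# Report the most popular model and the level of agreement per technique
for technique, counts in sorted(tallies.items()):
    total = sum(counts.values())
    model, votes = counts.most_common(1)[0]
    print(f"{technique}: {model} ({votes}/{total} participants, {votes / total:.0%})")
```

With the real ten-participant data, a simple threshold (say, 8 out of 10 agreeing) would flag which techniques map cleanly onto a single model and which don’t.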

Thanks for your help!

Stuff that matters: Unpicking the pyramid

Most things are unnecessary. Most products, most consumption, most politics, most writing, most research, most jobs, most beliefs even, just aren’t useful, for some scope of ‘useful’.

I’m sure I’m not the first person to point this out, but most of our civilisation seems to rely on the idea that “someone else will sort it out”, whether that’s providing us with food or energy or money or justice or a sense of pride or a world for our grandchildren to live in. We pay the politicians who are best at lying to us because we don’t want to have to think about problems. We bail out banks in one enormous spasm of cognitive dissonance. We pay ‘those scientists’ to solve things for us and then hate them when they tell us we need to change what we’re doing. We pay for new things because we can’t fix the old ones and then our children pay for the waste.

Economically, ecologically, ethically, we have mortgaged the planet. We’ve mortgaged our future in order to get what we have now, but the debt doesn’t die with us. On this model, the future is one vast pyramid scheme stretching out of sight. We’ve outsourced functions we don’t even realise we don’t need to people and organisations of whom we have no understanding. Worse, we’ve outsourced the functions we do need too, and we can’t tell the difference.

Maybe that’s just being human. But so are learning and tool-making. We must be able to do better than we are. John R. Ehrenfeld’s Sustainability by Design, which I’m reading at present, explores the idea that reducing unsustainability will not create sustainability – something that ought to be pretty fundamental to how we think about these issues: going more slowly towards the cliff edge does not mean changing direction.

I’m especially inspired by Tim O’Reilly’s “Work on stuff that matters” advice. If we go back to the ‘most things are unnecessary’ idea, the plan must be to work on things that are really useful, that will really advance things. There is little excuse for not trying to do something useful. It sounds ruthless, and it does have the risk of immediately putting us on the defensive (“I am doing something that matters…”).

The idea I can’t get out of my head is that if we took more responsibility for things (i.e. progressively stopped outsourcing everything to others as in paragraphs 2 and 3 above, and actively learned how to do them ourselves), this would make a massive difference in the long run. We’d be independent of those future generations we’re currently recruiting into our pyramid scheme before they even know about it. We’d all be empowered to understand and participate and create and make and generate a world where we have perspicacity, where we can perceive the affordances that different options will give us in future, and make useful decisions based on an appreciation of the longer-term impacts.

A large part of it is being able to understand the consequences and implications of our actions, and how we are affected by, and in turn affect, the situations we’re in – the people around us, the environment, the wider world. Where does this water I’m wasting come from? Where does it go? How much does Google know about me? Why? How does a bank make its money? How can I influence a new law? What do all those civil servants do? How was my food produced? Why is public transport so expensive? Would I be able to survive if X or Y happened? Why not? What things that I do every day are wasteful of my time and money? How much is the purchase of item Z going to cost me over the next year? What will happen when it breaks? Can I fix it? Why not? And so on.

You might think we need more transparency of the power structures and infrastructures around us – and we do – but I prefer to think of the solution as tooling us up in parallel: we need the ability to understand what we can see inside, and to focus on what’s actually useful/necessary and what isn’t. Our attention is valuable and we mustn’t waste it.

How can all that be taught?

I remember writing down as a teenager, in some lesson or other, “What we need is a school subject called How and why things are, and how they operate.” Now, that’s broad enough that probably all existing academic subjects would lay claim to part of it. So maybe I’m really calling for a higher overall standard of education.

But the devices and systems we encounter in everyday life, the structures around us, can also help, by being designed to show us (and each other) what they’re doing, whether that’s ‘good’ or ‘bad’ (or perhaps ‘useful’ or not), and what we can do to improve their performance. And by influencing the way we use them – whether nudging, persuading or preventing us from getting it wrong in the first place – we can learn as we use them. Everyday life can be a constructionist learning process.

This all feeds into the idea of ‘Design for Independence’:

Reducing society’s resource dependence
Reducing vulnerable users’ dependence on other people
Reducing users’ dependence on ‘experts’ to understand and modify the technology they own.

One day I’ll develop this further as an idea – it’s along the lines of Victor Papanek and Buckminster Fuller – but there’s a lot of other work to do first. I hope it’s stuff that matters.

Dan Lockton

The asymmetry of the indescribable

Like the itchy label in my shirt, there’s something which has been niggling away at the back of my mind, ever since I started being exposed to ‘academic fields’, and boundaries between ‘subjects’ (probably as a young child). I’m sure others have expressed it much better, and, ironically, it probably has a name itself, and a whole discipline devoted to studying it.

It’s this:
The set of things/ideas/concepts/relationships/solutions/sets that have been named/defined is much, much, much smaller than the set of actual things/ideas/concepts/relationships/solutions/sets.

And yet without a name or definition for what you’re researching, you’ll find it difficult to research it, or at least to tell anyone what you’re doing. The set of things we can comprehend researching is thus limited to what we’ve already defined.

How do we ever advance, then? Are we not just forever sub-dividing the same limited field with which we’re already familiar? Or am I missing something? Is this a kind of (obvious) generalisation of the Sapir-Whorf hypothesis?

Relating it to my current research, as I ought to, the problems of choice architecture, defaults, framing, designed-in perceived affordances and so on are clearly special cases of the idea: the decision options people perceive as available to them can be, and are, used strategically to limit what decisions people make and how they understand things (e.g. Orwell’s Newspeak). But whether it’s done deliberately or not, the problem exists anyway.

Dredging up some old ideas

Three essays I’d pretty much forgotten about, written for courses at Cambridge during my Master’s in Technology Policy, linked here for no reason in particular:

Peer Treasure: how firms outside the software industry can use open source thinking
How can we strengthen links between entrepreneurial companies and entrepreneurial universities in the UK?
Motor vehicles in the developing world: options for sustainability* [all PDFs]

Cross-purposes?

Last week I was at a seminar where a fellow student was outlining some (very interesting) research about how to adapt ‘professional’ products to be usable by a ‘lay’ audience (what functions do you retain, what do you lose, how do you deal with different mental models, and so on).

He repeatedly referred to the importance of ‘user experience’ throughout the presentation, and it took me a while to realise that he was not talking about UX, but about “the degree of prior knowledge/understanding a user has, having dealt with similar products/systems”. That made a whole lot more sense. Yet no-one else in the room – including a number of people with backgrounds in human-centred design – asked about or pointed out this (quite important) difference.

It made me think: how often in science, technology – indeed any subject – are people talking about very different things yet using the same terminology? Do they realise they’re doing it? And can this ever be used as a deliberate provocation tactic to generate new ideas or ways of looking at things? Can we think of third and fourth meanings for terms that might give us insights? (E.g. with ‘user experience’, can we think of the ‘experience’ a product has with a user – his or her quirks, errors, misperceptions and so on – rather than the other way round? Is that ever helpful?)