If…

(introducing behavioural heuristics)

Some heuristics extracted by workshop participants

EDIT (April 2013): An article based on the ideas in this post has now been published in the International Journal of Design – which is open-access, so it’s free to read/share. The article refines some of the ideas in this post, using elements from CarbonCulture as examples, and linking it all to concepts from human factors, cybernetics and other fields.

There are lots of models of human behaviour, and as the design of systems becomes increasingly focused on people, modelling behaviour has become more important for designers. As Jon Froehlich, Leah Findlater and James Landay note, “even if it is not explicitly recognised, designers [necessarily] approach a problem with some model of human behaviour”, and, of course, “all models are wrong, but some are useful”. One of the points of the DwI toolkit (post-rationalised) was to try to give designers a few different models of human behaviour relevant to different situations, via pattern-like examples.

I’m not going to get into what models are ‘best’ / right / most predictive for designers’ use here. There are people doing that more clearly than I can; also, there’s more to say than I have time to do at present. What I am going to talk about is an approach which has emerged out of some of the ethnographic work I’ve been doing for the Empower project, working on CarbonCulture with More Associates, where asking users questions about how and why they behaved in certain ways with technology (in particular around energy-using systems) led to answers which were resolvable into something like rules: I’m talking about behavioural heuristics.

If...

Behavioural heuristics

The term has some currency in game theory, in economic decision-making research and even in games design, but all I really mean here is rules of thumb that people might follow when interacting with a system – things like:

▶ If someone I respect read this article, I should read it too

▶ If this email claiming to be from my bank uses language which makes me suspicious, I should ignore it

▶ If I’ve read something that makes me look intelligent, I should tell others

▶ If that Go Compare advert comes on, I should press ‘mute’

▶ If the base of my coffee cup might be wet, I should put it on something rather than directly on the polished wooden table

▶ If, when asked which of two cities has a bigger population, I have only heard of one of them, I should choose that one

▶ If my friend posts that she has a new job, I should congratulate her

▶ If there’s a puddle in front of me, I should walk round it

▶ If there’s a puddle in front of me, I should jump in it

▶ If I’m short of time, I should choose the brand name I recognise

▶ If I have some rubbish, and there’s a recycling bin nearby, I should recycle it

▶ If I have some rubbish, and there isn’t a recycling bin nearby, I should put it in a normal bin

▶ If that bench is wet or dirty, I should sit somewhere else

▶ If lots of my friends are using this app, I should try it too

▶ If there are lots of pairs of seats empty on the train, I should sit in one of them rather than sitting next to someone already occupying one of a pair

▶ If I can’t see the USB logo on the top of this connector, I should turn it over before trying to plug it in

▶ If I can’t get the USB cable to plug in properly, I should force it

▶ If seats are positioned round a table, I should sit at the table

▶ If I’m trying to lose weight, I should try to choose food with less fat in it

▶ If this envelope has HM Revenue & Customs on the back, I should open it

▶ If this envelope is from BT and printed on shiny paper, I should shred it immediately without bothering to open it

▶ If this website asks me to fill in a survey, I should click cancel immediately

▶ That urinal spacing thing. You know what I mean.

These are a mixture of instinctive or automatic reactions (a kind of ifttt for people) and those with more deliberative processes behind them: the elephant and rider or Systems 1 and 2 or whatever you like. Some are more abstract than others; most involve some degree of prior learning, whether purely through conditioning or a conscious decision, but in practice can be applied quickly and without too much in-context deliberation (hence at least some are ‘fast and frugal’, in Dan Goldstein and Gerd Gigerenzer’s terms). Some heuristics could lead to cognitive biases (or vice versa); some involve following plans, some are more like situated actions. And of course not all of them are true for everyone, and they would differ in different situations even for the same people, depending on a whole range of factors.

Just some chips with Tippexed faces on an old Dictaphone

Truth tables for people

Regardless of the backstory, though, each of these rules or heuristics potentially has effects in practice in terms of the actual behaviour that occurs. They are almost like atomic black boxes of action, transducers* which when connected together in specific configurations result in ‘behaviour’.

We might construct ‘behavioural personas’ which put together compatible (whatever that means) heuristics into persona-like fictional users, described in terms of the rules they follow when interacting with things, and both (admittedly crudely) simulate** their behaviour in a situation, and, maybe more importantly, design systems which take account of the heuristics that users are employing.
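To make the 'transducer' analogy concrete, here is a minimal sketch, purely illustrative and not anything from the workshop, of heuristics as condition-to-action rules composed into a crude behavioural persona. All the names (`Heuristic`, `Persona`, `act`) are my own assumptions rather than a proposed framework:

```python
# Illustrative sketch: behavioural heuristics as condition -> action rules,
# composed into a crude 'behavioural persona'. The first heuristic whose
# condition matches the situation 'wins' (a gross simplification, of course).

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Heuristic:
    name: str
    condition: Callable[[dict], bool]  # tests the current situation
    action: str                        # the behaviour that results


@dataclass
class Persona:
    name: str
    heuristics: List[Heuristic]  # tried in order

    def act(self, situation: dict) -> Optional[str]:
        for h in self.heuristics:
            if h.condition(situation):
                return h.action
        return None  # no heuristic fired; behaviour undetermined


reader = Persona("fictive user", [
    Heuristic("social proof",
              lambda s: s.get("respected_person_read_it", False),
              "read the article"),
    Heuristic("signalling",
              lambda s: s.get("makes_me_look_intelligent", False),
              "tell others"),
])

print(reader.act({"respected_person_read_it": True}))  # -> read the article
```

Even this toy version makes the design point visible: knowing which rule fires tells you which situational variable a design would need to change.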

If we know that our fictive user is following a “If someone I respect read this article, I should read it too” heuristic, then designing a system to show users that people they respect (however that’s determined) read or recommended an article ought to be a fairly obvious way to influence the fictive user to read the article. If we know that he or she also follows related heuristics in other parts of life, e.g. the “If I’ve read something that makes me look intelligent, I should tell others” rule, then this action could also be incorporated into the process.

There are two main objections to this. One: it’s obvious, and we do it anyway; and two: treating people like electronic components is horrible / grotesquely reductive / etc. I don’t disagree with either, but am nevertheless interested in exploring the possibilities of using this kind of modelling, simple and lacking in nuance as it is, to provide a way of navigating and exploring the many different ways that design can influence behaviour. If we could do contextual user research with this kind of heuristic as a unit of analysis, uncovering how many users in our situation are likely to be following different heuristics, we could design systems which are not just segmented but tailored much more directly to the things which ‘matter’ to people in terms of how they behave.

Interaction 12 workshop

Trying it out: thank you, Dublin guinea-pigs

At Interaction 12 last week in Dublin, 41 wonderful people from organisations including Adaptive Path, Google and Chalmers University took part in a workshop exploring the idea of these heuristics and how they might be used in design for behaviour change.

What we did first was a kind of rapid functional decomposition (in the Christopher Alexander sense) on a few examples where systems have been designed expressly to try to influence user behaviour in multiple ways.

The example I worked through was a simple decomposition of Amazon’s ‘social proof’ recommendation system: the point was to try to think through some of the ‘assumptions’ about behaviour that can be read into the design and, using a kind of laddering / Five Whys process, end up with statements of possible heuristics.

Amazon recommendations

So with the Amazon example here, what are the assumptions? Basically, what assumptions are present that, if true, would explain how the system ‘works’ at influencing users’ behaviour? What I have glibly classified as simply social proof contains a number of assumptions, including things like:

▶ People will do what they see other people doing

▶ People want to learn more about a subject

▶ People will buy multiple books at the same time

And many others, probably. But let’s look in more detail at ‘People will do what they see other people doing’: Why? Why will people do what they see other people doing? If we break this down, asking ‘Why?’ a couple of times, we get to tease out some slightly different possible factors.

Decomposing 'People will do what they see other people doing'

After a couple of iterations it’s possible to see some actual heuristics emerge:

Decomposing 'People will do what they see other people doing'

Of course there are many possible heuristics here, but for the five uncovered, it’s not too difficult to think of design patterns or techniques which are directly relevant:

▶ If lots of people are doing it, do it

Show directly how many (or what proportion of) people are choosing an option

▶ If people like me are doing it, do it

Show the user that his or her peers, or people in a similar situation, make a particular choice

▶ If people that I aspire to be like are doing it, do it

Show the user that aspirational figures are making a particular choice

▶ If something worked before, do it again

Remind the user what worked last time

▶ If an expert recommends it, do it

Show the user that expert figures are making a particular choice

There’s nothing there that isn’t obvious, but I suppose my point is that each heuristic implies a specific design feature, and the process of unpicking what the actual decision-points might involve gives us a much more targeted set of design possibilities than simply saying ‘put some social proof there’. Depending on the heuristics uncovered, it might be that simple majority preference (the Whiskas ad), irritating pseudo-authority-based messaging (Klout), friend-based recommendation (Facebook apps), peer voting (Reddit) or even celebrity/expert endorsement (John Stalker and Drummer endorsing awnings) could match individual users’ heuristics better.
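The heuristic-to-technique pairings above can be written down as a simple lookup, so that the heuristics uncovered in research directly select candidate design techniques. The pairings come from the list above; the function itself, and its name, are just an illustrative assumption:

```python
# Illustrative sketch: matching uncovered heuristics to design techniques.
# The heuristic/technique pairs are the five from the Amazon decomposition.

PATTERNS = {
    "lots of people are doing it": "show how many people choose the option",
    "people like me are doing it": "show peers making the same choice",
    "people I aspire to be like are doing it": "show aspirational figures' choices",
    "something worked before": "remind the user what worked last time",
    "an expert recommends it": "show expert endorsement",
}


def candidate_patterns(uncovered_heuristics):
    """Return design techniques matching the heuristics a user seems to follow."""
    return [PATTERNS[h] for h in uncovered_heuristics if h in PATTERNS]


print(candidate_patterns(["people like me are doing it",
                          "an expert recommends it"]))
```

The point is the directness of the mapping: each uncovered heuristic implies a specific feature, rather than a generic instruction to 'add social proof'.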

In tests, 8 out of 10 owners who expressed a preference said their cats preferred it
Klout: vermin of Twitter
Facebook apps
Reddit
John Stalker and Drummer endorse these awnings

Sometimes a service will use more than one, to try to satisfy multiple heuristics, or perhaps because the designers are not sure which heuristics are really important to the user (e.g. the This Is My Jam example below). In some ways, this process is approaching the kind of ‘persuasion profiling’ being pioneered by Maurits Kaptein, Dean Eckles and Arjan Haring’s Persuasion API, although from a different direction.

This is My Jam: Twitter recommendations
This is My Jam: popular recommendations

In the workshop, groups did a similar decomposition on three examples: Codecademy, Opower and Foodprints, part of More Associates’ CarbonCulture platform – the introductory material is reproduced below. [PDF of this material]

Codecademy
Opower
Foodprints

For each of these, groups extracted a handful of statements of possible heuristics – for example, for Opower, these included:

▶ If my neighbour can do it, I can do it

▶ If life’s a competition, I want to win it

▶ If I set myself goals, I want to meet them

▶ I don’t want to be the ‘weak link’, so I should do it

▶ I want to be ‘normal’, so I should do it

▶ [If I do it] I will be better than other people

▶ If I get appreciation from others, I will continue to do it

▶ If it stops me being the ‘bad guy’, I will do it

▶ If it stops me feeling guilty, I will do it

▶ [If I do it] I will improve myself

▶ If I don’t do it, I won’t fit in

▶ If I save money, I’ll have it for other things

▶ [If I do it] I will be a ‘good’ person

▶ [If I don't do it] bad things will happen

Personas

We went on to swap some of the heuristics among groups, and build them up into relatively plausible (if completely fake) personas, ranging from a “goth who doesn’t want to do what others do”, to Fido, a guide dog intent on helping his partially-sighted owner Bob (as SVA’s Lizzy Showman mentions here).

In turn, the groups then used the DwI cards as inspiration to generate some possible concepts in response to a brief about keeping that person (or dog) engaged and motivated as part of a behaviour change programme at work, around behaviours such as exercise, giving better feedback and so on. Finally, groups acted these out (photo below shows Fido and Bob!).

Guide dog

Where does all this fit into a design process?

What was the point of all this? The aim, really, is ultimately to provide a way of helping designers choose the most appropriate methods for influencing user behaviour in particular contexts, for particular people. This is what much design for behaviour change research is evolving towards, from Stanford’s Behaviour Wizard to Johannes Zachrisson’s development of a framework.

I would envisage that with user research framed and phrased in the right way (observation, interviews and actual behavioural data), it would be possible to extract heuristics in a form which is useful for selecting design patterns to apply. While in the workshop we ‘decomposed’ existing systems without doing any real user research, doing this alongside real research would enable the heuristics extracted to be compared, and discrepancies investigated and resolved. The redesigned system could thus match much better the heuristics being followed by users, or, if necessary, help to shift those heuristics to more appropriate ones.

Ultimately, each design pattern in some future version of the DwI toolkit will be matched to relevant heuristics, so that there’s at least a more reasoned (if not proven) process for doing design for behaviour change, using heuristics as a kind of common currency between user behaviour and design patterns: user research → extracting heuristics → matching heuristics to design patterns → redesigning system by applying patterns → testing → back to the start if needed

In the meantime, my next step with this is to do some more extraction of heuristics from actual behavioural data for some particular parts of CarbonCulture, and (as my job requires) put this process into a more formal write-up for an academic journal. I will try to make some properly theoretical bridges with the heuristics work of Gerd Gigerenzer, Dan Goldstein and (as always) Herbert Simon. But if you have any thoughts, suggestions, objections or otherwise, please do get in touch.

Thanks to everyone who came to the workshop, and thanks too to the Interaction 12 organisers for an impressively organised conference.

* In reality, the rules have to be able to degrade if the conditions are not met: people are maybe following nested IF…THEN…ELSE statements rather than individual IF…THEN rules. Or, perhaps more likely (this thought occurred while talking to Sebastian Deterding on a bus from Dun Laoghaire last week), a kind of CASE statement – which would take us into pattern recognition and recognition-primed decision models.
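This 'degrading' CASE-like rule can be sketched as follows; the scenario and the fallback behaviour are my own illustrative assumptions:

```python
# Illustrative sketch of a heuristic that degrades when its first condition
# isn't met: each case is tried in turn, falling through to a default,
# closer to a CASE statement than a single IF...THEN rule.

def dispose_of_rubbish(situation: str) -> str:
    if situation == "recycling bin nearby":
        return "recycle it"
    elif situation == "normal bin nearby":
        return "put it in a normal bin"
    else:
        return "hold on to it until a bin appears"  # fallback behaviour


print(dispose_of_rubbish("recycling bin nearby"))
print(dispose_of_rubbish("no bin in sight"))
```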
**Matt Jones suggests I should read Manuel deLanda’s Philosophy and Simulation, which fills me with both excitement and fear…

Image sources: ‘If…’ movie poster; Whiskas ad; Nationwide awnings


21 thoughts on “If…”

  1. Hi Dan,

    if this is the outcome of your Dublin workshop, I should definitely have attended it instead :).

    What I like very much about your notion of behavioural heuristics is that they cut across deliberation, emotion, habit, etc. The way I understand it, your heuristic is the re-description of the observable behaviour of a human black box in a rule format. That’s very behavioristic in its parsimony, but for practical rather than epistemic or ontological reasons: You don’t deny that possibly it’s due to deliberation, or emotion, or habit, or something else, or even some interaction between those things. But from the designer’s standpoint, you’re only interested in the emergent outcome of all this interaction within the black box as your “unit of analysis” and springboard for ideation.

    So, some questions.

    * Out of curiosity, why did you phrase your heuristics “If …, I should …”, rather than the pure “If …, then …” – apart from the quintessential Britishness ^_^? Doesn’t “should” already imply cognition and intention?

    * Does this model work beyond small-scale decisions (however automated those decisions are)? Can you play your idea of behaviour resulting from a set of heuristics through for getting a person to exercise regularly, or vote for party X? Take exercise: In principle, I could imagine mapping every single little decision point from the first time a person makes a decision to buy running shoes to the point where exercise has become a new routine, and somehow capture the heuristics operating at that moment, but doing so strikes me as impractical. I sense you need something on a higher level of abstraction, something of a coarser granularity that cares about time as well. As for voting: I assume many voting decisions are interactions of many heuristics interacting with each other (only few people bother reading all party manifestos), but then again, each person likely has a highly individual set of heuristics operating (some care for personality, others for family tradition, …).

    * Connected to that, does this model lead to ignoring design strategies that don’t operate on a cognitive, information processing level? E.g. making stuff harder to do? For the ‘heuristic’ “if it’s too hard, don’t do it” is implicit in everything, no? Do heuristics capture phenomena like “in the afternoon, my energy level is usually pretty low, so even though I want to not procrastinate, I end up doing it quite often”?

    * How exactly in user research do you get at the heuristics that correctly capture the relevant dynamics within the black box for the given behaviour in the given situation? Essentially, that brings up two connected problems:
    (1) The limits of self-report: People come up with all kinds of post-rationalised reasons why they did what they did.
    (2) Entering Quine-Duhem territory: The Quine-Duhem thesis states that in general, any empirical observation is congruent with a possibly infinite amount of theoretical explanations for it. Put otherwise, there is a possibly infinite amount of heuristic statements you could generate that would all explain why a person did something at a given moment. This is the researcher, not the subject, rationalising.

    * Ontologically, do you see those heuristics as post-hoc descriptions of the emergent result of some decision/pre-action processes, or as actual rules soft-coded “as-is” somewhere (in memory, ‘muscle memory’, …)?

    * Often, heuristics do clash – we are messy, heterogeneous, incongruent creatures to say the least. To stick with your example: I have a “read stuff people I trust read” heuristic, and a “don’t click on anything saying ‘techcrunch’” heuristic. Often, these clash, and that’s where deliberation and/or indecision kicks in. Heuristics can still be useful to inform design, but I would assume that in quite some cases, changing people’s behaviour would mean to identify and resolve such clashes (being green but also making ends meet, for instance). “Motivational interviewing” tries to elicit and in the course resolve internal conceptual and value clashes as an explicit psychotherapeutic method.

    * What about interactions not *within* the black box that is the human, but *between* the human and the environment? I.e., back to affordances. What a trained parkour practitioner perceives as an opportunity to jump onto, you and I perceive as an insurmountable obstacle. So the heuristic “if it is insurmountable, I should walk around it” correctly describes both our behaviour and the parkour practitioner’s, but the way we actually behave might be very different.

    * If you have stated what drives the behaviour of a person in question in such a heuristic format, do you actually need to pair that with a design pattern? Is the solution approach not already implicit in the clear problem statement? Don’t problem statements often imply solution approaches that go beyond existing design patterns and are possibly even more appropriate to the problem. Take your example: “If something worked before, do it again” -> “Remind the user of what worked last time”. As far as I know, that’s not even a pattern in your collection, no? (That’s pretty much a practical observation from the brainstorming workshops I ran, and in the end, a practical question: Is it easier for designers to generate ideas if they combine the problem statement with a solution suggestion as springboard, or do solution ideas flow so readily from the problem statement that adding a design pattern as a further ideation input is more constraining than inspiring?)

    So to summarise, I think your behavioural heuristics work well for a certain set of problems/behaviours, namely *small, one-time (but possibly repeated) decisions made in highly specific, predictable contexts* like choosing whether to click through to an article link while browsing twitter. The more specific and homogeneous-across-repetition the context, the easier it is to identify such heuristics. Or put differently: The more variables in the messy equation of behaviour are stable and predictable that we can reduce the variable remainder into an if-then statement. And mind you, that’s no small achievement :). But still, and that’s my second conclusion (and something I’m guilty of as well), you are glancing over the hard design synthesis work of reducing all that remaining messy variance into such a clear, concise statement. The bulk of the work is not jumping from the heuristic to a design solution, but aggregating research into a heuristic statement.

    Cheers.

  2. Thanks so much Sebastian – that’s a much more detailed review than anything based on this is ever likely to get from journal reviewers. You’ve given me some really good questions there, and I’ll try to answer them over the weekend.

    First though, the ‘should’ is kind of a mistake, since it does imply some kind of deliberation which may not be present. I meant it in an informal sense of someone reading through the logic of a program, saying something like “well, if this variable is greater than that one, it should branch to here…” but you’re right, ‘should’ implies more than it should!

    Will come back to the rest tomorrow – thanks again

  3. Right, sorry for the hiatus. Continuing from my last comment:

    - About the ‘levels of abstraction’ at which these heuristics operate, I suppose the most sensible way I can conceive it working is that in a situation, we will inevitably have a number of relevant heuristics which are applicable, some of which are ‘universal’ like “if it’s going to take a long time to do, then don’t do it” and others such as “if you think it’s going to be tasty, then eat it”, “if it’s fatty, then don’t eat it” and so on. They interact and clash and work on different levels as you suggest. We may have overarching heuristics like “if something is bad for the environment, then don’t do it” yet at the coal-face, we follow heuristics like “if it’s cheaper to drive to work than get the bus, then drive to work”.

    Imagine a friend has given you a recipe for a pie which sounds delicious, but will take some time and effort to cook, and is also going to be quite fatty. One or more of those (potentially conflicting) heuristics is going to ‘win’, but it’s unlikely to be simply a case of adding up columns of pros and cons for each possible action (like Bentham) or enumerating the possibilities of each (à la Darwin) – rather, something will lead one heuristic to dominate. Maybe if you’re hungry right now, the rule about time will dominate (and lead you to follow the course of action of eating something else instead), but maybe the potential tastiness will dominate. The presence of other actors might also sway you one way or the other – is it fair to inflict this fatty pie on someone you’re dating if you know she watches her weight, or will she be impressed by the time/effort/kindness/tastiness?

    Why exactly one/a set of heuristics dominates, and why the dominant one(s) is/are different for different people in different contexts, is, I guess, basically what the entire field of decision research is about. Maybe all of psychology. But from the entirely pragmatic design point of view, I would venture that even the process of trying to understand the heuristics which are present at the point of decision, and what choices are really being made (e.g. is it fatty vs healthy, or time vs taste, or impress-the-girl-by-cooking vs impress-her-by-considering-her-health?) by particular users ought to give us a much better insight into what design techniques would help support / change particular decisions and behaviours.

    - Getting at the heuristics. Yes, you’re right, the challenge of actually uncovering them is massive, and I did skirt round the issue in the post. There could indeed be infinite interpretations of each, and there will be post-rationalisation by both the researcher and the participant, I’m sure. I suppose my reasons for considering the heuristics as something like black boxes is exactly for this reason: compare outputs to inputs and if you can describe the results in terms of a rule – even if the reasons given behind the rule are wrong or absent – the rule may still be useful. To take your Techcrunch example, describing the heuristic as “don’t click on anything saying ‘techcrunch’” doesn’t presuppose the reasons for itself – which may be due to disliking their style, may be due to having learned that the articles will make you angry / waste your time, or it may be something else entirely (like not wanting to be exposed to YouTube-like levels of comments). But the rule “If it says Techcrunch in the URL, don’t click on it” is detailed enough to inform at least a basic model of your behaviour when presented with URLs. It’s enough to know that to get you to click on a Techcrunch link, the URL will have to be obfuscated. To the extent that I’ve thought about a process for this (which I haven’t yet in detail – hence why your comment is so helpful!) I would venture that the really basic, solely descriptive behavioural heuristics, with no motivation or reasoning assigned to them, could act as a kind of first step to investigating (and influencing) behaviour, with more detailed investigation following.

    Something that always struck me about the ‘Five whys’ technique was that while it might be intended as a ‘root cause’ analysis, each level of ‘Why?’ potentially offers useful opportunities for intervention to solve a problem. If someone asks me “Dan, why don’t you do more exercise?”, my initial answer will probably be “I’m just too busy to be able to take the time to do it” (which might suggest design interventions like exercise equipment that I could use while at my desk). Further stages of ‘Why?’ might reveal “I’m too busy because I’m poor at organising my time” (which might suggest a design intervention around scheduling) or “I’m poor at organising my time because I get too easily distracted” (which might suggest some kind of focus-based design intervention) and so on. Each of them could be a valid behaviour change solution, just operating at different levels of abstraction. Of course many ‘solutions’ are basically treating the symptoms of an underlying disease rather than the disease itself, but this is not uncommon. We still find painkillers useful even though they don’t root out what the cause of the pain is in the first place. And I think design for behaviour change can still be a useful painkiller in many circumstances even as we try to understand the disease in others.

    - The parcours and affordances example is interesting. There is some work in the human factors literature on ‘econiches’, e.g. Warren (1995) which is relevant here. I would say, of course, the ‘same’ heuristics may be interpreted differently by different users as they relate to themselves, but if they are derived from empirical observations of behaviour, then we just need to find a level which we are able to address through the design techniques available to us.

    - You’re probably right that the design patterns don’t need to follow from the heuristics. If the heuristics are stated clearly enough, the general form of the solution will be obvious. Perhaps the value of the design patterns is really in situations where the heuristics are difficult to uncover or state, or are unclear. Maybe the two approaches could meet in the middle, working backwards from design patterns which seem like they might be relevant and forward from a set of plausible heuristics to define more fully the possibility space for the problem and possible solutions. This is something I’ll try to explore through future workshops!

    I probably haven’t answered all your points, but thanks again for a really helpful comment which has certainly given me the opportunity to think this stuff through a bit more clearly!

  4. Wow, the formatting of these comments is awful. I apologise – didn’t test this WP theme well enough before installing it, and am reluctant to use Disqus or similar due to their cross-site tracking.

  5. Hi Dan,

    thanks for the long, very insightful reply – and thanks again for the long, insightful post itself :).

    First off, your description of how one heuristic “bundle” comes to outweigh another one has a striking resemblance to how connectionism (http://en.wikipedia.org/wiki/Connectionism) models cognition.

    Next, I basically put you on the spot for something that I do myself – namely using “the 5 Why’s”/”laddering”/however you name it, and not accounting for how prone it is to post-rationalisation errors. It strikes me that this really, really gets at the root of the distinction between humanistic/phenomenological and behavioristic approaches.

    For if you take post-rationalisation *really* seriously (as Skinner did), you have no choice, to validate that your if-then rule is a fitting description, but to run a series of experimental tests where you try to test and falsify all other alternative plausible rules that could also describe my behaviour. To stick with techcrunch: Is my rule “don’t trust anything with a ‘t’ in the domain name”? Or “don’t trust anything with a .com domain ending”? So you amass data until you can say: “This if-then rule (don’t trust anything that has ‘techcrunch’ in the URL) is the only one congruent with all data, so I stick with it”. Which is essentially what behavioural (re)targeting does: Run with whatever fits the data best, not knowing whether you are following a correlation or a causation.

    Whereas if you grant that people may have at least *some* self-insight into their mental states (not, mind you, the underlying cognitive processes), and that these mental states do correlate at least to *some* degree with underlying cognitive processes, then you are allowed to ask the individual to introspect. Which is why I try – in my guidelines for user research – to instruct people to stick to *describing their lived experience*, rather than explaining what caused this experience.

    Why then use the “5 Why’s”? I think we can cut ourselves some slack if we shift the interpretation of what the interviewed person does when engaging in “5 Why’s” – namely, that the person is not somehow magically lifting the mind-body veil and looking at its own cognitive wheels, but that by and large, we are our own best observers in that we have amassed the largest data set of our own past behaviours from which to conjecture. The only remaining issue is that we also tend towards many self-serving biases. Hence we should take the answers to the 5 Why’s as yet one more data point, and nothing more.

    Finally, on the levels of abstraction: If you assume that these heuristics ‘exist’ on different levels of abstraction, then that implies that they are not merely the emergent result of what happens in the black box, because that emergent result will by definition always be one and just one rule (if a complex one). But what I consider even more important is that thinking through design for long-term changes or change processes with mental heuristics will likely be *impractical*, because – to go back to the exercising example – you would have to map a “decision journey” of some sort with a million single decision points for every relevant decision on the way. That’s where I think the usefulness of the model breaks down, and we need another model – equally wrong, but more useful.

    And yes, definitely, this debate could use another typographical space ;-).

  6. So I’m re-reading what you wrote here (http://architectures.danlockton.co.uk/2012/02/09/if/), which got me thinking about how understanding the reasons for people’s behaviour (the 5 Whys approach) might let us understand their deeper motivations, which could allow us to apply change more effectively by finding the leverage points with more impact – assuming a human itself is a ‘belief and bias and mental model’ system, and that we can apply Donella Meadows’ leverage points hierarchy to (groups of) people.
    Sebastian pointed out how any model will fall short of *really* understanding people. So we might not ever find the ‘leverage points in a human’ – a model that, if we understood other people’s motivations that well, could be a scary scary manipulation tool.
    But anyway, I’m wondering how to add levels/hierarchies to your heuristics, based on psychological models probably. I assume you’ve thought about stuff like that.

  7. As a complete newbie to UX and user design, this is my first visit to this blog, and I found it fascinating. Great smattering of useful links (first time I’ve heard of the 5 Whys) and something I hope can help me in my web design. Great job, Dan!
