
Next week: a simplified Design with Intent toolkit, v.0.9

The ‘Design with Intent method’, on which I’m working as the first part of my PhD, has been fairly sparsely reported on this blog. It’s intended to be an innovation method for helping designers come up with useful solutions when faced with ‘behaviour change’ problems, or in situations where helping users to use a product or system more efficiently would be worthwhile. The ideas that have gone into it are (mostly) the ‘positive’ side of what we’ve discussed on the blog for the last few years.

The brief series of posts from last summer about getting people to do things in a particular order, which more recently got some attention from Kati London’s ‘Persuasive Technologies: Designing the Human’ class at NYU’s Interactive Telecommunications Program (with some very interesting student commentary), was based on a relatively early iteration of the method. At some point, I’ll draw up a comparison between the iterations of the method, even if simply for my own clarity of mind – it’s helpful to record why I changed different aspects along the way.

The initial plan had been for it to be almost TRIZ-like in terms of ‘prescribing’ relevant design techniques to help achieve particular target behaviours. The first few iterations of the method thus took the form of a kind of hierarchical decision tree. Live|Work’s very helpful advice to me last summer – to reduce the prescriptive nature slightly by having a kind of illustrated ‘idea space’ – led, in due course, to the version tested in the pilot studies carried out in late 2008 and earlier this year. What the studies showed, among other things (to be reported in the Persuasive 2009 paper!), was that many designers, when asked to come up with concept solutions, don’t really like working from categories and rules and hierarchies, even where they would be useful. Some do (and with impressively exhaustive efficiency), but many don’t: they prefer to use the method as a kind of well of inspiration, without necessarily using it in any kind of procedural way.

So – and there’s another reason for this, too, which I’ll be able to announce at some point – it seemed sensible to redesign the method to accommodate both modes of working: a ‘prescription mode’ for the more procedure-driven designer, and an ‘inspiration mode’ for the designer who prefers less bounded creativity (a bit more like IDEO’s method cards, but not quite as unstructured as the Oblique Strategies). The inspiration mode is essentially a very simplified, flattened set of design patterns loosely grouped into different ‘lenses’ representing views on influencing behaviour, but with no real structure beyond that. It’s more of a ‘toolkit’ than a method, and because of its relative simplicity it seems worth releasing to get some feedback without too much more work. The “eight design patterns for errorproofing” post from a few weeks back is a kind of preview of part of it.
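As a rough sketch of how those two modes might relate to the same underlying set of patterns (the lens and pattern names below are invented placeholders for illustration – ‘errorproofing’ aside, which the post mentions – not the actual toolkit content):

```python
import random

# A rough sketch of the toolkit's two modes of use. The lens and pattern
# names here are invented placeholders ('errorproofing' aside, which is
# mentioned in the post), not the actual Design with Intent content.
LENSES = {
    "errorproofing": ["defaults", "interlocks", "matched affordances"],
    "architectural": ["material properties", "positioning"],
    "persuasive": ["feedback", "self-monitoring"],
}

def prescription_mode(lens):
    """Procedure-driven use: look up the patterns grouped under a chosen lens."""
    return LENSES[lens]

def inspiration_mode(seed=None):
    """Less bounded use: draw a random pattern, card-style, from any lens."""
    rng = random.Random(seed)
    lens = rng.choice(sorted(LENSES))
    return lens, rng.choice(LENSES[lens])
```

The same flat pattern set serves both kinds of designer: one browses by lens, the other just draws cards.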

On Monday morning, then, there’ll be a large poster available to download on the blog, and I’ll do a series of posts forming the online component of the toolkit. So please, feel free to comment, make suggestions for improvements or better examples, or pick holes in it!

P.S. I’m aware the blog needs a bit of housekeeping in terms of making the sidebar work properly again in IE, fixing the very out-of-date blogroll, and my appalling sloth in replying to people who’ve very kindly sent very interesting links and ideas. I will try to get round to it all soon.

Some thoughts on classifications

Over the last couple of years, this site has examined, mentioned, discussed or suggested around 250 examples of ‘control’ features or methods designed into products, systems and environments – many of which have come from readers’ suggestions and comments on earlier posts. I’d resisted classifying them too much, since my original attempt wasn’t entirely satisfactory, and it seemed as though it might be better to amass a large quantity of examples and then see what emerged, rather than try to fit every example into a pre-defined framework.

As I start work on the PhD, though, it becomes more important to formalise, to some extent, the characteristics of the different examples, in order to identify trends and common intentions (and solutions) across different fields. My thinking is that while the specific strategy behind each example may be completely disparate, there are, on some levels, commonalities of intention.

Abstracting to the general…

For example, paving an area with pebbles to make it uncomfortable for barefoot protesters to congregate (as at U Texas, Austin) and a system which curtails a targeted individual’s mobility by remotely disabling a public transport pay-card have very different specific strategies, but the overall intention in both cases is to restrict access based on some characteristic of the user, whether it’s bare feet or some data field in an ID system. In one case the intended ‘strength’ of the method is fairly weak (it’s more about discouragement); in the other the intended strength is high: this individual’s freedom must be curtailed, and attempted circumvention must be detected.

In the case of the pebbles, we might describe the method as something like “Change of material or surface texture or characteristic”, which would also apply to, for example, rumble strips on a road; the method of disabling the pay-card might be described as “Authentication-based function lockout”, which could also describe, say, a padlock, at least on the level of keyholder authentication rather than actual identity verification. (Note, though, that the rumble strip example doesn’t match the access-restriction intention, instead being about making users aware of their speed. Similar methods can be used to achieve different aims.)
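The pairing of general method and general intention could be recorded quite simply. A minimal sketch, with field values paraphrasing the examples above:

```python
from dataclasses import dataclass

# Minimal sketch of the classification described above: each example is
# tagged with a general method and a general intention, independently of
# its specific strategy. The values paraphrase the post's examples.
@dataclass(frozen=True)
class Example:
    name: str
    method: str      # general method, e.g. a change of surface texture
    intention: str   # general intention, e.g. restricting access

examples = [
    Example("pebbled paving", "change of surface texture", "restrict access"),
    Example("rumble strips", "change of surface texture", "make users aware of speed"),
    Example("pay-card disabling", "authentication-based function lockout", "restrict access"),
    Example("padlock", "authentication-based function lockout", "restrict access"),
]

def same_method(a, b):
    return a.method == b.method

def same_intention(a, b):
    return a.intention == b.intention
```

So the pebbles and the rumble strips share a method but not an intention, while the pebbles and the pay-card share an intention but not a method.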

…and back to the specific again

Of course, this process of abstracting from the specific example (with a specific strategy) to a general principle (both intention, and method) can then be reversed, but with a different specific strategy in mind. The actual specific strategy is independent of the general principle. Readers familiar with TRIZ will recognise this approach – from this article on the TRIZ Journal website:

TRIZ research began with the hypothesis that there are universal principles of creativity that are the basis for creative innovations that advance technology. If these principles could be identified and codified, they could be taught to people to make the process of creativity more predictable. The short version of this is:

Somebody someplace has already solved this problem (or one very similar to it.)
Creativity is now finding that solution and adapting it to this particular problem.

Much of the practice of TRIZ consists of learning these repeating patterns of problems-solutions, patterns of technical evolution and methods of using scientific effects, and then applying the general TRIZ patterns to the specific situation that confronts the developer.

So, following on from the above examples, where else is restricting access based on some characteristic of the user ‘useful’ to some agency or other? (Clearly there are many instances where most readers will probably feel that restricting access in this way is very undesirable, and I agree.) But let’s say, from the point of view of encouraging / persuading / guiding / forcing users into more environmentally friendly behaviour (which is the focus of my PhD research), that it would be useful to use some characteristic of a user to restrict or allow access to something which might cause unnecessary environmental impact.

An in-car monitoring system could adjust the sensitivity (or the response curve) of the accelerator pedal so that a habitually heavy-footed driver’s fuel use is reduced, whilst not affecting someone who usually drives economically anyway. (A persuasive, rather than controlling, alternative would be a system which monitors driver behaviour over time and gives feedback on how to improve economy, such as the Foot-LITE being developed at Brunel by Dr Mark Young.) Or perhaps a householder who throws away a lot of rubbish one week (recorded by the bin itself) could be prevented from throwing away as much the next: each taxpayer is given a certain allocation of rubbish per year, enforced by an extension of the ‘bin-top spy’ already being introduced, which stops the bin being opened once the limit has been reached (OK, cue massive fly-tipping: it’s not a good idea – but you can bet someone, somewhere, has thought of it).
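Purely to illustrate what ‘adjusting the response curve’ might mean – this is an invented toy model, not Foot-LITE or any real vehicle system – the pedal could respond normally up to a threshold and be attenuated beyond it, so an economical driver who rarely passes the threshold is unaffected:

```python
def throttle_response(pedal, threshold=0.6, attenuation=0.5):
    """Toy pedal response curve, for illustration only: full response up to
    `threshold`, attenuated response beyond it. `pedal` is pedal travel in
    [0, 1]; the parameter values are arbitrary."""
    pedal = max(0.0, min(1.0, pedal))
    if pedal <= threshold:
        return pedal  # economical driving range: unchanged
    # heavy-footed range: only a fraction of the extra travel gets through
    return threshold + attenuation * (pedal - threshold)
```

A monitoring system could then lower `threshold` or `attenuation` for a driver whose logged behaviour is habitually heavy-footed, and leave them at neutral values for everyone else.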

Both of the above ‘control’ examples strike me as technical overkill: unnecessarily intrusive and unnecessarily coercive. But thinking on a simpler level, and extending the ‘characteristic of the user’ parameter to include characteristics of an object borne by the user (such as the key mentioned earlier), we might include everything from the circular slots and flaps on bottle banks (which make it more difficult to put other types of rubbish in – restricting access based on a characteristic of what the user’s trying to put in) to narrower parking spaces or physical width restrictions which prevent (or discourage) wider vehicles, such as 4x4s, from being used in city centres.

At this stage, these thoughts are fairly undeveloped, and I’m sure the methods of classification will evolve and mature, but even writing a post such as this helps to clarify the ideas in my mind. The real test of any system such as this is whether it can be used to suggest or generate worthwhile new ideas, and so far I haven’t reached that stage.