Category Archives: Good profits

Two events next week

Next Wednesday evening, 27th May, I’ll be giving a presentation about Design with Intent at SkillSwap Brighton’s ‘Skillswap Goes Behavioural’ alongside Ben Maxwell from Onzo (pioneers of some of the most interesting home energy behaviour change design work going on at present). I hope I’ll be able to give a thought-provoking talk with plenty of ideas and examples that can be practically applied in interaction, service design and user experience. Thanks to James Box of Clearleft for organising this.

Walkway

Then on Thursday 28th, I’m honoured to be talking as part of a symposium in Loughborough University’s Radar Arts Programme’s ‘Architectures of Control’ themed events, exploring how our lives are impacted by social and environmental controls.

The symposium is interspersed with the performance of Mark Titchner’s ‘Debating Society and Run’, which sounds intriguing. In the symposium I’ll be talking alongside Professor David Canter, who seems to have had an incredible career ranging from environmental psychology to offender profiling (inspiration for Cracker, etc), and Alexa Hepburn, senior lecturer in Social Psychology at Loughborough. Again, I hope my presentation does justice to the event and the other participants! Thanks to Nick Slater for inviting me.

The week after (4th June) I’ll be giving a presentation at UFI in Sheffield, best known for its Learndirect courses. I’m hoping to run a very rapid idea-generation workshop as part of this talk, something of an ultra-quick trial of the DwI toolkit.

Slanty design

Library of Congress, Main Reading Room
The Main Reading Room, Library of Congress. Image from CIRLA.

In this Communications of the ACM article from January 2007, Russell Beale uses the term slanty design to describe “design that purposely reduces aspects of functionality or usability”:

It originated from an apocryphal story that some desks in the US Library of Congress in Washington, DC, are angled down toward the patron, with a glass panel over the wood, so when papers are being viewed, nothing harmful (like coffee cups, food and ink pens) can be put on top of them. This makes them less usable (from a user-centric point of view) but much more appropriate for their overall purpose.

[S]lanty design is useful when the system must address wider goals than the user might have, when, say, they wish to do something that in the grander scheme of things is less than desirable.

New Pig cigarette bin / cone cup
The angled lid on this cigarette bin prevents butts being placed on top; the cone shape of the cup subtly discourages users from leaving it on the table.

We’ve looked before on this site at a couple of literally ‘slanty’ examples – notably, cigarette bins with angled lids and paper cone cups (above) – and indeed “the common technique of architects to use inclined planes to prevent people from leaving things, such as coffee cups, on flat spaces” is noted on the Designweenie blog here. In his article, though, Beale expands the scope of the term to encompass interfaces or interaction methods designed to prevent or discourage certain user behaviour, for strategic reasons: in essence, what I’ve tried to corral under the heading ‘architectures of control’ for the last few years, but arrived at from a different direction:

We need more than usability to make things work properly. Design is (or should be) a conversation between users and design experts and between desired outcomes and unwanted side effects… [U]ser-centred design is grounded in the user’s current behavior, which is often less than optimal.

Slanty design incorporates the broader message, making it difficult for users to do unwanted things, as well as easy to do wanted things. Designers need to design for user non-goals – the things users do not want to do or should not be able to do even if they want to [my emphases]. If usability is about making it easy for users to do what they must do, then we need to have anti-usability as well, making it difficult for them to do the things we may not want them to do.

He gives the example of Gmail (below), where Google has (or had – the process is apparently not so difficult now) made it difficult for users to delete email – “Because Google uses your body of email to mine for information it uses to target the ads it delivers to generate revenue; indeed, deleting it would be detrimental to the service” – but suggests that, in fact, this strategy might be beneficial for the user: “By providing a large amount of storage space for free, Gmail reduces any resource pressure, and by making the deletion process difficult it tries to re-educate us to a new way of operating, which also happens to achieve Google’s own wider business goals.” This is an interesting way of looking at it, and somewhat reminiscent of the debate over deleting an Amazon or eBay account – see also Victor Lombardi’s commentary on where the balance lies.

How to delete an email in Gmail

However, from my point of view, if there’s one thing which has become very clear from investigating architectures of control in products, systems and environments, it’s that the two goals Beale mentions – “things users do not want to do” and things users “should not be able to do” – only coincide in a few cases, and with a few products, and a few types of user. Most poka-yoke examples would seem to be a good fit, as would many of the design methods for making it easier to save energy on which my PhD is focusing, but outside these areas, there are an awful lot of examples where, in general, the goal of the user conflicts with the goal of the designer/manufacturer/service provider/regulator/authority, and it’s the user’s ability which is sacrificed in order to enforce or encourage behaviour in line with what the ‘other’ party wants. “No-one wakes up in the morning wanting to do less with his or her stuff,” as Cory Doctorow puts it.

Beale does recognise that conflicts may occur – “identify wider goals being pursued by other stakeholders, including where they conflict with individual goals” – and that an attempt should be made to resolve them, but – personally – I think an emphasis on using ‘slanty’ techniques to assist the user (and assist the ‘other party’, whether directly or simply through improving customer satisfaction and recommendation) would be a better direction for ‘slanty design’ to take.

Slanty carousel - image by Russell Beale
“Slanty-designed baggage carousel. Sloping floor keeps the area clear”. From ‘Slanty Design’ article by Russell Beale.

Indeed, it is this aim of helping individual users while also helping the supersystem (and actually using a slant) which informs a great suggestion on which Beale elaborates – airport baggage carousels with a slanted floor (above):

The scrum of trolleys around a typical [carousel] makes it practically impossible to grab a bag when it finally emerges. A number of approaches have been tried. Big signs… a boundary line… a wide strip of brightly coloured floor tiles…

My slanty design would put a ramp of about 30 degrees extending two meters or so up toward the belt… It would be uncomfortable to stand on, and trolleys would not stay there easily, tending to roll off backward or at least be awkward to handle. I might also add a small dip that would catch the front wheels, making it even more difficult to get the trolley or any other wheeled baggage on it in the first place, but not enough to trip up a person.

If I was being really slanty, I’d also incorporate 2 cm-high bristles in the surface, making it a real pain for the trolleys on it and not too comfy for the passengers to stay there either. Much easier for people to remain (with their trolleys) on the flat floor than negotiate my awkward hill. We’d retain the space we need, yet we could manage the short dash forward, up the hill, to grab our bags, then return to our trolleys, clearing the way for the next baggage-hungry passenger.

There are some very interesting ideas embodied in this example – I’m not sure that bristles on such a slope would be especially easy for wheelchair users, but the overall idea of helping both the individual user and the collective (and probably the airport authority too, by reducing passenger frustration and the need to supervise the carousel) is very much something which this kind of design, carefully thought out, can bring about.

Making exercise cooler

Snowdown, by Matthew Barnett
Main image and above right: Snowdown aesthetic model; below right: Snowdown functional test rig prototype.

Snowdown, by Matthew Barnett, is fantastic. Powered by a child moving the handle as exercise, it crushes ice cubes and compacts them to make snowballs. There are a lot of kids out there who would very much like one of these at any time of year – summer especially. It was shown last month at Made in Brunel – I hope Matthew finds a way to take the project forward.

Is the requiring-exercise-to-get-a-reward strategy an architecture of control? I think so, and I think this product exemplifies why and how it is possible to use ‘control’ for the benefit of the user. Sure, society benefits when children grow up more healthily, but the children (and their parents) also benefit. And Snowdown actively rewards the user for his or her effort.

We’ve seen this thinking, specifically regarding encouraging exercise, embodied before on the blog in two products, as far as I can remember: Gillian Swan’s Square-Eyes (also from Brunel), and, of course, the Entertrainer. Both use television as the ‘reward’ for exercise – in the case of Square-Eyes, 100 steps on the special insole equate to 1 minute of TV time (controlled by a base station); with the Entertrainer, the user’s heart rate is monitored (you can set the level of exercise you want) and the TV’s volume is controlled accordingly, which is an interesting concept: you exercise while watching the TV, keeping your heart rate within the optimal range:

The chest strap heart monitor wirelessly relays your heart rate to the Entertrainer™. The Entertrainer then determines if your heart rate is within, above, or below your target zone. If your heart rate is low, the Entertrainer lowers the volume on your television (or other infrared remotely controlled device). If your heart rate is within the target zone (range), the volume remains at a comfortable level. If your heart rate is too high, the volume increases.
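(Purely as an illustration – and not drawn from either product’s actual workings – here’s a minimal sketch of the two reward mechanisms as described above: Square-Eyes’ steps-for-TV-minutes exchange rate and the Entertrainer’s zone-based volume adjustment. The function names, the volume step size and the example numbers are my own hypothetical choices.)

    # Sketch of the two TV-as-reward mechanisms described above.
    # The 100-steps-per-minute rate and the low / in-zone / high behaviour come
    # from the descriptions quoted in the post; everything else is hypothetical.

    def square_eyes_tv_minutes(steps_counted: int) -> float:
        """Square-Eyes style: 100 steps on the insole earn 1 minute of TV time."""
        return steps_counted / 100.0

    def entertrainer_volume(current_volume: int, heart_rate: int,
                            zone_low: int, zone_high: int, step: int = 2) -> int:
        """Entertrainer style: nudge the TV volume according to heart-rate zone."""
        if heart_rate < zone_low:      # below the target zone: volume goes down
            return max(0, current_volume - step)
        if heart_rate > zone_high:     # above the target zone: volume goes up
            return current_volume + step
        return current_volume          # within the zone: volume stays comfortable

    # e.g. 2,500 steps earn 25 minutes of TV; at 110 bpm against a 120-150 bpm
    # target zone, a volume of 20 would be nudged down to 18.
    print(square_eyes_tv_minutes(2500))            # 25.0
    print(entertrainer_volume(20, 110, 120, 150))  # 18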

Stanford’s Captology research group has also investigated persuasive technology for promoting exercise extensively (e.g. here), but I’m not sure to what extent actual ‘control’ is involved, as opposed to persuasion through making exercise more attractive or fun.

Square-Eyes by Gillian Swan Square-Eyes by Gillian Swan
Square-Eyes by Gillian Swan, using special insoles and a control unit

Image from theentertrainer.com
The Entertrainer (image from theentertrainer.com)

Nevertheless, with all the above examples, the element of control is very much something the user opts into (unless, say, parents were to force their kids to use Square-Eyes or have no TV) rather than having it imposed with no choice. The ‘code’ is embedded in the product architecture, but you make a choice to use the product because you want the discipline it can help give you.

And again, Snowdown stands out, since it is something fun in itself. Indeed, it may be stretching it to see it as any more a control example than any other children’s toy which requires exercise (bicycle, trampoline, rollerskates, etc). If I hadn’t seen Matthew’s description which specifically highlighted the product’s ability to promote exercise in children, I probably wouldn’t have considered it in this light at all. And it’s perhaps this ‘mindless margin’ (to quote Brian Wansink) of helping yourself while not feeling that you’re being ‘controlled’, which might lie behind positive, successful, ethical, useful applications of architectures of control in design as opposed to the generally anti-user spirit with which the majority are imbued.

Bad profits

Image from Sevenblock (Flickr)
The Gillette Sensor Excel not only comes with a dummy blade, it also only comes with two out of five possible blade slots filled. Images from Sevenblock on Flickr.

The razor-blade model in general is something of an old chestnut as far as architectures of control go, and we’ve covered it in a number of different contexts on this site over the past couple of years. But it’s always interesting to see it in action with razors themselves, especially when the strategy has become even less consumer-friendly. Via the This Is Broken pool on Flickr, where ‘Sevenblock’ talks about Gillette’s use of a dummy blade and dummy slots on the Sensor Excel packaging, I learned of Fred Reichheld’s concept of ‘bad profits’:

…there is something disappointing with the set-up of buying a new razor. This razor reminded me of Fred Reichheld.

The blade which arrives pre-attached to the razor is fake. Is it dangerous to use a real one? Perhaps.

No, it is a set-up to dupe customers into grabbing a new razor and heading to the mirror only to realize that they are holding a plastic faux blade. Then, turn over the packaging, and two razors are held in a spot for five. Another subtle sigh from the customer.

Why not surprise the customer in the other direction? “Wow, five blades! For less than 20 dollars.” Because that’s what happens when you go to refill. BJs and Costco have good deals on bulk blades.

Reichheld’s idea is, effectively, that a company’s strategies can centre on creating ‘good profits’ or ‘bad profits’:

Whenever a customer feels misled, mistreated, ignored, or coerced, then profits from that customer are bad. Bad profits come from unfair or misleading pricing. Bad profits arise when companies save money by delivering a lousy customer experience. Bad profits are about extracting value from customers, not creating value.

If bad profits are earned at the expense of customers, good profits are earned with customers’ enthusiastic cooperation. A company earns good profits when it so delights its customers that they willingly come back for more—and not only that, they tell their friends and colleagues to do business with the company.

What is the question that can tell good profits from bad? Simplicity itself: How likely is it that you would recommend this company to a friend or colleague?

The full article is well worth a read, as, I expect, Reichheld’s book The Ultimate Question is too (though one reviewer on Amazon also offers some succinctly persuasive criticism).

The basic concept – that the ‘ultimate question’ of whether or not a customer would recommend a company is the key to growth – is a good way of articulating, from a business perspective, the message of consumer advocacy that so many, from Ralph Nader and Vance Packard to Consumerist and Seth Godin, have promulgated over the years, though of course the ‘Why?’ and ‘Why not?’ are crucial. But Reichheld’s simple identification of ‘good profit’ and ‘bad profit’ seems to be a very clever way of looking at the issue: the ‘good’ and ‘bad’ labels refer to the effect on the company itself as well as on the customer, since a company reliant on bad profits will, one would assume, ultimately lose its customer base (unless there are no alternatives – Brand Autopsy has an interesting piece on this in relation to car rental firms).
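(Reichheld’s book develops that question into the Net Promoter Score: respondents answer on a 0–10 scale, 9s and 10s count as promoters, 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. The sketch below is my own illustration of that scoring, not something taken from the article.)

    # Sketch of the usual 'ultimate question' scoring: Net Promoter Score.
    # Responses are answers, on a 0-10 scale, to "How likely is it that you
    # would recommend this company to a friend or colleague?"

    def net_promoter_score(responses):
        responses = list(responses)
        if not responses:
            raise ValueError("no responses to score")
        promoters = sum(1 for r in responses if r >= 9)   # 9-10: promoters
        detractors = sum(1 for r in responses if r <= 6)  # 0-6: detractors
        return 100.0 * (promoters - detractors) / len(responses)

    # e.g. six promoters and two detractors out of ten responses -> a score of 40
    print(net_promoter_score([10, 9, 9, 8, 7, 6, 10, 3, 9, 10]))  # 40.0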

Most commercially driven architectures of control, then (as opposed to politically driven ones), would seem to be designed to extract value from customers (unwilling or ignorant), and thus might be described as bad-profit-seeking, by Reichheld’s definition. To paraphrase Cory Doctorow on DRM, it’s unlikely that any customers wake up and say, “Damn, I wish there was a way to have my actions deliberately constrained for commercial gain by the products and services I use.” Hence, it’s unlikely that customers will evangelise or even recommend products and systems which give them a lousy experience. They may accept them grudgingly, as most of us do with many commercial (and political) interactions every day, but once a ‘good profit’ alternative becomes available and widely known about, they won’t hesitate to switch. I hope.

Maybe ‘good profits’ and ‘bad profits’ are too simplistic as terms, much like Jakob Nielsen’s ‘Evil design’ comments, but even a continuum between ‘good’ and ‘bad’ profit intentions is a useful way of thinking about the merits or otherwise of corporate strategies, particularly around customer service, products, pricing, rent-seeking, gouging, lock-in and so on.