Category Archives: Software

The world’s energy meter

Electricity meter, in a cupboard

One of the presentations I’m really looking forward to at OpenTech 2008 in London is by AMEE, self-described as “The world’s energy meter”:

If all the energy data in the world were accessible, what would you build? The Climate Change agenda has created an imperative to measure the energy profile of everything. As trillions of pounds flow into re-inventing how we consume, we have a unique opportunity to use open data and systems as a starting point. AMEE is an open platform for energy and CO2 data, algorithms and transactions.

From this PDF on the AMEE website:

AMEE is a neutral aggregation platform to measure and track all the energy data in the world. It combines monitoring, profiling and transactional systems to enable this, as well as an algorithmic engine that applies conversion factors from energy into CO2 emissions.

1. AMEE is a technology platform (a web-service API), designed to be built upon by you
2. AMEE can represent both copyright and open data without conflict
3. AMEE is open source
4. You can build commercial applications using AMEE

This does sound extremely useful – the ability to convert energy into CO2 emission equivalent “enables the calculation of the ‘Carbon-Footprint’ of anything” – and I’m going to see how I might be able to make use of AMEE’s functionality or the data set as part of the research. (As an aside, it’s interesting how often ‘energy methods’ allow us to compare diverse activities and effects with a common currency: I remember being struck by this concept before when being introduced to von Mises’ criterion in stress analysis and streamlined lifecycle analysis within a few days of each other.)
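AMEE's core idea – an algorithmic engine applying conversion factors to turn energy quantities into CO2-equivalent emissions – can be sketched in a few lines. This is an illustrative sketch only: the factor values and names below are invented, not AMEE's real data or API.

```python
# Illustrative sketch of AMEE's core idea: applying conversion factors
# to turn energy quantities into CO2-equivalent emissions.
# The factor values and names are invented -- NOT AMEE's real data or API.

EMISSION_FACTORS = {  # kg CO2 per kWh (hypothetical values)
    "grid_electricity": 0.5,
    "natural_gas": 0.2,
}

def co2_for_energy(source: str, kwh: float) -> float:
    """Convert an amount of energy (kWh) into kg of CO2 equivalent."""
    if source not in EMISSION_FACTORS:
        raise ValueError(f"No conversion factor for {source!r}")
    return kwh * EMISSION_FACTORS[source]

print(co2_for_energy("grid_electricity", 100))  # 50.0
```

A real AMEE query would presumably fetch the appropriate factor from the web-service API rather than a local table; the calculation step itself is the same.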

AMEE’s Gavin Starks also presented at O’Reilly’s ETech earlier this year (one day I’m sure I’ll go to this…) and the slides are available [PDF, 8MB]. On a similar theme, the very impressive Saul Griffith (of MIT Media Lab, Squid Labs, Instructables, Make et al) talked on ‘energy literacy’ – again, a detailed presentation [PDF, 7.6MB] with thoughtful notes (see also Wattzon) – and it seems that there is a certain degree of overlap, or symbiosis, between the ideas. We need a public literate in energy to care enough about measuring and changing their behaviour; we equally need good, understandable data on energy-using behaviour to enable that public to become literate in the consequences of their actions, and indeed for ‘us’ (designers/engineers/technologists/policymakers…) to understand which behaviours we want to address.

I’d like to think that Design for Sustainable Behaviour can help here. That’s certainly the aim of what I’m doing.

Interview with Sir Clive

Sir Clive Sinclair (BBC image)

Chris Vallance of Radio 4’s excellent iPM has done a thoughtful interview with Sir Clive Sinclair, ranging across many subjects, from personal flying machines to the Asus Eee, and touching on the subject of consumer understanding of technology, and the degree to which the public can engage with it:

Your [Chris Vallance’s] generation really understood the computers, and today’s generation know they’re just a tool, and don’t really get to grips with them… When I was starting in business, and when I was a child, electronics was a huge hobby, and you could buy components on the street and make all sorts of things, and people did. But that also has all passed; it’s almost forgotten.

It’s true, of course, that there are still plenty of hobbyist-makers out there, including in disciplines that just weren’t open before, and if anything, initiatives such as Make and Instructables – and indeed the whole free software and open source movements – have helped raise the profile of making, hacking, modding and other democratic innovation. It’s no secret that Clive himself is a proponent of Linux and open source in general for future low-cost computing, as is mentioned briefly in the interview, and the impact of the ZX series in children’s bedrooms (together with BBC Micros at school) was, to some extent, a fantastic constructionist success for a generation in Britain.

But is Clive right? How many schoolkids nowadays make their own radios or burglar alarms or write their own games? When they do, is it a result of enlightened parents or self-directed inquisitiveness? Or are we guilty of applying our own measures of ‘engagement’ with technology? After all, you’re reading something published using WordPress, which was started by a teenager. Personally, I’m extremely optimistic that the future will lead to much greater technological democratisation, and hope to work, wherever possible, to contribute to achieving that.

I’ve worked for Clive, as a designer/engineer, on and off, for a number of years, and it’s pleasing to have an intelligent media interview with him that doesn’t simply regurgitate and chortle over the C5, but instead tries to tap his vision and thoughts on technical society and its future.

Silicon Dreams

Incidentally, Clive’s 1984 speech to the US Congressional Clearinghouse on the Future, mentioned in the interview, is extremely interesting – quite apart from the almost Randian style of some of it – as much for the mixture of what we might now see as mundanities among the far-sighted vision as for the prophetic clarity, with talk of guided 200mph maglev cars and the colonisation of the galaxy alongside the development of a cellular phone network and companion robots for the elderly. Of course, the future is here; it’s just not evenly distributed yet.

Talk of information technology may be misleading. It is true that one of the features of the coming years is a dramatic fall, perhaps by a factor of 100, in the cost of publishing as video disc technology replaces paper and this may be as significant as the invention of the written word and Caxton’s introduction of movable type.

Talk of information technology confuses an issue – it is used to mean people handling information rather than handling machines and there is little that is fundamental in this. The real revolution which is just starting is one of intelligence. Electronics is replacing man’s mind, just as steam replaced man’s muscle but the replacement of the slight intelligence employed on the production line is only the start.

And then there is this, which seems to predict electronic tagging of offenders:

Consider, for example, the imprisonment of offenders. Unless conducted with a biblical sense of retribution, this procedure attempts to reduce crime by deterrence and containment. It is, though, very expensive and the rate of recidivism lends little support to its curative properties.

Given a national telephone computer net such as I have described briefly, an alternative appears. Less than physically dangerous criminals could be fitted with tiny transponders so that their whereabouts, to a high degree of precision, could be monitored and recorded constantly. Should this raise fears of an Orwellian society we could offer miscreants the alternative of imprisonment. I am confident of the general preference.

Getting someone to do things in a particular order (Part 4)

Part 1 | Part 2 | Part 3 | Part 4 | Part 5 (coming soon)

Continued from part 3

This series is looking at what design techniques/mechanisms are applicable to guiding a user to follow a process or path, performing actions in a specified sequence. The techniques that seem to apply to this ‘target behaviour’ fall roughly into three ‘approaches’, which, more than anything, describe the mindset a designer might bring to the ‘problem’: several of the techniques suggested may well apply at once to a given designed solution, but each reflects a particular way of looking at the problem. In this post, I’m going to examine what I’ve called the Persuasive Interface approach, which draws heavily on the work of BJ Fogg, though applied specifically to this particular target behaviour.

As noted before, a lot of this may seem obvious – and it is obvious: we encounter these kinds of design techniques in products and systems every day, but that’s part of the point of this bit of the research: understanding what’s out there already.

Persuasive Interface approach

The design of the interface (however loosely defined) of a product or system can be an important factor in encouraging users to follow a process or path in a specified sequence. Interfaces can use a number of psychological persuasion mechanisms (outlined by BJ Fogg) – a ‘human factors’ approach – in conjunction with the technical capabilities of the interface itself. The mechanisms applicable to this behaviour, then – as well as the Interface capabilities themselves – are Tunnelling, Suggestion (kairos), Self-monitoring and Operant conditioning.

Interface capabilities
What I mean by this – there is probably a better term for it waiting to be coined – is the choice of degree/type/format of information or feedback that an interface can provide a user. Clearly, an interface with few capabilities for actually providing the user with feedback, or worse, inappropriate feedback capabilities (e.g. a car speedometer only telling you your mean speed for the journey, rather than your instantaneous speed), has a different (probably much worse) chance of affecting users’ behaviour. (Which is why having the electricity meter in a cupboard, and looking at it four times a year, is not very persuasive in energy-saving terms.)

Careful selection of what information, feedback and control capabilities are designed into a system, from a technical point of view, can have a major effect on user behaviour. To some extent, the addition of an interface to a system which did not previously have one may drive behaviour change in itself. Technical decisions about the types of interaction possible between an interface and the underlying system or product, and between the user and the interface – the capabilities of the interface – determine how the user experience will work: if a system is not designed with a function for monitoring progress through a sequence of operations, for example, then indicating this via an interface is impossible, or far more difficult. Providing the infrastructure for a meaningful and useful interface for a system is a design decision which can shape or even determine the system’s use characteristics.

Self-monitoring
Self-monitoring, as defined by BJ Fogg, is an interface design mechanism which explicitly links feedback of information to the user’s actions: the user can monitor his or her behaviour and the effect that this has on the system’s state. As applied to helping a user follow a process or path in sequence, it makes sense for the self-monitoring to involve real-time feedback – so that the ‘correct’ next step can immediately be taken if the feedback indicates that this is what should happen – but in other contexts, ‘summary’ monitoring may also be useful, such as giving the user a report of his or her behaviour and its efficacy over a certain period.

Even giving a user the ability to self-monitor where previously there was none can help change behaviour: for example, providing a home electricity meter in an immediately visible position is likely to be more persuasive at inspiring energy saving – by increasing awareness of consumption – than having the meter hidden away.

LinkedIn: Self-monitoring

Example: LinkedIn’s ‘Profile Completeness’ indicator allows users to monitor their ‘progress’, driving them to follow a specified sequence of actions.
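A profile-completeness indicator of this kind can be sketched in a few lines. The step names and equal weighting below are hypothetical, not LinkedIn's actual logic – the point is simply that real-time feedback on progress cues the user towards the next action in the sequence.

```python
# Sketch of a 'profile completeness' indicator: real-time self-monitoring
# feedback that shows progress and cues the next step in the sequence.
# Step names and equal weighting are hypothetical, not LinkedIn's logic.

PROFILE_STEPS = ["photo", "headline", "experience", "education", "skills"]

def completeness(profile: dict) -> int:
    """Percentage of profile steps the user has completed."""
    done = sum(1 for step in PROFILE_STEPS if profile.get(step))
    return round(100 * done / len(PROFILE_STEPS))

def next_suggestion(profile: dict) -> str:
    """Feed back progress and cue the next action in the sequence."""
    for step in PROFILE_STEPS:
        if not profile.get(step):
            return f"Add your {step} ({completeness(profile)}% complete)"
    return "Profile complete!"

print(next_suggestion({"photo": True, "headline": True}))
# Add your experience (40% complete)
```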

Tunnelling
Tunnelling is a ‘guided persuasion’ mechanism outlined by Fogg, in which a user ‘agrees’ to be routed through a sequence of pre-specified actions or events:

When you enter a tunnel, you give up a certain level of self-determination. By entering the tunnel, you are exposed to information and activities you may not have seen or engaged in otherwise. Both of these provide opportunities for persuasion.

Applying this mechanism involves treating the user as a captive audience: presenting only the ‘correct’ sequence of actions, step by step, with any user choices being limited, and the commitment to following the process being a motivator to accept the advice or opinions presented. Fogg uses the example of people voluntarily hiring personal trainers to guide them through fitness programmes. Some software wizards provide an interface analogy, where the intention is not merely to simplify a process, but additionally to shape the user’s choices.

Wizard: tunnelling

Example: This software wizard helps the user ‘tunnel’ through a file conversion process in the right order.
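The wizard pattern reduces to a minimal sketch: the user commits to a fixed sequence and the interface only ever offers the next step. The step names here are invented for illustration.

```python
# Minimal sketch of a tunnelling wizard: the user commits to a fixed
# sequence of steps and the interface only ever offers the next one.
# Step names are invented for illustration.

class Wizard:
    def __init__(self, steps):
        self.steps = list(steps)
        self.index = 0

    @property
    def current(self) -> str:
        return self.steps[self.index]

    def advance(self) -> str:
        """Move forward through the tunnel; no sideways choices are offered."""
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current

w = Wizard(["choose file", "pick output format", "convert"])
print(w.current)    # choose file
print(w.advance())  # pick output format
```

Note that `advance()` is the only way to move: there is deliberately no way to skip ahead or branch, which is exactly the loss of self-determination Fogg describes.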

Suggestion (kairos)
Suggestion (kairos) involves suggesting a behaviour to a user at the ‘opportune’ moment, i.e. when that behaviour would be the most efficient or otherwise most desirable step to take (either from the user’s point of view, or that of another entity). In the context of helping a user follow a process or path in a specified sequence, this is very easily implemented: the system can simply ‘cue’ the desired next step in the sequence by alerting or reminding the user, whether that comes through indicators on the interface itself, or some other kind of alert.

Suggestions can also help steer users away from incorrect behaviour next time they use the system, even if it’s too late this time; when presented at the point where a mistake or incorrect step is obvious, advice on what to do next time may be more easily recalled. The key to this mechanism is that the suggestion is timed or triggered at the right point in the sequence, so that its effect is most persuasive. This may imply a system which monitors the user’s behaviour and responds accordingly via the interface, or it might be realised by an interface designed so that, by helping the user keep track of where he or she is in a sequence of operations, the suggestions only appear or are visible at the right point.

Volvo gearchange light
Example: This Gearchange Indicator light, fitted to certain Volvo models, suggests the most efficient moment to change gear, based on measurement of engine RPM and throttle position. Thanks to Mac MacFarlane for the image.
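The gearchange indicator suggests one way kairos can be implemented: a cue that fires only when sensor readings show the opportune moment. The RPM and throttle thresholds below are invented for illustration, not any manufacturer's actual calibration.

```python
# Hedged sketch of suggestion at the opportune moment (kairos), modelled
# loosely on a gearchange indicator: the cue fires only when sensor
# readings show the right point. Thresholds are invented for illustration.

def suggest_upshift(rpm: int, throttle: float) -> bool:
    """Suggest changing up a gear when revs are high but the driver
    is not demanding full power -- the 'opportune moment'."""
    return rpm > 2500 and throttle < 0.8

print(suggest_upshift(3000, 0.3))  # True
print(suggest_upshift(1500, 0.3))  # False
```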

Operant conditioning
Controversial, certainly, but reinforcing target behaviours through rewards or punishment may be applicable where we want the user to perform a (perhaps complex) behavioural sequence repeatedly, so that it becomes habit, or successive iterations approximate the intended sequence. But it is unlikely to be effective in encouraging users to follow one-off sequences, where actual direction (e.g. suggestion, tunnelling) is far more useful. In general, punishing users for mistakes is an undesirable way of designing.
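One way reinforcement might be applied to a behavioural sequence is to reward successive approximations of the target order, so that repetition shapes the habit. The action names and point values below are arbitrary examples, not a tested reward schedule.

```python
# Sketch of operant conditioning applied to a behaviour sequence:
# reward grows with how closely the performed actions match the target
# order, reinforcing successive approximations of the full sequence.
# Action names and point values are arbitrary examples.

TARGET_SEQUENCE = ["open", "check", "save", "close"]

def reward(actions, target=TARGET_SEQUENCE, points_per_step=10) -> int:
    """Points for the longest correct prefix of the target sequence."""
    matched = 0
    for done, wanted in zip(actions, target):
        if done != wanted:
            break
        matched += 1
    return matched * points_per_step

print(reward(["open", "check", "save", "close"]))  # 40
print(reward(["open", "check", "close"]))          # 20
```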

In part 5, we’ll review the approaches we’ve looked at, and see how one might actually go about choosing among them to design a new product or system with this particular target behaviour.

Apologies for the delay to this service

You’re owed an apology, dear reader, for the 2-month hiatus with the blog. It’s down to a variety of reasons compounding each other: alternately forcing me to prioritise other pressing problems, then, when I tried to seize the initiative again, frustrating me with technical issues that actually prevented posting. You probably never noticed it, due to the nature of the exploit, but this blog was drawn into this nightmare of invisible insertion of hundreds of spam links into the header and footer, incorporating the URLs of dozens of other similarly attacked WordPress blogs, redirecting to the spammers’ intended destination.
Continue reading

Digital control round-up

An 'Apple' dongle

Mac as a giant dongle

At Coding Horror, Jeff Atwood makes an interesting point about Apple’s lock-in business model:

It’s almost first party only – about as close as you can get to a console platform and still call yourself a computer… when you buy a new Mac, you’re buying a giant hardware dongle that allows you to run OS X software.

There’s nothing harder to copy than an entire MacBook. When the dongle — or, if you prefer, the “Apple Mac” — is present, OS X and Apple software runs. It’s a remarkably pretty, well-designed machine, to be sure. But let’s not kid ourselves: it’s also one hell of a dongle.

If the above sounds disapproving in tone, perhaps it is. There’s something distasteful to me about dongles, no matter how cool they may be.

Of course, as with other dongles, there are plenty of people who’ve got round the Mac hardware ‘dongle’ requirement. Is it true to say (à la John Gilmore) that technical people interpret lock-ins (/other constraints) as damage and route around them?

Screenshot of Mukurtu archive website

Social status-based DRM

The BBC has a story about the Mukurtu Wumpurrarni-kari Archive, a digital photo archive developed by/for the Warumungu community in Australia’s Northern Territory. Because of cultural constraints, social status, gender and community background have been used to determine whether or not users can search for and view certain images:

It asks every person who logs in for their name, age, sex and standing within their community. This information then restricts what they can search for in the archive, offering a new take on DRM.

For example, men cannot view women’s rituals, and people from one community cannot view material from another without first seeking permission. Meanwhile images of the deceased cannot be viewed by their families.

It’s not completely clear whether it’s intended to help users perform self-censorship (i.e. they ‘know’ they ‘shouldn’t’ look at certain images, and the restrictions are helping them achieve that) or whether it’s intended to stop users seeing things they ‘shouldn’t’, even if they want to. I think it’s probably the former, since there’s nothing to stop someone putting in false details (but that does assume that the idea of putting in false details would be obvious to someone not experienced with computer login procedures; it may not).
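The restrictions described read like attribute-based access control: each image carries cultural constraints that are checked against the viewer's declared details. The rules below paraphrase the three examples from the article; the field names are hypothetical, not Mukurtu's actual schema.

```python
# Hedged sketch of the access model described above, as attribute-based
# access control. The rules paraphrase the article's three examples;
# field names are hypothetical, not Mukurtu's actual schema.

def can_view(viewer: dict, image: dict) -> bool:
    # Men cannot view women's rituals
    if image.get("womens_ritual") and viewer.get("sex") != "female":
        return False
    # Material from another community requires prior permission
    if (image.get("community")
            and viewer.get("community") != image["community"]
            and not image.get("permission_granted")):
        return False
    # Images of the deceased cannot be viewed by their families
    if viewer.get("family") in image.get("deceased_families", []):
        return False
    return True

print(can_view({"sex": "male"}, {"womens_ritual": True}))  # False
```

Of course, as noted above, the whole scheme rests on self-declared attributes, so it enforces nothing against a viewer prepared to enter false details.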

While, from my western point of view, this kind of social-status-based DRM seems complete anathema – an entirely arbitrary restriction on knowledge dissemination – I can see that it offers something aside from our common understanding of censorship, and if that’s ‘appropriate’ in this context, then I guess it’s up to them. It’s certainly interesting.

Nevertheless, imagining for a moment that there were a Warumungu community living in the EU, would DRM (or any other kind of access restriction) based on a) gender or b) social status not be illegal under European Human Rights legislation?

Disabled buttons

Disabling buttons

From Clientcopia:

Client: We don’t want the visitor to leave our site. Please leave the navigation buttons, but remove the links so that they don’t go anywhere if you click them.

It’s funny because the suggestion is such a crude way of implementing it, but it’s not actually that unlikely – a 2005 patent by Brian Shuster details a “program [that] interacts with the browser software to modify or control one or more of the browser functions, such that the user computer is further directed to a predesignated site or page… instead of accessing the site or page typically associated with the selected browser function” – and we’ve looked before at websites deliberately designed to break in certain browsers, and at disabling right-click menus for arbitrary purposes.

Do you really need to print that?

Do you really need to print that?

This is not difficult to do, once you know how. Of course, it’s not terribly useful, since a) most people don’t read the display on a printer unless an error occurs, and b) you’re only likely to see it once you’ve already sent something to print.

Is this kind of very, very weak persuasion actually worthwhile? From a user’s point of view, it’s less intrusive than, say, a dialogue box that asks “Are you sure you want to print that? Think of the environment” every time you try to print something (which would become deeply irritating for many users). But when applied thoughtfully, as (in a different area of paper consumption) in Pete Kazanjy’s These Come From Trees initiative, or even in various e-mail footers* (below), there may actually be some worthwhile influence on user behaviour. It’s not ‘micropersuasion’ in Steve Rubel’s sense, exactly, but there is some commonality.

Please consider the environment

I’m thinking that addressing the choices users make when they decide to print (or not print) a document or email could be an interesting specific example to investigate as part of my research, once I get to the stage of user trials. How effective are the different strategies in actually reducing paper/energy/toner/fuser/ink consumption and waste generation? Would better use of ‘Printer-friendly’ style sheets for webpages save a lot of unnecessary reprints due to cut-off words and broken layouts? Should, say, two pages per sheet become the default when a document goes above a certain number of pages? Should users be warned if widows (not so much orphans) are going to increase the number of sheets needed, or should the leading be automatically adjusted (by default) to prevent this? What happens if we make it easier to avoid printing banner ads and other junk? What happens if we make the paper tray smaller so the user is reminded of just how much paper he/she is getting through? What happens if we include a display showing the cost (financially) of the toner/ink, paper and electricity so far each day, or for each user? What happens if we ration paper for each user and allow him or her to ‘trade’ with other users? What happens if we give users a ‘reward’ for reaching targets of reducing printer usage, month-on-month? And so on. (The HP MOPy Fish – cited in BJ Fogg’s Persuasive Technology – is an example of the opposite intention: a system designed to encourage users to print more, by rewarding them.)
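One of the ideas above – making two pages per sheet the default once a document exceeds a certain length – can be sketched as a simple print-policy function. The page threshold is an assumption chosen for illustration, not a tested or recommended value.

```python
# Sketch of a print-policy idea: default to two pages per sheet once a
# document exceeds a certain length. The threshold of 10 pages is an
# assumption for illustration only.

def default_pages_per_sheet(page_count: int, threshold: int = 10) -> int:
    """Longer documents default to 2-up printing, halving paper use."""
    return 2 if page_count > threshold else 1

def sheets_needed(page_count: int, pages_per_sheet: int) -> int:
    return -(-page_count // pages_per_sheet)  # ceiling division

pages = 12
n_up = default_pages_per_sheet(pages)
print(n_up, sheets_needed(pages, n_up))  # 2 6
```

A user trial could compare paper consumption under this default against the usual one-page-per-sheet default, which is the sort of measurement the research aims at.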

Printing is an interesting area, since it allows the possibility of testing out both software and hardware tactics for causing behaviour change, which I’m keen to do.