
Exploring Qualitative Displays and Interfaces

Windsock on Burgh Island, Devon

by Dan Lockton, Delanie Ricketts, Shruti Aditya Chowdhury (Imaginaries Lab, Carnegie Mellon School of Design) and Chang Hee Lee (Royal College of Art)

Much of how we construct meaning in the real world is qualitative rather than quantitative. We think and act in response to, and in dialogue with, qualities of phenomena, and relationships between them. Yet, quantification has become a default mode for information display, and for interfaces supporting decision-making and behaviour change.

There are more opportunities within design and human-computer interaction for qualitative displays and interfaces: for information presentation, and as an aid to help people explore their own thinking and relationships with ideas. Here we attempt one dimension of a tentative classification to support projects exploring opportunities for qualitative displays within design.

This blog post is a slightly edited version of a late-breaking work submission presented at CHI’17, May 6–11, 2017, Denver, CO, USA, and published in the CHI’17 Extended Abstracts.

Download this article as a PDF.

Water trapped in a train carriage door is a form of qualitative display of the train’s acceleration, deceleration and inertia.


Outside of the digital, we largely live and think and act and feel in response to, and in dialogue with, the perceived qualities of people, things and phenomena, and the relationships between them, rather than their number.

Much of our experience of — and meaning-making in — the real world is qualitative rather than quantitative. How friendly was she? How tired do I feel right now? Who’s the tallest in the group? How windy is it out there? Which route shall we take to work? How was your meal? Which apple looks tastier? Which piece of music best suits the mood? Do I need to use the bathroom? In particular, we rarely deal with quantities in relation to abstract concepts — two coffees, half a biscuit, three children, but rarely 0.5 loves or 6.8 sadnesses.

And yet, quantification has become the default mode of interaction with technology, of display of information, and of interfaces which aim to support decision-making and behaviour change in everyday life [27]. We need not elaborate here the phenomena of the quantified self [36, 42] and personal informatics more widely [24, 12], except to note the prevalence of numerical approaches (Figure 1) and the relative unusualness of non-numerical, pattern-based forms (Figure 2).

Figure 1: A typical form of quantitative interface: a Fitbit’s display of number of steps taken.

Figure 2: The Emulsion activity tracker, by Norwegian design studio Skrekkøgle, contains two immiscible liquids. Movement splits the colored liquid into smaller drops, making patterns.

But what might we be missing through this focus on quantification? It seems as though there might be opportunities for human-computer interaction (HCI) to explore forms of qualitative display and interface, as an approach to information presentation and interaction, as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency with systems.

In this article, we discuss qualitative displays and interfaces, and attempt one dimension of a tentative classification supporting design projects exploring this space.

Leaves as a qualitative interface for the wind

What could qualitative displays and interfaces be?

Here we define a qualitative display as being a way in which information is presented primarily through representing qualities of phenomena; a qualitative interface enables people to interact with a system through responding to or creating these qualities. ‘Displays’ are not necessarily solely visual — obvious to say, perhaps, but not always made explicit.

Before exploring some examples, we will look at some theoretical issues. The terms ‘qualitative interface’ or ‘qualitative display’ are not commonly used outside of some introductory human factors textbooks, but forms of interface along these lines are found in lots of projects at CHI, TEI, DIS, Ubicomp (all academic human-computer interaction conferences) and other venues, without authors explicitly drawing our attention to the concept — it is perhaps just too obvious and too broad to merit specific comment in HCI and interaction design research. But, assuming the idea does have value, what are some characteristics?

A human face is a qualitative interface, perhaps the earliest we encounter [e.g. 40], along with the voice. We learn to read and interpret emotions in others’ expressions, to recognize commonalities and differences across people, to make inferences about internal and external factors affecting the person, and to monitor the effects we or others are having on that person. We understand that the face and voice, and our ability to read them, are abstractions and interpretations: not perfect knowledge, but a model which enables us to make decisions in conjunction with our reading of our own emotions.

In a sense, the whole world, as we perceive it, is a very complex qualitative interface. The most accurate model of a phenomenon is the phenomenon itself, but it is only useful to us to the extent that we can understand what we are observing, detect the patterns we need to, and recognize that we are constructing the ‘reality’ we perceive. We are always creating a model [14] and that model is necessarily not reality itself; all displays of information are representations of a simplified model of phenomena in the world. Levels of indexicality [32], drawing on Charles Peirce’s semiotics, are relevant here, addressing the “causal distance” between the phenomenon and how it is displayed.

One advantage of interfaces seeking to provide a qualitative display is that they have the potential to enable the preservation of at least some of the complexity of real phenomena — representing complexity without attenuating variety [2] — even if we do not pay attention to it until we actually need to, in much the same way as certain phenomena in the real world become salient only when we need to deal with them. Looking out of the window or opening the door to see and feel and hear what the weather is like outside presents us with complex phenomena, but we are able to interpret what actions we need to take, in a more experientially salient way than looking at some numbers on a weather app.

Figure 4: It’s easy to imagine the feel of the wind on ourselves when we watch this scarf tied around a lamp post flapping in the breeze. Figure 5: A windsock gives us more sense of the wind’s qualities than a numerical display.

The feel of the wind on our skin, or watching the wind affect the environment, gives us a better sense of whether we need a scarf or coat than knowing the quantitative value of the wind speed and direction (Figures 3, 4 and 5). We can see, hear and feel not just wind speed and direction, but other qualities of it — is it continuous? in short gusts? damp, dry?

Qualitative displays could enable us to learn to recognize patterns in the world (and in data sets), and the characteristics of state changes, similarly to benefits identified in sonification research [35]. We should consider that ‘qualitative’ does not simply imply the absence of numbers. The examples we use in this paper might involve elements that could easily be quantified (rain drops, ink in a pen) but are given meaning through their display in a way that emphasises a quality or characteristic of the phenomenon. We recognise that this is potentially an ambiguous area, and are open to evolving the concept.

A possible spectrum of one dimension of qualitative displays: directness of connection

Here’s a tentative spectrum of one dimension of qualitative displays, relating phenomena to the display in terms of how directly they are connected.

(Levels 0–1 involve direct use of a real-world phenomenon in the display; from about Level 2 up to Level 5, they involve increasing degrees of translation or transduction of the phenomena. This parallels ideas in indexical visualisation [32] and embedded data representation [41] in terms of ‘situatedness’ or causal distance to phenomena.)

  • Level 0: The phenomenon itself ‘creates’ the display directly
  • Level 1: The display is an ‘accidental’ side-effect of the phenomenon
  • Level 2: The side-effect is ‘incorporated’ into a display that gives it meaning
  • Level 3: The display is a designed side-effect of the phenomenon
  • Level 4: Some minor processing of the phenomenon creates the display
  • Level 5: Major processing of the phenomenon creates the display
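To show how the spectrum permits a classification of examples, here is a minimal sketch of the levels as a lookup structure. (The particular examples assigned below are our own shorthand drawn from the figures; this is an illustration, not a formal scheme.)

```python
# A sketch of the 'directness' spectrum as a lookup table.
# Level descriptions follow the list above; the example displays
# are shorthand for items appearing in Figures 6 and 7.

LEVELS = {
    0: "The phenomenon itself 'creates' the display directly",
    1: "The display is an 'accidental' side-effect of the phenomenon",
    2: "The side-effect is 'incorporated' into a display that gives it meaning",
    3: "The display is a designed side-effect of the phenomenon",
    4: "Some minor processing of the phenomenon creates the display",
    5: "Major processing of the phenomenon creates the display",
}

EXAMPLES = {
    "raindrops on a translucent umbrella": 0,
    "footprints in the snow": 1,
    "'Clean Me' written in dust on a car": 2,
    "IceAlert blue reflectors": 3,
    "Availabot USB puppet": 4,
    "Powerchord electricity sonification": 5,
}

def describe(display: str) -> str:
    """Return a one-line classification for a known example."""
    level = EXAMPLES[display]
    return f"Level {level}: {LEVELS[level]}"

print(describe("footprints in the snow"))
# prints: Level 1: The display is an 'accidental' side-effect of the phenomenon
```

In practice the boundaries are interpretive rather than crisp, as discussed below, so a table like this is a conversation aid for studio use, not a taxonomy with sharp edges.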

Figure 6: Some examples of displays from Levels 0, 1 and 2. Level 0: The pattern of raindrops hitting a translucent umbrella — frequency, coverage, and sound — directly creates a ‘rain display’ for the user, providing insight into the current state and enabling decisions about whether the umbrella is still needed; City lights create a display showing the shape of the city’s districts and an indicator of population density; Water trapped in a train carriage window moves as the train accelerates or decelerates, creating a dynamic display of the train’s motion; A transparent pen is a physical progress bar for the amount of ink remaining — it could be quantified, but it is perhaps the quality of being not-yet-run-out which matters to the user. Level 1: A worn patch on a map accidentally provides a display of ‘you are here’; Use marks [5] from previous users demonstrate how to use a swipe-card for entry to a building; A spoon worn through decades of use is an accidental display of the way in which it has been used [31]; Footprints in the snow ‘accidentally’ provide a display of previous walkers’ paths. Level 2: A ‘This Color For Best Taste’ label gives ‘meaning’ to the colour of a mango’s skin for the consumer (Photo used with permission of Reddit user /u/cwm2355); Writing ‘Clean Me’ or other messages in dust on a car gives meaning to the layer of dust; Admiral Robert Fitzroy’s Storm Glass, as used on the voyage of the Beagle (1831–6), incorporates crystals whose changing appearance was believed to enable weather forecasting (Photo: ReneBNRW, Wikimedia Commons, public domain dedication); George Merryweather’s Tempest Prognosticator (1851 [30]) incorporates “a jury of philosophical councillors”, 12 leeches whose movement on detecting an approaching storm causes a bell to ring (Photo: Badobadop, Wikimedia Commons, CC-BY-SA).
Figure 7: Some examples of displays from Levels 3, 4 and 5. Level 3: IceAlert is designed so that freezing temperatures cause the blue reflectors to rotate to become visible; A ‘participatory bar chart’ by Dan Lockton along the lines of [22, 33, 16], designed so that ‘voting’ increases the visible height of the bar, though the votes are not numbered; A non-numerical weighing scale by Chang Hee Lee designed so that liquid trapped under glass changes shape; A toilet stall door lock designed so that its display rotates from ‘Vacant’ to ‘Engaged’ — the position of the lock itself gives us a display of actionable information. Level 4: Chronocyclegraphs (1917) by Frank and Lillian Gilbreth, tracing manual workers’ movements [10] (Photo from [15], out of copyright); Live Wire (Dangling String) by Natalie Jeremijenko (1995) [39] moved a wire in proportion to local network traffic; Melbourne Mussel Choir, also by Natalie Jeremijenko with Carbon Arts [6], uses mussels with Hall effect sensors to translate the opening and closing of their shells into music; Availabot (2006), by Schulze & Webb, later BERG [3], is a USB puppet which “stands to attention when your chat buddy comes online”. Level 5: Powerchord by Dan Lockton [29] provides real-time sonification of electricity use, translating it into birdsong or other ambient sound; Immaterials: Ghost in the Field by Timo Arnall [1] visualizes “the three-dimensional physical space in which an RFID tag and a reader can interact with each other”; Ritual Machine 2 by the Family Rituals 2.0 project [23] uses patterns on a flip-dot display to visualize the countdown to a shared event for two people; Tempescope by Ken Kawamoto [21] visualizes weather conditions elsewhere in the world by re-creating them in a tabletop display (Photo from the Tempescope press kit).

The boundaries between levels here are dependent on observers’ interpretations of what is signified (whether an effect is accidental or deliberate is a common question in design (teleonomy [25])). Nevertheless, this spectrum permits a classification of some examples and is being applied by the authors in undergraduate design studio projects. We note the absence of screen-based examples: this is not intentional, and we welcome adding relevant examples. There are many intersecting research areas we aim to explore; in current HCI research, the most relevant are data physicalisation, embedded data representation, tangible interaction, sonification, and glanceable displays.

The work of Yvonne Jansen, Pierre Dragicevic and others [20] in data physicalisation, including compilation of examples, and embedded data representation [41], provides us with many instances of qualitative display, mostly at what we are calling Levels 2–5; likewise, development of ubiquitous computing, tangible interaction and tangible user interfaces [39, 18, 17] and Hiroshi Ishii’s subsequent vision of tangible bits [19] offers a huge set of projects, many of which provide qualitative interfaces for data or system interaction (usually at Levels 4–5).

Sonification [35] and glanceable displays [e.g. 9, 34] also offer us diverse sets of examples often using non-numerical representation, also largely at levels 4–5. As noted earlier, qualitative does not just mean non-quantitative, and the boundaries may be blurred: if a sonification directly maps numerical values to tones, is it much different to an unlabelled line chart? Or are sparklines [37], for example, a way of turning quantitative data into a form of qualitative presentation?
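The blurred boundary can be made concrete: a sonification that directly maps values to pitch performs essentially the same transformation as an unlabelled line chart, just onto a frequency axis rather than a vertical one. A minimal sketch (the linear value-to-pitch mapping and the pitch range here are our own illustrative choices, not drawn from any of the cited systems):

```python
# Minimal parameter-mapping sonification sketch: each data value is
# mapped linearly onto a pitch range, just as a line chart maps values
# onto a vertical axis. Only the frequencies are computed; no audio
# library is assumed.

def map_to_pitch(values, low_hz=220.0, high_hz=880.0):
    """Linearly map each value onto [low_hz, high_hz]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

readings = [0.2, 0.5, 1.3, 0.9]    # e.g. electricity use in kW (illustrative)
pitches = map_to_pitch(readings)
print([round(p) for p in pitches])  # prints [220, 400, 880, 640]
```

The listener hears the same rises and falls a viewer would see in the unlabelled chart; whether either counts as ‘qualitative’ arguably depends on whether the pattern, rather than the retrievable number, is what carries the meaning.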

Even with a quantitative display, how a person interprets it may have a qualitative dimension: Figure 8 shows an electricity monitor used by a study participant [28] who accidentally set it to display kg CO2/day equivalent; this “meant nothing” to her but she interpreted the display such that “>1” meant “expensive”. ‘Annotations’ of values as users construct their own meaning [11] may fit here; the aim must, however, be to avoid the kind of reductive ‘qualitative’ nature of a limited set of labels [13].
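The householder’s ‘>1 means expensive’ reading is effectively a self-invented threshold that turns a quantitative stream into a qualitative one. As a sketch of that move (the labels and cut-off values below are invented for illustration, not taken from the study):

```python
# Turning quantitative readings into qualitative labels through
# user-defined thresholds, in the spirit of the householder's
# '>1 means expensive' interpretation. Cut-offs are illustrative.

def qualify(value, thresholds):
    """Return the label of the first threshold band the value falls into."""
    for limit, label in thresholds:
        if value <= limit:
            return label
    return thresholds[-1][1]

# (upper limit, label) pairs, in ascending order of limit
COST_FEEL = [(0.5, "cheap"), (1.0, "okay"), (float("inf"), "expensive")]

for reading in (0.3, 0.8, 1.4):
    print(reading, "->", qualify(reading, COST_FEEL))
# prints:
# 0.3 -> cheap
# 0.8 -> okay
# 1.4 -> expensive
```

The point of the example is the caution in the text: a fixed, designer-imposed set of labels like this is exactly the reductive ‘qualitative’ display to avoid; the interesting cases are where people construct the thresholds and meanings themselves.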

Figure 8: A quantitative electricity display that was used ‘qualitatively’ by a householder (see text). Figure 9: An example of MONIAC, the Phillips Machine, at the Reserve Bank of New Zealand (Photo by Kaihsu Tai, Wikimedia Commons, public domain dedication).

Analogy and metaphor are important here, and the almost-forgotten field of Analogue Computing offers us an intriguing perspective. By “build[ing] models that created a mapping between two physical phenomena” [7], some analogue computers effectively operated as ‘direct’ displays of an analogue of the ‘original’ phenomenon — a kind of meta-level 2 type qualitative display, with devices such as the 1949 Phillips Machine [4] (Figure 9), which performed operations on flows of coloured water to model the economy of a country, enabling an interactive visualization of a system in operation as it operates (there are parallels with Bret Victor and Nicky Case’s work on explorable explanations [38, 8], and the development of visual programming languages).

Other areas of pertinent research and inspiration are synaesthesia and mental imagery: sensory overlaps, fusions and mappings offer a fertile field for exploring qualitative displays of phenomena.

Conclusion: What use is all of this?

We’re interested in using qualitative displays and interfaces for supporting decision-making, behaviour change and new practices through enabling new forms of understanding — as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency with the systems around them [26]. Projects using qualitative displays are unlikely simply to be de-quantified ‘conversion’ of existing numerical displays; instead, the aim will be to make use of the approach to represent and translate phenomena appropriately, in ways which enable users to construct meaning and afford new ways of understanding, enabling nuance and avoiding reductiveness.

The spectrum of the ‘directness’ dimension introduced here provides a possible starting point for this work, by giving a framework for analysing examples and suggesting ways of handling phenomena to be displayed, and is currently being used by the authors to brief an undergraduate design studio project on materialising environmental phenomena to reveal hidden relationships. We welcome the opportunity to learn from others who have thought about these kinds of ideas to inform our future explorations of this area.


Thanks to Dr Delfina Fantini van Ditmar, Dr Laura Ferrarello, Flora Bowden, Gyorgyi Galik, Stacie Rohrbach, Ross Atkin, Shruti Grover, Veronica Ranner and Dixon Lo for discussions in which some of these ideas were formulated and explored, and to the CHI reviewers. Unless otherwise noted, photos are by the authors.


1. Timo Arnall. 2014. Exploring ‘immaterials’: Mediating design’s invisible materials. International Journal of Design 8, 2: 101–117.

2. W. Ross Ashby. 1956. An Introduction to Cybernetics. Chapman & Hall, London.

3. BERG. 2008. Availabot. Retrieved Jan 10, 2017 from

4. Chris Bissell. 2007. The Moniac: A Hydromechanical Analog Computer of the 1950s. IEEE Control Systems Magazine 27, 1: 59–64.

5. Brian Burns. 2007. From Newness to Useness and Back Again: A review of the role of the user in sustainable product maintenance. Retrieved June 1, 2009 from

6. Carbon Arts. 2013. Melbourne Mussel Choir. Retrieved Jan 10, 2017 from

7. Charles Care. 2006–7. A Chronology of Analogue Computing. The Rutherford Journal 2. Retrieved Jan 10, 2017 from http://www.rutherford

8. Nicky Case. 2014. Explorable Explanations. Blog post (Sept 8, 2014). Retrieved Jan 10, 2017 from

9. Sunny Consolvo, Predrag Klasnja, David W. McDonald, Daniel Avrahami, Jon Froehlich, Louis LeGrand, Ryan Libby, Keith Mosher, and James A. Landay. 2008. Flowers or a Robot Army? Encouraging Awareness & Activity with Personal, Mobile Displays. In Proceedings of 10th International Conference on Ubiquitous Computing (UbiComp’08): 54–63.

10. Régine Debatty. 2012. The Chronocyclegraph. Blog post, We Make Money Not Art (May 6, 2012). Retrieved Jan 10, 2017 from

11. Paul Dourish. 2004. What we talk about when we talk about context. Personal and Ubiquitous Computing 8, 1: 19–30.

12. Chris Elsden, David Kirk, Mark Selby, and Chris Speed. 2015. Beyond Personal Informatics: Designing for Experiences with Data. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 2341–2344.

13. Delfina Fantini van Ditmar and Dan Lockton. 2016. Taking the Code for a Walk. Interactions 23, 1: 68–71.

14. Heinz von Foerster. 1973. On constructing a reality. In W.F.E. Preiser (Ed.). Environmental Design Research Vol. 2. Dowden, Hutchinson & Ross, Stroudsburg: 35–46. Reprinted in Heinz von Foerster. 2003. Understanding Understanding — Essays on Cybernetics and Cognition. Springer-Verlag, New York: 211–228.

15. Frank Gilbreth and Lillian Gilbreth. 1917. Applied Motion Study: a collection of papers on the efficient method to industrial preparedness. Sturgis & Walton, New York. Retrieved Jan 10, 2017 from

16. Hans Haacke. 2009. Lessons Learned. Tate Papers 12. Retrieved Jan 10, 2017 from

17. Eva Hornecker and Jacob Buur. 2006. Getting a grip on tangible interaction: a framework on physical space and social interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’06): 437–446.

18. Hiroshi Ishii and Brygg Ullmer. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’97): 234–241.

19. Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. Interactions 19, 1: 38–51.

20. Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. 2015. Opportunities and Challenges for Data Physicalization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’15): 3227–3236.

21. Ken Kawamoto. 2012. Prototyping “Tempescope”, an ambient weather display. Blog post (Nov 15, 2012). Retrieved Jan 10, 2017 from

22. Lucy Kimbell. 2011. Physical Bar Charts. Retrieved Jan 10, 2017 from

23. David Kirk, David Chatting, Paulina Yurman, and Jo-Anne Bichard. 2016. Ritual Machines I & II: Making Technology at Home. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’16): 2474–2486.

24. Ian Li, Anind Dey, and Jodi Forlizzi. 2010. A stage-based model of personal informatics systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10): 557–566.

25. Dan Lockton. 2012. POSIWID and Determinism in Design for Behaviour Change. Social Science Research Network. ssrn.2033231

26. Dan Lockton. 2016. Designing Agency in the City. In Lacey Pipkin (Ed.), The Pursuit of Legible Policy: Agency and Participation in the Complex Systems of the Contemporary Megalopolis. Buró-Buró, Mexico City: 53–61. Legible-Policies_BB.pdf

27. Dan Lockton, David Harrison, and Neville Stanton. 2010. The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics 41, 3: 382–392. j.apergo.2009.09.001

28. Dan Lockton, Flora Bowden, Catherine Greene, Clare Brass, and Rama Gheerawo. 2013. People and energy: A design-led approach to understanding everyday energy use behaviour. In Proceedings of EPIC 2013: Ethnographic Praxis in Industry Conference: 348–362.

29. Dan Lockton, Flora Bowden, Clare Brass, and Rama Gheerawo. 2014. Powerchord: Towards ambient appliance-level electricity use feedback through real-time sonification. In Proceedings of UCAmI 2014: 8th International Conference on Ubiquitous Computing & Ambient Intelligence: 48–51.

30. George Merryweather. 1851. An essay explanatory of the Tempest Prognosticator in the building of the Great Exhibition for the Works of Industry of All Nations. John Churchill, London. Retrieved Jan 10, 2017 from

31. Bruno Munari. 1971. Design as Art (trans. Patrick Creagh). Pelican Books, London.

32. Dietmar Offenhuber and Orkan Telhan. 2015. Indexical Visualization — the Data-Less Information Display. In Ulrik Ekman, Jay David Bolter, Lily Diaz, Morten Søndergaard, and Maria Engberg (eds.). Ubiquitous Computing, Complexity and Culture: 288–303. Routledge, New York.

33. Jennifer Payne, Jason Johnson, and Tony Tang. 2015. Exploring Physical Visualization. In Jason Alexander, Yvonne Jansen, Kasper Hornbæk, Johan Kildal and Abhijit Karnik, Exploring the Challenges of Making Data Physical. Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15).

34. Tim Regan, David Sweeney, John Helmes, Vasillis Vlachokyriakos, Siân Lindley, and Alex Taylor. 2015. Designing Engaging Data in Communities. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 271–274.

35. Stefania Serafin, Karmen Franinovic, Thomas Hermann, Guillaume Lemaitre, Michal Rinott, and Davide Rocchesso. 2011. Sonic Interaction Design. In Thomas Hermann, Andy Hunt, and John Neuhoff (Eds.), The Sonification Handbook. Logos, Berlin: 87–110. index.php/chapters/chapter5/

36. Melanie Swan. 2013. The quantified self: fundamental disruption in big data science and biological discovery. Big Data 1, 2: 85–99.

37. Edward Tufte. 2001. The Visual Display of Quantitative Information (2nd ed.). Graphics Press, Cheshire, CT.

38. Bret Victor. 2011. Explorable Explanations. March 10, 2011. Retrieved Jan 10, 2017 from

39. Mark Weiser and John Seely Brown. 1995. Designing Calm Technology. Dec 21, 1995. Retrieved Jan 10, 2017 from

40. Sherri C. Widen. 2013. Children’s Interpretation of Facial Expressions: The Long Path from Valence-Based to Specific Discrete Categories. Emotion Review 5, 1: 72–77.

41. Wesley Willett, Yvonne Jansen, and Pierre Dragicevic. 2017. Embedded Data Representations. IEEE Transactions on Visualization and Computer Graphics 23, 1: 461–470.

42. Gary Wolf. 2010. The quantified self. Video (June 2010). Retrieved Jan 10, 2017, from

Let’s See What We Can Do: Designing Agency

‘What does energy look like?’ drawn by Zhengni Li, participant in Drawing Energy (Flora Bowden & Dan Lockton)

How can we invert ‘design for behaviour change’ and apply it from below, enabling people to understand, act within, and change the behaviour of the systems of society and the environment?

[This article is cross-posted to Medium where there may also be readers’ comments]

As our everyday lives are increasingly pushed and pulled by technology and the systems around us, from infrastructure to quantification to government to horrifying combinations of these, understanding these complex systems, and how to change them, is something we should be paying attention to. In ‘As we may understand’, last year, I looked — at excessive length — at the understanding bit, but not the change. Hopefully here I can address that, to some extent, though my thinking’s moved on a bit.

Some empty chairs in Munich

Paralysis and regret

We are surrounded by, and enmeshed in, complexity which at once causes us paralysis over not being able to take action, and regret over the actions we do take (and continue to take). We simultaneously worry and do nothing about issues such as the military-industrial surveillance state, ageing populations, inequality, war and privatisation of the commons. We fudge our responses to planetary-scale crises such as climate change, pollution or poverty because our understanding of what we are able to do locally does not match our understanding of what is possible at a larger scale. We face a crisis of agency, in the phrase used by Gyorgyi Galik, Natalie Jeremijenko, Zygmunt Bauman and others.

How can ‘we’ (at the level of individual people — and I’m speaking from the position of a middle-class Western consumer, with all that entails) act? We don’t know what to do, and even if we did, we are not individual “micro-resource managers” (to use Yolande Strengers’ phrase), but people acting within the constraints (and enablers) of family, society, social groups, cultural contexts, norms and expectations. We lack the ability to hold different visions of possible futures in mind simultaneously, or even to think through the consequences and possibilities at multiple levels. We are entangled in social traps, double binds and knots around everything from participation in democracy (why bother? it won’t change anything) to dealing with terrorism (be alert, but not scared, because that’s what they want, so still be very very very alert).

Tomorrow's News Today, Edinburgh

What can designers do?

What is designers’ role in this? Both design and sustainability, in its broadest sense, are about “the future” — bringing into being a world where humanity and other forms of life will “flourish on the planet forever” (John Ehrenfeld) or where we can “go about our daily affairs… [knowing] that our activities as civilised beings are expanding our future options and improving our current situation” (Bruce Sterling). Design might be one of the mechanisms by which much of our current predicament has come about (Victor Papanek), but perhaps “the future with a future for ‘us’ can only be reached by design” (Tony Fry).

Designing for behaviour change at the mundane level of helping people recycle things, or use their electrical appliances more efficiently — the sort of thing a lot of my previous work has focused on — might be part of the solution, but it’s clear that design really needs to address things at a much higher, more systemic level, including designing things out of existence. Perhaps, in terms of producing a new generation of designers ready to engage with this degree of challenge, this is what transition design can bring us. I hope so.

A tangled ethernet cable cupboard at Goldsmiths

Understanding complexity

To engage with this complexity — not destroy its variety, because we can’t and we shouldn’t — requires designers to understand society better. Yes, we need designers to understand people’s lives, and appreciate the realities of situated decision making and subjective experience, but also to understand complexity, connectedness (in a technology sense but a people sense too) and the effects of design, and its politics, to a degree beyond what might previously have been common. We need designers to engage with the invisible ‘dark matter’ (Dan Hill) even though it may often be experienced as an impediment to action.

We need designers to understand (and be allowed to deal with) the wickedness of the problems we are facing: they will not be understood until ‘solutions’ have been attempted (which will in turn create new problems, as John Gall pointed out); there will be no stopping rules; there will be no right or wrong answers; and all attempts to deal with a problem will only highlight its uniqueness and contextual peculiarity. We will not be able to step in the same river twice, nor even once (as Ranulph Glanville suggests), and we must make peace with that. It doesn’t mean we can’t learn from what we’ve done before, but we cannot presume that patterns always transpose effectively. Deterministic top-down approaches promoted by behavioural economics and simplistic notions of the ‘Quantified Self’ and ‘Big Data’ are not going to work.

A You Are Here sign at Goldsmiths

Understanding how to act

Of course, understanding complexity is not the goal in itself. The real goal is understanding what agency is possible, and how to enact change. So, we need design that enables people to understand the wider contexts of their actions, their agency within society, and how they can act to create different outcomes, different futures.

Understanding how to act to change the systems we’re in is arguably the biggest meta-challenge of our age. We need not just information, but tools for connecting our understanding of how things work and how we can act, around everything from the environment, cities, our own bodies, networked infrastructure to social, civic and political contexts, emerging technologies and plural considerations of the future itself.

This is design for behaviour change, but is not about designers trying to change ‘public behaviour’ as if it were somehow a separate phenomenon. Designers are members of society, and there is only one Earth: we are part of the same systems. It is about design which enables people to change the behaviour of the systems of which they — we — are part.

Some tar road repairs at Carnegie Mellon

Ways of doing this

What do we do, then? I imagine a ‘Designing Agency’ research / action programme, which would rethink how we engage with the systems of everyday (and future) life, through developing new approaches to understanding and action. Designing Agency would use ‘design’ — in the broadest sense — as a way to:

1. understand the world
2. understand people’s understandings of the world
3. help people understand the world
4. help people understand their agency in the world
5. help people use that agency in the world

We could see these as a progression from understanding to action. But how would we do it in practice? Different techniques would be effective at different levels. Some would be investigatory, some practical, some speculative or critical. Some would give us tools for understanding and learning, some tools for doing, some provocations for reflection. The examples I have here are quite pedestrian.

A ‘comfort timeline’ heating practice diary developed by Natalia Romero Herrera, TU Delft, being used here by a householder in Dartford, UK.

For example, at Level 1, using design to understand the world might involve designing and deploying probes (e.g. the heating diary shown here), and running designed experiments, which investigate phenomena in the world (including society) through gathering data in a way which provides meaningful scaffolding for the next level. This is essentially using design as a way to do science, or social science.

Level 2, in attempting to ‘understand understanding’ (in Heinz von Foerster’s phrase), would take things a stage further: using activities which practically try to explore the different ways in which people imagine, conceptualise and think about how things work. Very basically, we could use techniques such as drawing (as in the image at the top of the article, from the Drawing Energy project), but there’s a whole world of possibilities here. It is partly about making the invisible visible, tangible or legible, from the point of view of people themselves (i.e. what is legible, or not, to them), but also about surfacing people’s different understandings of situations, and how that leads different people to act.

Claustrophobia simulation apparatus, developed by Anna Dakin, Harry Thompson, Nong Chotipatoomwan and Tess Dumon as part of ‘One Another: Empathy and Experience’, AcrossRCA course by Katie Gaudion & Dan Lockton

At Level 3, we’d be designing ways which help change people’s understandings of the world and the systems they’re in. This could take the form of new kinds of interface, designed experiences, educational activities — a range of things. Some of the examples collected by Dieter Zinnbauer’s Ambient Accountability project perhaps fit here. It could be about changing mental models, expanding horizons, reframing of situations, or even trying to facilitate empathy (as in the image). I want to make it clear here that this isn’t about ‘correcting incorrect mental models’ but about enabling and supporting people to construct and refine their own models of the world, experientially, which serve them better. And learning how to reflect on that.

I don’t really know, at this stage, what Level 4 would look like. This is the “let’s see what we can do” of the title. I have some ideas, but they need work: I imagine new forms of interface, new ‘senses’, new metaphors (in the sense suggested by Margaret Mead and also by A. Baki Kocaballi — see below) and new analogues: not just behaviour quantification and data dashboards, but highlighters and contextual explainers of agency. I am very excited about this, and aim to come back to it with another article very soon, once I’ve actually built something. Let’s just say, qualitative interfaces…

At Level 5, among other things, we would pretty much be challenging and inverting common ‘behavioural design’ paradigms. We have a whole load of them, of course, but what can they do if you turn them upside down? What does it look like when the public uses a technique like Commitment & Consistency or Are You Sure? or Watermarking to change the behaviour of a system like policing or energy policy? Can it be more constructive than ‘fighting back’, and actually be about co-designing systems of society that behave more effectively, and work better for more people? Again, these could be applied critically, or provocatively — a what if? — or they could be direct ways of enabling action, empowering people to change the behaviour of the systems in which we live.

At this level, we should be mindful of our roles as designers within the systems we are aiming to help people change. The power dynamics, and our assumptions about the people we are designing with or for, need to be surfaced and questioned. We need to be aware of — and honest about — our inherent subjectivity: as Hugh Dubberly and Paul Pangaro point out:

“Framing wicked problems requires explicit values and viewpoints, accompanied by the responsibility to justify them with explicit arguments, thus incorporating subjectivity and the epistemology of second-order cybernetics.”

In this vein, A. Baki Kocaballi has written very usefully about agency sensitive design, particularly the notion of relationality (recognising that assumptions of neither full technological determinism, nor full social determinism, are useful when understanding agency in context):

“In design processes, the quality of relationality asks for three sensitivities: (i) understanding of mutual influence, shaping and co-constitution of actors and artefacts; (ii) embracing and supporting emergent and improvised action and (iii) consideration of the system as an assemblage/network of actors, artefacts or collective hybrids. In order to develop these sensitivities, we first need to stop formulating design solutions based upon the assumption of a well-defined individual with fixed characteristics and capacities of action. Design solutions should recognize and support the existence of the multiple individuals embodied in one individual and the possibility of multiple enactments of one individual within a network of other human and non-human actors interacting with each other and exhibiting different capacities for action.”

Kocaballi’s six qualities for agency sensitive design — relationality, visibility, multiplicity, configurability, accountability and duality — could be a valuable set of considerations to explore in relation to the design of these ‘Level 5’ attempts to help people use their agency in the world.

Somewhere on the D2 near Gréolières, France

What next?

I need to stop writing about things like this, and get back to doing it. I had my own career-related crisis of agency in 2015, but 2016 is going to be better. First up is an amazing opportunity working with Laboratorio para la Ciudad, Superflux, Future Cities Catapult and UNAM on a joint project between Mexico City and London, funded by the British Council’s Newton Fund, in which (I’m hoping) at least a bit of levels 4 and 5 can come into play, in the context of helping people understand their agency, and act in relation to policy in the built environment.

We’ll have to see what we can do.

Thank you to Veronica Ranner, Gyorgyi Galik, Delfina Fantini van Ditmar and Laura Ferrarello for conversations which have led to ideas in this article.

[See also readers’ comments / responses on Medium]

As we may understand: A constructionist approach to ‘behaviour change’ and the Internet of Things

Find Alternative Route, Old Street

In a world of increasingly complex systems, we could enable social and environmental behaviour change by using IoT-type technologies for practical co-creation and constructionist public engagement.

[This article is cross-posted to Medium, where there are some very useful notes attached by readers]

We’re heading into a world of increasingly complex engineered systems in everyday life, from smart cities, smart electricity grids and networked infrastructure on the one hand, to ourselves, personally, being always connected to each other: it’s not going to be just an Internet of Things, but very much an Internet of Things and People, and Communities, too.

Yet there is a disconnect between the potential quality of life benefits for society, and people’s understanding of these — often invisible — systems around us. How do they work? Who runs them? What can they help me do? How can they help my community?

IoT technology and the ecosystems around it could enable behaviour change for social and environmental sustainability in a wide range of areas, from energy use to civic engagement and empowerment. But the systems need to be intelligible, for people to be engaged and make the most of the opportunities and possibilities for innovation and progress.

They need to be designed with people at the heart of the process, and that means designing with people themselves: practical co-creation, and constructionist public engagement where people can explore these systems and learn how they work in the context of everyday life rather than solely in the abstract visions of city planners and technology companies.

View Source

Understanding things

The internet, particularly the world-wide web, has done many things, but something it has done particularly well is to enable us to understand the world around us better. From having the sum of human knowledge in our pockets, to generating conversation and empathy between people who would never otherwise have met, to being able to look up how to fix the washing machine, this connectedness, this interactivity, this understanding, has—quickly—led to changes in everyday life, in social practices, habits, routines, decision processes, behaviour, in huge ways, not always predictably.

It’s surfaced information which existed, but which was difficult to find or see, and—most importantly—links between ideas (as Vannevar Bush, and later Ted Nelson, envisaged), at multiple levels of abstraction, in a way which makes discovery more immediate. And it’s linked people in the process, indeed turned them into creators and curators on a vast scale, of photos, videos, games and writing (short-form and longer). It may not all be hand-coding HTML, but perhaps much of it followed, ultimately, from the ability to ‘View Source’, GeoCities, Xoom, et al, and the inspiration to create, adapt and experiment.

But how do things fit into this? How can the Internet of Things, ambient intelligence and ubiquitous, pervasive computing, help people understand the world better? Could they enable more than just clever home automation-via-apps, more-precisely-targeted behavioural advertising, and remote infrastructure monitoring, and actually help people understand and engage with the complex systems around them — the systems we’re part of, that affect what we do and can do, and are in turn affected by what we do? Even as the networks become ever more complex, can the Internet of Things — together with the wider internet — help people realise what they can do, creating opportunities for new forms of civic engagement and empowerment, of social innovation, of sustainability?

In this article, I’m going to meander a bit back and forth between themes and areas. Please bear with me. And this is very much a draft—a rambling, unfocused draft—on which I really do welcome your comments and suggestions.

Light switch panel, RCA

Design and behaviour change

For the last few years, I’ve been working in the field of what’s come to be known as design for behaviour change—more specifically, design for sustainable behaviour. This is all about using the design of systems—interfaces, products, services, environments—to enable, motivate, constrain or otherwise influence people to do things in different ways. The overall intention is social and environmental benefit through ‘behaviour change’, which is, I hope, less baldly top-down and individualist than it may sound. I am much more comfortable at the ‘enable’ end of the spectrum than the ‘constrain’. The more I type the phrase ‘behaviour change’, the less I like it, but it’s politically fashionable and has kept a roof over my head for a few years.

As part of my PhD research, I collected together insights and examples from lots of different disciplines that were relevant, and put them into a ‘design pattern’ form, the Design with Intent toolkit, which lots of people seem to have found useful. All of the patterns exemplify particular models of human behaviour—assumptions about ‘what people are like’, what motivates them, how homogeneous they are in their actions and thoughts, and so on—often conflicting, sometimes optimistic about people, sometimes less so. Each design pattern is essentially an argument about human nature. Some of them are nice, some of them are not.

However, in applying some of the (nicer!) ideas in practice, particularly towards influencing more sustainable behaviour at work and at home, around issues such as office occupancy and food choices, as well as energy use, it became clear that the models of people inherent in many kinds of ‘intervention’ are simply not nuanced enough to address the complexity and diversity of real people, making situated decisions in real-life contexts, embedded in the complex webs of social practices that everyday life entails. (This is, I feel, something also lacking in many current behavioural economics-inspired treatments of complex social issues.)

Milton Keynes Station

Many of the issues with the ‘behaviour change’ phenomenon can be characterised as deficiencies in inclusion: the extent to which people who are the ‘targets’ of the behaviour change are included in the design process for those ‘interventions’ (this terminology itself is inappropriate), and the extent to which the diversity and complexity of real people’s lives is reflected and accommodated in the measures proposed and implemented. This suggests that a more participatory process, one in which people co-create whatever it is that is intended to help them change behaviour, is preferable to a top-down approach. Designing with people, rather than for people.

Another issue, noted by Carl DiSalvo, Phoebe Sengers and Hrönn Brynjarsdóttir in 2010, is the distinction between modelling “users as the problem” in the first place, and “solving users’ problems” in approaches to design for behaviour change. The common approach assumes that differences in outcome will result from changes to people—‘if only we can make people more motivated’; ‘if only we can persuade people to do this’; ‘if only people would stop doing that’—overcoming cognitive biases, being more attentive, caring about things, being more thoughtful, and so on.

But considering questions of attitude, beliefs or motivations in isolation rather than in context—the person and the social or environmental situation in which someone acts (following Kurt Lewin and Herbert Simon)—can lead to what is known as the fundamental attribution error. Here, for example, some behaviour exhibited by other people—e.g. driving a short distance from office to library—is attributed to ‘incorrect’ attitudes, laziness, lack of motivation, or ignorance, rather than considering the contextual factors which one might use to explain one’s own behaviour in a similar situation—e.g. needing to carry lots of books (this example courtesy of Deborah Du Nann Winter and Susan M. Koger).

So, framing behaviour change as helping people do things better, rather than trying to ‘overcome irrationality’ as if it were something that exists independently of context, offers a much more positive perspective: solving people’s problems—with them—as a way of enacting behaviour change. The starting point becomes trying to understand, in context, the problems that people are trying to solve or overcome in everyday life, rather than adopting a model of defects in people’s attitudes or motivation which need to be ‘fixed’.

Lord Stand By Me

Something that has arisen, for me, during ethnographic research and other contextual enquiry around things like interaction with heating systems, energy (electricity and gas) use more widely — and even seemingly unrelated issues such as neighbourhood planning, or a community group’s use of DropBox — is the importance of people’s understanding and perceptions of the systems around them. Questions about perceived agency, mental models of how things work, assumptions about what affects what, conflating one concept or entity with another, and so on, feed into our decision processes, and the differences in understanding can cause conflict or undesired outcomes for different actors within the system.

As Dan Hill puts it, if we can “connect [people’s] behaviour to the performance of the wider systems they exist within” we can help them “begin to understand the relationships between individuals, communities, environments and systems in more detail”.

'Pig Ears' outside the Saïd Business School, Oxford

But it seems as though most approaches to design for behaviour change—and it’s a rapidly growing field under different labels—either ignore questions around understanding entirely, or try to find out about how users (mis)understand things, and then attempt to change users’ understanding to make it ‘correct’. Many, in fact, start straight out to try to change understanding without trying to find anything out about users’ current understanding. A few (but not enough, perhaps) try to adjust the way a system works so that it matches users’ understanding. (This is a development of something I explored in a London IA talk a few years ago.)

Also, I must emphasise at this point that ‘behaviour change’ is not really a thing at all. ‘People doing something differently’ covers so much, across so many fields and contexts, that it’s silly to think it can be assessed properly in a simple way.

If anyone is really an ‘expert’ in ‘behaviour change’, it is parents and teachers and wise elderly raconteurs of lives well lived, children with youthful clarity of insight, people who strike up conversations with strangers on the bus, or talk down people about to jump off bridges: optimistic, experienced (or not) human students of human nature, not someone who sees ‘the public’ as a separate category to him- or herself, ripe for ‘intervention’.

Not for Public Use, Class 172 London Overground train

The Internet of Things as an innovation space

One of the nicest things about the Internet of Things phenomenon—and indeed the Quantified Self movement—as opposed to that other, related, topic of our time, the top-down ‘Smart City’, is the extent to which it crosses over with the bottom-up, almost democratic, Maker movement mentality. I’m using ‘the IoT’ here as a broad category for the potential to involve objects and sensors and networks in areas or situations that previously didn’t have them.

The Internet of Things, through initiatives such as Alexandra Deschamps-Sonsino’s IoT meetups and others—while undoubtedly boosted commercially by Gartner Hype Cycle-baiting corporate buzzword PowerPoints—has been to no small extent driven by people doing this stuff for themselves. And helping each other to do it better. The peer support for anyone interested in getting into this area is immense and impressive: you can bet that someone out there will offer assistance, suggest ways round a problem, and share their experience. The barriers to entry are relatively low, and there are organisations and projects springing up whose rationale is based around lowering those barriers further.

The IoT is a huge von Hippel user innovation space, and it involves not just innovation by users, but innovation that is about building things. Its very sustenance is people building things to try out hypotheses, addressing and reframing their own problems responding to their own everyday contexts, modifying and iterating and joining and forking and evolving what they’re doing, putting the output from one project into the input of another, often someone else’s. And yet it is still quite a small community in a global sense, overrepresented in the echo-chamber of the sorts of people likely to be reading this article.

Home Energy Hackday, Dana Centre

Constructionism and co-creation

I suspect there is something about the open structure of many IoT technologies (and those which have enabled it) which has made this kind of distributed, collaborative community of builders and testers and people with ideas more likely to happen. It may just be the openness, but I think it’s more than that. There are three other elements which might be important:

  • Linking the real world to a virtual, abstract, invisible one. Even if an IoT project is about translating one physical phenomenon into another, this action comes about through links to an invisible world. I don’t know for certain why that might be important, but I think it may be that it triggers thinking about how the system works, in a way that is still somewhat outside our everyday experience. This kind of action-at-a-distance retains some magic, in the process calling new mental models or simulations into existence…
  • …which are then tested and iterated, because nothing ever works first time. This means people learn through doing things, through coming up with ideas about how things work, and testing those hypotheses by their own hand, often understanding things at quite different levels of abstraction (but that still being just fine). It’s not a field that’s particularly suited to learning from a book (despite some excellent contributions)…
  • …and indeed the boundaries of what the IoT is for are so fluid and expansive in a ‘What use is a baby?’ sense that the goal is one of exploration rather than ‘mastery’ of the subject. There is no right or wrong way to do a lot of this stuff, nor limits imposed by any kind of central authority.

I’m no scholar of educational theory, but it seems that these kinds of characteristics are similar to what Seymour Papert, father of LOGO and student of Jean Piaget, termed constructionism—in the words of the One Laptop Per Child project,

“a philosophy of education in which children learn by doing and making in a public, guided, collaborative process including feedback from peers, not just from teachers. They explore and discover instead of being force fed information”.

Story Machine workshop at The Mill, Walthamstow

Constructionist learning (whether with children or adults) is not a ‘leave them to it’ approach: it involves a significant degree of facilitation, including designing the tools (like LOGO, or Scratch) that enable people to create tools for themselves. Returning to the design context, this is a central issue in discussions of participatory design, co-design and co-creation—to what extent, and how, designers are most usefully involved in the process. What are the boundaries of co-creation? How do they differ in different contexts? Is the progression from design for people to design with people to design by people an inevitability? Whither the designer in the end case?

Setting aside this kind of debate for the moment, I am going to say that for the purposes of this article:

  • involving people (‘users’, though they are more than that) in a design process…
  • to address problems which are meaningful for them, in their life contexts…
  • in which they participate through making, testing and modifying systems or parts of systems…
  • partly facilitated or supported by designers or ‘experts’…
  • in a way which improves people’s understanding of the systems they’re engaging with, and issues surrounding them…

meets a definition of ‘constructionist co-creation’.

Education City, Doha

Behaviour change through constructionist co-creation

Now, let’s go back to behaviour change. I mentioned earlier my contention that much of what’s wrong with the ‘behaviour change’ phenomenon is about deficiencies in inclusion. People (‘the public’) are so often seen as targets to have behaviour change ‘done to them’, rather than being included in the design process. This means that the design ‘interventions’ developed end up being designed for a stereotyped, fictional model of the public rather than the nuanced reality.

Every discipline which deals with people, however tangentially, has its own models of human behaviour—assumptions about how people will act, what people are ‘like’, and how to get them to do something different (as Susan Weinschenk notes). As Adam Greenfield puts it:

“Every technology and every ensemble of technologies encodes a hypothesis about human behaviour”.

Phone box, Isleworth

All design is about modelling situations, as Hugh Dubberly and Paul Pangaro and, before them, Christopher Alexander remind us. Even design which does not explicitly consider a ‘user’ inevitably models human behaviour in some way, even if by omitting to consider people. Modelling inescapably has limitations—Chris Argyris and Donald Schön suggested that “an interventionist is a man struggling to make his model of man come true”—but of course, although “all models are wrong… some are useful.”

In design for behaviour change, we need to recognise the limitations of our models, and be much clearer about the assumptions we are making about behaviour. We also need to recognise the diversity and heterogeneity of people, across cultures, across different levels of need and ability, but also across situations. This approach is something like attempting to engage with the complexity of real life rather than simplifying it away—in Steve Portigal’s words:

“rather than create distancing caricatures, tell stories… Look for ways to represent what you’ve learned in a way that maintains the messiness of actual human beings.”

What’s a way to do this? Co-creation, co-production—in a behaviour change context—enables us to include a more diverse set of people, leading to a more nuanced treatment of everyday life. This, in itself, represents an advance in inclusion terms over much work in this field. Flora Bowden and I have tried to take this approach as part of our work on the European SusLab energy project.

But going further, constructionist co-creation for behaviour change would enable people actually to create, test, iterate and refine tools for understanding, and influencing, their own behaviour. Just look at Lifehacker or LifeProTips, GetMotivated or even the venerable 43 Folders. People enjoy exploring ways to change their own behaviour, through experimenting, through discussion with others, and through developing their own tools and adapting others’, to help understand themselves and other people, and the systems of everyday life which affect what we do. Behaviour change could be direct—or it could be, perhaps more interestingly, directed towards exploring and improving our understanding of the systems around us.

Vodafone tower, on a car park roof in central London

Invisible infrastructures and the Internet of Things: avoiding the demon-haunted smart fridge

The thing is, the systems around us are complex and becoming more so, and often invisible—or “distressingly opaque”—in the process, which makes them more difficult to understand and engage with. This includes everything from ‘the Cloud’ (which, as Dan Hon notes, is coming to the fore with news stories such as celebrity photo hacking) to Facebook (as danah boyd puts it, “as the public, we can only guess what the black box is doing”) to CCTV and other urban sensor networks.

You are now entering a Bluetooth Zone (Right: An interesting infrastructure ‘business model’ from the Public Safety Charitable Trust.)

Timo Arnall, in his PhD thesis, introduces this issue using the example of smartphones, “perhaps the most visible aspect of contemporary, digitally-mediated, everyday-life. Yet the complex networks of systems and infrastructures that allow a smartphone to operate remain largely invisible and unknown.”

He goes on to explore, via some beautiful projects, another invisible infrastructure—RFID and near-field communication—and the possibilities of making this visible, tangible and legible.

Most diagrams or infographics aiming to illustrate the Internet of Things show visible lines connecting objects to each other, or to central hubs of some kind. But whatever forms the IoT takes, most of these are going to be ‘invisible by default’, in Mayo Nissen’s words (specifically referring to urban sensors). Invisibility might seem attractive, and magic (and we’ll get onto seamlessness in a bit) but by its very nature it conceals the links between things, between organisations, between people and purpose:

“Some sensing technologies capture our imagination and attract our constant attention. Yet many go unnoticed, their insides packed with unknowable electronic components, ceaselessly counting, measuring, and transmitting. For what purpose, or to whose gain, is often unclear… there is seldom any information to explain what these barnacles of our urban landscape are or what they are doing.”

Black Boxes & Mental Models

(Above and below: Black boxes and mental models: an exercise at dConstruct 2011. Some photos by Sadhna Jain.)

Back in 2011 I ran a workshop at dConstruct including an exercise where groups each received a ‘black box’, an unknown electronic device with an unlabelled interface of buttons, ‘volume’ controls and LEDs, housed in a Poundland lunchbox and badly assembled one evening while watching a Bill Hicks documentary and drinking whisky.

Internally — and so secretly — each box also contained a wireless transmitter, receiver, sound chip and speaker (basically, a wireless doorbell), and in one box, an extra klaxon. The aim was to work out what was going on — what did the controls do? — and record your group’s understanding, or mental model, or even an algorithm of how the system worked, in some form that could explain it to a new user who hadn’t been able to experiment with the device.

As people realised that the boxes ‘interacted’ with each other, by setting off sounds in response to particular button-presses, the groups’ explanations became more complex.

Each group used slightly different methods to investigate and illustrate the model, with unexpected behaviour or coincidences (one group’s box setting off the doorbell in another, but coinciding with a button being pressed or a volume control being turned) leading to some rapidly escalating complex algorithms.
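The escalation described above can be sketched as a toy simulation. This is a hypothetical reconstruction, not the actual workshop setup: the wiring table, number of boxes and coincidence rate are all invented for illustration. The point it demonstrates is the same, though: an observer who records every press-then-sound pairing as a ‘rule’ will recover the real hidden wiring, but will also accumulate spurious rules from other groups’ coincident presses.

```python
import random

# Invented hidden wiring: pressing button 2 on box A rings box B, etc.
# (Illustrative only -- not the real dConstruct boxes.)
HIDDEN_WIRING = {("A", 2): "B", ("B", 1): "C", ("C", 3): "A"}

def press(box, button, others_pressing):
    """Return the set of boxes that sound after this press,
    including sounds triggered by other groups' simultaneous presses."""
    sounds = set()
    if (box, button) in HIDDEN_WIRING:
        sounds.add(HIDDEN_WIRING[(box, button)])
    for other in others_pressing:
        if other in HIDDEN_WIRING:
            sounds.add(HIDDEN_WIRING[other])
    return sounds

def observe(trials=200, seed=0):
    """A naive observer: every (press -> sound heard) pairing
    is recorded as a rule of how the system works."""
    rng = random.Random(seed)
    inferred = set()
    for _ in range(trials):
        box, button = rng.choice("ABC"), rng.randint(1, 3)
        # ~30% of the time, another group happens to press at the same moment
        others = [(rng.choice("ABC"), rng.randint(1, 3))] if rng.random() < 0.3 else []
        for sounded in press(box, button, others):
            inferred.add((box, button, sounded))
    return inferred

rules = observe()
true_rules = {(b, n, t) for (b, n), t in HIDDEN_WIRING.items()}
print(len(true_rules), "real rules;", len(rules - true_rules), "spurious rules inferred")
```

The observer’s rulebook contains the three real rules plus a tail of coincidence-driven ones, which is roughly what happened with the groups’ rapidly escalating algorithms.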

We are now creating an even more complex world of black boxes: networked black boxes with their own algorithms, real and assumed, and those that depend on algorithms out of our hands — remote, changeable, strategic, life-changing — which we may not have any easy way of investigating. And which model us, the public, in particular ways.

(“Algorithm is going from black box code to black box language. Everything is being explained away as “algorithm”. No surprise really.” — Scott Smith, 6 July 2014)

As James Bridle puts it, “comprehension is impossible without visibility”:

“the intangibility of contemporary networks conceals the true extent of their operation… This invisibility extends through physical, virtual, and legal spaces.”

Bridle is talking about a policing context, but invisibility, or rather lack of transparency, is of course also a hallmark of crime and corruption, which often rely on intentionally complex systems. Dieter Zinnbauer’s concept of ambient accountability is very relevant here: systems can only be accountable if people can understand them, whether that’s windows in building-site hoardings or politicians’ expenses.

Or as Louise Downe has said:

“We can only trust something if we think we know how it works… When we don’t know how a thing works we make it up.”

What new superstitions are going to arise from smart homes, smart meters, smart cities? What will people make up? Are my fridge and Fitbit collaborating with Tesco and BUPA to increase my health insurance premiums? What assumptions are the systems in my daily life going to be making about me? How will I know? What are the urban legends going to be? How will this understanding affect people’s lives? How can we make use of what the IoT enables to help us understand things, rather than making things less understandable?

Cables, Downing College Cambridge, 2004

An opportunity

The opportunity exists, then, for more work which uses a constructionist approach to enable us—the public—to investigate and understand the complex hidden systems in the world around us, in the process potentially changing our mental models, behaviour and practice. Tools based around IoT technology, developed and applied practically through a process of co-creation with the public, could enable this particularly well. In general, co-creation offers lots of opportunities for designing behaviour change support systems that actually respond to the real contexts of everyday life. But the IoT, in particular, can enable technological participation in this.

We would have to start with particular domains where public understanding of a complex, invisible system in everyday life potentially has effects on behaviour or social practices, and where changing that understanding would improve quality of life and/or provide social or environmental benefit.

Ghosts, Old Street LT

Introducing ‘knopen’

I want to propose some examples of projects (or rather areas of practical research) that could be done in this vein, but before that—because I can—I am going to coin a new word for this. Knopen, a fairly obvious portmanteau of know and open, can be a verb (to knopen something) or an adjective (e.g. a knopen tool). Let’s say ‘to knopen’ conjugates like ‘to open’. We knopen, we knopened, we are knopening. Maybe it will usually be more useful as a transitive verb: We knopened the office heating system. The app helped us knopen the local council’s consultation process. Help me knopen the sewage system. Maybe it’s useful as a gerund: knopening as a concept in itself. Knopening the intricacies of the railway ticketing system has saved our family lots of money.

Tools for understanding

What does knopen mean, though? I’m envisaging it being the kind of word that’s used as description of what a tool does. We have tools for opening things—prying, prising, unscrewing, jimmying, breaking, and so on. We also have tools that help us know more about things, and potentially understand them—a magnifying glass, a compass, Wikipedia—but just as with any tool, they are better matched to some jobs than to others.

If I just use a screwdriver to unscrew or pry open the casing on my smart energy meter, and look at the circuitboard with a magnifying glass, unless I already have lots of experience, I don’t know much more about how it works, or what data it sends (and receives), and why, or what the consequences are of that. I don’t necessarily have a better understanding of the system, or the assumptions and models inscribed in it. I have opened the smart meter, but I haven’t knopened it. To knopen it would need a different kind of tool. In this case, it might be a tool that interrogates the meter, and translates the data, and the contexts of how it’s used and why, into a form I understand. That doesn’t necessarily just mean a visual display.

Meter cupboard

This, then, would be knopening: opening a system or part of a system (metaphorically or physically) with tools which enable you to know and understand more about how it works, what it does, or the wider context of its use and existence: why things are as they are. Knopening could include ‘knopening thyself’—understanding and reflecting on why and how you make decisions.

Knopening isn’t as involved as grokking. To grok something is at a much deeper level. Nevertheless, knopening could be transformative. Going back to the earlier discussion, knopening is basically a label for a process by which we can investigate and understand the complex hidden systems in the world around us, which could certainly change our mental models, behaviour and practice. Knopening is about understanding why.

Maybe knopen is a daft conceit, a ‘fetch’ that isn’t going to happen. But it’s worth a try. And I see that it also means ‘to button’ or ‘to knot’ in Dutch, but that’s not too awful. As my wife put it, “that’s quite sweet.” Probably ontknopen, unbuttoning or untying, would be closer in meaning to what I mean. Urban Dictionary tells us that knopen can also mean “the act of knocking on and opening a closed door simultaneously”, which is not inappropriate, I think.

Some areas of research for knopen

These are all about people making and using tools to understand—to knopen—the systems around them, in particular the whys behind how things work. They all have the potential to integrate the quantitative data from networked objects and sensors with qualitative insights from people themselves, in co-created useful and meaningful ways.

Please Don't Turn Me Off, I'm The Fridge :)

DIY for the home of the future

In the UK, “at least 60% of the houses we’ll be living in by 2050 have already been built” (and that quote’s from 2010). That means that whatever IoT technologies come to our homes, they will largely be retrofitted. The ‘smart home’ in practice is going to be piecemeal for most people, the Discman-to-cassette-adaptor-to-car-radio rather than a glossy integrated vision.

(Photo by Toyohara, used under a Creative Commons licence)

That’s something to bear in mind in itself, but even with this piecemeal nature, there’s still going to be plenty of invisibility—quite apart from whatever it is our fridges are going to be making decisions about, what will DIY look like?

What are people going to be able to choose to fit themselves? What systems will people be able to connect together? What’s the equivalent of a buried cable detector for data flows? What will Saturday afternoons be like with the IoT? Is it an electrician we need or a ‘data plumber’? What will happen when parts need to be replaced? When smart grids come along, for example, what is interaction with them going to look like? Can DIY work in that context? What happens if microgeneration becomes popular?

Could we use this DIY context strategically — as a way of engaging people in behaviour change, through active participation in experimenting and changing their own homes and everyday practices, using IoT technologies? How do we domesticate the IoT?


(Tom Coates’ House of Coates, and the Haunted Coates House)

Something in this space could be the core of the knopen concept: tools that enable us to understand and investigate the invisible systems around us, and the links between them, at home (or at work). Really basically, we could think of it as in-context system diagrams on everything: not just static, but explorable explanations in Bret Victor’s terminology, maybe even some kind of data traces. And those explanations don’t have to be physical diagrams — they can be ambient, responsive, exploring both the backstories and possible future states of systems.

Networked devices and sensors, inputs and outputs, everything the IoT provides, could show us explicitly how systems work both in and beyond our immediate home context — including our own actions, past, present and future (hence enabling us to change our behaviour), and those of other people. We would learn what a system assumes/knows about us, and how it makes decisions that affect us and others; how do we fit into these systems that pervade our homes?

Pipes in disabled toilet at RCA Battersea

Seams, streams and new metaphors

The idea of seamful design — in contrast to the seamlessness which so often seems to be the goal of advances in human-computer interaction — is useful here. We are used to systems being promoted as invisible, seamless, frictionless, as if this is necessarily always a good thing, from contactless payment to Facebook Connect. There’s no doubt that seamlessness can be convenient, but there’s a cost.

Matthew Chalmers, who has developed the ideas that Mark Weiser (father of calm technology, ubiquitous computing, etc) had around seamlessness and seamfulness, suggests that: “Seamfully integrated tools would maintain the unique characteristics of each tool, through transformations that retained their individual characteristics.”

Going slightly further than that, perhaps, by enabling people to experience the joins between systems, and the discontinuities, the texture of technologies — even making the seams not just ‘beautiful’ but tangible — we could help them understand better what’s going on, and interact with systems in a different way. As Karin Andersson says:

“The seams that are the most important are the ones that can improve a system’s functionality and when they are understood and figured out how they can become a resource for interaction by the user. If designers know how certain seams affect interactions, they can then incorporate them into an application and direct their effects into useful features of the system. This way, seamful design allows users to use seams, accommodate them and even exploit them to their own advantage”

Knopen is perhaps an attempt to enable people to make tools to make seams visible, or tangible, for themselves, where currently they are not. It is trying to turn seamlessness into seamfulness, then into understanding and empowerment, through enabling and facilitating investigation of those systems: brass rubbing for the systems of the home, perhaps.

Detail of Juliana, wife of Thomas de Cruwe, 1411, CC licensed by Amanda Slater

(Detail of Juliana, wife of Thomas de Cruwe, 1411. CC licensed by Amanda Slater)

Seams are important to mental models. In the 1990s, Neville Moray — drawing on an approach taken by cybernetician (and ‘requisite variety’ originator) Ross Ashby — explored the idea that a mental model can be modelled as a lattice-like network of nodes that are super- or subordinate to other nodes (not necessarily in the sense of power relations, but rather in terms of parts or categories). By this interpretation, different mental models of the same situation or system come down to things like:

  • two people’s models containing different sets of nodes
  • or, more specifically, conflating particular nodes or introducing distinctions between nodes where others treat them as the same thing
  • two people’s models connecting the same nodes in different ways

Seams are, perhaps, the links or gaps between nodes or groups of nodes. Intentional seamlessness is an attempt to hide these links or gaps by actually conflating particular nodes or groups of nodes from the user’s perspective. Seamlessness is saying, “This is one system, and these nodes are the same”. In doing this, it inherently removes the ability to see or inspect or question or understand these relations.
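As an illustrative sketch (not Moray’s formalism itself), two people’s mental models could be compared as small node-and-edge graphs. The node names and wiring below are invented for the example:

```python
# Illustrative sketch: comparing two people's mental models of a home
# heating system as node-and-edge graphs. All names here are invented.

def model_differences(model_a, model_b):
    """Return nodes and links present in one model but not the other."""
    nodes_a, edges_a = model_a
    nodes_b, edges_b = model_b
    return {
        "nodes_only_in_a": nodes_a - nodes_b,
        "nodes_only_in_b": nodes_b - nodes_a,
        "links_only_in_a": edges_a - edges_b,
        "links_only_in_b": edges_b - edges_a,
    }

# Person A treats the thermostat as directly controlling the radiators.
person_a = (
    {"thermostat", "radiator"},
    {("thermostat", "radiator")},
)

# Person B's model introduces a distinction (the boiler) that A conflates.
person_b = (
    {"thermostat", "boiler", "radiator"},
    {("thermostat", "boiler"), ("boiler", "radiator")},
)

diff = model_differences(person_a, person_b)
print(diff["nodes_only_in_b"])  # {'boiler'}
```

Seamless design, in these terms, is the act of presenting person B’s three nodes as if they were person A’s two.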

Ethernet cable looped back, Quality Hotel Panorama, Gothenburg

We are — and will shortly be even more so — surrounded by systems, in our homes and elsewhere, that are collecting, sending, receiving and storing data all the time, about us, our actions and our environments. And yet we are generally not privy to what’s going on, what decisions are being made, where the data come from and where they go.

It might not seem a major issue at present to most people — even in the light of Snowden’s revelations and all that’s come since  — but once, for example, smart meters are dynamically adjusting pricing for electricity and gas on a large scale, a greater number of people are going to want to understand where those prices are coming from, and how these systems work. Compare the — often amusing — reactions when people explore what Google Ads or Facebook thinks it knows about them. Many people seem to enjoy this kind of exploration — all the more reason for a constructionist approach.

AC will not work when door is open, Four Seasons, Doha

We need a narrative context for the streams in our daily lives: what is the story of the sensors? What is the meaning of what’s going on? Even a Dyson-style ‘transparent container’ metaphor for data, showing us what’s being collected, or colour-coded statuses on devices, would give us some more understanding. This is something like ambient accountability in Dieter Zinnbauer’s terminology, but involving us, the public, the ‘end user’, much more explicitly.

Metaphors could play an important role here, or perhaps new metaphors. Representing a new, unfamiliar system in terms of more familiar ones is maybe obvious, and has its limitations (except in Borges, the map is never the territory), but as with our discussion of new superstitions earlier, it’s almost inevitable that new metaphors will arise for parts of these invisible systems in the home and elsewhere, as part of mental models and in people’s explanations to others of how they work. Metaphors are very commonly used in design for behaviour change, from gardens to sarcastic overlords.

What does energy look like?

(What does energy look like? From the V&A Digital Design Weekend 2014. Photo by V&A Digital.)

We can learn quite a lot from exploring people’s understanding and mental imagery around invisible systems. A project Flora Bowden and I have been doing over the last couple of years involves asking people to draw ‘what energy looks like’; we’ve also tried it with concepts such as ‘clean’ and ‘dirty’, and there are large scale projects such as Can You Draw the Internet? There are insights for the design of new kinds of interfaces, of course, but also something more fundamental about how people perceive and relate to intangible things. Almost by definition, people use metaphors (or metonyms) of one kind or another to visualise abstract or unseen concepts — what would they look like for invisible systems in our homes?

Could we use new metaphors strategically, to help people understand new systems? What should they be? How do they link to behaviour change in this context? Bringing it back to DIY, what metaphors are going to be used to get people interested in fitting these systems to their homes in the first place?

Ham Island, Old Windsor

You’re not alone

Moving away from the home, this next group of ideas would use IoT technologies to enable ‘peer support’ for decision making: connecting people to others facing similar situations, and enabling people to understand each other’s thinking and what worked for them (or not). The aim of this knopening of situations would be empathy, but also practical advice and support.

Understanding—and reflecting on—how you think, and how other people approach the same kinds of situation, can help change mental models, support behaviour change in the context of everyday practices (learning from others what worked for them, and why), and tackle attribution errors, as mentioned earlier, by bridging the gaps between our own thinking and our assumptions about others’ behaviour.

The contexts and domains where this could be useful range from physical and mental health, to route planning, to home improvement, to financial decisions, to any situation where a combination of networked objects and/or sensors, combined with qualitative insights from people who are part of the system, could help.

Some specific ways of implementing You’re not alone might include:

Windows XP Event Viewer
(Windows XP Event Viewer — image from

The Shared ‘Why?’

  • This would be a tool for annotating situations with ‘what your thinking is’ as you do things (that may be logged automatically anyway) — a kind of ‘Why?’ column in the event logs of everyday life.
  • The question might be prompted automatically by certain situations being recognised (through sensor data) or could also be something you choose to record. These ‘Whys’ would then be available to your future self, and others (as you choose) when similar situations arise.
  • My thinking here is that (as Tricia Wang points out), the vast quantities of Big Data generated and logged by devices, sensors and homes and infrastructure, are largely devoid of human contexts—the ‘Why?’, the ‘thick’ data—that would give them meaning. There’s a great opportunity for introducing a system which makes this easier to capture. It could be an academic or design practitioner research tool, but my main priority is that it be actually useful to the people using it.
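A minimal sketch of what a ‘Why?’ column in the event logs of everyday life might look like. The field names and the logged events here are hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class LoggedEvent:
    """A sensed event, optionally annotated with the person's own 'Why?'."""
    timestamp: datetime
    event: str                  # what the sensors logged automatically
    why: Optional[str] = None   # the human context, added by the person

log = [
    LoggedEvent(datetime(2014, 12, 1, 7, 30), "heating boosted to 24C"),
    LoggedEvent(datetime(2014, 12, 1, 7, 31), "kitchen window opened",
                why="the kitchen smelled of burnt toast"),
]

def annotate(log, index, why):
    """Attach a 'Why?' to an already-logged event, after the fact."""
    log[index].why = why

# The person (or their future self) fills in the missing context.
annotate(log, 0, "guests staying over; they feel the cold")
unexplained = [e.event for e in log if e.why is None]
print(unexplained)  # []
```

The point of the sketch is the asymmetry: the `event` column can be captured automatically, but the `why` column can only come from the person, prompted or volunteered.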

Annotating household objects to understand thermal comfort

(Annotating household objects to understand thermal comfort. From a study by Sara Renström and Ulrike Rahe at Chalmers University of Technology, Gothenburg.)

  • Asking people to annotate real-life situations with simple paper labels or arrows has worked well as a research method for eliciting people’s stories, meanings and thought processes around interaction with particular devices, and the sequences they go through. Similarly, even simple laddering or 5 Whys-type methods can be used to uncover people’s heuristics around everyday activities. But how could these kinds of methods be made more useful for those doing the annotation or answering the questions—and for others too?
  • While there exist research methods such as experience sampling and sentiment mapping, with plenty of location- or other trigger-based mobile apps, these largely focus on mood and feelings, rather than the potentially richer question of ‘Why?’. Yet Facebook and Twitter have shown us that short-form status updates, with actual content (mostly!), are something people enjoy producing and sharing with others. When I worked on the CarbonCulture at DECC project, one of the most successful features (in terms of engagement) of the OK Commuter travel logging app was a question prompting users to describe that morning’s commute with a single word, which often turned out to be witty, insightful and revealing of intra-office dynamics around topics such as provision of facilities for cyclists.
  • Clearly there are lots of questions here about validity and privacy. Would people only log ‘Whys’ that they wanted others to know? Who would have access to my ‘Whys’? Would they ‘work’ better in terms of empathy or behaviour change if linked to real names or avatars than anonymously? We would have to find ways of addressing and accommodating these issues.

There are some parallels with explicitly social projects such as the RSA’s Social Mirror Community Prescriptions, but also with work in naturalistic decision making. For example, there are projects exploring how Gary Klein’s recognition-primed decision model of how experts make decisions (based on a mixture of situational pattern recognition and rapid mental simulation) can be ‘taught’ to non-experts. A constructionist approach seems very appropriate here.

The wall of a fish restaurant in Gothenburg

Helpful ghosts: ambient peer support

  • What this would involve is essentially being able to create helpful ‘ghosts’ for other people, which would appear when certain situations or circumstances, or conjunctions of conditions, were detected, through IoT capabilities. You could record advice, explanations, warnings, suggestions, motivational messages, how-to guides, photos, videos, audio, text, sets of rules, anything you like, which would be triggered by the system detecting someone encountering the particular conditions you specified. That could be location-based, but it could also be any other condition. It’s almost like a nice version of leaving a note for your successor, or anyone who faces a similar situation.

(The Stone Tape (BBC, 1972). Image from

  • The ghosts wouldn’t be scary, or at least I hope not. Maybe ghost is the wrong word. The idea obviously has parallels with Marley’s Ghost in Dickens’ A Christmas Carol—and the feedforward / scenario planning / design futures of the Ghost of Christmas Yet-To-Come—but what directly inspired me was Nigel Kneale’s The Stone Tape (probably in turn inspired by archaeologist and parapsychologist Tom Lethbridge’s work), in which ghosts are explained as a form of recording somehow left behind in the fabric of buildings or locations where strong emotions have been felt. Kevin Slavin’s talk at dConstruct 2011, and Tom Armitage’s ghostcar, are also inspirations here. And I have recently also come across Joe Reinsel’s work on Sound Cairns, which has some very clever elements to it.
  • Maybe it’s better to think of this like If This Then That (see below), but allowing you to create rules that trigger events for other people instead of just for you.
  • How would it be different to Clippy? (thanks to Justin Pickard for making this connection). We should aim to learn from the late Clifford Nass’s work at Stanford on why Clippy was so disliked, and how to make him more loveable. It would also be important that the helpful ghosts did not just become a form of ‘pop-up window for real life’. Advertisers should not be able to get hold of it. It should always be opt-in, and the emphasis should be on participation (creating your own ghosts in response) and understanding. It is meant to be at least a dialogue, a collaborative approach to learning more about, and understanding—knopening—a situation, and then passing on that understanding to others.
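A toy sketch of the mechanism: a ‘ghost’ is a recorded message paired with a predicate over the currently sensed situation, and appears when the situation matches. The condition keys and messages are invented for the example:

```python
# Sketch: a 'helpful ghost' is a recorded message plus a condition over
# the current sensed situation; when the situation matches, the message
# appears. Condition keys ('location', 'task') are invented examples.

ghosts = []

def leave_ghost(condition, message):
    """Record a message to be triggered for whoever matches `condition`."""
    ghosts.append((condition, message))

def situation_matches(condition, situation):
    """True if every key/value in the condition holds in the situation."""
    return all(situation.get(k) == v for k, v in condition.items())

def ghosts_for(situation):
    """All recorded messages whose conditions match this situation."""
    return [msg for cond, msg in ghosts if situation_matches(cond, situation)]

leave_ghost(
    {"location": "meter cupboard", "task": "reading the meter"},
    "The display cycles; wait for the kWh screen before writing it down.",
)

print(ghosts_for({"location": "meter cupboard", "task": "reading the meter"}))
```

In practice the conditions would come from IoT sensing rather than hand-built dictionaries, but the opt-in, person-to-person shape of the thing is the same.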

Pigeon deciding whether to take the District Line or North London Line from Richmond station

A Collective If This Then That

  • This is probably already possible to achieve with clever use of If This Then That together with some other linked services, but the basic idea would be a system where multiple people’s inputs—which could be a combination of quantitative sensor data and qualitative comments or expressions of sentiment or opinion—together can trigger particular outputs. These might also be collective, or might apply only in a single location or context.
  • There are obvious top-down examples around things like adaptive traffic management, but it would be more interesting to see what ‘recipes’ emerge from people’s—and communities’—own needs. There could also be multiple outputs to different systems. They could work within a family or household or on a much bigger scale—connecting families who are often apart, for example.
  • The knopen element comes with being able to understand—right from the start—how to make action happen, and collaboratively create recipes which address a community’s needs, for example. The system might be complex but would be not only visible, but fully accessible since the participants would be involved in creating and iterating it.
  • It could involve ‘voting’ somehow, but it would also be interesting to see effects emerge from unconscious action or a combination of physical effects read by sensors and social or psychological effects from people themselves.
  • I’m inspired here particularly by Brian Boyer and Dan Hill’s Brickstarter—in which the collective desire/need/interest of the crowdfunding model is applied to urban infrastructure—but also by the academic research (and workshop at Interaction 12) I did exploring ‘if…then’-type rules of thumb and heuristics that people use for themselves, often implicitly, around things like heating systems, and how different people’s heuristics differ.
  • There’s some really interesting academic research going on at the moment by teams at Brown and Carnegie Mellon—e.g. see this paper by Blase Ur et al from CHI 2014—on using IFTTT-like ‘practical trigger-action programming’ in smart homes as a way to enable a more easily programmable world, and it would be great to explore the potential of this approach for improving understanding and engagement with the systems around us. As Michael Littman puts it:

“We live in a world now that’s populated by machines that are supposed to make our lives easier, but we can’t talk to them. Everybody out there should be able to tell their machines what to do.” (Professor Michael Littman, Brown University)
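The collective version of a recipe could be as simple as a quorum over multiple people’s inputs. This is only a sketch; the thresholds, input names and action are invented:

```python
# Sketch: a collective 'if this then that' recipe fires only when enough
# different participants' inputs (sensor readings or expressed opinions)
# satisfy the trigger condition. All names and numbers are invented.

def collective_recipe(inputs, predicate, quorum, action):
    """Run `action` if at least `quorum` participants' inputs satisfy `predicate`."""
    agreeing = [person for person, value in inputs.items() if predicate(value)]
    if len(agreeing) >= quorum:
        return action(agreeing)
    return None  # not enough agreement: the recipe stays quiet

# Hypothetical: three households report hallway temperatures; if two or
# more are below 16C, trigger a shared notice to the housing association.
readings = {"flat_1": 15.0, "flat_2": 14.5, "flat_3": 18.0}
result = collective_recipe(
    readings,
    predicate=lambda temp: temp < 16.0,
    quorum=2,
    action=lambda who: f"notify housing association: {sorted(who)}",
)
print(result)  # notify housing association: ['flat_1', 'flat_2']
```

Because the participants write the predicate, quorum and action themselves, the recipe is knopen from the start: there is no hidden logic to reverse-engineer.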

Trackbed at St Margaret's (London)

Storytelling for systems: Five whys for public life

‘Five whys’ is a method for what’s called root cause analysis, used in fields as diverse as quality management and healthcare process reform. It’s similar to the interview technique of laddering, which has seen some application in user experience design. The basic principle is that there is never only one ‘correct’ reason ‘Why?’ something happens: there are always multiple levels of abstraction, multiple levels of explanation, multiple contexts—and each explanation may be completely valid within the particular context of analysis. In ‘solving’ the problem, the repeated asking of ‘Why?’ enables reframing the problem at further levels up (or down) this abstraction hierarchy, as well as giving us the ‘backstory’ of the current state (which is often considered to be a problem, hence the analysis).
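A five-whys chain is just a sequence of question/answer pairs in which each answer becomes the subject of the next ‘Why?’. The content of this worked example is invented:

```python
# Sketch: a five-whys chain as question/answer pairs, each level
# reframing the problem one step up the abstraction hierarchy.
# The example content is invented for illustration.

five_whys = [
    ("Why is the hallway cold?",
     "The radiator is off."),
    ("Why is the radiator off?",
     "The timer turns the heating off at 9am."),
    ("Why does the timer do that?",
     "It was set for a previous occupant's routine."),
    ("Why was it never changed?",
     "The controls are hard to understand."),
    ("Why are the controls hard to understand?",
     "The interface assumes a mental model most people don't share."),
]

# Each answer is the subject of the next question; the final answer
# reframes the problem (the interface, not the radiator).
for i, (question, answer) in enumerate(five_whys, start=1):
    print(f"{i}. {question} {answer}")
```

Note how the ‘solution’ changes at each level: bleed the radiator, reset the timer, or redesign the controls, depending on where in the chain you choose to intervene.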

It’s a practical instantiation, in a way, of Eliel and Eero Saarinen’s tenet of trying to design for the “next largest context—a chair in a room, a room in a house, a house in an environment, environment in a city plan”. In some previous work, I tried exploring (not particularly clearly) the notion that this kind of approach, in reframing the problem at multiple levels, could essentially provide us with multiple suggested ‘solutions’ by inverting problem statements at each level of abstraction.

Construction work, Doha

Planning notice, Kensington, London

So what do we do with this? How can IoT technology be useful? Imagine being able to ‘ask’ the physical and societal infrastructure around you—the street lamps, the building site, the park fountain, but also the local council, the voting booth, the tax office, your children’s primary school’s board of governors, the bus timetable, Starbucks, the numberplate recognition camera, the drain cover, the air quality sensors in the park, the National Rail Conditions of Carriage—Why?

Why are they set up the way they are? Who came up with the idea? (not for blame, but for empathy). What’s the story behind the systems? What influenced how they’re operating, how the decisions were made, how they came to be?

What data do they collect, and what do they do with the data? What’s the revision history for this government policy? What were the reasons given for that cycle path being routed that way? What’s the history of planning applications for buildings on this site? What were the debates that led to the current situation?

And for each of those, the answers would be explained at multiple levels—maybe not exactly five ‘whys’, but more than one simplistic reason, devoid of context.

SEEB Cables Cross Here, Twickenham

This isn’t just Freedom of Information—although it intersects with that. It’s more about understanding the decision process, the constraints and priorities others have had to contend with along the way. Kind of autobiographies for systems (including public objects, perhaps, but also institutions—maybe even Dan Hill’s ‘Dark Matter’). Or a cross between blue plaques (or rather, Open Plaques), ‘For the want of a nail’, WhatDoTheyKnow, City-Insights, FixMyStreet, Dieter Zinnbauer’s Ambient Accountability, TheyWorkForYou, Historypin, Wikipedia’s revision history, Mayo Nissen’s ‘Unseen Sensors’ and a sort of transparent reverse IFTTT where you can see what led to what.

Cables, Berkeley

From a technology point of view, you could do it very simply with smartphones and QR codes or NFC tags stuck on bits of street furniture (for example), but it would be possible to do much more when systems have a networked capability and presence—when data are being collected or received, or transmitted, or when one piece of infrastructure is informing another.

Of course, it could be seen as quite antagonistic to authority: this kind of transparent storytelling could reveal how inept some institutions—and potentially some individuals—are at making decisions, although it could also help generate empathy for people facing tough decisions, in the sense of revealing the trade-offs they have to make, and so increase public engagement with these systems by showing both their complexity (potentially) and their human side. Peerveillance, sousveillance, equiveillance, yes—but preferably framed as storytelling.

The challenge would be finding positive stories to lead with (thanks to Duncan Wilson for this point). Suggestions are very welcome.

Asset mapping, Kentish Town

Conclusion: what next?

This has been a long, rambling and poorly focused article. It tangles together a lot of ideas that have been on my mind, and others’ minds, for a while, and I’m not sure the tangle itself is very legible. But I welcome your comments.

My basic thesis is that IoT technology can be a tool for behaviour change for social and environmental benefit, through involving people in making systems which address problems that are meaningful for them, and which improve understanding of the wider systems they’re engaging with.

I think we can do this, but, as always, doing something is worth more than talking about it. As an academic, I ought to be in a position to find funding and partners to do something interesting here. So I am going to try: if you’re interested, please do get in touch.

The End, College Hall, Cooper's Hill, 2004

Work in progress: Ambient audible energy data

The three instruments you hear here represent the electricity use of three items of office infrastructure – the kettle, a laser printer, and a gang socket for a row of desks – in the Helen Hamlyn Centre office over 12 hours from midnight on a Sunday to lunchtime on a Monday, in December, monitored using CurrentCost IAMs. The figures were scaled to provide ranges that sounded better, and converted into a MIDI file using John Walker’s csvmidi and then Aria Maestosa.

The ‘ticks’ indicate each hour’s passing. The ‘honk’ (Tenor Sax) is the kettle (up to 1.5kW when in use). The ‘whine’ (Synth Brass 1) is the Kyocera laser printer. The other synth (Polysynth) is the gang socket, which mainly had a couple of laptops (15W-50W) plugged into it when people were in the office, and a charger (1W) plugged into it otherwise. Lower pitch indicates greater electricity use, hence the high-pitched whine is the background power of the printer (about 10W on standby, rising to 300W-500W when in use).
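The mapping described above (greater electricity use to lower pitch) can be sketched as a simple scaling function. The wattage and note ranges below are invented for illustration, not the ones used for the actual recording:

```python
# Sketch: map a power reading (watts) to a MIDI note number, with
# higher power giving a LOWER pitch, as in the recording described
# above. The ranges chosen here are illustrative, not the actual ones.

def watts_to_midi_note(watts, w_min=1.0, w_max=1500.0,
                       note_low=36, note_high=84):
    """Linearly map [w_min, w_max] watts onto [note_high, note_low]."""
    watts = max(w_min, min(w_max, watts))        # clamp to the known range
    fraction = (watts - w_min) / (w_max - w_min)  # 0.0 at w_min, 1.0 at w_max
    return round(note_high - fraction * (note_high - note_low))

print(watts_to_midi_note(1.0))     # 84: a 1W charger sits at the top
print(watts_to_midi_note(1500.0))  # 36: the kettle at full boil is the lowest
```

A sequence of such note numbers per appliance, one reading per time step, is essentially what ends up in the MIDI file; tools like csvmidi then handle the file format itself.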

As the audio starts, you can hear, over the background whine of the printer, the kettle come on as the security guard makes himself a middle-of-the-night cup of tea. Then, early in the morning, the kettle is used three times by the cleaners – twice in quick succession (reboiling?) and then once again. Suddenly, from 9.30, as office staff arrive, the kettle goes on again, laptops are plugged in, the printer starts printing and the energetic hubbub of office life becomes apparent.

Sound of the Office

Data sonification has been in the news a bit recently, from Domenico Vicinanza’s ‘Sound of Space Discovery’ to Opower’s ‘Chicago in the Wintertime’. It’s something that’s long intrigued me, but if I’m honest, has underwhelmed me in terms of either its actual utility or indeed its impact aesthetically. A (visual) graph is useful because I can use it to find something out. A table of numbers, likewise, even if patterns are less immediately evident. But a beautiful orchestral piece that just happens to draw on aggregated data which are a long way from anything I can comprehend, in scale or meaning, doesn’t tell me anything, somehow. Sarah Angliss was pretty much spot-on in this 2011 Mad Art Lab post.

Energy use is the focus of one of the main projects I’m working on, and one of the strongest findings that came out of interviews and co-creation work with householders that Flora Bowden and I did last summer and autumn was the notion that the invisibility of energy was a major component of householders’ lack of understanding, which contributed – by their own admission – to energy waste.

More than one person specifically suggested that being able to ‘listen’ to whether appliances were switched on or not, and, more interestingly, what state they were in (e.g. listening to a washing machine will give you a good idea as to where it is in its cycle), was potentially more useful for understanding how to reduce energy use than a flashy visual display or dashboard. Sound is potentially even more ‘glanceable’ than glanceables. Even hearing what you’d left on as you went out of the door would be useful. There are echoes of Mark Weiser’s Calm Technology including Natalie Jeremijenko’s Live Wire (Dangling String) but also the ‘useful side-effects’ of things like the ‘clacking’ sound of mechanical railway departure boards as an indicator that the display has updated, as Adrian McEwen and Hakim Cassimally point out in their excellent Designing the Internet of Things.

We also explored aspects of this idea further in our Seeing Things project with RCA students back in November, with contributors including Dave Cranmer and Dagny Rewera having an audio/visual sensory translation element to their work. Of the participants, Ted Hunt took an explicitly multi-sensory approach with his project, including audio, while Francesco Tacchini, with Julinka Ebhardt and Will Yates-Johnson, subsequently went on to create the incredible Space Replay where audio is both monitored and played back in public space.

I’m not saying the ‘Sound of the office’ audio above is particularly good. It was more of a let’s-play-around-with-some-data experiment, and I’ve since found that proper sonification platforms exist. But the approach is something I very much want to explore and build on – possibly whole-house energy use audio disaggregated by appliance, or by activity – and it raises so many interesting questions around what is most useful or most effective at actually either influencing energy use, or helping people understand the complex systems around them. Should it be aesthetically pleasing, or horrible enough that it triggers you to turn things off? Is that just the kind of over-simplification that makes most energy monitor displays ineffective? Should the audio be real-time, or provide a summary? Should it be paired with visuals (e.g. like Alexander Chen’s beautiful MTA.ME, or Listen to Bitcoin / Listen to Wikipedia)? How much should it try to be ‘music’ versus, basically, an ‘auditory affordance’ or alarm system? Should there be something about the quality of the sound that indicates something, e.g. load on the National Grid? (Thanks to Aideen McConville and Jack Kelly for this suggestion.)
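For anyone wanting to try the same kind of play-around-with-some-data experiment, the basic move is just mapping a data series onto pitch. Here’s a minimal sketch in Python, using only the standard library; the wattage figures and the 220–880 Hz pitch range are entirely invented for illustration, not from the SusLab project data:

```python
# Minimal data sonification sketch: map hypothetical hourly energy
# readings (watts) onto sine-wave pitches and write them out as a WAV.
import math
import struct
import wave

readings = [120, 150, 900, 2400, 2300, 800, 300, 150]  # invented watts

RATE = 44100       # samples per second
NOTE_SECS = 0.4    # duration of each tone


def watts_to_freq(w, lo=220.0, hi=880.0, w_max=3000.0):
    """Map a wattage linearly onto a pitch range (A3 up to A5)."""
    return lo + (hi - lo) * min(w / w_max, 1.0)


frames = bytearray()
for w in readings:
    freq = watts_to_freq(w)
    for i in range(int(RATE * NOTE_SECS)):
        # 16-bit signed sample at half amplitude
        sample = 0.5 * math.sin(2 * math.pi * freq * i / RATE)
        frames += struct.pack("<h", int(sample * 32767))

with wave.open("energy_sonification.wav", "wb") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit
    f.setframerate(RATE)
    f.writeframes(bytes(frames))
```

Crude as it is, even this raises the questions above: a linear watts-to-pitch mapping makes high consumption literally shrill, which may be exactly the ‘horrible enough to make you turn things off’ quality, or exactly the over-simplification that makes displays ineffective.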

The field is interesting partly because, post-PhD, I’ve come to realise that what I’m interested in is not so much the question of “how do we influence behaviour?” as an end in itself, but something more like “how do people understand complex systems of which their behaviour is a part, and how do we help them understand those systems better?”. There’s a substantial blog post coming on that, which will hopefully draw together lots of interests and ideas, from the IoT to heuristics to seamfulness to affordances to mental models, and set out a kind of research programme which I might be able to get some funding for. But in the meantime, this is certainly part of the direction we’re going in with the ‘energy feedback’ part of the RCA’s work on the SusLab project. It’s going to be ambient, and it’s going to involve more than just numbers and graphs.

Direct link for MP3 file

Code as control

'You removed the card!'

In the earlier days of this blog, many of the posts were about code, in the Lawrence Lessig sense: the idea that the structure of software and the internet and the rules designed into these systems don’t just parallel the law (in a legal sense) in influencing and restricting public behaviour, but are qualitatively different, enabling distinct forms of affordance and constraint. Designers (and developers) — or in many cases those overseeing the process — in this sense potentially wield a lot of (political) power.

Report: Most people just trying to get by

Cubicles (image by Michael Lokner, used under CC licence)

Most people, for most of their day, are trying to get by. Every day is essentially a series of problems, some minor, some major, some requiring more thought than others. Some we care a lot about; some we wish we didn’t have to. Some are welcome; some we even bring on ourselves because we enjoy solving them; others are deeply unwelcome. Some we care about initially, but then find we no longer do; some we don’t care about to start with, but they become important to us over time.