
Exploring Qualitative Displays and Interfaces

Windsock on Burgh Island, Devon

by Dan Lockton, Delanie Ricketts, Shruti Aditya Chowdhury (Imaginaries Lab, Carnegie Mellon School of Design) and Chang Hee Lee (Royal College of Art)

Much of how we construct meaning in the real world is qualitative rather than quantitative. We think and act in response to, and in dialogue with, qualities of phenomena, and relationships between them. Yet, quantification has become a default mode for information display, and for interfaces supporting decision-making and behaviour change.

There are more opportunities within design and human-computer interaction for qualitative displays and interfaces: for information presentation, and as an aid to help people explore their own thinking and relationships with ideas. Here we attempt one dimension of a tentative classification to support projects exploring opportunities for qualitative displays within design.

This blog post is a slightly edited version of a late-breaking work submission presented at CHI’17, May 06—11, 2017, Denver, CO, USA, and published in the CHI Extended Abstracts at http://dx.doi.org/10.1145/3027063.3053165

Download this article as a PDF.

Water trapped in train carriage door is a form of qualitative display of the train’s acceleration, deceleration and inertia.

Introduction

Outside of the digital, we largely live and think and act and feel in response to, and in dialogue with, the perceived qualities of people, things and phenomena, and the relationships between them, rather than their number.

Much of our experience of—and meaning-making in—the real world is qualitative rather than quantitative. How friendly was she? How tired do I feel right now? Who’s the tallest in the group? How windy is it out there? Which route shall we take to work? How was your meal? Which apple looks tastier? Which piece of music best suits the mood? Do I need to use the bathroom? We deal with quantities in relation to abstract concepts particularly rarely: we might have two coffees, half a biscuit, or three children, but rarely 0.5 loves or 6.8 sadnesses.

And yet, quantification has become the default mode of interaction with technology, of display of information, and of interfaces which aim to support decision-making and behaviour change in everyday life [27]. We need not elaborate here the phenomena of the quantified self [36, 42] and personal informatics more widely [24, 12], except to note the prevalence of numerical approaches (Figure 1) and the relative unusualness of non-numerical, pattern-based forms (Figure 2).

Figure 1: A typical form of quantitative interface: a Fitbit’s display of number of steps taken.
 

Figure 2: The Emulsion activity tracker, by Norwegian design studio Skrekkøgle, contains two immiscible liquids. Movement splits the colored liquid into smaller drops, making patterns.
 

But what might we be missing through this focus on quantification? It seems as though there might be opportunities for human-computer interaction (HCI) to explore forms of qualitative display and interface, as an approach to information presentation and interaction, as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency with systems.

In this article, we discuss qualitative displays and interfaces, and attempt one dimension of a tentative classification supporting design projects exploring this space.

Leaves as a qualitative interface for the wind

What could qualitative displays and interfaces be?

Here we define a qualitative display as being a way in which information is presented primarily through representing qualities of phenomena; a qualitative interface enables people to interact with a system through responding to or creating these qualities. ‘Displays’ are not necessarily solely visual—obvious to say, perhaps, but not always made explicit.

Before exploring some examples, we will look at some theoretical issues. The terms ‘qualitative interface’ or ‘qualitative display’ are not commonly used outside of some introductory human factors textbooks, but forms of interface along these lines are found in lots of projects at CHI, TEI, DIS, Ubicomp (all academic human-computer interaction conferences) and other venues, without authors explicitly drawing our attention to the concept—it is perhaps just too obvious and too broad to merit specific comment in HCI and interaction design research. But, assuming the idea does have value, what are some characteristics?

A human face is a qualitative interface, perhaps the earliest we encounter [e.g. 40] along with the voice. We learn to read and interpret emotions in others’ expressions, to recognize commonalities and differences across people, to make inferences about internal and external factors affecting the person, and monitor the effects we or others are having on that person. We understand that the face and voice and our ability to read them are abstractions, interpretations, not perfect knowledge, but a model which enables us to make decisions in conjunction with our reading of our own emotions.

In a sense, the whole world, as we perceive it, is a very complex qualitative interface. The most accurate model of a phenomenon is the phenomenon itself, but it is only useful to us to the extent we can understand what we are observing, detect the patterns we need to, and recognize that we are constructing the ‘reality’ we perceive. We are always creating a model [14] and that model is necessarily not reality itself; all displays of information are representations of a simplified model of phenomena in the world. Levels of indexicality [32], drawing on Charles Peirce’s semiotics, are relevant here, addressing the “causal distance” between the phenomenon and how it is displayed.

One advantage of interfaces seeking to provide a qualitative display is that they have the potential to enable the preservation of at least some of the complexity of real phenomena—representing complexity without attenuating variety [2]—even if we do not pay attention to it until we actually need to, in much the same way as certain phenomena in the real world become salient only when we need to deal with them. Looking out of the window or opening the door to see and feel and hear what the weather is like outside presents us with complex phenomena, but we are able to interpret what actions we need to take, in a more experientially salient way than looking at some numbers on a weather app.

Figure 4: It’s easy to imagine the feel of the wind on ourselves when we watch this scarf tied around a lamp post flapping in the breeze. Figure 5: A windsock gives us more sense of the wind’s qualities than a numerical display.
 

The feel of the wind on our skin, or watching the wind affect the environment, gives us a better sense of whether we need a scarf or coat than knowing the quantitative value of the wind speed and direction (Figures 3, 4 and 5). We can see, hear and feel not just wind speed and direction, but other qualities of it—is it continuous? in short gusts? damp, dry?

Qualitative displays could enable us to learn to recognize patterns in the world (and in data sets), and the characteristics of state changes, similarly to benefits identified in sonification research [35]. We should consider that ‘qualitative’ does not simply imply the absence of numbers. The examples we use in this paper might involve elements that could easily be quantified (rain drops, ink in a pen) but are given meaning through their display in a way that emphasises a quality or characteristic of the phenomenon. We recognise that this is potentially an ambiguous area, and are open to evolving the concept.

A possible spectrum of one dimension of qualitative displays: directness of connection

Here’s a tentative spectrum of one dimension of qualitative displays, relating phenomena to the display in terms of how directly they are connected.

(Levels 0—1 involve direct use of a real-world phenomenon in the display; from about Level 2 up to Level 5, they involve increasing degrees of translation or transduction of the phenomena. This parallels ideas in indexical visualisation [32] and embedded data representation [41] in terms of ‘situatedness’ or causal distance to phenomena.)

  • Level 0: The phenomenon itself ‘creates’ the display directly
  • Level 1: The display is an ‘accidental’ side-effect of the phenomenon
  • Level 2: The side-effect is ‘incorporated’ into a display that gives it meaning
  • Level 3: The display is a designed side-effect of the phenomenon
  • Level 4: Some minor processing of the phenomenon creates the display
  • Level 5: Major processing of the phenomenon creates the display
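
As a thought experiment, the spectrum could be treated as a simple tagging scheme for cataloguing examples. The Python sketch below is purely illustrative: the level names and the example entries are our own shorthand, not part of any existing tool.

```python
from enum import IntEnum
from dataclasses import dataclass

class Directness(IntEnum):
    """One dimension of qualitative displays: how directly the
    phenomenon is connected to its display (Levels 0-5)."""
    PHENOMENON_ITSELF = 0       # the phenomenon itself 'creates' the display
    ACCIDENTAL_SIDE_EFFECT = 1  # display is an 'accidental' side-effect
    INCORPORATED = 2            # side-effect incorporated into a meaningful display
    DESIGNED_SIDE_EFFECT = 3    # display is a designed side-effect
    MINOR_PROCESSING = 4        # minor processing of the phenomenon
    MAJOR_PROCESSING = 5        # major processing of the phenomenon

@dataclass
class Example:
    name: str
    level: Directness

catalogue = [
    Example("Raindrops on a translucent umbrella", Directness.PHENOMENON_ITSELF),
    Example("Footprints in the snow", Directness.ACCIDENTAL_SIDE_EFFECT),
    Example("Powerchord electricity sonification", Directness.MAJOR_PROCESSING),
]

# From about Level 2 upwards, displays involve some translation or
# transduction of the phenomenon rather than direct use of it.
transduced = [e.name for e in catalogue if e.level >= Directness.INCORPORATED]
print(transduced)  # ['Powerchord electricity sonification']
```

Even this toy version makes the classification questions concrete: deciding which level an example belongs to forces the observer to say what counts as ‘processing’ of the phenomenon.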

Figure 6: Some examples of displays from Levels 0, 1 and 2.
Level 0: The pattern of raindrops hitting a translucent umbrella—frequency, coverage, and sound—directly creates a ‘rain display’ for the user, providing insight into the current state and enabling decisions about whether the umbrella is still needed; city lights create a display showing the shape of the city’s districts and an indicator of population density; water trapped in a train carriage window moves as the train accelerates and decelerates, creating a dynamic display of the train’s motion; a transparent pen is a physical progress bar for the amount of ink remaining—it could be quantified, but it is perhaps the quality of being not-yet-run-out which matters to the user.
Level 1: A worn patch on a map accidentally provides a display of ‘you are here’; use marks [5] from previous users demonstrate how to use a swipe-card for entry to a building; a spoon worn through decades of use is an accidental display of the way in which it has been used [31]; footprints in the snow ‘accidentally’ provide a display of previous walkers’ paths.
Level 2: A ‘This Color For Best Taste’ label gives ‘meaning’ to the colour of a mango’s skin for the consumer (Photo used with permission of Reddit user /u/cwm2355); writing ‘Clean Me’ or other messages in dust on a car gives meaning to the dusty property; Admiral Robert FitzRoy’s Storm Glass, as used on the voyage of the Beagle (1831—6), incorporates crystals whose changing appearance was believed to enable weather forecasting (Photo: ReneBNRW, Wikimedia Commons, public domain dedication); George Merryweather’s Tempest Prognosticator (1851) [30] incorporates “a jury of philosophical councillors”, 12 leeches whose movement on detecting an approaching storm causes a bell to ring (Photo: Badobadop, Wikimedia Commons, CC-BY-SA).
Figure 7: Some examples of displays from Levels 3, 4 and 5.
Level 3: IceAlert is designed so that freezing temperatures cause the blue reflectors to rotate to become visible; a ‘participatory bar chart’ by Dan Lockton, along the lines of [22, 33, 16], is designed so that ‘voting’ increases the visible height of the bar, though the votes are not numbered; a non-numerical weighing scale by Chang Hee Lee is designed so that liquid trapped under glass changes shape; a toilet stall door lock is designed so that its display rotates from ‘Vacant’ to ‘Engaged’—the position of the lock itself gives us a display of actionable information.
Level 4: Chronocyclegraphs (1917) by Frank and Lillian Gilbreth trace manual workers’ movements [10] (Photo from [15], Archive.org, out of copyright); Live Wire (Dangling String) by Natalie Jeremijenko (1995) [39] moved a wire in proportion to local network traffic; Melbourne Mussel Choir, also by Natalie Jeremijenko with Carbon Arts [6], uses mussels with Hall effect sensors to translate the opening and closing of their shells into music; Availabot (2006), by Schulze & Webb, later BERG [3], is a USB puppet which “stands to attention when your chat buddy comes online”.
Level 5: Powerchord by Dan Lockton [29] provides real-time sonification of electricity use, translating it into birdsong or other ambient sound; Immaterials: Ghost in the Field by Timo Arnall [1] visualizes “the three-dimensional physical space in which an RFID tag and a reader can interact with each other”; Ritual Machine 2 by the Family Rituals 2.0 project [23] uses patterns on a flip-dot display to visualize the countdown to a shared event for two people; Tempescope by Ken Kawamoto [21] visualizes weather conditions elsewhere in the world by re-creating them in a tabletop display (Photo used from Tempescope Press Kit).
 

The boundaries between levels here depend on observers’ interpretations of what is signified; whether an effect is accidental or deliberate is a common question in design (teleonomy [25]). Nevertheless, this spectrum permits a classification of some examples and is being applied by the authors in undergraduate design studio projects. We note the absence of screen-based examples: this is not intentional, and we welcome adding relevant examples. There are many intersecting research areas we aim to explore; in current HCI research, the most relevant are data physicalisation, embedded data representation, tangible interaction, sonification, and glanceable displays.

The work of Yvonne Jansen, Pierre Dragicevic and others [20] in data physicalisation, including compilation of examples, and embedded data representation [41], provides us with many instances of qualitative display, mostly at what we are calling Levels 2—5; likewise, the development of ubiquitous computing, tangible interaction and tangible user interfaces [39, 18, 17] and Hiroshi Ishii’s subsequent vision of radical atoms [19] offers a huge set of projects, many of which provide qualitative interfaces for data or system interaction (usually at Levels 4—5).

Sonification [35] and glanceable displays [e.g. 9, 34] also offer us diverse sets of examples often using non-numerical representation, also largely at Levels 4—5. As noted earlier, qualitative does not just mean non-quantitative, and the boundaries may be blurred: if a sonification directly maps numerical values to tones, is it much different to an unlabelled line chart? Or are sparklines [37], for example, a way of turning quantitative data into a form of qualitative presentation?

Even with a quantitative display, how a person interprets it may have a qualitative dimension: Figure 8 shows an electricity monitor used by a study participant [28] who accidentally set it to display kg CO2/day equivalent; this “meant nothing” to her but she interpreted the display such that “>1” meant “expensive”. ‘Annotations’ of values as users construct their own meaning [11] may fit here; the aim must, however, be to avoid the kind of reductive ‘qualitative’ nature of a limited set of labels [13].

Figure 8: A quantitative electricity display that was used ‘qualitatively’ by a householder (see text). Figure 9: An example of MONIAC, the Phillips Machine, at the Reserve Bank of New Zealand (Photo by Kaihsu Tai, Wikimedia Commons, public domain dedication).
 

Analogy and metaphor are important here, and the almost-forgotten field of analogue computing offers an intriguing perspective. By “build[ing] models that created a mapping between two physical phenomena” [7], some analogue computers effectively operated as ‘direct’ displays of an analogue of the ‘original’ phenomenon—a kind of meta-Level 2 qualitative display. Devices such as the 1949 Phillips Machine [4] (Figure 9) performed operations on flows of coloured water to model the economy of a country, enabling an interactive visualization of a system in operation as it operates (there are parallels with Bret Victor’s and Nicky Case’s work on explorable explanations [38, 8], and with the development of visual programming languages).

Other areas of pertinent research and inspiration are synaesthesia and mental imagery: sensory overlaps, fusions and mappings offer a fertile field for exploring qualitative displays of phenomena.

Conclusion: What use is all of this?

We’re interested in using qualitative displays and interfaces for supporting decision-making, behaviour change and new practices through enabling new forms of understanding—as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency with the systems around them [26]. Projects using qualitative displays are unlikely simply to be de-quantified ‘conversion’ of existing numerical displays; instead, the aim will be to make use of the approach to represent and translate phenomena appropriately, in ways which enable users to construct meaning and afford new ways of understanding, enabling nuance and avoiding reductiveness.

The spectrum of the ‘directness’ dimension introduced here provides a possible starting point for this work, by giving a framework for analysing examples and suggesting ways of handling phenomena to be displayed, and is currently being used by the authors to brief an undergraduate design studio project on materialising environmental phenomena to reveal hidden relationships. We welcome the opportunity to learn from others who have thought about these kinds of ideas to inform our future explorations of this area.

Acknowledgements

Thanks to Dr Delfina Fantini van Ditmar, Dr Laura Ferrarello, Flora Bowden, Gyorgyi Galik, Stacie Rohrbach, Ross Atkin, Shruti Grover, Veronica Ranner and Dixon Lo for discussions in which some of these ideas were formulated and explored, and to the CHI reviewers. Unless otherwise noted, photos are by the authors.

References

1. Timo Arnall. 2014. Exploring ‘immaterials’: Mediating design’s invisible materials. International Journal of Design 8, 2: 101—117. http://www.ijdesign.org/ojs/index.php/IJDesign/article/view/1408

2. W. Ross Ashby. 1956. An Introduction to Cybernetics. Chapman & Hall, London.

3. BERG. 2008. Availabot. Retrieved Jan 10, 2017 from http://berglondon.com/projects/availabot/

4. Chris Bissell. 2007. The Moniac: A Hydromechanical Analog Computer of the 1950s. IEEE Control Systems Magazine 27, 1: 59—64. https://dx.doi.org/10.1109/MCS.2007.284511

5. Brian Burns. 2007. From Newness to Useness and Back Again: A review of the role of the user in sustainable product maintenance. Retrieved June 1, 2009 from http://extra.shu.ac.uk/productlife/Maintaining%20Products%20presentations/Brian%20Burns.pdf

6. Carbon Arts. 2013. Melbourne Mussel Choir. Retrieved Jan 10, 2017 from http://www.carbonarts.org/projects/melbourne-mussel-choir/

7. Charles Care. 2006—7. A Chronology of Analogue Computing. The Rutherford Journal 2. Retrieved Jan 10, 2017 from http://www.rutherfordjournal.org/article020106.html

8. Nicky Case. 2014. Explorable Explanations. Blog post (Sept 8, 2014). Retrieved Jan 10, 2017 from http://blog.ncase.me/explorable-explanations/

9. Sunny Consolvo, Predrag Klasnja, David W. McDonald, Daniel Avrahami, Jon Froehlich, Louis LeGrand, Ryan Libby, Keith Mosher, and James A. Landay. 2008. Flowers or a Robot Army? Encouraging Awareness & Activity with Personal, Mobile Displays. In Proceedings of 10th International Conference on Ubiquitous Computing (UbiComp’08): 54—63. https://doi.org/10.1145/1409635.1409644

10. Régine Debatty. 2012. The Chronocyclegraph. Blog post, We Make Money Not Art (May 6, 2012). Retrieved Jan 10, 2017 from http://we-make-money-not-art.com/the_chronocyclegraph/

11. Paul Dourish. 2004. What we talk about when we talk about context. Personal and Ubiquitous Computing 8, 1: 19—30. http://dx.doi.org/10.1007/s00779-003-0253-8

12. Chris Elsden, David Kirk, Mark Selby, and Chris Speed. 2015. Beyond Personal Informatics: Designing for Experiences with Data. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 2341—2344. https://dx.doi.org/10.1145/2702613.2702632

13. Delfina Fantini van Ditmar and Dan Lockton. 2016. Taking the Code for a Walk. Interactions 23, 1: 68—71. https://dx.doi.org/10.1145/2855958

14. Heinz von Foerster. 1973. On constructing a reality. In F.E. Preiser (Ed.). Environmental Design Research Vol. 2. Dowden, Hutchinson & Ross, Stroudsburg: 35—46. Reprinted in Heinz von Foerster. 2003. Understanding Understanding—Essays on Cybernetics and Cognition. Springer-Verlag, New York: 211—228. https://dx.doi.org/10.1007/0-387-21722-3_8

15. Frank Gilbreth and Lillian Gilbreth. 1917. Applied Motion Study: a collection of papers on the efficient method to industrial preparedness. Sturgis & Walton, New York. Retrieved Jan 10, 2017 from https://archive.org/details/appliedmotionstu00gilbrich

16. Hans Haacke. 2009. Lessons Learned. Tate Papers 12. Retrieved Jan 10, 2017 from http://www.tate.org.uk/download/file/fid/7265

17. Eva Hornecker and Jacob Buur. 2006. Getting a grip on tangible interaction: a framework on physical space and social interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’06): 437—446. https://dx.doi.org/10.1145/1124772.1124838

18. Hiroshi Ishii and Brygg Ullmer. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’97): 234—241. https://dx.doi.org/10.1145/258549.258715

19. Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. Interactions 19, 1: 38—51. https://dx.doi.org/10.1145/2065327.2065337

20. Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. 2015. Opportunities and Challenges for Data Physicalization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’15): 3227—3236. https://dx.doi.org/10.1145/2702123.2702180

21. Ken Kawamoto. 2012. Prototyping “Tempescope”, an ambient weather display. Blog post (Nov 15, 2012). Retrieved Jan 10, 2017 from http://kawalabo.blogspot.jp/2012/11/prototyping-tempescope-ambient-weather.html

22. Lucy Kimbell. 2011. Physical Bar Charts. Retrieved Jan 10, 2017 from http://www.lucykimbell.com/LucyKimbell/PhysicalBarCharts.html

23. David Kirk, David Chatting, Paulina Yurman, and Jo-Anne Bichard. 2016. Ritual Machines I & II: Making Technology at Home. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’16): 2474—2486. http://dx.doi.org/10.1145/2858036.2858424

24. Ian Li, Anind Dey, and Jodi Forlizzi. 2010. A stage-based model of personal informatics systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10): 557—566. https://dx.doi.org/10.1145/1753326.1753409

25. Dan Lockton. 2012. POSIWID and Determinism in Design for Behaviour Change. Social Science Research Network. http://dx.doi.org/10.2139/ssrn.2033231

26. Dan Lockton. 2016. Designing Agency in the City. In Lacey Pipkin (Ed.), The Pursuit of Legible Policy: Agency and Participation in the Complex Systems of the Contemporary Megalopolis. Buró-Buró, Mexico City: 53—61. http://legiblepolicy.info/book/Legible-Policies_BB.pdf

27. Dan Lockton, David Harrison, and Neville Stanton. 2010. The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics 41, 3: 382—392. http://dx.doi.org/10.1016/j.apergo.2009.09.001

28. Dan Lockton, Flora Bowden, Catherine Greene, Clare Brass, and Rama Gheerawo. 2013. People and energy: A design-led approach to understanding everyday energy use behaviour. In Proceedings of EPIC 2013: Ethnographic Praxis in Industry Conference: 348—362. https://dx.doi.org/10.1111/j.1559-8918.2013.00029.x

29. Dan Lockton, Flora Bowden, Clare Brass, and Rama Gheerawo. 2014. Powerchord: Towards ambient appliance-level electricity use feedback through real-time sonification. In Proceedings of UCAmI 2014: 8th International Conference on Ubiquitous Computing & Ambient Intelligence: 48—51. https://dx.doi.org/10.1007/978-3-319-13102-3_10

30. George Merryweather. 1851. An essay explanatory of the Tempest Prognosticator in the building of the Great Exhibition for the Works of Industry of All Nations. John Churchill, London. Retrieved Jan 10, 2017 from https://archive.org/details/b2804163x

31. Bruno Munari. 1971. Design as Art (trans. Patrick Creagh). Pelican Books, London.

32. Dietmar Offenhuber and Orkan Telhan. 2015. Indexical Visualization—the Data-Less Information Display. In Ulrik Ekman, Jay David Bolter, Lily Diaz, Morten Søndergaard, and Maria Engberg (eds.). Ubiquitous Computing, Complexity and Culture: 288—303. Routledge, New York.

33. Jennifer Payne, Jason Johnson, and Tony Tang. 2015. Exploring Physical Visualization. In Jason Alexander, Yvonne Jansen, Kasper Hornbæk, Johan Kildal, and Abhijit Karnik (Eds.), Exploring the Challenges of Making Data Physical, Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). http://architectures.danlockton.co.uk/wp-content/2015-chi2015workshop-physvis.pdf

34. Tim Regan, David Sweeney, John Helmes, Vasillis Vlachokyriakos, Siân Lindley, and Alex Taylor. 2015. Designing Engaging Data in Communities. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 271—274. http://dx.doi.org/10.1145/2702613.2725432

35. Stefania Serafin, Karmen Franinovic, Thomas Hermann, Guillaume Lemaitre, Michal Rinott, and Davide Rocchesso. 2011. Sonic Interaction Design. In Thomas Hermann, Andy Hunt, and John Neuhoff (Eds.), The Sonification Handbook. Logos, Berlin: 87—110. http://sonification.de/handbook/index.php/chapters/chapter5/

36. Melanie Swan. 2013. The quantified self: fundamental disruption in big data science and biological discovery. Big Data 1, 2: 85—99. https://dx.doi.org/10.1089/big.2012.0002

37. Edward Tufte. 2001. The Visual Display of Quantitative Information (2nd ed.). Graphics Press, Cheshire, CT.

38. Bret Victor. 2011. Explorable Explanations. March 10, 2011. Retrieved Jan 10, 2017 from http://worrydream.com/ExplorableExplanations

39. Mark Weiser and John Seely Brown. 1995. Designing Calm Technology. Dec 21, 1995. Retrieved Jan 10, 2017 from http://www.ubiq.com/weiser/calmtech/calmtech.htm

40. Sherri C. Widen. 2013. Children’s Interpretation of Facial Expressions: The Long Path from Valence-Based to Specific Discrete Categories. Emotion Review 5, 1: 72—77. https://dx.doi.org/10.1177/1754073912451492

41. Wesley Willett, Yvonne Jansen, and Pierre Dragicevic. 2017. Embedded Data Representations. IEEE Transactions on Visualization and Computer Graphics 23, 1: 461—470. https://dx.doi.org/10.1109/TVCG.2016.2598608

42. Gary Wolf. 2010. The quantified self. Video (June 2010). Retrieved Jan 10, 2017, from https://www.ted.com/talks/gary_wolf_the_quantified_self

Design Students Explore Landscape Metaphors for Project Modeling

Delanie Ricketts and Dan Lockton

This article originally appeared on the Carnegie Mellon School of Design website

We often use landscapes as metaphors in everyday speech, particularly to talk about complex systems: understanding a complex information system as an “information landscape”, for example, helps convey the idea that such a system, like a landscape, is vast and encompasses many interacting variables. However, while such metaphors are common in speech (terms like “stakeholder landscape”, “lie of the land”, “ocean of possibilities”, “food desert”, even the word “field”), they have been used more rarely in visual applications.

On March 30th, 45 Juniors from Carnegie Mellon University’s School of Design’s “Persuasion” class, taught by Michael Arnold Mages, Dan Lockton, and Stephen Neely, took part in a workshop to explore practically how physical and visual landscape metaphors could help elicit new insights about complex experiences–in this case, modeling and reflecting on group design projects. Facilitated by MA Design student and Research Assistant Delanie Ricketts and Assistant Professor Dan Lockton, as part of the School of Design’s new Imaginaries Lab, the workshop involved students collaboratively creating ‘landscape’ models representing projects they have worked on, using simple paper cut-outs of features such as hills, trees, weather, and people. Each group used the elements in different ways to represent different aspects of their projects, through creating ‘timeline’ landscapes in both two and three-dimensional formats.

Some projects started with rocky beginnings, represented by different cones or hills, in order to show how difficult that part of the project was. Other projects started with trees, rivers, and stars, representing periods of calm ideation, research, or general feelings of optimism. When projects encountered new difficulties later on, many groups represented these periods with lightning, rain, hills, and cones. Several groups used (and came up with names for) metaphors within the general landscape metaphor to represent specific parts of their project experiences, such as a “plateau of exhaustion” before the project came to an end.

Delanie’s previous prototypes of the landscape metaphor visuals, as part of her research assistantship project, have focused on how they could facilitate individual reflection on one’s own career path. However, while people found the metaphor and elements to be a useful and creative reflection tool, several expressed that it was difficult to show how their perspective changed over time within a two-dimensional format. In this second iteration of elements, we aimed to provide greater variation as well as enable three-dimensional expression. In addition, we wanted to explore how the metaphor could be used to think through a different topic, project planning or reflecting rather than career, and in a group rather than individual context.

Students’ responses to trying out this second iteration of landscape elements, applied to group projects rather than individual career paths, suggested that they found the process fun and creative, if somewhat abstract. Many participants commented that the tool helped them understand their project and teammates’ perspectives better, especially in terms of stress, productivity, and overall emotional satisfaction at different points throughout a project’s lifetime. The format is more useful for surfacing – and reconciling – overarching understandings than probing deeper insights about the specifics of complex experiences, but, in triggering discussion, it has value in enabling members of a team to understand and interrogate each other’s perspectives and mental models of a situation (echoing ideas from organizational systems thinking experts such as Peter Senge).

We aim to develop the landscapes kit further, through iterations with application in individual reflection, project planning, and research settings.

Many thanks to Chris Stygar, Josiah Stadelmeier, and the whole School of Design 3D Lab for their help in developing the materials for the project, the Design graduate students and juniors for taking part in the different stages of the project, and Manya Krishnaswamy for helping facilitate. Thanks to Joe Lyons for putting the article on the School website.

Mental Landscapes

Environments Studio: Materializing the Invisible

Timelapse of studio, by Jasper Tom

In Materializing the Invisible, we considered invisible and intangible phenomena—the systems, constructs, relationships, infrastructures, backends and other entities, physical and conceptual, which comprise or influence much of our experience of, and interaction with, environments both physical and digital. ‘The invisible’ here is potentially everything from how the building’s heating system works, to the algorithms behind targeted ads, to who’s friends with whom, to where corruption is occurring in government, to where your IoT fridge sends the data it collects, to people’s mental imagery of time, to the electricity use of devices, to networks of cameras and sensors, to how political decisions are made. It also potentially includes things that happen at scales or in dimensions we can’t directly comprehend, from planetary processes such as climate, to the interaction of electromagnetic fields, to the microscopic. And things that happen, that enable day-to-day functioning of our lives, but we don’t know much about. Where does our food come from? Where does our waste water go? What route did that package take to get to us?

The process of revealing the invisible can improve understanding, help people explore their own thinking and relationships with these complex concepts, highlight problems, power structures and inequalities, reveal hidden truths, connect people better to the world around them, and enable people to act. It is not necessarily about visualizing the invisible—it can be about making it audible, tangible, smellable, or otherwise experienceable: we explored techniques from fields including data visualization, sonification, data physicalization, ubiquitous computing, tangible interaction, analog computing, qualitative displays, and the study of synaesthesia to create ways to materialize these invisible phenomena.

More details, including background reading, in the syllabus.

As a starting exercise we examined some ‘invisible’ and unknown things within the building itself (Margaret Morrison Carnegie Hall), noting questions and ideas with Post-It notes in situ. These ranged from questions about who has access to certain rooms or controls, to what some of the controls are in the first place. There were also traces of action and use—patterns which might be invisible in the sense of not being paid attention to, but nevertheless present in the use of the building.

The class project was to choose a phenomenon which is ‘invisible’ within a physical, digital or hybrid environment, find a way of getting access to it, and design and build / make / create a way of materializing the phenomenon, making it accessible to people more widely. As a group we brainstormed different phenomena which might be investigable, and possible forms of representation.

Ji Tae Kim’s project Whitespace looked at the invisible aspects of communication in text messaging, following on from his previous project Fear of Missing Out. Whitespace explores ways to materialize and express “rich contextual and verbal cues” through “an intuitive extension to instant messaging”. Working prototypes used copper tracks, Bare Conductive ink and Touch Board, and Arduino.

Jasper Tom and Chris Perry’s project Kairos examined “an invisible phenomenon ingrained in everyday life”: the passage of time in a space, specifically around working at a desk. The project grew out of the question “Where did the time go?” and the idea of desk legacy, the patterns of use left by a previous user of a desk in a shared workspace, informed by analysis of timelapse video of the studio. Inspirations included Daniel Rozin’s Wooden Mirror, MIT Tangible Media Group projects such as Daniel Leithinger’s work, and Tempur-Pedic foam. These came together in a desk surface which could ‘play back’ the patterns of how it had been used, via an interface using wooden blocks. A working prototype of part of the surface used an Arduino and servo motors to demonstrate the effect.
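To make the ‘play back’ idea concrete, here is a minimal sketch (our illustration, not the students’ actual code, which ran on an Arduino): record which cells of the desk surface are occupied at each timestep, then replay that history as target angles for an array of servo-driven blocks. All function and parameter names here are hypothetical.

```python
def record_usage(frames):
    """frames: an iterable of sets of (row, col) cells occupied per timestep."""
    return [frozenset(f) for f in frames]

def playback_angles(history, rows, cols, raised=90, flat=0):
    """Yield, per recorded timestep, a grid of servo angles: cells that were
    in use are raised, the rest stay flat."""
    for occupied in history:
        yield [[raised if (r, c) in occupied else flat for c in range(cols)]
               for r in range(rows)]
```

For example, a two-frame history `[{(0, 0)}, {(0, 0), (1, 1)}]` on a 2×2 surface replays as the grids `[[90, 0], [0, 0]]` and then `[[90, 0], [0, 90]]`.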

One interesting aspect discussed during Jasper and Chris’s presentation was that while evidence of physical work is often obvious in a space (a painter’s palette, for instance), the evidence of digital work is often invisible: a slightly worn keyboard, perhaps, but little else.

Gilly Johnson and Ty Van de Zande worked together to explore aspects of human movement (dance and exercise), and the related issues of hydration and focus. Focus + Movement proposed a color-changing bodysuit which could work together as part of a system with a water bottle, both to make the invisible patterns visible, and to enable reflection. Gilly and Ty captured movement by dancers using a Kinect, connected to Max MSP, and then simulated the body suit via After Effects.
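One way to sketch the kind of mapping Focus + Movement implies (the actual project used Kinect data in Max MSP and simulated the suit in After Effects; these names and scales are our illustrative assumptions): average joint displacement between frames gives a movement intensity, which is then mapped to a color for the bodysuit.

```python
import math

def movement_intensity(prev_joints, joints):
    """Mean Euclidean displacement across corresponding (x, y, z) joints
    between two captured frames."""
    dists = [math.dist(a, b) for a, b in zip(prev_joints, joints)]
    return sum(dists) / len(dists)

def intensity_to_rgb(intensity, max_intensity=1.0):
    """Map calm (blue) to energetic (red) along a simple linear scale."""
    t = max(0.0, min(intensity / max_intensity, 1.0))
    return (round(255 * t), 0, round(255 * (1 - t)))
```

So a still body would render the suit blue, with the color shifting toward red as movement becomes more energetic.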

Environments Studio: Design, Behavior and Social Interaction

Jasper Tom investigated patterns of people’s behavior in Pittsburgh’s Greyhound Bus Station
 

In this short introductory unit, we looked at ways in which the design of environments, and features within them, affects people’s behavior and interaction with each other. Design influences what people do, but often the ‘links’ are invisible or only apparent by their effects. Or, we notice them in passing, but do not take time to reflect on them or draw parallels across situations.

Ji Tae (Joseph) Kim examined how the design of messaging and social media leads to ‘fear of missing out’ through unplugging himself for a week
 

For designers pioneering new approaches to creating environments for human experience, cultivating a kind of ‘hypersensitivity’ to noticing—and learning from—the ways in which design and behavior interact can be part of developing the attention to detail that will serve you well professionally. Details of the unit in the syllabus.

Chris Perry observed the different ways in which people use a pedestrian crossing at the entrance to CMU, and how the design affects those actions
 

We started with quick observation exercises aimed at developing (or refreshing) a capacity for noticing, for paying attention to the ways in which people and environments affect each other. We looked around campus for instances of points of confusion, unintended uses, constraints, and disobedience in physical environment settings, and discussed how these effects manifest in different ways—what could we find? (Photos here by Chris Perry, Gilly Johnson, Jasper Tom, Ty Van de Zande and Dan Lockton.)

We examined ideas around how environments influence people, and are in turn influenced, both physically and digitally, from thigmotaxis to stigmergy, shearing layers and pace layers, fundamental attribution error and design for behavior change. We also thought about the practice of observation, noticing and deconstruction of people’s actions in different ways, and in different levels of detail. The project brief was around designing a way to do research in this field—designing a ‘probe’ rather than a solution to a problem:

  • Choose a situation where ‘design’ seems to be affecting people’s behavior in an environment (physical or digital)
  • Find a way of studying what’s going on—what patterns exist? In what different ways is people’s behavior affected?
  • Visualize (or otherwise communicate) what you find
  • (optional: suggest ways things could be different, if you feel they need to be)
  • Keep a blog of your process (photos, sketches, notes)

Here are the projects:

Comparing a coffee shop and a tea shop: Gilly Johnson


Gilly Johnson compared structural and systemic aspects of the atmosphere and experience in Coffee Tree Roasters in Shadyside, and Dobra Tea in Squirrel Hill, including the layout and spatial division, and emerging themes such as service and trust: full details of the project.


Fear of Missing Out: Ji Tae Kim



Ji Tae Kim examined how the design of messaging and social media leads to ‘fear of missing out’ through unplugging himself for a week: full details of the project.


Greyhound Station: Jasper Tom


Jasper Tom investigated how the design of Pittsburgh’s Greyhound Bus Station influences patterns of people’s behavior: full details of the project.


Managing information across environments: Ty Van de Zande


Ty Van de Zande looked at how people manage information such as to-do lists across physical and digital environments, and developed a framework for investigating this in a structured way: more details of the project.


How to Cross the Road: Chris Perry

Chris Perry observed the different ways in which people use a pedestrian crossing at Morewood Avenue and Forbes Avenue, at the entrance to CMU, and how the design affects those actions: more details of the project.