
Imaginaries Lab review of the year: 2018

Carnegie Mellon

It’s the end of December, which means it’s time for an update. Here at the Imaginaries Lab we’re just completing our second year, currently based within Carnegie Mellon School of Design. We’re a pretty part-time lab at present, but we aim to do much more in the years ahead. We’re using creative approaches to envision alternative ways of thinking and living, now and in the future, to inform interdisciplinary research and practical applications for social and environmental benefit. Our goal is to become a world-leading center for this kind of research, collaborating internationally to support transformative innovation. We carry out research projects, publish and run workshops internationally, teach studio classes, and build collaborations externally and within Carnegie Mellon.

What does the Imaginaries Lab do?

The lab’s basic premise is that how we imagine affects how we understand the world, how we live, and what we see as possible in our collective futures, with consequences for sustainability, society, our relationships with technology, and our everyday lives.

At the Imaginaries Lab we believe that humanity needs tools to enable new ways of understanding and imagining, and new ways to live, that provide more equitable, socially and environmentally sustainable futures. We create those tools by developing creative research methods, adapted from those used in design practice, and explore their use in a variety of cross-disciplinary contexts.

Imaginaries Lab research team Dec 2018

⇧ Imaginaries Lab research team, December 2018—upper row left-to-right: Devika Singh, Gray Crawford, Aadya Krishnaprasad, Rachel Gray Alexander; lower row left-to-right: Michelle Chou, Saloni Sabnis, Dan Lockton, Bella

In 2018, the Imaginaries Lab team — including, over the course of the year, Devika Singh, Matt Prindible, Gray Crawford, Saloni Sabnis, Silvia Mata-Marin, Rachel Gray Alexander, Shengzhi Wu, Katie Herzog, Michelle Chou, Ashlesha Dhotey, Aadya Krishnaprasad, and Dan Lockton (as well as Bella) — has worked on a range of projects in three main areas:

█  Imaginaries, mental models, and mental imagery: using design methods to investigate how people understand abstract or complex concepts (from mental health to energy to metaphor generation to the structure of disciplines themselves), help them understand and imagine in new ways, and imagine new ways of living. This research covers the development of creativity methods, workshop and facilitation methods, and new kinds of interface design (human-computer interaction) and qualitative data visualization.

█  Research through design, and design as inquiry: investigating the use of design practice as a form of research and creative inquiry, including how to teach design studies through critical making, speculative and critical design, and how design methods can contribute to new knowledge generation beyond the design discipline itself.

█  Design for behavior change: exploring the links between designed technology and influence on human behavior, particularly in relation to sustainability and social benefit, and how designers can practically engage with issues of ethics and effects in this area. The Design with Intent toolkit is one of the most highly-cited pieces of work in this field, both academically and through use in design practice, but how is the field evolving in the light of mass surveillance, individual behavioral profiling, and weaponized behavioral economics?

In practice these areas have been woven through projects with a range of themes — new methods for design and creativity, new types of interface, intelligences, and futures.

New methods for design and creativity
Among our projects exploring what we might loosely call ‘new methods for design and creativity’, New Metaphors has seen the most development during 2018, with workshops at Interaction 18 in Lyon, UX Lisbon, Plurality University’s Founders’ Meeting in Paris, a keynote at Interaction Latin America in Rio de Janeiro (Dan Lockton), and numerous sessions at Carnegie Mellon, including a workshop for the Swartz Center for Entrepreneurship and a dSHARP Digital Humanities talk. A metaphor is simply a way of expressing one idea in terms of another, often used in design to introduce people to new ways of doing things by relating them to familiar ideas: desktops, files and windows; the net, the web, websites and browsers; cloud storage; even blockchain. Many of these are so familiar now that we perhaps no longer even think of them as metaphors. But they are not inherently ‘right’; they can be challenged — including by creating novel metaphors, which can persuade us to think differently and accept new ideas, or help us reframe the ways we think at present. The New Metaphors workshop format is a simple juxtaposition approach using cards and a variety of structured worksheets — or Devika Singh’s Inspiro SMS bot — but can generate ideas applicable to a wide range of domains within and beyond design and futures.
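The basic juxtaposition mechanic can be sketched in a few lines of Python that randomly pair an abstract concept with an everyday phenomenon. This is a hypothetical sketch, not the actual New Metaphors tooling or bot: the card contents and the `new_metaphor` function are invented for illustration.

```python
import random

# Hypothetical card contents. The real New Metaphors decks pair photographs
# of everyday phenomena with abstract concepts; these lists are illustrative.
concepts = ["privacy", "energy use", "the circular economy", "attention"]
phenomena = [
    "condensation on a window",
    "a tangled garden hose",
    "moss growing on a wall",
    "queueing at a bus stop",
]

def new_metaphor(rng=random):
    """Juxtapose one abstract concept with one everyday phenomenon."""
    concept = rng.choice(concepts)
    phenomenon = rng.choice(phenomena)
    return f"What if {concept} were like {phenomenon}?"

print(new_metaphor())
```

The value of the method lies less in the generator itself than in the structured worksheets and discussion that follow each prompt, where participants work out whether the forced pairing offers a genuinely useful reframing.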

⇧ A New Metaphor generator

⇧ Interaction Latin America 2018 keynote, ‘New Metaphors’

In November, Michelle Chou, Saloni Sabnis, Devika Singh and Dan Lockton ran a DIF On Air session for the Ellen MacArthur Foundation’s Disruptive Innovation Festival on ‘New Metaphors for Design, Economies, and the Systems of Everyday Life’ (video below). The team, facilitated by Laura Franco Henao, discussed how — inspired by the metaphor inherent in the circular economy — other kinds of metaphors could help give us an expanded conceptual vocabulary around economies and our relationships with products, reframing them for more sustainable ways of thinking.

⇧ Michelle Chou, Saloni Sabnis, Devika Singh and Dan Lockton, hosted by Laura Franco Henao, for the Ellen MacArthur Foundation DIF 2018

The New Metaphors project offers a form of intentional apophenia — deliberately provoking oneself to see patterns, relationships, or parallels where (perhaps) none actually exists, but where proceeding as if they do offers some new way of thinking or an interesting angle for seeing the world differently. This has some overlap with the ‘event scores’ of the Fluxus movement, for example the pieces collected in Yoko Ono’s Grapefruit — which led to Dan Lockton taking part in the Disruptive Improvisation workshop at CHI 2018 in Montreal, organized by Kristina Andersen, Laura Devendorf, James Pierce, Daniela Rosner, and Ron Wakkary. The New Metaphors method, introduced via a short paper called ‘Apophenia As Method—Or, Everything Is Either A Metaphor Or An Analogue Computer’, was tried out at the workshop along with a wide range of other generative and experimental techniques, collected in a zine. We intend to develop this direction further, since the potential of ‘apophenia as method’ has interesting implications for the intersection of machine learning and creativity in particular. More on this next year.

Disruptive Improvisation, CHI 2018

Disruptive Improvisation, CHI 2018

⇧ New Metaphors at the CHI 2018 Disruptive Improvisation: Making Use of Non-Deterministic Art Practices in HCI workshop in Montreal

Other work from the lab in 2018 on new methods for design and creativity included Dan’s participation in the Sketch Model Summer Workshop at Olin College of Engineering in Needham, MA, in June, funded by the Mellon Foundation, in which a wonderful group of people from technology, humanities, and the arts, led by Sara Hendren, Benjamin Linder, Jonathan Stolk, Deb Chachra, and Jonathan Adler, explored interdisciplinarity and how to bring both critical and creative methods into engineering education. It also included a paper applying ideas from R. D. Laing and Gregory Bateson to investigating people’s understanding of systems, presented by Dan at the (new) Systemic Design Association’s RSD 7 conference at Politecnico di Torino in October, followed by a fascinating ‘de-conference’ at the Monviso Institute in Ostana.

Sketch Model, Olin College

RSD De-Conference, Ostana, Italy

⇧ Left: Sketch Model Summer Workshop at Olin College; Right: The systemic design community explores Ostana, Piedmont

Sarah Foley presented her new method for designers to rethink services and human-technology relations, Service Fictions through Actant Switching, at the Design Research Society’s DRS 2018 conference in Limerick in June. The paper was developed from Sarah’s MDes thesis (advised by Dan Lockton and Cameron Tonkinwise) and offers an approach combining actor-network theory, design fiction, and service design to generate speculative, provocative ideas for the future of services.

DRS 2018, Limerick

DRS 2018, Limerick

⇧ Left: Sarah Foley presents her Service Fictions project at DRS 2018; Right: Drinks in Limerick after the DRS 2018 Designing for Transitions track.

Also at DRS 2018, Dan Lockton joined Joanna Boehnert (Loughborough University) and Ingrid Mulder (TU Delft) to chair a full-day track, Designing for Transitions. Building on work around Transition Design emerging from Carnegie Mellon, but also other systemic approaches to designing with wider social and environmental change in mind, the ten papers in the track explored an expanded field for design research seeking to engage with change at scale in time and place. Our editorial provided an overview of some issues and challenges in the field as we see it developing. As part of the track, Dan Lockton and friend of the Lab Stuart Candy’s paper ‘A Vocabulary for Visions’ brought together many of the themes of imaginaries, futures, and new metaphors that underpin the Lab’s work. We covered (briefly) a set of concepts which can be thought of as seven ‘ways of seeing’, for tackling the ‘visionary’ aspect of designing for transitions—lenses, imaginaries, backcasting, dark matter, circularity, experiential futures, and new metaphors—drawing on work by a range of people and different disciplines. Dan Lockton was also a member of the Conversations committee for DRS 2018, and a discussant at PhD by Design Limerick.

Electric Acoustic installation, CMU Design Week Spring 2018

⇧ Electric Acoustic installation, CMU Design Week, spring 2018

New types of interface
A big theme through our work is exploring new kinds of interface design, through various perspectives including a more qualitative approach. One such project this year is Electric Acoustic (Shengzhi Wu, Gray Crawford, Devika Singh, Dan Lockton) which explores data sonification — turning data into sound — as an alternative way to engage with patterns in energy use data. Building on Dan’s previous Powerchord project (developed with Flora Bowden at the RCA), Electric Acoustic is supported by Carnegie Mellon College of Fine Arts’ Fund for Research and Creativity, using data provided by CMU Facilities Management Services. Following a public engagement workshop at the Pittsburgh Children’s Museum in fall 2017 for Pittsburgh Maker Faire, we have built a multi-modal prototype also incorporating cymatics (vibration displays), which we installed in May 2018 during CMU Design Week. Cymatic displays seem to offer some interesting possibilities for more qualitative ways of representing the effects of phenomena and their interactions with each other, and we’re hoping to explore this further in some different contexts.
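A minimal way to illustrate data sonification in code is a linear mapping from readings to pitches, so that higher consumption literally sounds higher. This is a hedged sketch, not the Electric Acoustic or Powerchord implementation: the readings, function name, and frequency range are illustrative assumptions.

```python
# Sketch of data sonification: map a series of energy readings onto a
# frequency band, so each data point becomes a pitch in Hz.

def to_frequencies(readings, f_min=220.0, f_max=880.0):
    """Linearly map each reading onto a frequency band (Hz).

    The lowest reading maps to f_min, the highest to f_max; the two-octave
    default range (A3 to A5) is an arbitrary, audible choice.
    """
    lo, hi = min(readings), max(readings)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [f_min + (r - lo) / span * (f_max - f_min) for r in readings]

hourly_kwh = [3.2, 2.9, 4.8, 7.5, 6.1, 3.4]  # hypothetical building data
print(to_frequencies(hourly_kwh))
```

A real sonification would then synthesize these frequencies as tones over time; the design interest lies in choosing mappings (pitch, timbre, rhythm, harmony) that make patterns in the data qualitatively perceivable rather than just numerically readable.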

⇧ Ubiquitous Inclusion, by MacKenzie Cherban

⇧ Silent Scene, by Chang Hee Lee

In general, shifts in sensory experiences for interaction design have attracted a lot of interest in our community this year. MacKenzie Cherban’s MDes thesis, Ubiquitous Inclusion (advised by Dan Lockton), examines the design process and the role of technology in relation to the d/Deaf community, building on the affordances of sign language (ASL) and participatory futuring, and arriving at an ecosystem connecting ‘future artifacts’ developed from participants’ ideas, including a ‘machine learning for personal use’ approach to the smart home (above left), trained using Wekinator. Dr Chang Hee Lee (supervised by John Stevens and Dan Lockton), who passed his PhD at the RCA this October, has been investigating synaesthesia and design for the last four years, and as a development from this work created Silent Scene (above right), an exploration of “zero interaction” as a playful mode of experience. In the grand tradition of Claude Shannon and Marvin Minsky’s Ultimate Machine, Silent Scene is “a stationary device that appears to do nothing. However, when there are no humans in its environment — when no sound, motion, or light is detected — it secretly starts to create beams and rays of stunning colors. The device will not function if anyone is near it.” Here’s Chang’s DIS 2018 abstract with Dan Lockton and Ji Eun Kim, and a write-up in Interactions’ Demo Hour. Dr Lee’s fellow RCA Innovation Design Engineering PhD researcher — and medical practitioner — Dr Dave Pao (also supervised by John Stevens and Dan Lockton) is continuing his redesign of electronic medical record interfaces for doctors, using a more visual, qualitative style that enables not just better usability but also higher self-perceived clinician competence.

⇧ Dixon Lo’s ShapeShift demo

⇧ CHI 2018 presentation of Dixon Lo’s Experiential Augmentation project (presented by Dan Lockton)

How can we make use of the affordances of virtual and augmented reality — spatial computing more widely — to create new kinds of interface, with new possibilities for understanding, rather than just adapting existing paradigms? Dixon Lo’s CHI 2018 paper, based on his MDes thesis Experiential Augmentation (advised by Stacie Rohrbach and Dan Lockton), examined qualitative indexical visualizations for AR building on our learned understanding of physical phenomena in the real world, from shadows to floating, arriving at recommendations for designers working in this space. Dan presented Dixon’s paper (see video). A very different approach is being taken in Gray Crawford’s thesis project, Assimilating Hyperphysical Affordances (advised by Dan Lockton and Daragh Byrne), in which he is exploring neuroplasticity in relation to “more unorthodox physical phenomena” in VR — can we learn to interact in ways which are very different to the real world? Are there opportunities for new kinds of interfaces?

⇧ A demonstration of one of Gray Crawford’s experiments

Intelligences
Another theme running through our work this year has been around ‘intelligences’ and the questions of other minds (whether human, animal, or artificial). Within Carnegie Mellon, we’re situated in (and saturated with) an environment strongly flavored with AI development — and indeed, increasing consideration of ethics via an initiative funded by K&L Gates — but interaction design’s engagement with the changing intelligences around us has a lot of potential for critical and generative exploration and development.

Where are the humans in AI?

Where are the humans in AI?

⇧ Projects on show at Where Are the Humans in AI?, May 2018 — the class show for the Environments Studio ‘Intelligence(s) in Environments’

Dan Lockton’s 2018 junior Environments Studio class, Intelligence(s) in Environments — Maayan Albert, Gautam Bose, Emma Brennan, Cameron Burgess, Aisha Dev, Anna Gusman, Monica Huang, Soonho Kwon, Marisa Lu, Jessica Nip, Lucas Ochoa, and Helen Wu — examined intelligence of different kinds, from social interaction to theory of mind to everyday interaction with AI, through practical projects and guest talks focused on investigating, understanding, and materializing intelligence and other invisible and intangible qualitative phenomena and relationships (covered by Chris Togneri here). An excellent group of projects included investigating how people see different web services as analogous to rooms in their home (Maayan Albert), communal spaces as manifestations of others’ thought processes (Emma Brennan), communicating intangible emotions with computers via poetry and sculpture (Monica Huang), physicalizing moral codes and decision-making (Aisha Dev), new visual approaches to end-user programming (Cameron Burgess), Wekinator-trained voice control of 3D modeling (Anna Gusman), and a VR Museum of Taste (Soonho Kwon). See all the projects here. The highest-profile project, Emoto AI, by Marisa Lu, Gautam Bose and Lucas Ochoa, offers an alternative embodiment for phone-based virtual assistants such as Siri, enabling them to transform into a ‘sidekick’ (see also Michal Luria’s work) making use of nonverbal communication cues through expressive motion. Emoto AI received an honorable mention in the Fast Company Innovation by Design Awards 2018, and a paper, ‘Emoto: From Phone to Emotive Robotic AI Sidekick’ by Gautam Bose, Lucas Ochoa, Marisa Lu and Dan Lockton, has been accepted to TEI 2019’s work-in-progress track.

⇧ Emoto AI Sidekick by Marisa Lu, Gautam Bose and Lucas Ochoa

 
The Environments Studio projects received a range of guest critique throughout, including a visit from Bruce Sterling and Jasmina Tesanovic, and culminated with a three-day show, Where Are The Humans in AI?, in May 2018, following which Cameron Burgess, Emma Brennan, Monica Huang, and Gautam Bose exhibited their projects at Data & Society’s Future Perfect event in New York, organized by Ingrid Burrington.

Data & Society Future Perfect

Data & Society Future Perfect

⇧ Emma Brennan and Cameron Burgess demonstrate their projects at the Data & Society Future Perfect event, New York

Going in depth on a specific dimension of our interaction with other intelligences, Meric Dagli’s MDes thesis Designing for Trust (advised by Dan Lockton and Daragh Byrne) examined interaction design for trust in the context of multiple chatbots — developing design guidelines for maintaining and increasing trust in scenarios where multiple virtual e-commerce agents collaborate with each other. Meric won a Kynamatrix Research Network Innovation through Collaboration Grant for his project.

The major question of ‘other minds’ is, of course, how can we ever know how each other thinks? How can we understand other people’s thought processes and emotions, when we have no way of experiencing others’ experiences? In some ways, much of the Lab’s work is about externalizing imaginaries and mental models, or developing tools for imaginaries to be externalized, to enable sharing and discussion. One field where this approach has a particularly practical application is in mental health — using design methods to help people think about and explore creative ways to describe, talk about, and share our own often invisible experiences. According to research compiled by the Wellcome Trust (UK), “one in four people will experience a mental health problem in any given year”, and “75% of people with a mental health problem develop it before the age of 24”. Sales of books on anxiety are “soaring”. Carnegie Mellon students, in common with many people in high-pressure environments, can experience a broad range of mental health issues.

New Ways To Think

⇧ Projects from New Ways to Think: Materializing Mental Health

Yet as a society, we don’t always have good ways of talking about mental health. In New Ways To Think: Materializing Mental Health, an eight-week research studio run by the lab, undergraduates, Master’s students, and PhDs from CMU’s School of Design, School of Art, Human-Computer Interaction Institute, Tepper School of Business, and Integrated Innovation Institute explored how we can adapt participatory design and facilitation methods, often used in user experience, service design, and working with communities, to a mental health context. We believe they have the potential to help people capture qualitative dimensions of their experiences, to make them palpable, to enable discussion, reflection, and peer support. Our initial focus has been working within the Carnegie Mellon community, including receiving very valuable input from the university’s Counseling and Psychological Services, but we hope that the methods developed can be of use more widely through further development. Four projects — Lexicon of Feelings (Aisha Dev, Kailin Dong, Katie Glass, Zhiye Jin, Soonho Kwon, and Jessica Nip), Emotional Modeling (Laura Rodriguez, Katie Herzog, Josh LeFevre, Nowell Kahle, and Arden Wolf), Empathy Rock Garden and Personalized Potions (both by Jen Brown, Carlie Guilfoile, Michal Luria, Uluwehi Mills, and Supawat Vitoorakaporn) — each work with different aspects of mental health, from anxiety and stress to loneliness, to enabling feelings that perhaps don’t have a name yet to be expressed and shared. We are currently working on finding ways to publish what we’ve done so far, and take some of this work further.

Futures
‘New ways to live’ is a dimension of the Lab’s work that brings together imaginaries of futures and some of the design for behavior change work on which Dan’s research was founded. The basic premise is that if we can develop better ways of helping people imagine themselves living and acting differently then this makes larger-scale behavior and practice changes for sustainability easier to achieve, ultimately, for humanity and for the planet. (We draw here on some of the experiential futures framing developed by our CMU colleague Stuart Candy.) Starting in November 2018, with a short course called New Ways To Live: Future Pittsburgh — and continuing in 2019 with a publication project, lab researchers Rachel Gray Alexander and Saloni Sabnis, with students Aisha Dev, Kailin Dong, Monica Huang, Soonho Kwon, Jessica Nip, Nicole Pinto, Tamara Amin, Jen Brown, Jeffrey Chou, Katie Herzog, Laura Kelly, Michal Luria, Ulu Mills, Laura Rodriguez, Devika Singh, and John Zoppina (undergrads, Master’s, PhDs, and staff, from Design, Environmental Engineering, Psychology, Business, Human-Computer Interaction, Professional Writing, and University Advancement) have been developing projects exploring the Pittsburgh of 2030 — speculative (but well-informed) scenes from possible future everyday life and work in the city, shot through a more realistic lens of the kinds of small businesses and cultural phenomena that are present here rather than the entirely shiny visions of automation that are sometimes proposed. This could be relevant to many rust-belt cities in the US, and former industrial towns elsewhere too. More on this project in due course.

The Imaginaries Lab, represented by Dan Lockton, is excited to be a founding member of Plurality University (U+), a Paris-based global collective “that detects, connects, and federates people or organizations who mobilize the resources of the imaginary to broaden the scope of thinkable futures: activist artists and sci-fi authors, speculative designers, reflexive utopians, creative futures thinkers, engaged researchers, etc”. We’re in some excellent company and looking forward to building on ideas developed at the founders’ meeting in Paris at the end of November.

New Metaphors workshop at Plurality University, Paris

New Metaphors workshop at Plurality University, Paris

⇧ Plurality University Founders’ Meeting, New Metaphors workshop

Looking ahead

We’re actively seeking collaborations, projects where we can contribute, and opportunities to apply for funding together. We’re also pretty experienced at running workshops, short courses and projects in both commercial and academic contexts. If you’re interested in any of the ideas or methods we’re working on, or think we might be able to work together in 2019, internationally or within the US, please do get in touch. As we look to the future, the Lab is exploring the options for new funding models, host institutions and partners, inside and outside of academia.

Other activities from the Lab in 2018

Finally, we should mention some of the other activities the Lab’s been involved with in 2018. We’ve been pleased to welcome guest speakers for the classes we run, both in-person and virtually, including Simone Rebaudengo, Bruce Sterling, Jasmina Tesanovic, Emily LaRosa, David Danks, Madeleine Elish, Deepa Butoliya, Jill Simpson, Tobias Revell, Cennydd Bowles, Viviana Ferrer-Medina, Cheryl Dahle, and Emily Blaze — thank you all for your time. Dan Lockton and Ahmed Ansari’s MDes Seminar III class have published a great set of articles about the research they’re doing. Dan Lockton has talked about the Lab’s work for TEDx University of Pittsburgh, Carnegie Mellon Human-Computer Interaction Institute (thanks to Brad Myers), IxDA Pittsburgh (thanks to Simon King), CMU’s dSHARP digital humanities group (thanks to Scott Weingart), CMU School of Architecture’s ‘Introduction to Ecological Design & Thinking’ (Dana Cupkova), and for London College of Communication’s MA Communication Design (thanks to Tobias Revell). In terms of professional service, Dan has been an Associate Chair for the CHI 2018 Design subcommittee, an invited discussant at PhD by Design in Limerick, a jury member for the IxDA Interaction Awards 2019, and a guest critic / respondent for CMU School of Architecture’s EX-CHANGE in May 2018. PhD advisees Chang Hee Lee (RCA), Michael Arnold Mages (CMU) and Deepa Butoliya (CMU) have all passed their PhDs and are embarking on academic careers, at the RCA, Northeastern, and Stamps (University of Michigan) respectively.

We’ve published a bit during the year, mostly at conferences:

  • Joanna Boehnert, Dan Lockton, and Ingrid Mulder (2018). ‘Editorial: Designing for Transitions’. DRS 2018: Design Research Society, 25–28 June 2018, Limerick.
  • Sarah Foley and Dan Lockton (2018). ‘Service Fictions through Actant Switching’. DRS 2018: Design Research Society, 25–28 June 2018, Limerick.
  • Chang Hee Lee, Dan Lockton, and Ji Eun Kim (2018). ‘Exploring Cognitive Playfulness Through Zero Interactions’. DIS 2018: ACM Conference on Designing Interactive Systems, 9–13 June 2018, Hong Kong.
  • Chang Hee Lee, Dan Lockton, David Verweij, David Kirk, Kay Rogage, Abigail Durrant, Aubree Ball, Audrey Desjardins, Adam Haar Horowitz, Ishaan Grover, Pedro Reynolds-Cuéllar, Oscar Rosello, Tomás Vega, Abhinandan Jain, Cynthia Breazeal, and Pattie Maes (2018). ‘Demo Hour’. Interactions 25, 6 (October 2018), 10–13. DOI: https://doi.org/10.1145/3279993
  • Dixon Lo, Dan Lockton, and Stacie Rohrbach (2018). ‘Experiential Augmentation: Uncovering The Meaning Of Qualitative Visualizations When Applied To Augmented Objects’. CHI 2018: ACM Conference on Human Factors in Computing Systems, 21–26 April 2018, Montreal.
  • Dan Lockton (2018). ‘Old Rope? Laing’s Knots and Bateson’s Double Binds in Systemic Design’. RSD 7: Relating Systems Thinking and Design Symposium, 24–26 October 2018, Turin.
  • Dan Lockton and Stuart Candy (2018). ‘A Vocabulary for Visions in Designing for Transitions’. DRS 2018: Design Research Society, 25–28 June 2018, Limerick.
  • Dan Lockton, Some Cracks In The Paving, and Water Trapped In The Window Of A British Rail Class 450 Train Carriage (2018). ‘Apophenia As Method—Or, Everything Is Either A Metaphor Or An Analogue Computer’. Disruptive Improvisation: Making Use of Non-Deterministic Art Practices, workshop at CHI 2018: ACM Conference on Human Factors in Computing Systems, 21–26 April 2018, Montreal.
  • Dave Pao, John Stevens, Dan Lockton, and Netta Weinstein (2018). ‘Electronic Medical Records: Provotype visualisation maximises clinical usability’. EVA London 2018: Electronic Visualisation & the Arts, 10–12 July 2018, London.
  • Dave Pao, John Stevens, Dan Lockton, and Netta Weinstein (2018). ‘Design better EPR: a mixed methods survey and “test drive” comparing clinical usability across two systems and a provotype interface’. HIV Medicine 19, S99–S100.


Thanks

Thanks to everyone who’s helped this year, and all the students and participants in our projects and classes, to event organizers who’ve taken us all over the world, to Carnegie Mellon colleagues who’ve understood what we’re trying to do, and to our long-suffering families. Happy New Year to all: 2019 will be better.


Fictions Matter Too: A Vision for an Imaginaries Lab in Design

“If men define situations as real, they are real in their consequences”
William Thomas and Dorothy Swaine Thomas, 1928 — later named the ‘Thomas Theorem’

Billboard in Bloomfield, Pittsburgh, PA, 2017

The events of the last couple of years, from Brexit to Trump, have been a vivid demonstration for our time of the power of the imaginary to affect human affairs. Not for the first time, of course — but amplified in an unprecedented way by algorithms, bots, targeting, and strategic use of personal data via social media — huge decisions are being influenced by imagined versions of what ‘reality’ is.

We cannot avoid trying to work out how to make sense of terms such as alternative facts, fake news, and post-truth as being part of everyday discourse, and incorporating them and their effects into our own models of how the world works. As Maciej Ceglowski says, people “will happily construct alternative realities for themselves, and adjust them as necessary to fit the changing facts,” and this is greatly aided by the technological infrastructures being employed by those who want to control public opinion. The powerful are, as always, those who can create the simplest, easiest-to-spread, most superficially persuasive images, myths, conceptions, metaphors, frames, cause-and-effect pairings, and indeed stories, in the public mind. We shouldn’t be surprised: it’s not like it hasn’t happened before, in other eras, using different means, and we all know the outcomes of that. Fictions are political, and they matter.

Shared fictions as central to society

If I were better informed by sociological theory, I could make more insightful points here about Arjun Appadurai’s consideration of “the imagination as a social practice… a form of negotiation between sites of agency (‘individuals’) and globally determined fields of possibility”, or about the concept of imaginaries in a sociotechnical sense — the specific concept developed by Sheila Jasanoff, Sang-Hyun Kim, and others around the ways in which certain dominant ‘shared’ visions of societal futures, centred around certain types of (technological) progress, have effects on what happens in the present — “representations of how the world works — as well as how it should work”. It’s arguable that understanding our shared (or not) visions of what climate change, or artificial intelligence, or immigration, or identity, or law, or ‘sovereignty’, or even countries themselves, are is important in understanding our current situation and trajectory — and also that, historically, these visions have played potentially vital roles in the ways in which human civilisations and societies developed. Yuval Noah Harari suggests that “Any large-scale human cooperation — whether a modern state, a medieval church, an ancient city or an archaic tribe — is rooted in common myths that exist only in people’s collective imagination”, and that this is partly due to the emergence of the ability to describe the imaginary in language, to “transmit information about things that do not exist at all… entities that [people] have never seen, touched or smelled.”

“We risk being the first people in history to have been able to make their illusions so vivid, so persuasive, so ‘realistic’ that they can live in them.”
Daniel J. Boorstin, The Image: A Guide to Pseudo-Events in America, 1962.

Design and imaginaries

The idea of design (and art more broadly) as being a different form of language which can also describe the fictional or imaginary, making it real enough to be addressable, to be considered and critiqued and reflected on, is interesting. Design has the power to make visible and tangible imagined ‘better’ (or worse) situations, to design artefacts as ‘tokens of better ages’, to apply ideas of utopia as a method, and to inspire and open up vistas — if not always actual maps — towards different futures, through speculation and design fiction. What do designers do, if not, in some sense, give us experiential pockets of imaginaries — both our own, reflected back at us, and visions of different futures, fictional at present? I find Clive Dilnot’s notion of design simultaneously stating “This!” and asking “This?” to be quite a clear way of thinking about this, because the ‘This?’ implicitly allows for speculation which is critical, which we may interpret as warnings or at least provocations to think further about what the consequences of the proposition in question might be. By making our own imaginaries (more) visible, and doing the same for others’, whether new or old, design can be a translator between minds and ideas and the world. This is where I see that design essentially makes fictions matter (dual meaning intended).

    “Dreams are true while they last, and do we not live in dreams?”
    Tennyson, The Higher Pantheism, 1867

    There can be a self-fulfilling nature to imaginaries, as the Thomas Theorem implies. If we believe something to be real, and act as if it is real, and build institutions and infrastructures around that ‘reality’, the effect may be the same as if it had been real in the first place. Fictions become fact. For example, Stephen Metcalf discusses the self-fulfillingness of imagining society as a market: “The more closely the world can be made to resemble an ideal market governed only by perfect competition, the more law-like and “scientific” human behaviour, in the aggregate, becomes.” In a design context, the idea of a kind of circular causality in which designers’ imaginaries (models, or even stereotypes, we might say), of people’s lives end up being designed into systems which then effectively make those imaginaries real is not uncommon (I looked briefly at this kind of effect in this piece for the recent Science Gallery Dublin staging of Design and Violence.) There’s something here close to Anne-Marie Willis’s idea of ontological designing, or various formulations of the “We shape our X, and then our X shape us” idea by Churchill/McLuhan/Bill Mitchell and others — we shape our imaginaries, and then, through acting on them, designing systems around them, designing systems as if they were real, they shape our actions.

    Understanding understanding

    In design, human-computer interaction, and human factors research, both academic and applied, we often investigate the mental models people have, or appear to have, when they are using a piece of technology, or a system. We try to find out how they think something works, or how they expect it to work, from driverless cars to government, to heating systems, to website structure, and, learning from those insights, try to (re)design those systems, or at least interfaces to those systems. The redesigns either try to match better how people think something works, or — more rarely but more interestingly — change those models.

    “When we don’t know how a thing works, we make it up”.
    “We can only trust something if we think we know how it works”.
    Louise Downe (now Head of Design for the UK Government), Chicken Shops, Platforms and Chaos, 2013.

    Most of the research I’ve done over the last ten years started with questions of how people’s behaviour is influenced by the design of the products, services, and environments they use. It has since moved towards using design methods to understand people’s situations: the social and environmental contexts in which people live and make decisions, how they are thinking about what they’re doing and the world more widely, and what agency they have to change things. Understanding understanding (or at least trying to) — investigating how people imagine and make sense of the world — seems as though it ought to be central to any form of design research which claims to be human-centred, and the generative, future-facing complement is enabling people to have new understandings, new imaginaries. If you’ve followed any of my more recent work, it has been a somewhat patchy way of gradually — driven by the opportunities afforded by different funded projects and teaching needs — addressing some of these questions of current and new imaginaries, from investigating mental imagery and new kinds of display for energy, to forms of design fiction as a way of enabling students to explore consequences and ambiguity, re-imagine what interactions with AI could be, and materialise invisible phenomena.

    “The future is not empty. The future is loaded with fantasies, aspirations and fears, with persuasive visions of the future that shape our cultural imaginaries.”
    Ramia Mazé, ‘Forms and Politics of Design Futures‘, 2014

    What the Imaginaries Lab aims to do

    Part of my reason for joining Carnegie Mellon a year ago was the opportunity to build a research (and teaching) platform which explores exactly these kinds of ideas in a more structured way, through a design lens. The Imaginaries Lab is small, and so far internally funded at Carnegie Mellon, but since the start of 2017, a team of graduate research assistants and I have been looking at people’s imaginaries of local government in Pittsburgh (and their agency in relation to it), ways of externalising mental imagery through landscape metaphors, and approaches to new kinds of qualitative interface. We had a ‘soft launch’ in May, during Carnegie Mellon’s Design Week, and in the coming year will be expanding and continuing these projects and developing new collaborations and directions. One of these already announced is Electric Acoustic, a situated energy sonification installation funded by the Carnegie Mellon College of Fine Arts, but there are also some other interesting ideas in the pipeline.

    So, what’s the vision for the Lab? I see us concentrating on two big (linked) challenges: New ways to understand, and New ways to live. In both cases, we’ll be creating tools to support people’s imagining, both what they already imagine (which is still important), but also helping people imagine in new ways. What starts as fiction can become real, explorable, experiential. We will be creating new fictions, but also creating tools to help people understand and deconstruct the fictions that are already having an effect on them. The Lab’s work cannot help but be political: questions of understanding and futures are inextricable from questions of worldview, belief in how the world is and how it should be.

    New ways to understand encompasses ideas such as creating new metaphors (to use Mary Catherine Bateson’s term), new kinds of interface, new ways of explaining and visualising systems and the relationships between ideas, and using design methods to help people have agency to use these new ways of understanding. This builds on projects such as Powerchord, Drawing Energy, Qualitative Interfaces, Mental Landscapes, Materialising the Invisible, and aspects of Civic Visions, taking some of these ideas in new directions and finishing or consolidating some of the work we have already done. One particular domain that seems especially worth exploring from a design point of view is imaginaries around artificial intelligence and automation — to offer some ethical perspectives that could help designers working in the field, but also to “develop alternative narratives to technological futures” in Dunne & Raby’s words. More widely, new ways to understand could have a substantially activist stance, helping counter the intentional fictions of the post-truth world and giving people agency to challenge and change things, in their communities and beyond.

    New ways to live is more explicitly about linking imaginaries to everyday life (and indeed changes in practices and behaviours) through prototyping new ways of living — and helping people imagine new ways of living, both at a household and societal level (thus linking more explicitly to the ‘sociotechnical imaginaries’ notion in sociology as discussed earlier). What is it like to live in a different way, with different premises to your everyday routines? How can design fictions that you can actually use (or live ‘in’), together with new tools for understanding the world, affect what you do? This builds on the work I did around living labs and design for behaviour change, intersecting with some of the ideas in Carnegie Mellon’s transition design research area, and learning from the experiential futures work of futurists such as my new Carnegie Mellon colleague Stuart Candy. ‘New ways to live’ is going to involve some bigger kinds of projects, with more ambitious goals.

    As a Lab, we will grow slowly — I don’t want to be spending the entirety of my time looking for funding for the next project — but one of the things that excites me about doing this is that it is, in itself, an exploration of the power of imaginaries. Putting the lab’s name on the office door and in my email signature, and treating it as a real thing within the university and externally, has made it a real thing, in a way which was refreshingly simple. It’s not now a fiction, but once upon a time, it was — as with every other design project and every other human endeavour. We can bring different worlds into being.

    Imaginaries Lab, Carnegie Mellon School of Design. Imaginaries Lab team, May 2017

    Above, right: The Imaginaries Lab team, May 2017. Left to right: Silvia Mata-Marin, Dan Lockton, Delanie Ricketts, Nehal Vora, Theora Kvitka, Ashlesha Dhotey

    Parts of this article are based on talks I have given this year at Cornell University (the Hillier Lecture) and at the Universidad del Desarrollo in Santiago.

    I’d like to thank Delanie Ricketts, Theora Kvitka, and Nehal Vora for their work with the Lab on its first few projects and wish them the best of luck in their new careers, thank Sarah Foley for her summer research work on service fictions, welcome back Ashlesha Dhotey and Silvia Mata-Marin, and also welcome our new research assistants joining this fall, Devika Singh, Matt Prindible, and Shengzhi Wu. Thanks too to Sebastian Deterding for putting me on to the Thomas Theorem, which expresses succinctly something that otherwise would have led to a rambling explanation on my part, and to Cameron Tonkinwise and Peter Scupelli for encouraging me to put the name on the door.

    Thinking About Things That Think About How We Think

    Cross-posted from the Environments Studio IV blog, Carnegie Mellon School of Design

    We often hear the phrase ‘intelligent environments’ used to describe spaces in which technology is embedded, in the form of sensors, displays, and computational ability. This might be related to Internet of Things, conversational interfaces or emerging forms of artificial intelligence.

    But what does ‘intelligence’ mean? There is a long history of attempts to create artificial intelligence — and even to define what it might mean — but the definitions have evolved over the decades in parallel with different models of human intelligence. What was once a goal to produce ‘another human mind’ has perhaps evolved into trying to produce algorithms that claim to ‘know’ enough about how we think to be able to make decisions about us, and our lives. What we have now in ‘intelligent’ or ‘smart’ products and environments is one particular view of intelligence, but there are others, and from a design perspective, designing our interactions with those ‘intelligences’ as they evolve is likely to be a significant part of environments design in the years ahead. Is there an opportunity for designers to explore different kinds of interactions, different theories of mind, or to envisage new forms of intelligence in environments, beyond the dominant current narrative?

    Building on the first two projects’ treatment of how humans use environments, and how invisible phenomena can be materialized, for this project the brief was to create an environment in which visitors can experience different forms of ‘intelligence’, through interacting with them (or otherwise experiencing them). The project was not so much about the technical challenges of creating AI, but about the design challenges of enabling people to interact with these systems in everyday contexts. So, quick prototyping and simulation methods such as bodystorming and Wizard of Oz techniques were entirely appropriate—the aim was to provide visitors to the end-of-semester exhibition (May 4th, 2017) with an experience which would make them think, and provoke them to consider and question the role of design in working with ‘intelligence’.

    More details, including background reading, in the syllabus.

    We considered different forms of behaviour, conversation, and ways of thinking that we might consider ‘intelligent’ in everyday life, from being knowledgeable, to being able to learn, to solving problems, to knowing when not to appear knowledgeable, or not to try to solve problems. If one is thinking about how others are thinking, when is the most intelligent thing to do actually to do nothing? Much of what we considered intelligent in others seemed to be something around adaptability to situations, and perhaps even adaptability of one’s theory of mind, rather than behaving in a fixed way. We looked at Howard Gardner’s multiple intelligences, with the idea of interpersonal, or social, intelligence seeming especially interesting from a design and technological point of view — more of a challenge to abstract into a set of rules than simply demonstrating knowledge, a condition where the feedback necessary for learning may not itself be clear or immediate, and where the ability to adjust the model assumed of how other people think is pretty important. How could a user give social feedback to a machine? Should users have to do this at all?

    Each of the three resulting projects considers a different aspect of ‘intelligence’ from the perspective of people’s everyday interaction with technologies in the emotionally- and socially-charged context of planning a party or social gathering, and some of the issues that go with it.

    Gilly Johnson and Jasper Tom’s SAM is an “intelligent friend to guide you through social situations”, planning social gatherings through analysing interaction on social networks, but which also has Amazon Echo-like ordering ability. It’s eager to learn—perhaps too eager.




    Ji Tae Kim and Ty Van de Zande’s Dear Me, / Miyorr takes the idea that sometimes intelligence can come from not saying anything — from listening, and enabling someone else to speak and articulate their thoughts, decisions, worries, and ideas (there are parallels with the idea of rubber-duck debugging, but also ELIZA). In this case, the system is a kind of magic mirror that listens, extracts key phrases or emphasised or repeated ideas, and (in conjunction with what else it knows about the user), composes a “letter to oneself” which is physically printed and mailed to the user. Ty and Ji Tae also created a proof-of-principle demo of annotated speech-detection that could be used by the mirror.



    Chris Perry’s Dialectic is an exploration of the potential of discourse as part of decision-making: rather than a single Amazon Echo or Google Home-type device making pronouncements or displaying its ‘intelligence’, what value could come from actual discussion between devices with different perspectives, agendas, or points of view? What happens if the human is in the loop too, providing input and helping direct the conversation? If we were making real-world decisions, we would often seek alternative points of view—why would we not want that from AI?

    Chris’s process, as outlined in the demo, aims partly to mirror the internal dialogue that a person might have. Pre-recorded segments of speech from two devices (portrayed by paper models) are selected from (‘backstage’) by Chris, in response to (and in dialogue with) the user’s input. There are parallels with “devices talking to each other” demos, but most of all, the project reminds me of a particular Statler and Waldorf dialogue. In the demo, the devices are perhaps not seeking to “establish the truth through reasoned arguments” but rather to help someone order pizza for a party.



    Exploring Qualitative Displays and Interfaces

    Windsock on Burgh Island, Devon

    by Dan Lockton, Delanie Ricketts, Shruti Aditya Chowdhury (Imaginaries Lab, Carnegie Mellon School of Design) and Chang Hee Lee (Royal College of Art)

    Much of how we construct meaning in the real world is qualitative rather than quantitative. We think and act in response to, and in dialogue with, qualities of phenomena, and relationships between them. Yet, quantification has become a default mode for information display, and for interfaces supporting decision-making and behaviour change.

    There are opportunities within design and human-computer interaction for qualitative displays and interfaces: as approaches to information presentation, and as aids to help people explore their own thinking and relationships with ideas. Here we attempt one dimension of a tentative classification to support projects exploring opportunities for qualitative displays within design.

    This blog post is a slightly edited version of a late-breaking work submission presented at CHI’17, May 06—11, 2017, Denver, CO, USA, and published in the CHI Extended Abstracts at http://dx.doi.org/10.1145/3027063.3053165

    Download this article as a PDF.

    Water trapped in train carriage door is a form of qualitative display of the train’s acceleration, deceleration and inertia.

    Introduction

    Outside of the digital, we largely live and think and act and feel in response to, and in dialogue with, the perceived qualities of people, things and phenomena, and the relationships between them, rather than their number.

    Much of our experience of—and meaning-making in—the real world is qualitative rather than quantitative. How friendly was she? How tired do I feel right now? Who’s the tallest in the group? How windy is it out there? Which route shall we take to work? How was your meal? Which apple looks tastier? Which piece of music best suits the mood? Do I need to use the bathroom? We deal with quantities mainly in relation to concrete things—two coffees, half a biscuit, three children—but hardly ever in relation to abstract concepts: rarely 0.5 loves or 6.8 sadnesses.

    And yet, quantification has become the default mode of interaction with technology, of display of information, and of interfaces which aim to support decision-making and behaviour change in everyday life [27]. We need not elaborate here the phenomena of the quantified self [36, 42] and personal informatics more widely [24, 12], except to note the prevalence of numerical approaches (Figure 1) and the relative unusualness of non-numerical, pattern-based forms (Figure 2).

    Figure 1: A typical form of quantitative interface: a Fitbit’s display of number of steps taken.
     

    Figure 2: The Emulsion activity tracker, by Norwegian design studio Skrekkøgle, contains two immiscible liquids. Movement splits the colored liquid into smaller drops, making patterns.
     

    But what might we be missing through this focus on quantification? It seems as though there might be opportunities for human-computer interaction (HCI) to explore forms of qualitative display and interface, as an approach to information presentation and interaction, as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency with systems.

    In this article, we discuss qualitative displays and interfaces, and attempt one dimension of a tentative classification supporting design projects exploring this space.

    Leaves as a qualitative interface for the wind

    What could qualitative displays and interfaces be?

    Here we define a qualitative display as being a way in which information is presented primarily through representing qualities of phenomena; a qualitative interface enables people to interact with a system through responding to or creating these qualities. ‘Displays’ are not necessarily solely visual—obvious to say, perhaps, but not always made explicit.

    Before exploring some examples, we will look at some theoretical issues. The terms ‘qualitative interface’ or ‘qualitative display’ are not commonly used outside of some introductory human factors textbooks, but forms of interface along these lines are found in lots of projects at CHI, TEI, DIS, Ubicomp (all academic human-computer interaction conferences) and other venues, without authors explicitly drawing our attention to the concept—it is perhaps just too obvious and too broad to merit specific comment in HCI and interaction design research. But, assuming the idea does have value, what are some characteristics?

    A human face is a qualitative interface, perhaps the earliest we encounter [e.g. 40] along with the voice. We learn to read and interpret emotions in others’ expressions, to recognize commonalities and differences across people, to make inferences about internal and external factors affecting the person, and to monitor the effects we or others are having on that person. We understand that the face and voice and our ability to read them are abstractions, interpretations, not perfect knowledge, but a model which enables us to make decisions in conjunction with our reading of our own emotions.

    In a sense, the whole world, as we perceive it, is a very complex qualitative interface. The most accurate model of a phenomenon is the phenomenon itself, but it is only useful to us to the extent we can understand what we are observing, detect the patterns we need to, and recognize that we are constructing the ‘reality’ we perceive. We are always creating a model [14] and that model is necessarily not reality itself; all displays of information are representations of a simplified model of phenomena in the world. Levels of indexicality [32], drawing on Charles Peirce’s semiology, are relevant here, addressing the “causal distance” between the phenomenon and how it is displayed.

    One advantage of interfaces seeking to provide a qualitative display is that they have the potential to enable the preservation of at least some of the complexity of real phenomena—representing complexity without attenuating variety [2]—even if we do not pay attention to it until we actually need to, in much the same way as certain phenomena in the real world become salient only when we need to deal with them. Looking out of the window or opening the door to see and feel and hear what the weather is like outside presents us with complex phenomena, but we are able to interpret what actions we need to take, in a more experientially salient way than looking at some numbers on a weather app.

    Figure 4: It’s easy to imagine the feel of the wind on ourselves when we watch this scarf tied around a lamp post flapping in the breeze. Figure 5: A windsock gives us more sense of the wind’s qualities than a numerical display.
     

    The feel of the wind on our skin, or watching the wind affect the environment, gives us a better sense of whether we need a scarf or coat than knowing the quantitative value of the wind speed and direction (Figures 3, 4 and 5). We can see, hear and feel not just wind speed and direction, but other qualities of it—is it continuous? in short gusts? damp, dry?

    Qualitative displays could enable us to learn to recognize patterns in the world (and in data sets), and the characteristics of state changes, similarly to benefits identified in sonification research [35]. We should consider that ‘qualitative’ does not simply imply the absence of numbers. The examples we use in this paper might involve elements that could easily be quantified (rain drops, ink in a pen) but are given meaning through their display in a way that emphasises a quality or characteristic of the phenomenon. We recognise that this is potentially an ambiguous area, and are open to evolving the concept.

    A possible spectrum of one dimension of qualitative displays: directness of connection

    Here’s a tentative spectrum of one dimension of qualitative displays, relating phenomena to the display in terms of how directly they are connected.

    (Levels 0—1 involve direct use of a real-world phenomenon in the display; from about Level 2 up to Level 5, they involve increasing degrees of translation or transduction of the phenomena. This parallels ideas in indexical visualisation [32] and embedded data representation [41] in terms of ‘situatedness’ or causal distance to phenomena.)

    • Level 0: The phenomenon itself ‘creates’ the display directly
    • Level 1: The display is an ‘accidental’ side-effect of the phenomenon
    • Level 2: The side-effect is ‘incorporated’ into a display that gives it meaning
    • Level 3: The display is a designed side-effect of the phenomenon
    • Level 4: Some minor processing of the phenomenon creates the display
    • Level 5: Major processing of the phenomenon creates the display

    Figure 6: Some examples of displays from Levels 0, 1 and 2. Level 0: The pattern of raindrops hitting a translucent umbrella—frequency, coverage, and sound—directly creates a ‘rain display’ for the user, providing insight into the current state and enabling decisions about whether the umbrella is still needed; City lights create a display showing the shape of the city’s districts and indicator of population density; Water trapped in a train carriage window moves as the train ac-/de-celerates, creating a dynamic display of the train’s motion; A transparent pen is a physical progress bar for the amount of ink remaining—it could be quantified, but it is perhaps the quality of being not-yet-run-out which matters to the user. Level 1: A worn patch on a map accidentally provides a display of ‘you are here’; Use marks [5] from previous users demonstrate how to use a swipe-card for entry to a building; A spoon worn through decades of use is an accidental display of the way in which it has been used [31]; Footprints in the snow ‘accidentally’ provide a display of previous walkers’ paths. Level 2: ‘This Color For Best Taste’ label gives ‘meaning’ to the colour of a mango’s skin for the consumer (Photo used with permission of Reddit user /u/cwm2355); Writing ‘Clean Me’ or other messages in dust on a car gives meaning to the dusty property; Admiral Robert Fitzroy’s Storm Glass, as used on the voyage of the Beagle (1831—6), incorporates crystals whose changing appearance was believed to enable weather forecasting (Photo: ReneBNRW, Wikimedia Commons, public domain dedication); George Merryweather’s Tempest Prognosticator (1851[30]) incorporates “a jury of philosophical councillors”, 12 leeches whose movement on detecting an approaching storm causes a bell to ring (Photo: Badobadop, Wikimedia Commons, CC-BY-SA).
    Figure 7: Some examples of displays from Levels 3, 4 and 5. Level 3: IceAlert is designed so that freezing temperatures cause the blue reflectors to rotate to become visible; A ‘participatory bar chart’ by Dan Lockton along the lines of [22, 33, 16], designed so that ‘voting’ increases the visible height of the bar, though the votes are not numbered; A non-numerical weighing scale by Chang Hee Lee designed so liquid trapped under glass changes shape; Toilet stall door lock designed so display rotates from ‘Vacant’ to ‘Engaged’—the position of the lock itself gives us a display of actionable information. Level 4: Chronocyclegraphs (1917) by Frank and Lillian Gilbreth, tracing manual workers’ movements [10] (Photo from [15], Archive.org, out of copyright); Live Wire (Dangling String) by Natalie Jeremijenko (1995)[39] moved a wire in proportion to local network traffic; Melbourne Mussel Choir, also by Natalie Jeremijenko with Carbon Arts [6] uses mussels with Hall effect sensors to translate the opening and closing of their shells into music; Availabot (2006), by Schulze & Webb, later BERG [3], is a USB puppet which “stands to attention when your chat buddy comes online”. Level 5: Powerchord by Dan Lockton [29] provides real-time sonification of electricity use, translating it into birdsong or other ambient sound; Immaterials: Ghost in the Field by Timo Arnall [1] visualizes “the three-dimensional physical space in which an RFID tag and a reader can interact with each other”; Ritual Machine 2 by the Family Rituals 2.0 project [23] uses patterns on a flip-dot display to visualize the countdown to a shared event for two people; Tempescope by Ken Kawamoto [21] visualizes weather conditions elsewhere in the world through re-creating them in a tabletop display (Photo used from Tempescope Press Kit).
     

    The boundaries between levels here are dependent on observers’ interpretations of what is signified (whether an effect is accidental or deliberate is a common question in design (teleonomy [25])). Nevertheless, this spectrum permits a classification of some examples and is being applied by the authors in undergraduate design studio projects. We note the absence of screen-based examples: this is not intentional, and we welcome adding relevant examples. There are many intersecting research areas we aim to explore; in current HCI research, the most relevant are data physicalisation, embedded data representation, tangible interaction, sonification, and glanceable displays.

    The work of Yvonne Jansen, Pierre Dragicevic and others [20] in data physicalisation, including compilation of examples, and embedded data representation [41], provides us with many instances of qualitative display, mostly at what we are calling Levels 2—5; likewise, development of ubiquitous computing, tangible interaction and tangible user interfaces [39, 18, 17] and Hiroshi Ishii’s subsequent vision of tangible bits [19] offers a huge set of projects, many of which provide qualitative interfaces for data or system interaction (usually at Levels 4—5).

    Sonification [35] and glanceable displays [e.g. 9, 34] also offer us diverse sets of examples often using non-numerical representation, also largely at levels 4—5. As noted earlier, qualitative does not just mean non-quantitative, and the boundaries may be blurred: if a sonification directly maps numerical values to tones, is it much different to an unlabelled line chart? Or are sparklines [37], for example, a way of turning quantitative data into a form of qualitative presentation?
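    The ‘direct mapping’ case in question can be made concrete with a minimal parameter-mapping sketch (a generic illustration of the technique, not the implementation of any system mentioned here, and with made-up values): each data value is mapped linearly onto a pitch range, exactly as a line chart maps it onto a y-axis position.

```python
# Minimal parameter-mapping sonification: a linear map from data values
# to frequencies, the audio analogue of plotting values on an unlabelled
# y-axis. (Illustrative sketch; readings and ranges are invented.)

def value_to_freq(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Linearly map a data value onto a frequency range in Hz."""
    span = v_max - v_min
    t = 0.0 if span == 0 else (value - v_min) / span
    return f_min + t * (f_max - f_min)

readings = [120, 450, 300, 900, 640]  # hypothetical sensor values
tones = [value_to_freq(r, min(readings), max(readings)) for r in readings]
```

    Because such a mapping preserves only relative magnitude, listening to the resulting tones conveys much the same information as glancing at an unlabelled line chart, which is precisely the question raised above.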

    Even with a quantitative display, how a person interprets it may have a qualitative dimension: Figure 8 shows an electricity monitor used by a study participant [28] who accidentally set it to display kg CO2/day equivalent; this “meant nothing” to her but she interpreted the display such that “>1” meant “expensive”. ‘Annotations’ of values as users construct their own meaning [11] may fit here; the aim must, however, be to avoid the kind of reductive ‘qualitative’ nature of a limited set of labels [13].
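    The householder’s rule of thumb is, in effect, a threshold mapping from a quantity onto a small set of qualities. A minimal sketch of that kind of mapping (our own illustration, with invented band values, not the behaviour of the actual device in the study):

```python
# Threshold mapping from a numeric reading to a qualitative label,
# in the spirit of the householder's '>1 means expensive' rule.
# (Band boundaries and labels here are invented for illustration.)

def qualitative_label(value, bands, fallback):
    """Return the label of the first band whose upper limit the value does not exceed."""
    for upper, label in bands:
        if value <= upper:
            return label
    return fallback

bands = [(1.0, "fine")]  # readings up to 1 kg CO2/day read as 'fine'
print(qualitative_label(0.8, bands, "expensive"))  # prints: fine
print(qualitative_label(1.4, bands, "expensive"))  # prints: expensive
```

    The design question is then where such bands come from: imposed by a designer, they risk the reductiveness noted above; constructed by users themselves, they become a form of annotation.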

    Figure 8: A quantitative electricity display that was used ‘qualitatively’ by a householder (see text). Figure 9: An example of MONIAC, the Phillips Machine, at the Reserve Bank of New Zealand (Photo by Kaihsu Tai, Wikimedia Commons, public domain dedication).
     

    Analogy and metaphor are important here, and the almost-forgotten field of Analogue Computing offers us an intriguing perspective. By “build[ing] models that created a mapping between two physical phenomena” [7], some analogue computers effectively operated as ‘direct’ displays of an analogue of the ‘original’ phenomenon—a kind of meta-level 2 type qualitative display, with devices such as the 1949 Phillips Machine [4] (Figure 9), which performed operations on flows of coloured water to model the economy of a country, enabling an interactive visualization of a system in operation as it operates (there are parallels with Bret Victor and Nicky Case’s work on explorable explanations [38, 8], and the development of visual programming languages).
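    The stock-and-flow logic that the Phillips Machine enacted in water can be caricatured in a few lines of code (a deliberately toy discrete-time loop of our own devising, not Phillips’s actual hydraulic model): a ‘tank’ of savings fills from income and drains through spending, and the level settles towards an equilibrium just as the water levels in MONIAC did.

```python
# Toy stock-and-flow loop, loosely in the spirit of MONIAC's water circuit.
# All quantities and rates are invented for illustration.

def simulate(steps, income=100.0, spend_rate=0.8, savings=0.0):
    """Track the 'savings tank' level over discrete time steps."""
    history = []
    for _ in range(steps):
        available = savings + income       # water entering the tank
        spending = spend_rate * available  # outflow through the spending valve
        savings = available - spending     # what remains in the tank
        history.append(savings)
    return history

levels = simulate(60)  # the level settles towards an equilibrium of 25.0
```

    The point of the hydraulic original, of course, was that the settling happened visibly and continuously, as an analogue of the phenomenon rather than a printout of numbers about it.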

    Other areas of pertinent research and inspiration are synaesthesia and mental imagery: sensory overlaps, fusions and mappings offer a fertile field for exploring qualitative displays of phenomena.

    Conclusion: What use is all of this?

    We’re interested in using qualitative displays and interfaces for supporting decision-making, behaviour change and new practices through enabling new forms of understanding—as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency with the systems around them [26]. Projects using qualitative displays are unlikely simply to be de-quantified ‘conversion’ of existing numerical displays; instead, the aim will be to make use of the approach to represent and translate phenomena appropriately, in ways which enable users to construct meaning and afford new ways of understanding, enabling nuance and avoiding reductiveness.

    The spectrum of the ‘directness’ dimension introduced here provides a possible starting point for this work, offering a framework for analysing examples and suggesting ways of handling phenomena to be displayed. It is currently being used by the authors to brief an undergraduate design studio project on materialising environmental phenomena to reveal hidden relationships. We welcome the opportunity to learn from others who have thought about these kinds of ideas, to inform our future explorations of this area.

    Acknowledgements

    Thanks to Dr Delfina Fantini van Ditmar, Dr Laura Ferrarello, Flora Bowden, Gyorgyi Galik, Stacie Rohrbach, Ross Atkin, Shruti Grover, Veronica Ranner and Dixon Lo for discussions in which some of these ideas were formulated and explored, and to the CHI reviewers. Unless otherwise noted, photos are by the authors.

    References

    1. Timo Arnall. 2014. Exploring ‘immaterials’: Mediating design’s invisible materials. International Journal of Design 8, 2: 101–117. http://www.ijdesign.org/ojs/index.php/IJDesign/article/view/1408

    2. W. Ross Ashby. 1956. An Introduction to Cybernetics. Chapman & Hall, London.

    3. BERG. 2008. Availabot. Retrieved Jan 10, 2017 from http://berglondon.com/projects/availabot/

    4. Chris Bissell. 2007. The Moniac: A Hydromechanical Analog Computer of the 1950s. IEEE Control Systems Magazine 27, 1: 59–64. https://dx.doi.org/10.1109/MCS.2007.284511

    5. Brian Burns. 2007. From Newness to Useness and Back Again: A review of the role of the user in sustainable product maintenance. Retrieved June 1, 2009 from http://extra.shu.ac.uk/productlife/Maintaining%20Products%20presentations/Brian%20Burns.pdf

    6. Carbon Arts. 2013. Melbourne Mussel Choir. Retrieved Jan 10, 2017 from http://www.carbonarts.org/projects/melbourne-mussel-choir/

    7. Charles Care. 2006–7. A Chronology of Analogue Computing. The Rutherford Journal 2. Retrieved Jan 10, 2017 from http://www.rutherfordjournal.org/article020106.html

    8. Nicky Case. 2014. Explorable Explanations. Blog post (Sept 8, 2014). Retrieved Jan 10, 2017 from http://blog.ncase.me/explorable-explanations/

    9. Sunny Consolvo, Predrag Klasnja, David W. McDonald, Daniel Avrahami, Jon Froehlich, Louis LeGrand, Ryan Libby, Keith Mosher, and James A. Landay. 2008. Flowers or a Robot Army? Encouraging Awareness & Activity with Personal, Mobile Displays. In Proceedings of 10th International Conference on Ubiquitous Computing (UbiComp’08): 54–63. https://doi.org/10.1145/1409635.1409644

    10. Régine Debatty. 2012. The Chronocyclegraph. Blog post, We Make Money Not Art (May 6, 2012). Retrieved Jan 10, 2017 from http://we-make-money-not-art.com/the_chronocyclegraph/

    11. Paul Dourish. 2004. What we talk about when we talk about context. Personal and Ubiquitous Computing 8, 1: 19–30. http://dx.doi.org/10.1007/s00779-003-0253-8

    12. Chris Elsden, David Kirk, Mark Selby, and Chris Speed. 2015. Beyond Personal Informatics: Designing for Experiences with Data. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 2341–2344. https://dx.doi.org/10.1145/2702613.2702632

    13. Delfina Fantini van Ditmar and Dan Lockton. 2016. Taking the Code for a Walk. Interactions 23, 1: 68–71. https://dx.doi.org/10.1145/2855958

    14. Heinz von Foerster. 1973. On constructing a reality. In W.F.E. Preiser (Ed.), Environmental Design Research Vol. 2. Dowden, Hutchinson & Ross, Stroudsburg: 35–46. Reprinted in Heinz von Foerster. 2003. Understanding Understanding: Essays on Cybernetics and Cognition. Springer-Verlag, New York: 211–228. https://dx.doi.org/10.1007/0-387-21722-3_8

    15. Frank Gilbreth and Lillian Gilbreth. 1917. Applied Motion Study: a collection of papers on the efficient method to industrial preparedness. Sturgis & Walton, New York. Retrieved Jan 10, 2017 from https://archive.org/details/appliedmotionstu00gilbrich

    16. Hans Haacke. 2009. Lessons Learned. Tate Papers 12. Retrieved Jan 10, 2017 from http://www.tate.org.uk/download/file/fid/7265

    17. Eva Hornecker and Jacob Buur. 2006. Getting a grip on tangible interaction: a framework on physical space and social interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’06): 437–446. https://dx.doi.org/10.1145/1124772.1124838

    18. Hiroshi Ishii and Brygg Ullmer. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’97): 234–241. https://dx.doi.org/10.1145/258549.258715

    19. Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, and Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. Interactions 19, 1: 38–51. https://dx.doi.org/10.1145/2065327.2065337

    20. Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. 2015. Opportunities and Challenges for Data Physicalization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’15): 3227–3236. https://dx.doi.org/10.1145/2702123.2702180

    21. Ken Kawamoto. 2012. Prototyping “Tempescope”, an ambient weather display. Blog post (Nov 15, 2012). Retrieved Jan 10, 2017 from http://kawalabo.blogspot.jp/2012/11/prototyping-tempescope-ambient-weather.html

    22. Lucy Kimbell. 2011. Physical Bar Charts. Retrieved Jan 10, 2017 from http://www.lucykimbell.com/LucyKimbell/PhysicalBarCharts.html

    23. David Kirk, David Chatting, Paulina Yurman, and Jo-Anne Bichard. 2016. Ritual Machines I & II: Making Technology at Home. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’16): 2474—2486. http://dx.doi.org/10.1145/2858036.2858424

    24. Ian Li, Anind Dey, and Jodi Forlizzi. 2010. A stage-based model of personal informatics systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10): 557—566. https://dx.doi.org/10.1145/1753326.1753409

    25. Dan Lockton. 2012. POSIWID and Determinism in Design for Behaviour Change. Social Science Research Network. http://dx.doi.org/10.2139/ssrn.2033231

    26. Dan Lockton. 2016. Designing Agency in the City. In Lacey Pipkin (Ed.), The Pursuit of Legible Policy: Agency and Participation in the Complex Systems of the Contemporary Megalopolis. Buró-Buró, Mexico City: 53–61. http://legiblepolicy.info/book/Legible-Policies_BB.pdf

    27. Dan Lockton, David Harrison, and Neville Stanton. 2010. The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics 41, 3: 382–392. http://dx.doi.org/10.1016/j.apergo.2009.09.001

    28. Dan Lockton, Flora Bowden, Catherine Greene, Clare Brass, and Rama Gheerawo. 2013. People and energy: A design-led approach to understanding everyday energy use behaviour. In Proceedings of EPIC 2013: Ethnographic Praxis in Industry Conference: 348–362. https://dx.doi.org/10.1111/j.1559-8918.2013.00029.x

    29. Dan Lockton, Flora Bowden, Clare Brass, and Rama Gheerawo. 2014. Powerchord: Towards ambient appliance-level electricity use feedback through real-time sonification. In Proceedings of UCAmI 2014: 8th International Conference on Ubiquitous Computing & Ambient Intelligence: 48–51. https://dx.doi.org/10.1007/978-3-319-13102-3_10

    30. George Merryweather. 1851. An essay explanatory of the Tempest Prognosticator in the building of the Great Exhibition for the Works of Industry of All Nations. John Churchill, London. Retrieved Jan 10, 2017 from https://archive.org/details/b2804163x

    31. Bruno Munari. 1971. Design as Art (trans. Patrick Creagh). Pelican Books, London.

    32. Dietmar Offenhuber and Orkan Telhan. 2015. Indexical Visualization: the Data-Less Information Display. In Ulrik Ekman, Jay David Bolter, Lily Diaz, Morten Søndergaard, and Maria Engberg (Eds.), Ubiquitous Computing, Complexity and Culture: 288–303. Routledge, New York.

    33. Jennifer Payne, Jason Johnson, and Tony Tang. 2015. Exploring Physical Visualization. In Jason Alexander, Yvonne Jansen, Kasper Hornbæk, Johan Kildal, and Abhijit Karnik (Eds.), Exploring the Challenges of Making Data Physical, Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). http://architectures.danlockton.co.uk/wp-content/2015-chi2015workshop-physvis.pdf

    34. Tim Regan, David Sweeney, John Helmes, Vasillis Vlachokyriakos, Siân Lindley, and Alex Taylor. 2015. Designing Engaging Data in Communities. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 271–274. http://dx.doi.org/10.1145/2702613.2725432

    35. Stefania Serafin, Karmen Franinovic, Thomas Hermann, Guillaume Lemaitre, Michal Rinott, and Davide Rocchesso. 2011. Sonic Interaction Design. In Thomas Hermann, Andy Hunt, and John Neuhoff (Eds.), The Sonification Handbook. Logos, Berlin: 87–110. http://sonification.de/handbook/index.php/chapters/chapter5/

    36. Melanie Swan. 2013. The quantified self: fundamental disruption in big data science and biological discovery. Big Data 1, 2: 85–99. https://dx.doi.org/10.1089/big.2012.0002

    37. Edward Tufte. 2001. The Visual Display of Quantitative Information (2nd ed.). Graphics Press, Cheshire, CT.

    38. Bret Victor. 2011. Explorable Explanations. Blog post (March 10, 2011). Retrieved Jan 10, 2017 from http://worrydream.com/ExplorableExplanations

    39. Mark Weiser and John Seely Brown. 1995. Designing Calm Technology. Dec 21, 1995. Retrieved Jan 10, 2017 from http://www.ubiq.com/weiser/calmtech/calmtech.htm

    40. Sherri C. Widen. 2013. Children’s Interpretation of Facial Expressions: The Long Path from Valence-Based to Specific Discrete Categories. Emotion Review 5, 1: 72–77. https://dx.doi.org/10.1177/1754073912451492

    41. Wesley Willett, Yvonne Jansen, and Pierre Dragicevic. 2017. Embedded Data Representations. IEEE Transactions on Visualization and Computer Graphics 23, 1: 461–470. https://dx.doi.org/10.1109/TVCG.2016.2598608

    42. Gary Wolf. 2010. The quantified self. Video (June 2010). Retrieved Jan 10, 2017, from https://www.ted.com/talks/gary_wolf_the_quantified_self

    Design Students Explore Landscape Metaphors for Project Modeling

    Delanie Ricketts and Dan Lockton

    This article originally appeared on the Carnegie Mellon School of Design website

    We often use landscapes as metaphors in everyday speech, particularly to talk about complex systems: understanding a complex information system as an “information landscape”, for example, helps convey the idea that such a system, like a landscape, is vast and encompasses many interacting variables. However, while landscape metaphors are common in speech (terms like “stakeholder landscape”, “lie of the land”, “ocean of possibilities”, “food desert”, even the word “field”), they have been used more rarely in visual applications.

    On March 30th, 45 juniors from the “Persuasion” class at Carnegie Mellon University’s School of Design, taught by Michael Arnold Mages, Dan Lockton, and Stephen Neely, took part in a workshop exploring practically how physical and visual landscape metaphors could help elicit new insights about complex experiences, in this case modeling and reflecting on group design projects. The workshop was facilitated by MA Design student and Research Assistant Delanie Ricketts and Assistant Professor Dan Lockton, as part of the School of Design’s new Imaginaries Lab. Students collaboratively created ‘landscape’ models representing projects they had worked on, using simple paper cut-outs of features such as hills, trees, weather, and people. Each group used the elements in different ways to represent different aspects of their projects, creating ‘timeline’ landscapes in both two- and three-dimensional formats.

    Some projects started with rocky beginnings, represented by different cones or hills to show how difficult that part of the project was. Other projects started with trees, rivers, and stars, representing periods of calm ideation, research, or general feelings of optimism. When projects encountered new difficulties later on, many groups represented these periods with lightning, rain, hills, and cones. Several groups coined metaphors within the general landscape metaphor to represent specific parts of their project experiences, such as a “plateau of exhaustion” before the project came to an end.

    Delanie’s previous prototypes of the landscape metaphor visuals, developed as part of her research assistantship project, had focused on how they could facilitate individual reflection on one’s own career path. While people found the metaphor and elements to be a useful and creative reflection tool, several said it was difficult to show how their perspective changed over time within a two-dimensional format. In this second iteration of the elements, we aimed to provide greater variation and to enable three-dimensional expression. We also wanted to explore how the metaphor could be used to think through a different topic (project planning and reflection rather than careers), and in a group rather than an individual context.

    Students’ responses to this second iteration of landscape elements, applied to group projects rather than individual career paths, suggested that they found the process fun and creative, if somewhat abstract. Many participants commented that the tool helped them better understand their project and their teammates’ perspectives, especially in terms of stress, productivity, and overall emotional satisfaction at different points in a project’s lifetime. The format is more useful for surfacing (and reconciling) overarching understandings than for probing deeper insights about the specifics of complex experiences; but, in triggering discussion, it has value in enabling members of a team to understand and interrogate each other’s perspectives and mental models of a situation (echoing ideas from organizational systems thinking experts such as Peter Senge).

    We aim to develop the landscapes kit further, through iterations with application in individual reflection, project planning, and research settings.

    Many thanks to Chris Stygar, Josiah Stadelmeier, and the whole School of Design 3D Lab for their help in developing the materials for the project, the Design graduate students and juniors for taking part in the different stages of the project, and Manya Krishnaswamy for helping facilitate. Thanks to Joe Lyons for putting the article on the School website.

    Mental Landscapes

    Environments Studio: Materializing the Invisible

    Timelapse of studio, by Jasper Tom

    In Materializing the Invisible, we considered invisible and intangible phenomena: the systems, constructs, relationships, infrastructures, backends, and other entities, physical and conceptual, which comprise or influence much of our experience of, and interaction with, environments both physical and digital. ‘The invisible’ here is potentially everything from how the building’s heating system works, to the algorithms behind targeted ads, to who’s friends with whom, to where corruption is occurring in government, to where your IoT fridge sends the data it collects, to people’s mental imagery of time, to the electricity use of devices, to networks of cameras and sensors, to how political decisions are made. It also potentially includes things that happen at scales or in dimensions we can’t directly comprehend, from planetary processes such as climate, to the interaction of electromagnetic fields, to the microscopic. And it includes things that happen to enable the day-to-day functioning of our lives, but that we don’t know much about: Where does our food come from? Where does our waste water go? What route did that package take to get to us?

    The process of revealing the invisible can improve understanding, help people explore their own thinking and relationships with these complex concepts, highlight problems, power structures and inequalities, reveal hidden truths, connect people better to the world around them, and enable people to act. It is not necessarily about visualizing the invisible—it can be about making it audible, tangible, smellable, or otherwise experienceable: we explored techniques from fields including data visualization, sonification, data physicalization, ubiquitous computing, tangible interaction, analog computing, qualitative displays, and the study of synaesthesia to create ways to materialize these invisible phenomena.
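    As one concrete example of these techniques, a parameter-mapping sonification can be as simple as a linear map from a sensor range onto a pitch range. The wattage and frequency ranges below are illustrative assumptions, loosely in the spirit of projects like Powerchord:

```python
# A minimal parameter-mapping sonification sketch: electricity readings in
# watts are mapped linearly onto audible pitch, so higher consumption sounds
# higher. The wattage and frequency ranges are illustrative assumptions.

def watts_to_hz(watts: float, w_max: float = 3000.0,
                f_min: float = 220.0, f_max: float = 880.0) -> float:
    """Linearly map 0..w_max watts onto f_min..f_max hertz."""
    t = min(max(watts / w_max, 0.0), 1.0)   # clamp to the sensor range
    return f_min + t * (f_max - f_min)

readings = [60, 1500, 3000]                 # e.g. lamp, oven warming, kettle
print([round(watts_to_hz(w)) for w in readings])   # rising pitches
```

    The same mapping pattern applies whether the output is pitch, color, vibration, or the height of a physical element.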

    More details, including background reading, are in the syllabus.

    As a starting exercise we examined some ‘invisible’ and unknown things within the building itself (Margaret Morrison Carnegie Hall), noting questions and ideas with Post-It notes in situ. These ranged from questions about who has access to certain rooms or controls, to what some of the controls are in the first place. There were also traces of action and use—patterns which might be invisible in the sense of not being paid attention to, but nevertheless present in the use of the building.

    The class project was to choose a phenomenon which is ‘invisible’ within a physical, digital or hybrid environment, find a way of getting access to it, and design and build / make / create a way of materializing the phenomenon, making it accessible to people more widely. As a group we brainstormed different phenomena which might be investigable, and possible forms of representation.

    Ji Tae Kim’s project Whitespace looked at the invisible aspects of communication in text messaging, following on from his previous project Fear of Missing Out. Whitespace explores ways to materialize and express “rich contextual and verbal cues” through “an intuitive extension to instant messaging”. Working prototypes used copper tracks, Bare Conductive ink and Touch Board, and Arduino.

    Jasper Tom and Chris Perry’s project Kairos examined “an invisible phenomenon ingrained in everyday life”: the passage of time in a space, specifically around working at a desk. The project grew out of the question “Where did the time go?” and the idea of desk legacy, the patterns of use left by a previous user of a desk in a shared workspace, informed by analysis of timelapse video of the studio. Together with inspirations such as Daniel Rozin’s Wooden Mirror, MIT Tangible Media Group projects such as Daniel Leithinger’s work, and Tempurpedic foam, these led to a desk surface which could ‘play back’ the patterns of how it had been used, via an interface using wooden blocks. A working prototype of part of the surface used Arduino and servo motors to demonstrate the effect.
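    The playback idea can be sketched abstractly: a usage intensity recovered for each block of the surface is quantized into a servo angle. This is our illustrative reconstruction, not Jasper and Chris’s actual code:

```python
# Illustrative sketch (not the actual Kairos implementation): per-block usage
# intensities, e.g. recovered from timelapse frames, are mapped to servo
# angles so each wooden block can rise to 'play back' how heavily its spot
# on the desk was used.

def occupancy_to_angle(occupancy: float, max_angle: int = 90) -> int:
    """Map a 0..1 usage intensity to a servo angle in degrees, clamped."""
    clamped = max(0.0, min(1.0, occupancy))
    return round(clamped * max_angle)

# hypothetical intensities for four blocks of the surface
cells = [0.0, 0.25, 0.8, 1.0]
print([occupancy_to_angle(c) for c in cells])
```

    On hardware, each angle would then be sent to the corresponding servo.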

    One interesting aspect discussed during Jasper and Chris’s presentation was how, while evidence of physical work is often obvious in a space (a painter’s palette, for example), the evidence of digital work is often invisible: a slightly worn keyboard, perhaps, but little else.

    Gilly Johnson and Ty Van de Zande worked together to explore aspects of human movement (dance and exercise) and the related issues of hydration and focus. Focus + Movement proposed a color-changing bodysuit which could work as part of a system with a water bottle, both to make invisible patterns of movement visible and to enable reflection. Gilly and Ty captured dancers’ movement using a Kinect connected to Max/MSP, and then simulated the bodysuit via After Effects.
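    The chain from skeleton frames to suit color can be sketched as two small mappings. The joint data, thresholds, and color names below are illustrative assumptions, not the project’s actual Max/MSP patch:

```python
import math

# Hypothetical sketch of a Focus + Movement-style mapping: joint positions
# from successive skeleton frames are reduced to a movement intensity, which
# is thresholded into a suit color. Thresholds and colors are illustrative.

def intensity(prev, curr):
    """Mean per-joint displacement between two frames of (x, y, z) joints."""
    return sum(math.dist(p, c) for p, c in zip(prev, curr)) / len(prev)

def color_for(i: float) -> str:
    if i < 0.05:
        return "blue"     # stillness / focus
    if i < 0.2:
        return "green"    # gentle movement
    return "red"          # vigorous movement

frame_a = [(0.0, 1.0, 2.0), (0.1, 1.2, 2.0)]   # two joints, frame t
frame_b = [(0.0, 1.1, 2.0), (0.1, 1.2, 2.1)]   # same joints, frame t+1
print(color_for(intensity(frame_a, frame_b)))
```

    A real implementation would smooth the intensity over time before thresholding, to stop the suit flickering between colors.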