More educational architectures of control: museums

A display case in the Kremlin museum, Moscow
A ‘traditional’ museum display cabinet in the Kremlin museum, Moscow. I liked the owl.

Two very interesting posts from last week looked at the use of control in museum design – Frankie Roberto discusses trying to get children (in particular) to learn interactively, and Josh Clark has some thoughts on the way that museum and gallery visitors can be encouraged to think more about the work on display.

Slipping information into play

Frankie – who works for London’s Science Museum – notes the approach of using interactive games or exhibits with forcing functions to (force?-)feed the user information whilst playing: users are “surreptitiously slipped educational information whilst they’re having fun”:

Museums often try to force visitor behaviour in order to achieve learning outcomes, sometimes more successfully than others. A common example of this is a game – designed to appeal to children – which has factual text embedded within it. The ‘Mobile Mayhem’ game included within our recent Dead Ringers exhibition is a typical example. The gameplay, essentially about pressing the right buttons at the right time, is bookended by some factual paragraphs about mobile phone recycling. By revealing the content word by word, and making the screens unskippable until the whole paragraph has been displayed, the player is meant to be forced to read the text, and hence to take in the new and educational information.

Mobile Mayhem, from the Science Museum
Mobile Mayhem, from the Science Museum
The Mobile Mayhem game, from the Science Museum’s Dead Ringers exhibition website. In the screen shown in the first image, educational text appears word by word, forcing the reader to read it (or at least wait for it to be revealed) before proceeding to the actual game.
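For what it’s worth, the mechanism is simple to sketch. Here’s a minimal, purely illustrative Python version (the real game is, of course, a Flash exhibit, and the paragraph text below is just a placeholder): each word is printed in turn, and the option to continue is only offered once the whole paragraph is out.

```python
import sys
import time

def reveal(paragraph, delay=0.3):
    """Show a paragraph one word at a time; control only returns once it has all been shown."""
    for word in paragraph.split():
        sys.stdout.write(word + " ")
        sys.stdout.flush()
        time.sleep(delay)  # the forcing function: the screen cannot be hurried
    print()

# Placeholder text -- the real game's paragraphs are about mobile phone recycling.
reveal("(educational paragraph about mobile phone recycling goes here)")
input("Press Enter to start the game... ")  # only offered after the full reveal
```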

The word by word revealing of text is familiar from so many indistinguishable Powerpoint presentations (usually accompanied by that awful typewriter noise, of course), and seeing it used in a ‘control’ context makes me wonder how many speakers/lecturers/managers intentionally (even if subconsciously) reveal their dull text or bullet points word by word so that the audience is forced to stick with the information in the order it’s presented and not read (or think) ahead? I’ve had a few teachers and lecturers in my time who used a bit of paper to cover up parts of OHP transparencies they didn’t want us to read yet, in the hope that we’d pay more attention to what they were saying, and I remember how much that used to irritate me (I like reading ahead!), but I understand why they did it.

Relating back to my recent look at forcing functions in textbooks, Frankie makes the point that:

The problem is, of course, that it’s not that difficult to ignore the education and just focus on the game… it’s pretty impossible for software to actually evaluate educational ‘understanding’, and so attempting to force can be somewhat disingenuous.

[As an aside - and this is something I really should develop in a separate post - there does, equally, come a point where the sheer variety in how other people understand ideas and concepts makes a one-size-fits-all evaluation very difficult. I expect someone has done a study like this (I do hope so - I'd love to read it), but wouldn't it be fascinating to find out whether certain ways of understanding (or visualising) certain concepts help certain people think laterally and draw conclusions that others have missed? For example, this is Richard Feynman, in 'It's as Simple as One, Two, Three':

When I see equations, I see the letters in colors - I don't know why. As I'm talking, I see vague pictures of Bessel functions... with light-tan js, slightly violet-bluish ns and dark brown xs flying around. And I wonder what the hell it must look like to the students.

I first noted that quote down a few years ago when reading a collection of Feynman's essays, as I'd always had the same kind of very mild grapheme-colour synæsthesia that the quote implies, but I wonder whether the phenomenon actually helped Feynman mentally structure and remember mathematical concepts. And can we learn from it in designing educational systems? Anyway, I'll come back to that idea in a future, more relevant, post!]

Encouraging visitors to think

Beldam Gallery, Uxbridge, 2002 The Foundry, London, 2006
Left: When issued with a booklet explaining artwork on display, many visitors walk around reading this before forming their own impressions of the work. This is an exhibition at Uxbridge’s Beldam Gallery in 2002. Right: Displaying work with no explanatory text, captions or booklets compels visitors to make their own judgments and form their own interpretations of the work (or ignore it, but that’s something of a judgment in itself). This is Dave Cranmer’s Pixelly Paintings at the Foundry, London, in 2006.

Josh’s post argues that many museums and galleries would better fulfil their educational and inspirational potential if they encouraged visitors to think more about what they are looking at, rather than spoon-feeding them information and an ‘established’ opinion – especially pertinent to art:

My wife Ellen is an art historian and a professional museumgoer. She tells me that museum visitors commonly spend more time reading wall texts than looking at the art… It’s a law of interface behavior that users will always follow the path of least resistance. Looking at art is hard. Many find it intimidating, unfamiliar, uncomfortable. It’s easier to read wall text, go shopping or listen to audio commentary than it is to actually face down the work itself.

The interface is broken.

The support materials should be less prominent. What a work “means” or why it’s “important” is second-order information. The important experience is simply to look at the work, to absorb its sensual impact. Respond to it, rather than study it like a schoolbook. For lots of visitors, though, the support materials seem to distract, reducing the time that visitors take to reflect on the works.

The design question: How do you get people to consider the art instead of plunging into its documentation?

As Josh notes, there are designers who think entirely the opposite, and long for more structured lead-ins in galleries, with the artwork’s title and rationale defined clearly up-front. (The always-interesting David Friedman subverts the concept further.)

I can see both points of view. When I was very young I used to get frustrated visiting ‘traditional’ museums that really interested me (mostly motor museums and those with technology) because there was rarely a pre-defined route around them, and I wanted to see everything. When you’re a little kid, zig-zagging across a room from one side to the other to make sure you don’t miss anything out can be difficult, especially when every other visitor is much taller than you and the room seems intimidatingly large. I remember thinking how a museum with displays only along one wall, so that you had to look at them in a certain order, would be good. Now, of course, I would tend to see that as excessive control, and want to be able to miss out things that don’t interest me, and indeed, form my own interpretations of what’s on display.

Josh goes on to give the example of a fairly simple compromise, which allows the visitor both to form his or her own interpretations of the work and to read others’ interpretations if desired:

I think that it would be better to make wall text less prominent, encouraging visitors to spend their time with the art instead.

The modern art museum in Paris, the Centre Pompidou, uses an architecture of control that does just that. Each gallery has a stand with a set of cards offering commentary on the works in the gallery. The wall text is limited only to title, artist and materials. The behavior of museumgoers changes: People walk into the gallery, and spend time with the works. Afterward, those who are curious to learn more go retrieve a card and return to look at the works some more after reading about them.

The educational and background materials are still there, but presented in a way that still encourages people to confront the works first.

It’s interesting that this really does apparently change people’s behavior. (An alternative might be to have more information under a hinged flap on the wall or a pedestal so that only those who want/feel the need to have an established opinion on the piece end up reading it. Or perhaps even the title, artist and materials could be listed under the flap, so that visitors who want to form entirely independent opinions aren’t even swayed by the pieces’ titles or the artists’ names.)

Would you feel cheated if you visited an art gallery and there was no interpretation or explanation of the pieces available at all? Before it became so well-known, how many people picked up The Catcher in the Rye (with its famously sparse, blurb-less covers) from a library shelf and put it back, unable to make a commitment to reading it without having an idea what it was about?

Of course, the argument can shift considerably when the subject is a museum dedicated to educating visitors about the exhibits and why they are important, rather than an art gallery, but the principle that Josh outlines of the visitor interfacing (as it were) directly with the exhibit, whether that’s a painting (and the interfacing is figuring out one’s own response to it) or a hands-on science experiment, or anything in between, has a good degree of commonality. The ‘middle man’, the filter of best-fit interpretation drawn up to fit on the standard-size card and fit standard-size opinions, is stripped out.

The Science Museum does a fantastic job of explaining concepts and opening visitors’ eyes to things they actively want to understand, but may never have known how to approach before. It doesn’t tell them how to think about something, but allows them to find out things they didn’t know, and think more about the things they thought they did know. There is a difference. Bristol’s Exploratory, sadly now closed, was immensely inspirational to me as a child: this was somewhere where all learning was through actual interaction with the (mostly physics-based) exhibits, the ‘plores’.

As we’ve noted before, much of education is about changing behaviour, even if we define the behaviour we want to change as “being ignorant”. Control is one way of attempting to force a change in behaviour; manipulative persuasion is another (thanks, Toby); but allowing people to learn because something interests them cuts out the need for force or deceit. If you can make something interesting, you overcome the resistance.

Friday quote: Super-Cannes by J G Ballard

A street in Cannes, autumn 2005

J G Ballard, Super-Cannes, chapter 29:

Thousands of people live and work here without making a single decision about right and wrong. The moral order is engineered into their lives along with the speed limits and the security systems.

I’m re-reading Ballard’s excellent Super-Cannes, after the way the winter afternoon sunlight suddenly caught a building a few days ago made me think sharply, momentarily, of the vast technology parks of Sophia-Antipolis. The above quote describes, essentially, an architecture of control in a structural, sub-surface context: in the sense of Robert Moses’ low bridges, perhaps. Not just artefacts with politics, but entire environments and systems with agendas.

More on Ballard at the brilliant Ballardian.

(This is the first Friday quote for a long time. In fact there’s only been one previously; I’ll try to make it a regular feature of the blog. They won’t always be about architectures of control, but I’ll endeavour to make sure they’re always interesting.)

No photography allowed

A couple of recent stories on photography of certain items being ‘banned’ – Cory Doctorow on a Magritte exhibition’s hypocrisy, and Jen Graves on a sculpture of which “photography is prohibited” – highlight what makes me tense up and want to scream about so much of the ‘intellectual property debate’: photons are no more regulable than bits. And bits, like knowledge itself, aren’t regulable either (Cory again). Just as he who lights his taper at mine, receives light without darkening me, so he who receives an idea from me, receives instruction himself without lessening mine (Jefferson, via Scott Carpenter).

So this sign available from ACID (Anti-Copying In Design) made me laugh with astonishment, and cringe a little:

No photography allowed, from ACID
Image from an ACID leaflet, “You wouldn’t say that copying was the sincerest form of flattery if it cost you your business”. The sign doesn’t seem to be shown on ACID’s Deterrent Products online store.

I understand what ACID is trying to do, and unlike most anti-copying initiatives, ACID is set up specifically to protect the little guy rather than enormous intransigent oligarchies. ACID’s sample legal agreements and advice for freelancers on dealing with clients, registering designs, etc, are great initiatives and I’m sure they’ve been a fantastic help to a lot of young designer-makers.

But a sign ‘banning’ photography at exhibitions? At design exhibitions where new aesthetic ideas are the primary reason for most visitors attending? That seems hopelessly naïve, akin to a child defensively wrapping his or her arm around a piece of work to stop the kid at the next desk copying what’s being written, but then pleading with teacher to put it up on the wall.

And I would have thought, to be honest, that “with phone cameras your ideas… [being] sent globally within seconds” is more likely to lead to instant fame and international recognition for the designer on sites such as Cool Hunting, We Make Money Not Art, or Core77 than (presumably unauthorised) “mass production”. But maybe I’m wrong: I’m sure you’ll let me know!

Most young designers are desperate for exposure. I know that at every design exhibition I’ve shown stuff at (not many, to be fair), I’ve been delighted when someone has photographed my work. ACID’s sign also raises the question, of course, of whether, when someone displaying the sign actually sells a piece of work, it comes with a label attached telling the purchaser that he or she may not photograph it, or show it to friends. Wouldn’t that be a logical extension?

P.S. We’ve looked before at actual technologies to ‘prevent’ photography, such as Georgia Tech’s CCD-blinder and Hewlett-Packard’s “remote image degradation” device (in the wider context of “plugging the analogue hole”). As I replied to a commenter on the Georgia Tech story:

It won’t be too long (20 years?) before photographic (eidetic) memory and computers start to overlap (or even interface), to some extent, even if it’s only a refinement of something like the Sensecam. What’s going to happen then? If I can ‘print out’ anything I’ve ever seen, on a whim, why will I worry about what anyone else thinks?

Education, forcing functions and understanding

Engineering Mathematics, by K Stroud

Mr Person at Text Savvy looks at an example of ‘Guided Practice’ in a maths textbook – the ‘guidance’ actually requiring attention from the teacher before the students can move on to working independently – and asks whether some type of architecture of control (a forcing function, perhaps) would improve the situation, by making sure (to some extent) that each student understands what’s going on before being able to continue:

Image from Text Savvy
Image from Text Savvy
Is there room here for an architecture of control, which can make Guided Practice live up to its name?

This is a very interesting problem. Of course, learning software could prevent the student moving to the next screen until the correct answer is entered in a box. This must have been done hundreds of times in educational software, perhaps combined with tooltips (or the equivalent) that explain what the error is, or how to think differently to solve it – something like the following (I’ve just mocked this up, apologies for the hideous design):

Greyed-out Next button as a forcing function

The ‘Next’ button is greyed out to prevent the student advancing to the next problem until this one is correctly solved, and the deformed speech bubble thing gives a hint on how to think about correcting the error.

But just as a teacher doesn’t know absolutely if a student has really worked out the answer for him/herself, or copied it from another student, or guessed it, so the software doesn’t ‘know’ that the student has really solved the problem in the ‘correct’ way. (Certainly in my mock-up above, it wouldn’t be too difficult to guess the answer without having any understanding of the principle involved. We might say, “Well, implement a ‘3 wrong answers and you’re out’ policy to stop guessing,” but how does that actually help the student learn? I’ll return to this point later.)
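As a rough sketch (console rather than GUI, and the question and hint are invented), the ‘greyed-out Next button’ logic is really just a loop that refuses to return until the expected answer appears – which is exactly why it proves persistence rather than understanding:

```python
def guided_question(prompt, correct_answer, hint):
    """Keep asking until the expected answer is given -- the 'greyed-out Next button'."""
    attempts = 0
    while True:
        answer = input(prompt + " ").strip()
        attempts += 1
        if answer == correct_answer:
            print(f"Correct, after {attempts} attempt(s) -- moving on.")
            return attempts
        print("Not quite. Hint: " + hint)  # stands in for the speech-bubble hint

# Hypothetical question; attempts are counted but deliberately never limited --
# a '3 wrong answers and you're out' rule would stop guessing, not help learning.
guided_question("What is 3/4 as a decimal?", "0.75",
                "divide the numerator by the denominator")
```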

Blind spots in understanding

I think that brings us to something which, frankly, worried me a lot when I was a kid, and still intrigues (and scares) me today: no-one can ever really know how (or how well) someone else ‘understands’ something.

What do I mean by that?

I think we all, if we’re honest, will admit to having areas of knowledge / expertise / understanding on which we’re woolly, ignorant, or with which we are not fully at ease. Sometimes the lack of knowledge actually scares us; other times it’s merely embarrassing.

For many people, maths (anything beyond simple arithmetic) is something to be feared. For others, it’s practical stuff such as car maintenance, household wiring, and so on. Medicine and medical stuff worries me, because I have never made the effort to learn enough about it, and it’s something that could affect me in a major way; equally, I’m pretty ignorant of a lot of literature, poetry and fine art, but that’s embarrassing rather than worrying.

Think for yourself: which areas of knowledge are outside your domain, and does your lack of understanding scare/intimidate you, or just embarrass you? Or don’t you mind either way?

Bringing this back to education, think back to exams, tests and other assessments you’ve taken in your life. How much did you “get away with”? Be honest. How many aspects did you fail to understand, yet still get away without confronting? In some universities in the UK, for instance, the pass mark for exams and courses is 40%. That may be an extreme, and it doesn’t necessarily follow that some students actually fail to understand 60% of what they’re taught and still pass, but it does mean that a lot of people are ‘qualified’ without fully understanding aspects of their own subject.

What’s also important is that even if everyone in the class got, say, 75% right, that 75% understanding would be different for each person: if we had four questions, A, B, C and D, some people would get A, B, and C right and D wrong; others A, B, D right and C wrong, and so on. Overall, the ‘understanding in common’ among a sample of students would be nowhere near 75%. It might, in fact, be small. And even if two students have both got the same answer right, they may ‘understand’ the issue differently, and may not be able to understand how the other one understands it. How does a teacher cope with this? How can a textbook handle it? How should assessors handle it?
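A quick, back-of-the-envelope simulation makes the point (assuming, unrealistically, that each student’s right and wrong answers are independent of everyone else’s): even when every student scores around 75%, the set of questions the whole class got right is close to empty.

```python
import random

random.seed(1)
n_questions, n_students, p_correct = 20, 30, 0.75

# Each student independently 'understands' each question with probability 0.75.
papers = [[random.random() < p_correct for _ in range(n_questions)]
          for _ in range(n_students)]

average = sum(map(sum, papers)) / (n_students * n_questions)
in_common = sum(all(paper[q] for paper in papers) for q in range(n_questions))

print(f"Average score: {average:.0%}")
print(f"Questions every one of the {n_students} students got right: {in_common} of {n_questions}")
# Under independence, each question is answered correctly by all 30 students
# with probability 0.75**30, about 0.02% -- 'understanding in common' is almost nil.
```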

I’ll admit something here. I never ‘liked’ algebraic factorisation when I was doing GCSE (age 14-15), A-level (16-17) or engineering degree-level maths – I could work out that, say, (2x² + 2)(3x + 5)(x – 1) = 6x⁴ + 4x³ – 4x² + 4x – 10, but there’s no way I could have done that in reverse, extracting the factors (2x² + 2)(3x + 5)(x – 1) from the expanded expression, other than by laborious trial and error. Something in my mathematical understanding made me ‘unable’ to do this, but I still got away with it, and other than meaning I wasted a bit more time in exams, I don’t think this blind spot affected me too much.
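(The expansion above does check out, incidentally. A couple of lines of SymPy show that the asymmetry is only a human one – for the computer, expanding and factorising are each a single call:)

```python
import sympy as sp

x = sp.symbols('x')
factored = (2*x**2 + 2) * (3*x + 5) * (x - 1)

print(sp.expand(factored))             # 6*x**4 + 4*x**3 - 4*x**2 + 4*x - 10
print(sp.factor(sp.expand(factored)))  # recovers the factors, with the constant 2 pulled out front
```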

OK, that’s an excessively boring example, but there must be many much, much worse examples where an understanding blind spot has actually adversely affected a situation, or the competence of a whole company or project. Just reading sites such as Ben Goldacre’s Bad Science (where some shocking scientific misunderstandings and nonsense are highlighted) or even SharkTank (where some dreadful IT misunderstandings, often by management, are chronicled) or any number of other collections of failures, shows very clearly that there are a lot of people in influential positions, with great power and resources at their fingertips, who have significant knowledge and understanding blind spots even within domains with which they are supposedly professionally involved.

Forcing functions in textbooks

Back to education again, then: assuming that we agree that incompetence is bad, then gaps in understanding are important to resolve, or at least to investigate. How well can a teaching system or textbook be designed to make sure students really understand what they’re doing?

Putting mistake-proofing (poka-yoke) or forcing functions into conventional paper textbooks is much harder than doing it in software, but there are ways of doing it. A few years ago, I remember coming across a couple of late-1960s SI Metric training manuals which claimed to be able to “convert” the way the reader thought (i.e. from Imperial to SI) through a “unique” method, which was quoted on the cover (in rather direct language) as something like “You make a mistake: you are CORRECTED. You fail to grasp a fundamental concept: you CANNOT proceed.” The way this was accomplished was simply by, similarly to (but not the same as) the classic Choose Your Own Adventure method, having multiple routes through the book, with the ‘page numbers’ being a three-digit code generated by the student based on the answers to the questions on the current page. I’ve tried to mock up (from distant memory) the top and bottom sections of a typical page:

Mock-up of a 1960s 'guided learning' textbook

In effect, the instructions routed the student back and forth through the book based on the level of understanding demonstrated by answering the questions: a kind of flow chart or algorithm implemented in a paperback book, and with little incentive to ‘cheat’ since it was not obvious how far through the book one was. (Of course, the ‘length’ of the book would differ for different students, depending on how well they did in the exercises.) There were no answers to look up: proceeding to whatever next stage was appropriate would show the student whether he/she had understood the concept correctly.
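If the same routing were implemented in software rather than paper, it might look something like this (the page codes, questions and conversion examples here are invented, not taken from the actual manuals): each page maps each answer to the code of the page to turn to next, so a mistake routes the reader through remedial material rather than simply onwards.

```python
# Hypothetical pages keyed by three-digit codes; each answer maps to the next code.
PAGES = {
    "101": {"text": "1 inch = 25.4 mm, so 4 inches = ? mm",
            "routes": {"101.6": "205",    # understood: carry on
                       "*":     "137"}},  # anything else: remedial page
    "137": {"text": "Remember: multiply inches by 25.4. Try 2 inches = ? mm",
            "routes": {"50.8": "101",     # got it: back to the original question
                       "*":    "137"}},   # still stuck: stay here
    "205": {"text": "Correct -- on to the next concept.", "routes": {}},
}

code = "101"
while PAGES[code]["routes"]:
    page = PAGES[code]
    answer = input(page["text"] + " ").strip()
    code = page["routes"].get(answer, page["routes"]["*"])
print(PAGES[code]["text"])
```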

When I can find the books again (along with a lot of my old books, I don’t have them with me where I’m living at present), I will certainly post up some real images on the blog, and explain the system further. (It’s frustrating me now as I type this early on a Sunday morning that I can’t remember the name of the publisher: there may well already be an enthusiasts’ website devoted to them. Of course, I can remember the cover design pretty well, with wide sans-serif capital letters on striped blue/white and murky green/white backgrounds; I guess that’s why I’m a designer!)

A weaker way of achieving a ‘mistake-proofing’ effect is to use the output of one page (the result of the calculation) as the input of the next page’s calculation, wherever possible, and state it at that point, so that the student’s understanding at each stage is either confirmed or shown to be erroneous. So long as the student has to display his/her working, there is little opportunity to ‘cheat’ by turning the page to get the answer. No marks would be awarded for the actual answer; only for the working to reach it, and a student who just cannot understand what’s going wrong with one part of the exercise can go on to the next part with the starting value already known. This would also make marking the exercise much quicker for the teacher, since he or she does not have to follow through the entire working with incorrect values, as often happens where a student has got a wrong value very early on in a major series of calculations (I’ve been that student: I had a very patient lecturer once who worked through an 18-side set of my calculations about a belt-driven lawnmower, all of which had wrong values because of something I got wrong on the first page.)
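A sketch of that chaining, with entirely invented numbers: each part states the expected result of the previous one before using it, so an early slip is caught straight away rather than poisoning eighteen sides of working, and a student who is stuck on one part can still attempt the next.

```python
import math

def part_a():
    """(a) Driven pulley speed: motor runs at 1500 rev/min through a 2:1 reduction."""
    return 1500 / 2  # the student's own working goes here

def part_b(pulley_speed=750):  # "(b) Taking the pulley speed as 750 rev/min..."
    """(b) Belt speed for a 0.2 m diameter pulley, in metres per minute."""
    return math.pi * 0.2 * pulley_speed

# Marks go to the working, not the number: even if part (a) were wrong, part (b)
# starts from the confirmed value of 750, so one error never cascades through.
print(part_a())  # 750.0
print(part_b())  # ~471.2
```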

Overall, the field of ‘control’ as a way of checking (or assisting) understanding is clearly worth much further consideration. Perhaps there are better ways of recognising users’ blind spots and helping resolve them before problems occur which depend on that knowledge. I’m sure I’ll have more to say too, at a later point, on the issue of widespread ignorance of certain subjects, and gaps in understanding and their effects; it would be interesting to hear readers’ thoughts, though.

Footnote: Security comparison

We saw earlier that there seems to be little point in educational software limiting the number of guesses a student can have at the answer, at least when the student isn’t allowed to proceed until the correct answer is entered. I’m not saying any credit should be awarded for simply guessing (it probably shouldn’t be), just that deliberately limiting the number of attempts isn’t usually desirable in education. But it is in security: indeed, that’s how most password and PIN implementations work. Regular readers of the blog will know that the work of security researchers such as Bruce Schneier, Ross Anderson, Ed Felten and Alex Halderman is frequently mentioned, often in relation to digital rights management, but looking at forcing functions in an educational context also shows how relevant security research is to other areas of design. Security techniques say “don’t let that happen until this has happened”; so do many architectures of control.
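The contrast can be made explicit in a few lines (the values are placeholders): it is the same ‘keep asking’ loop as the earlier educational sketch, but with the one difference that matters – a hard limit on attempts, because in security a lucky guess must not pay.

```python
def verify_pin(get_attempt, correct="4921", max_attempts=3):
    """The security version of the loop: wrong guesses are strictly limited."""
    for _ in range(max_attempts):
        if get_attempt() == correct:
            return True
    return False  # locked out -- unlike the classroom, guessing here must not pay

# Usage sketch: verify_pin(lambda: input("PIN: ").strip())
```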

Some links

Some links. Guess what vehicle this is.

First, an apology for anyone who’s had problems with the RSS/Atom feeds over the last month or so. I think they’re fixed now (certainly Bloglines has started picking them up again) but please let me know if you don’t read this. Oops, that won’t work… anyway:

  • ‘Gadgets as Tyrants’ by Xeni Jardin looks at digital architectures of control in the context of the 2007 Consumer Electronics Show in Las Vegas:

    Many of the tens of thousands of products displayed last week on the Vegas expo floor, as attractive and innovative as they are, are designed to restrict our use… Even children are bothered by the increasing restrictions. One electronics show attendee told me his 12-year-old recently asked him, “Why do I have to buy my favorite game five times?” Because the company that made the game wants to profit from each device the user plays it on: Wii, Xbox, PlayStation, Game Boy or phone.

    At this year’s show, the president of the Consumer Electronics Association, Gary Shapiro, spoke up for “digital freedom,” arguing that tech companies shouldn’t need Hollywood’s permission when they design a new product.

  • The Consumerist – showing a 1981 Walmart advert for a twin cassette deck – comments that “Copying music wasn’t always so taboo”.

    I’m not sure it is now, either.

  • George Preston very kindly reminds me of the excellent Trusted Computing FAQ by Ross Anderson, a fantastic exposition of the arguments. For more on Vista’s ‘trusted’ computing issues, Peter Gutmann has some very clear explanations of how shockingly far we are from anything sensible. See also Richard Stallman’s ‘Right to Read’.
  • David Rickerson equally kindly sends me details of a modern Panopticon prison recently built in Colorado – quite impressive in a way:

    Image from Correctional News

    …Architects hit a snag when they realized too much visibility could create problems.

    “We’ve got lots of windows looking in, but the drawback is that inmates can look from one unit to another through the windows at the central core area of the ward,” Gulliksen says. “That’s a big deal. You don’t want inmates to see other inmates across the hall with gang affiliations and things like that.”

    To minimize unwanted visibility, the design team applied a reflective film to all the windows facing the wards. Deputies can see out, but inmates cannot see in. Much like the 18th-century Panopticon, the El Paso County jail design keeps inmates from seeing who is watching them.

    Image from Correctional News website

  • Should the iPhone be more open?

    As Jason Devitt says, stopping users from installing anything other than Apple (or Apple-approved) software means that the cost of sending messages goes from (potentially) zero to $5,000 per megabyte:

    Steve typed “Sounds great. See you there.” 28 characters, 28 bytes. Call it 30. What does it cost to transmit 30 bytes?

    * iChat on my Macbook: zero.
    * iChat running on an iPhone using WiFi: zero.
    * iChat running on an iPhone using Cingular’s GPRS/EDGE data network: 6 hundredths of a penny.
    * Steve’s ‘cool new text messaging app’ on an iPhone: 15c.

    A nickel and a dime.

    15c for 30 bytes = $0.15 X 1,000,000 / 30 = $5,000 per megabyte.

    “Yes, but it isn’t really $5,000,” you say. It is if you are Cingular, and you handle a few billion messages like this each quarter.

    … [I] assumed that I would be able to install iChat myself. Or better still Adium, which supports AIM, MSN, ICQ, and Jabber. But I will not be able to do that because … it will not be possible to install applications on the iPhone without the approval of Cingular and Apple… But as a consumer, I have a choice. And for now the ability to install any application that I want leaves phones powered by Windows Mobile, Symbian, Linux, RIM, and Palm OS with some major advantages over the iPhone.

    Aside from the price discrimination (and business model) issue (see also Control & Networks), one thing that strikes me about a phone with a flat touch screen is simply how much less haptic feedback the user gets.

    I know people who can text competently without looking at the screen, or indeed the phone at all. They rely on the feel of the buttons, the pattern of raised and lowered areas and the sensation as the button is pressed, to know whether or not the character has actually been entered, and which character it was (based on how many times the button is pressed). I would imagine they would be rather slow with the iPhone.