Category Archives: Internet economics

Swoopo: Irrational escalation of commitment

Swoopo

Swoopo, a new kind of “entertainment shopping” auction site, takes Martin Shubik’s classic Dollar Auction game to a whole new, automated, mass-participation level. It’s an example of escalation of commitment, or the sunk cost fallacy: users keep increasing their commitment (here, with real money) even as most of their positions become less and less valuable.

The Cake Scraps has a good analysis of how this works:

It is an ‘auction’ site…sort of. Swoopo sells bids for $1. Each time you use a bid on an item the price is increased by $0.15 for that item. So here is an example:

Person A buys 5 bids from Swoopo for $5 total. Person A sees an auction for $1000 and places the first bid. The auction is now at $0.15. Person A now has a sunk cost of $1 (the cost of the bid they used). There is no way to get that dollar back, win or lose. If Person A wins they must pay the $0.15.

Person B also purchased $5 of bids. Person B sees the same auction and places the second bid. The auction price is now $0.30 (because each bid increases the cost by exactly 15 cents). Person B now has a sunk cost of $1. If Person B wins they must pay the $0.30. Swoopo now has $2 in the bank and the auction is at 30 cents.

This can happen with as many users as there are suckers to start accounts. Why are they suckers? Because everybody that does not have the top spot just loses the money they spent on bids. *Poof* Gone. If you think this sounds a little like gambling or a complete scam you are not alone. People get swept up into the auction and don’t want to get nothing for the money they spent on bids.

The key thing seems to be that some bidders will win items at lower than RRP, i.e. they get a good deal, but for every one of those, there are many, many others who have all paid for their bids (money going to Swoopo) and received nothing as a result. The house will always win.
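
To make the house’s arithmetic concrete, here’s a minimal Python sketch of the auction economics described above. The $1 bid cost and $0.15 price increment come from the quoted analysis; the $1000 item value and the bid counts are made-up illustrative figures.

```python
# Minimal sketch of Swoopo-style auction economics, using the $1-per-bid
# cost and $0.15-per-bid price increment from the analysis quoted above.
# The item value and bid count below are illustrative assumptions.

BID_COST = 1.00         # what each bidder pays Swoopo per bid placed
PRICE_INCREMENT = 0.15  # how much each bid raises the displayed price
ITEM_VALUE = 1000.00    # nominal retail value of the item

def house_outcome(total_bids: int) -> None:
    final_price = total_bids * PRICE_INCREMENT  # paid by the winner only
    bid_revenue = total_bids * BID_COST         # paid by everyone, win or lose
    house_take = bid_revenue + final_price
    print(f"{total_bids} bids placed:")
    print(f"  final auction price:    ${final_price:,.2f}")
    print(f"  revenue from bid sales: ${bid_revenue:,.2f}")
    print(f"  house profit on item:   ${house_take - ITEM_VALUE:,.2f}")

# The house breaks even once (BID_COST + PRICE_INCREMENT) * bids >= ITEM_VALUE,
# i.e. after about 870 bids here -- at which point the *displayed* price is
# only about $130, so the auction still looks like a bargain to bidders.
house_outcome(2000)
```

Run with 2000 bids, the item ‘sells’ for just $300, yet the house clears roughly $1300 over the item’s value – which is the sense in which the house always wins.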

Swoopo staff respond here and here (at CrunchGear).

As is obligatory with this blog, I need to ask: where else have systems been designed to use this behaviour-shaping technique? There must be many examples in auctions, games and gambling in general – but can the idea be applied to consumer products and services, using escalating commitment to shape user behaviour? Could it be used to help users save energy, take more exercise, and so on, as opposed to merely extracting value from them with no benefit in return?

Pretty Cuil Privacy

Cuil screenshot

New search engine Cuil has an interesting privacy policy (those links might not work right now due to the load). They’re apparently not going to track individual users’ searches at all, which, in comparison to Google’s behaviour, is quite a difference. As TechCrunch puts it:

User IP addresses are not recorded to their servers, they say, and cookies are not used to associate a computer with queries. The data is simply dumped as it is created. That means user data cannot be turned over to others, whether it’s via blind stupidity or lawsuits.

This strategy’s similar to an issue Scott Craver discussed a couple of years ago as part of his ‘privacy ceiling’ concept (I covered it a bit here at the time): effectively, whatever information you collect could become a liability for you at some point, so if you don’t need it, design the system so it simply doesn’t collect it in the first place.
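
As a toy illustration of that ‘don’t collect it in the first place’ principle – a hypothetical sketch, not Cuil’s actual code – a search logger can be built so that identifying fields never reach storage at all:

```python
# Hypothetical sketch of 'privacy by not collecting': the web server
# inevitably sees the IP address and any cookie, but the log record is
# built without them, so there is nothing to hand over later.
# (Illustrative only -- not Cuil's actual implementation.)

import time

def search_log_record(query: str, ip_address: str, cookie: str | None) -> dict:
    # ip_address and cookie are accepted but deliberately discarded:
    # they are never written into the record
    return {
        "timestamp": int(time.time() // 3600) * 3600,  # coarsened to the hour
        "query": query,
        # no IP, no cookie, no user id: the query cannot be tied
        # to a person or a machine
    }

print(search_log_record("privacy ceiling", "203.0.113.7", "session=abc123"))
```

The point of the design is that the liability Craver describes never accumulates: there is no retention policy to audit and nothing to subpoena.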

Apologies for the delay to this service

You’re owed an apology, dear reader, for the two-month hiatus on the blog. It’s down to a variety of reasons compounding each other: first forcing me to prioritise other pressing problems, then, when I tried to seize the initiative again, frustrating me with technical issues that actually prevented posting. You probably never noticed it, due to the nature of the exploit, but this blog was drawn into a nightmare of invisible insertion: hundreds of spam links in the header and footer, incorporating the URLs of dozens of other similarly attacked WordPress blogs, redirecting to the spammers’ intended destination.

Digital control round-up

An 'Apple' dongle

Mac as a giant dongle

At Coding Horror, Jeff Atwood makes an interesting point about Apple’s lock-in business model:

It’s almost first party only – about as close as you can get to a console platform and still call yourself a computer… when you buy a new Mac, you’re buying a giant hardware dongle that allows you to run OS X software.

There’s nothing harder to copy than an entire MacBook. When the dongle — or, if you prefer, the “Apple Mac” — is present, OS X and Apple software runs. It’s a remarkably pretty, well-designed machine, to be sure. But let’s not kid ourselves: it’s also one hell of a dongle.

If the above sounds disapproving in tone, perhaps it is. There’s something distasteful to me about dongles, no matter how cool they may be.

Of course, as with other dongles, there are plenty of people who’ve got round the Mac hardware ‘dongle’ requirement. Is it true to say (à la John Gilmore) that technical people interpret lock-ins (and other constraints) as damage and route around them?

Screenshot of Mukurtu archive website

Social status-based DRM

The BBC has a story about the Mukurtu Wumpurrarni-kari Archive, a digital photo archive developed by/for the Warumungu community in Australia’s Northern Territory. Because of cultural constraints, social status, gender and community background have been used to determine whether or not users can search for and view certain images:

It asks every person who logs in for their name, age, sex and standing within their community. This information then restricts what they can search for in the archive, offering a new take on DRM.

For example, men cannot view women’s rituals, and people from one community cannot view material from another without first seeking permission. Meanwhile images of the deceased cannot be viewed by their families.
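
In software terms, this is attribute-based access control. A hypothetical Python sketch of the general shape of such rules – the field names and rule details are my guesses at the structure, not the Mukurtu archive’s actual implementation – might look like this:

```python
# Hypothetical sketch of attribute-based access rules like those the BBC
# article describes. Field names and rule details are assumptions, not the
# Mukurtu archive's actual code.

from dataclasses import dataclass, field

@dataclass
class Viewer:
    family: str     # family/kin group, as declared at login
    sex: str        # 'male' or 'female'
    community: str

@dataclass
class Image:
    subject: str    # e.g. "women's ritual"
    community: str
    deceased_families: set = field(default_factory=set)  # kin of anyone deceased who is depicted

def may_view(viewer: Viewer, image: Image) -> bool:
    # Men cannot view women's rituals
    if image.subject == "women's ritual" and viewer.sex == "male":
        return False
    # Material from another community requires permission (not modelled here)
    if image.community != viewer.community:
        return False
    # Images of the deceased cannot be viewed by their families
    if viewer.family in image.deceased_families:
        return False
    return True

viewer = Viewer(family="X", sex="male", community="Warumungu")
photo = Image(subject="women's ritual", community="Warumungu")
print(may_view(viewer, photo))  # False: men cannot view women's rituals
```

Note that rules like these can only enforce the attributes declared at login, which is exactly the self-censorship question raised below.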

It’s not completely clear whether it’s intended to help users perform self-censorship (i.e. they ‘know’ they ‘shouldn’t’ look at certain images, and the restrictions help them achieve that) or whether it’s intended to stop users seeing things they ‘shouldn’t’, even if they want to. I think it’s probably the former, since there’s nothing to stop someone entering false details (though that assumes the idea of entering false details would be obvious to someone not experienced with computer login procedures; it may not be).

While, from my western point of view, this kind of social-status-based DRM seems complete anathema – an entirely arbitrary restriction on knowledge dissemination – I can see that it offers something aside from our common understanding of censorship, and if it’s ‘appropriate’ in this context, then I guess it’s up to them. It’s certainly interesting.

Nevertheless, imagining for a moment that there were a Warumungu community living in the EU, would DRM (or any other kind of access restriction) based on a) gender or b) social status not be illegal under European Human Rights legislation?

Disabling buttons

Disabled buttons

From Clientcopia:

Client: We don’t want the visitor to leave our site. Please leave the navigation buttons, but remove the links so that they don’t go anywhere if you click them.

It’s funny because the suggestion is such a crude way of implementing it, but it’s not actually that unlikely. A 2005 patent by Brian Shuster details a “program [that] interacts with the browser software to modify or control one or more of the browser functions, such that the user computer is further directed to a predesignated site or page… instead of accessing the site or page typically associated with the selected browser function” – and we’ve looked before at websites deliberately designed to break in certain browsers, and at right-click menus disabled for arbitrary purposes.

Slanty design

Library of Congress, Main Reading Room
The Main Reading Room, Library of Congress. Image from CIRLA.

In a January 2007 article in Communications of the ACM, Russell Beale uses the term slanty design to describe “design that purposely reduces aspects of functionality or usability”:

It originated from an apocryphal story that some desks in the US Library of Congress in Washington, DC, are angled down toward the patron, with a glass panel over the wood, so when papers are being viewed, nothing harmful (like coffee cups, food and ink pens) can be put on top of them. This makes them less usable (from a user-centric point of view) but much more appropriate for their overall purpose.

[S]lanty design is useful when the system must address wider goals than the user might have, when, say, they wish to do something that in the grander scheme of things is less than desirable.

New Pig cigarette bin
Cone cup
The angled lid on this cigarette bin prevents butts being placed on top; the cone shape of the cup subtly discourages users from leaving it on the table.

We’ve looked before on this site at a couple of literally ‘slanty’ examples – notably, cigarette bins with angled lids and paper cone cups (above) – and indeed “the common technique of architects to use inclined planes to prevent people from leaving things, such as coffee cups, on flat spaces” is noted on the Designweenie blog here. In his article, though, Beale expands the scope of the term to encompass interfaces or interaction methods designed to prevent or discourage certain user behaviour, for strategic reasons: in essence, what I’ve tried to corral under the heading ‘architectures of control‘ for the last few years, arrived at by a different route:

We need more than usability to make things work properly. Design is (or should be) a conversation between users and design experts and between desired outcomes and unwanted side effects… [U]ser-centred design is grounded in the user’s current behavior, which is often less than optimal.

Slanty design incorporates the broader message, making it difficult for users to do unwanted things, as well as easy to do wanted things. Designers need to design for user non-goals – the things users do not want to do or should not be able to do even if they want to [my emphases]. If usability is about making it easy for users to do what they must do, then we need anti-usability as well, making it difficult for them to do the things we may not want them to do.

He gives the example of Gmail (below), where Google has (or had – the process is apparently not so difficult now) made it difficult for users to delete email – “Because Google uses your body of email to mine for information it uses to target the ads it delivers to generate revenue; indeed, deleting it would be detrimental to the service” – though in fact this strategy might be beneficial for the user: “By providing a large amount of storage space for free, Gmail reduces any resource pressure, and by making the deletion process difficult it tries to re-educate us to a new way of operating, which also happens to achieve Google’s own wider business goals.” This is an interesting way of looking at it, and somewhat reminiscent of the debate on deleting an Amazon or eBay account – see also Victor Lombardi’s commentary on where the balance lies.

How to delete an email in Gmail

However, from my point of view, if there’s one thing which has become very clear from investigating architectures of control in products, systems and environments, it’s that the two goals Beale mentions – “things users do not want to do” and things users “should not be able to do” – only coincide in a few cases, with a few products and a few types of user. Most poka-yoke examples would seem to be a good fit, as would many of the design methods for making it easier to save energy on which my PhD is focusing, but outside these areas there are an awful lot of examples where the goal of the user conflicts with the goal of the designer/manufacturer/service provider/regulator/authority, and it’s the user’s ability which is sacrificed in order to enforce or encourage behaviour in line with what the ‘other’ party wants. “No-one wakes up in the morning wanting to do less with his or her stuff,” as Cory Doctorow puts it.

Beale does recognise that conflicts may occur – “identify wider goals being pursued by other stakeholders, including where they conflict with individual goals” – and that an attempt should be made to resolve them, but, personally, I think an emphasis on using ‘slanty’ techniques to assist the user (and the ‘other’ party, whether directly or simply through improved customer satisfaction and recommendation) would be a better direction for ‘slanty design’ to take.

Slanty carousel - image by Russell Beale
“Slanty-designed baggage carousel. Sloping floor keeps the area clear”. From ‘Slanty Design’ article by Russell Beale.

Indeed, it is this aim of helping individual users while also helping the supersystem (and actually using a slant, as it happens) which informs a great suggestion Beale elaborates on: airport baggage carousels with a slanted floor (above):

The scrum of trolleys around a typical [carousel] makes it practically impossible to grab a bag when it finally emerges. A number of approaches have been tried. Big signs… a boundary line… a wide strip of brightly coloured floor tiles…

My slanty design would put a ramp of about 30 degrees extending two meters or so up toward the belt… It would be uncomfortable to stand on, and trolleys would not stay there easily, tending to roll off backward or at least be awkward to handle. I might also add a small dip that would catch the front wheels, making it even more difficult to get the trolley or any other wheeled baggage on it in the first place, but not enough to trip up a person.

If I was being really slanty, I’d also incorporate 2 cm-high bristles in the surface, making it a real pain for the trolleys on it and not too comfy for the passengers to stay there either. Much easier for people to remain (with their trolleys) on the flat floor than negotiate my awkward hill. We’d retain the space we need, yet we could manage the short dash forward, up the hill, to grab our bags, then return to our trolleys, clearing the way for the next baggage-hungry passenger.

There are some very interesting ideas embodied in this example – I’m not sure that bristles on such a slope would be especially easy for wheelchair users – but the overall idea of helping both the individual user and the collective (and probably the airport authority too, by reducing passenger frustration and the need to supervise the carousel) is very much something that this kind of design, carefully thought out, can bring about.

The future of academic exposure?

Too many papers
A lot of research is published each year.

Now that I’m a student again, I’ve got access (via Athens) to a vastly increased range of academic journals, papers and so on – far more than I could have reached ‘legitimately’ without that Athens login, short of travelling from library to library to library. And while it’s good for me to have that login right at this moment, the necessity for such a login is hardly good for society as a whole. As an independent researcher, I simply could not keep on top of my subject properly.

I think it’s fairly clear that open access is the way to go, and certainly where research has enjoyed any degree of public funding there should be no case otherwise. But even where research is freely or easily available, its impact, as a result of limited exposure, is often also very limited or nonexistent, even within academia.

This is surely an omnipresent worry/headache/frustration for many researchers, and the issue was brought home to me the other day. I was reading a (fairly academic) book, published in the UK in 2005, written by a design professor at a university about 50 miles from here, and found a comment, within a discussion of a particular issue, along the lines of “no research has been done on the extent to which A relates to B in the field of C, but it is safe to assume D” – and yet, in front of me on the desk, was a PhD thesis completed in 2003, at my university, addressing not only the exact issue specified, but also showing D to be incorrect. A paper based on this thesis was published in an engineering journal and presented at a conference, but it clearly escaped the notice of the book’s author.

Now, of course, this probably happens a thousand times a day in academia. It’s not an especially interesting example, and there may be many possible explanations – the most likely being that the book took a long time to go from research to publication. But assuming it didn’t, and assuming the book’s author, despite being by all accounts an ‘expert’ in his field, really was unaware of research going on not too far away, then there is a failure of communication. (In this case, there might also be the often self-imposed disconnect between the ‘design’ community and the ‘engineering’ community: the assumption that research done in a different field is irrelevant or likely to be incomprehensible. That, perhaps, is another problem again.)

This type of communication failure is not necessarily entirely the fault of either side, but it is a problem, across all fields of knowledge and endeavour. So what’s the answer?

I don’t know, from that kind of distance, but closer up I have a hunch that broad subject blog families, such as Scienceblogs, ‘research digest’ blogs such as the British Psychological Society‘s, and individual blogs with a fairly wide scope, such as Mind Hacks (these latter two both from the same field), are going to become increasingly important mechanisms for disseminating research advances to both an academic and a wider audience. Whether awareness of a particular new piece of research comes directly from a researcher reading the site, or from a colleague or friend-of-a-friend referring the researcher to it, the path from ignorance to awareness is (potentially) shorter and easier than before. It becomes (potentially) less likely that anyone reasonably well-informed about a field will miss the opportunity to learn about other research in it – at least research which is newly published or which somehow comes to the bloggers’ attention (so the bloggers’ filtering and discriminatory abilities are very important here).

Something I’m planning to do on this blog, from now on, is to review useful or interesting academic papers or journal articles (or books, of course) I come across, from a variety of academic areas, which are relevant to the field of architectures of control, and to design for behaviour change in general – seen through the lens of my PhD research focus, extracting pertinent arguments and quotes, following up references, and so on. I hope, in some small way, this will also bring particular areas of research to the attention of researchers from other disciplines, in the same way (for example) that Lawrence Lessig’s “code is law” concept made me think more about constraints and behaviour-shaping in product design in the first place.

From a practical point of view, this approach also seems like it might be a very useful way to document the process of getting to grips with the literature on a subject – helping immensely when it comes to putting together my actual literature review for the PhD – and allowing input (commentary, recommendations, suggestions) from a very diverse set of readers worldwide, in a way which the traditional ivory tower or even open-plan research office doesn’t, or can’t, at least during this stage of the research. While I’m sure there are plenty of other people who’ve had a similar idea (any links would be very interesting: I love seeing how other people structure their research), this approach seems quite excitingly fresh to me, imbuing the literature review process with a vibrancy and immediacy that simply wouldn’t have been as easy to achieve in the past.