Architectures of Control in the Digital Environment

The design field where architectures of control have become most firmly established is software; to a large extent any application which affords the user a limited range of behaviours is, by definition, an architecture of control.

This may seem obvious, but it is not a trivial statement to make: a system which uses a limited set of algorithms to determine how it functions is different to our experience of the ‘real’ world, in which the rules also exist but are (mostly) too complex for us to analyse deterministically. However, it may be argued that architectures of control are what give the software its function in the first place, so it is more useful here to look at the ‘next level up’ of control in software–architectures of control with strategic intentions of some kind.

Digital rights management

Digital rights management (DRM) can encompass a variety of architectures of control–in the words of Andreas Bovens, “in essence, every use that is not specifically permitted by the content [or indeed hardware] provider is in fact prohibited” [11].

This situation, whilst it has legal precedents in the idea of explicitly enumerated lists of rights (as opposed to a more evolutionary common law approach), has never before been applicable to products. The implications of this level of control for unanticipated ‘freedom to tinker’ innovation cannot yet be fully appreciated, but, as will be examined later, could be significant.

One factor driving DRM’s adoption is that digital electronics permits (indeed, relies upon) exact copies of information being made at low or zero marginal cost. Thus, if the information vendors (who may or may not be the rights-holders) wish to maintain their revenues or restrict the availability of information, technology needs to be embedded in the architecture of the information, the copying device, or both, to control or restrict the ability to copy. DRM allows the balance of control to be shifted from the user (e.g. “Who’ll know if I photocopy a book in the library rather than buying a copy?”) to the content or hardware provider (e.g. “We’ll build a photocopier that will refuse to copy the book in the first place”). Similarly, then, to the ‘disciplinary architecture’ outlined in the built environment context, DRM, both as copy-prevention and for other purposes, can be used to prevent legal infractions.
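
As a very rough sketch of this inversion (in Python, with every name hypothetical), the copy-control logic embedded in such a device might amount to little more than the following–the default is denial, and only those uses the provider has explicitly permitted are honoured:

    class ProtectedContent:
        # Content carries its own permission list: every use not
        # specifically permitted is, in effect, prohibited.
        def __init__(self, data, permitted_uses):
            self.data = data
            self.permitted_uses = set(permitted_uses)  # e.g. {"play"}

    def copy(content):
        # The control lives in the architecture: there is simply no code
        # path which duplicates content whose permissions omit "copy".
        if "copy" not in content.permitted_uses:
            raise PermissionError("copying not permitted by the provider")
        return ProtectedContent(content.data, content.permitted_uses)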

However, it can equally be used to prevent behaviours which are by no means illegal, but which the DRM controller desires to prevent for its own strategic reasons–in some cases, infringing established rights on the part of the consumer. For example, in most legislatures, it is accepted that a backup copy may be made of software, audio or video purchased by the consumer; yet DRM can prevent this ‘fair use’ copying with impunity [12]. Equally, there is the right of a customer to re-sell an item he or she has purchased; this, too, can be restricted using DRM, to the extent that, say, software could not be installed on a subsequent purchaser’s machine, even if it had been uninstalled from the original–to what extent this affects the statutory property rights of the purchaser will be an area of increased debate as DRM becomes more prevalent.
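
A minimal sketch of how such a resale restriction might work–assuming a simple hardware-fingerprint scheme; real systems combine several identifiers, and all names here are illustrative:

    import hashlib
    import uuid

    def machine_fingerprint():
        # Simplified: real schemes hash several hardware identifiers together.
        return hashlib.sha256(str(uuid.getnode()).encode()).hexdigest()

    def activate(licence):
        if licence.get("bound_to") is None:
            licence["bound_to"] = machine_fingerprint()  # bind on first install
            return True
        # A second-hand copy presents a different fingerprint and is refused,
        # even if the software was uninstalled from the original machine.
        return licence["bound_to"] == machine_fingerprint()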

There is increasing potential for DRM to provide the architectures of control to enforce the (often very restrictive) end-user licence agreements (EULAs) for software; whilst it is likely [13] that many users do not fully abide by the EULAs to which they currently ‘agree,’ architectures of control embedded in both software and hardware could greatly reduce the possibilities for deviance (see also the EULA forcing function).

Another implication of some DRM architectures is the control of user access: certain users could be prevented from viewing information or using functions (trivial strategic hardware analogues might be keeping certain items on high shelves to prevent children reaching them, or ‘child-proof’ lids on medicine bottles).
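
A sketch of this kind of access discrimination (all names hypothetical)–the gate turns on who is asking, not merely on what is asked for:

    def view(user, document):
        # The digital analogue of the high shelf or the child-proof lid:
        # access is decided by the identity or class of the user.
        if user["clearance"] < document["required_clearance"]:
            raise PermissionError("user not authorised to view this content")
        return document["body"]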

The discrimination could well be purely for security reasons (just as the first encryption of a message was, in itself, an architecture of control), but when a combination of economic and political motivations comes into play, the dystopian science-fiction vision presented back in 1997 in Richard Stallman’s “The Right to Read” does not appear especially exaggerated:

“In his software class, Dan had learned that each [electronic] book had a copyright monitor that reported when and where it was read, and by whom, to Central Licensing. (They used this information to catch reading pirates, but also to sell personal interest profiles to retailers.) The next time his computer was networked, Central Licensing would find out.” [14]

Trusted computing

Indeed, as the quote shows, Stallman also anticipated the rise of ‘trusted computing,’ in the sense of a computer which will report on its owner’s behaviour and–perhaps more importantly–is built with the ability for a third party, such as Microsoft, or a government agency (“absentees with clout” in Stallman’s phrase) to control it remotely. Of course, any attempt by the user to prevent this would be automatically reported, as would any attempts to tinker with or modify the hardware.

There is insufficient space here to explore the full range of architectures of control which trusted computing permits, but the most notable example identified by Cambridge’s Ross Anderson [15] is automatic document destruction across a whole network, which could remove incriminating material, or even be used to ‘unpublish’ particular authors or information (shades of Fahrenheit 451). Users identified as violators could be blacklisted from using the network of trusted computers, and anyone recorded as contacting, or having contacted, blacklisted users would automatically be put under some suspicion.
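
A sketch of the ‘unpublishing’ mechanism, assuming a hypothetical central revocation list which every trusted client consults before rendering a document:

    REVOKED_DOCUMENTS = {"doc-4451"}   # hypothetical identifiers
    REVOKED_AUTHORS = {"w.smith"}

    def render(doc_id, author, local_store):
        # Every trusted client checks the revocation list before rendering,
        # so material can be destroyed network-wide after publication.
        if doc_id in REVOKED_DOCUMENTS or author in REVOKED_AUTHORS:
            local_store.pop(doc_id, None)  # the local copy is destroyed too
            raise PermissionError("document has been unpublished")
        return local_store[doc_id]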

Within organisations (corporate and governmental), as Anderson points out, these architectures of control could be very useful security features–indeed, perhaps the salient features which spur widespread adoption of trusted computing. Confidential documents could be kept confidential with much less fear of leakage; documents could be prevented from being printed (as some levels of Adobe PDF security already permit [16, 26]); and those who have printed out restricted information (whether correspondence, CAD data, or minutes of meetings) would be recorded as such. Sensitive data could ‘expire,’ just as Flexplay’s DVDs [17] self-destruct 48 hours after they are removed from the package (another product architecture of control).
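
In the trusted computing case, data expiry needs no Flexplay-style chemistry–only a policy which travels with the document and a trusted reader which enforces it, sketched here with hypothetical field names:

    import time

    def open_document(doc):
        # The expiry policy travels with the document; because every trusted
        # reader enforces it, the data effectively ceases to exist on schedule.
        if time.time() > doc["created"] + doc["ttl_seconds"]:
            raise PermissionError("document has expired")
        return doc["body"]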

Flexplay’s self-expiring DVDs use an architecture of control – becoming unusable 48 hours after the packet is opened – to create a new business model for DVD ‘rental’.

The impact of data expiry on long-term archiving and Freedom of Information legislation, where internal government communications are concerned, is as yet unclear [18]; equally, the treatment of works which are legally in the public domain, yet fall under the control of access restrictions (the Adobe Alice in Wonderland eBook débâcle [e.g. 19, 27] being a DRM example) is a potential area of conflict. It is possible that certain works will never practically fall into the public domain, even though their legal copyright period has expired, simply because of the architectures of control which restrict how they can be used or distributed.

The wider implications of trusted computing architectures of control are numerous–including a significant impact on product design, as so many consumer products now run software of one form or another. The network effects of, for example, only being able to open files that have been created ‘within’ the trusted network will work heavily against non-proprietary and open-source formats. Those outside the ‘club’ may come under great pressure to join; a wider move towards a two-tier technological society (with those who wish to tinker, or who must do so out of economic or other necessity, very much sidelined by the ‘consensus’ of ‘trusted’ products and users) is possible.

Analogue-to-digital converters (ADCs) such as these Texas Instruments ICL7135CNs are classed as ‘endangered gizmos’ by the Electronic Frontier Foundation, as, along with digital-to-analogue converters (DACs), they allow DRM circumvention.

The analogue hole

The ‘analogue hole’ is another issue which architectures of control in both products and software aim to address. In simple terms, this is the idea that however sophisticated the DRM copy-prevention system on, say, a music CD, the data still have to be converted into an analogue form (sound) for humans to hear. So, if one can capture that sound and re-digitise it (or store it in an analogue form), a near-perfect copy can be made, circumventing any copy-prevention measures. Indeed, digital-to-analogue-to-digital conversion (as used in most modems) has also been used for some innovative reverse engineering, such as extracting the iPod’s firmware as a series of clicks in order to aid the iPodLinux project [20]. With such uses, it is perhaps no wonder that analogue-to-digital converter ICs (ADCs) themselves are considered “endangered gizmos” by the Electronic Frontier Foundation [21].
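
The hole itself can be sketched in a few lines–here play_protected and sample_line_out are hypothetical stand-ins for the playback hardware and a recorder wired to its analogue output:

    def rip_via_analogue_hole(play_protected, sample_line_out,
                              duration_s, rate_hz=44100):
        # However strong the DRM, playback must end in a plain signal;
        # re-sampling that signal yields an unencumbered copy.
        play_protected()
        return [sample_line_out() for _ in range(int(duration_s * rate_hz))]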

Architectures of control to plug the analogue hole could include products which refuse to record any input unless a verified authorisation signature is detected in the signal, or a product which deliberately degrades anything recorded using it (or only provides degraded output for connection to another device). Indeed, a ‘Broadcast Flag’ or equivalent [22], embedded in the signal or content, could explicitly list characteristics of any recording made, such as quality degradation, prevention of advertisement skipping, or number of subsequent copies that can be made.
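
A sketch of how a recorder might honour such a flag–all field names, and the capture callable, are assumptions for illustration rather than any real Broadcast Flag format:

    def record(stream):
        flag = stream["flag"]   # e.g. {"record": True, "max_copies": 1,
                                #       "degrade_to": "480p", "ad_skip": False}
        if not flag.get("record", False):
            raise PermissionError("recording prohibited by broadcast flag")
        quality = flag.get("degrade_to") or stream["native_quality"]
        return {"video": stream["capture"](quality),   # degraded if required
                "copies_left": flag.get("max_copies", 0),
                "ad_skip_allowed": flag.get("ad_skip", False)}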

Extending this idea, cameras and camcorders could detect the presence of copyrighted, trademarked or DRM’d material in an image or broadcast and refuse to record it, thus preventing the use of camcorders in cinemas–but also, perhaps, preventing your hobby of photographing company logos, or, as Cory Doctorow points out, “[refusing] to store your child’s first steps because he is taking them within eyeshot of a television playing a copyrighted cartoon” [23].

Already, some graphics software, such as Adobe Photoshop CS, prevents scanned images of banknotes from being opened or pasted–one might argue that this is done with both commercial and social-benefit intentions, but, as noted by posters at Metafilter, this may be the thin end of the wedge. How long will it be before Photoshop refuses to open an image which is marked as copyrighted? [107, 108]
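
The general shape of such a content gate is simple–sketched here with contains_restricted_pattern standing in for whatever detector (a banknote pattern check, a watermark scan, a copyright marker) the vendor embeds:

    def open_image(pixels, contains_restricted_pattern):
        # The application refuses to proceed if the detector fires; the user
        # is given no code path at all for handling the 'restricted' image.
        if contains_restricted_pattern(pixels):
            raise PermissionError("image contains restricted content")
        return pixels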

Could this really be what you see in your viewfinder if you try to photograph or film a copyrighted logo or image? Cameras and camcorders could include an architecture of control which prevents the user making unauthorised images which include copyrighted material or trademarks. It’s more likely, though, that rather than neatly pixellating the ‘unauthorised’ content, the device would simply refuse to take the photo.

A possible extension of this would be cameras / camcorders / scanners (and associated software) which automatically censor certain images for reasons other than copyright–for example, censoring significant areas of flesh. Indeed, Hewlett-Packard patented a ‘paparazzi-proof’ camera-phone image inhibitor system in 2004 (thanks to both Frank Field and Julian Wood for bringing this to my attention); from News.com:

“An image captured by a camera could be automatically modified based on commands sent by a remote device. In short, anyone who doesn’t want their photo taken at a particular time could hit a clicker to ensure that any cameras or camera-equipped gadgets in range got only a fuzzy outline of their face.” [109]

Whilst this innovation isn’t, apparently, intended to be commercialised, it does have some parallels with the idea of the slave-flash used to prevent car registration numbers being photographed by speed cameras–or, indeed, its use by ‘celebrities’ who don’t wish to be photographed.
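
A sketch of the inhibitor logic described in the patent coverage, with every name hypothetical–the camera consults nearby beacons and degrades any matching region before the image is ever stored:

    def capture(image, nearby_beacons, find_region, blur):
        # Each beacon is a remote 'do not photograph me' command; any region
        # matching its signature is blurred before the image is stored.
        for beacon in nearby_beacons:
            region = find_region(image, beacon["signature"])
            if region is not None:
                image = blur(image, region)
        return image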

The issue of the proposed Broadcast Flag [22]–whilst still not ultimately resolved [e.g. 24]–is another in a series of attempts by economic interests to lobby legislators to incorporate support for architectures of control into law. The major example in this field is the Digital Millennium Copyright Act (and its worldwide equivalents), which prohibits the development or distribution of technology intended to avoid copy prevention measures [25]; whether this is a genuine attempt to promote creativity through protecting copyright, or just rent-seeking, has been the subject of an enormous amount of debate over the past few years [e.g. 28]. The precedent set with DVD region-coding, for example, suggests that commercial benefit is the only motive of much work in this field, with no benefits for the consumer.

Other digital architectures of control

The architectures of computer networks themselves can, of course, be an important method of controlling user behaviour (and, along with other network architectures, have been studied extensively–see Control & networks). Without going into too much detail here, it is clear that much of the growth of the Internet can be put down to very loose, yet still functional, architectures of control, or code, as Lawrence Lessig puts it [29]. Anyone is free to write software and distribute it, publish information or ideas, transfer files, contact other users, or interact with and use data in different ways.

Architectures that introduce a more restrictive, prescriptive (and proscriptive) network structure may have benefits for security in online commerce and certainly offer governments a strategic tool for more effective control and censorship. As more and more consumer products operate as part of networks (from computers themselves to mobile phones and even toys), the potential for the network structure to be a significant architecture of control also increases.

Finally, the idea of captology [30], or “computers as persuasive technology”–using features inherent to computer-based systems to persuade users to modify their behaviour (for example, giving up smoking, or increasing motivation to exercise)–is a growing area in itself, and whilst captology always intends to persuade rather than coerce or force, the thinking has much in common with strategic design and architectures of control. Captology is examined further in Everyday things & persuasive technology.
