Night modes and the new hue of our screens

Information & Culture just published (paywall; or free pre-print) an article I wrote about “night modes,” in which I try to untangle the history of light, screens, sleep loss, and circadian research. If we navigate our lives enmeshed with technologies and their attendant harms, I wanted to know how we make sense of our orientation to the things that prevent harm. To think, in other words, of the constellation of people and things that are meant to ward off, stave off, or otherwise mitigate the endemic effects of using technology.

If you’re not familiar with “night modes”: in recent years, hardware manufacturers and software companies have introduced new device modes that shift the color temperature of screens during evening hours. To put it another way: your phone turns orange at night now. Perhaps you already use f.lux, or Apple’s “Night Shift,” or “Twilight” for Android.
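The underlying mechanics of these tools are simple: map the clock (or, in real implementations, local sunset time and user settings) to a target white point, and warm the screen gradually as evening approaches. Here is a minimal sketch in Python; the temperatures and fade window are illustrative assumptions of mine, not any vendor’s actual values.

```python
from datetime import datetime

# A toy sketch of night-mode scheduling. The temperatures and fade window
# below are illustrative assumptions, not any vendor's actual values.
DAY_TEMP_K = 6500     # a typical daylight white point
NIGHT_TEMP_K = 3400   # a warmer, "orange" white point
FADE_START_HOUR = 19  # begin warming at 7 pm
FADE_END_HOUR = 22    # fully warm by 10 pm

def target_color_temperature(now: datetime) -> int:
    """Return the screen white point (in kelvin) for the given time."""
    hour = now.hour + now.minute / 60
    if 6 <= hour < FADE_START_HOUR:  # daytime (approximate)
        return DAY_TEMP_K
    if FADE_START_HOUR <= hour < FADE_END_HOUR:
        # Linearly interpolate between the day and night white points.
        progress = (hour - FADE_START_HOUR) / (FADE_END_HOUR - FADE_START_HOUR)
        return round(DAY_TEMP_K + progress * (NIGHT_TEMP_K - DAY_TEMP_K))
    return NIGHT_TEMP_K  # late night and pre-dawn

print(target_color_temperature(datetime(2017, 5, 1, 20, 30)))  # -> 4950
```

The lower the kelvin value, the less blue light the display emits, which is the whole point of the intervention.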

All of these software interventions come as responses to the belief that light exposure close to bedtime will result in less sleep, or less restful sleep. Research into human circadian rhythms has had a powerful influence on how we think and talk about healthy technology use. And recent discoveries about the human response to light, as you’ll learn in the article, are based on a tiny subset of blind persons who lack rods and cones. As such, it’s part of a longer history of using research on persons with disabilities to shape and optimize communication technologies – a historical pattern that the media and disability studies scholar Mara Mills has documented throughout her career.



Beyond bugs and features: A case for indeterminacy

Spandrels of San Marco. [CC License from Tango7174]
In 1979, Harvard professors Stephen Jay Gould and Richard Lewontin identified what they saw as a shortcoming in American and English evolutionary biology. It was, they argued, dominated by an adaptationist program.[1] By this, they meant that it embraced a misguided atomization of an organism’s traits, which then “are explained as structures optimally designed by natural selection for their function.”[2] For example, an exaggerated version of the adaptationist program might look at a contemporary human face, see a nose, and argue that it was adapted and selected for its ability to hold glasses. Such a theory of the nose ignores not only the plural functions the nose serves, but also the complex history of its evolution, its shifting usefulness for different kinds of activities, its mutational detours, the different kinds of noses, and the nose’s evolution as part of the larger systems of faces, bodies, and environments. So how should we talk about noses? Or, more importantly, how do we talk about any single feature of a complex system?

Discourse Matters: Designing better digital futures

A very similar version of this blog post originally appeared in Culture Digitally on June 5, 2015.

Words Matter. As I write this in June 2015, a United Nations committee in Bonn is occupied with the massive task of editing a document overviewing global climate change. The effort to reduce 90 pages to a short(er), sensible, and readable set of facts and positions is not just a matter of editing but a battle among thousands of stakeholders and political interests, dozens of languages, and competing ideas about what is real and, therefore, what should or should not be done in response to this reality.


I think about this as I complete a visiting fellowship at Microsoft Research, where over a thousand researchers worldwide study complex world problems and focus on advancing state-of-the-art computing. In such research environments the distance between one’s work and the design of the future can feel quite small. Here, I feel like our everyday conversations and playful interactions on whiteboards have the potential to actually impact what counts as the cutting edge and what might get designed at some future point.

But in less overtly “future making” contexts, our everyday talk still matters, in that words construct meanings, which over time and usage become taken-for-granted ways of thinking about how the world works. These habits of thought, writ large, shape and delimit social action, organizations, and institutional structures.

In an era of web 2.0, networked sociality, constant connectivity, smart devices, and the internet of things (IoT), how does everyday talk shape our relationship to technology, or our relationships to each other? If the theory of social construction is really a thing, are we constructing the world we really want? Who gets to decide the shape of our future? More importantly, how does everyday talk construct, feed, or resist larger discourses?

Rhetoric as world-making

From a discourse-centered perspective, rhetoric is not a label for politically loaded or bombastic communication practices, but rather, a consideration of how persuasion works. Reaching back to the most classic notions of rhetoric from the ancient Greek philosopher Aristotle, persuasion involves a mix of logical, emotional, and ethical appeals, which have no necessary connection to anything that might be sensible, desirable, or good to anyone, much less a majority. Persuasion works whether or not we pay attention. Rhetoric can be a product of deliberation or effort, but it can also function without either.

When we represent the techno-human or socio-technical relation through words and images, these representations function rhetorically. World making is inherently discursive at some level. And if making is about changing, this process inevitably involves some effort to influence how people describe, define, respond to, or interact with/in actual contexts of lived experience.

I have three sisters, each involved as I am in world-making, if such a descriptive phrase can be applied to the everyday acts of inquiry that prompt change in socio-technical contexts. Cathy is an organic gardener who spends considerable time improving techniques for increasing her yield each year.  Louise is a project manager who designs new employee orientation programs for a large IT company. Julie is a biochemist who studies fish in high elevation waterways.

Perhaps they would not describe themselves as researchers, designers, or even makers. They’re busy carrying out their job or avocation. But if I think about what they’re doing from the perspective of world-making, they are all three, plus more. They are researchers, analyzing current phenomena. They are designers, building and testing prototypes for altering future behaviors. They are activists, putting time and energy into making changes that will influence future practices.

Their work is alternately physical and cognitive, applied for distinct purposes, targeted to very different types of stakeholders.  As they go about their everyday work and lives, they are engaged in larger conversations about what matters, what is real, or what should be changed.

Everyday talk is powerful not just because it has remarkable potential to persuade others to think and act differently, but also because it operates in such unremarkable ways. Most of us don’t recognize that we’re shaping social structures when we go about the business of everyday life. Sure, a single person’s actions can become globally notable, but most of the time, any small action such as a butterfly flapping its wings in Michigan is difficult to link to a tsunami halfway around the world. But whether or not direct causality can be identified, there is a tipping point where individual choices become generalized categories. Where a playful word choice becomes a standard term in the OED. Where habitual ways of talking become structured ways of thinking.

The power of discourse: Two examples

I mention two examples that illustrate the power of discourse to shape how we think about social media, our relationship to data, and our role in the larger political economies of internet related activities. These cases are selected because they cut across different domains of digital technological design and development. I develop these cases in more depth here and here.

‘Sharing’ versus ‘surfing’

The case of ‘sharing’ illustrates how a term for describing our use of technology (using, surfing, or sharing) can influence the way we think about the relationship between humans and their data, or the rights and responsibilities of various stakeholders involved in these activities. In this case, regulatory and policy frameworks have shifted the burden of responsibility from governmental or corporate entities to individuals. This may not be directly caused by the rise in the use of the term ‘sharing’ as the primary description of what happens in social media contexts, but this term certainly reinforces a particular framework that defines what happens online. When this term is adopted on a broad scale and taken for granted, it functions invisibly, at deep structures of meaning. It can seem natural to believe that when we decide to share information, we should accept responsibility for our action of sharing it in the first place.

It is easy to accept the burden for protecting our own privacy when we accept the idea that we are ‘sharing’ rather than doing something else. The following comment seems sensible within this structure of meaning: “If you didn’t want your information to be public, you shouldn’t have shared it in the first place.”  This explanation is naturalized, but is not the only way of seeing and describing this event. We could alternately say we place our personal information online like we might place our wallet on the table. When someone else steals it, we’d likely accuse the thief of wrongdoing rather than the innocent victim who trusted that their personal belongings would be safe.

A still different frame might characterize personal information as an extension of the body or even a body part, rather than an object or possession. Within this definition, disconnecting information from the person would be tantamount to cutting off an arm. As with the definition of the wallet above, accountability for the action would likely be placed on the shoulders of the ‘attacker’ rather than the individual who lost a finger or ear.

‘Data’ and quantification of human experience

With the rise of big data, we have entered (or some would say returned to) an era of quantification. Here, the trend is to describe and conceptualize all human activity as data—discrete units of information that can be collected and analyzed. Such discourse collapses and reduces human experience. Dreams are equated with body weight; personality is something that can be categorized with the same statistical clarity as diabetes.

The trouble with using data as the baseline unit of information is that it presents an imaginary of experience that is both impoverished and oversimplified. This conceptualization is not incidental, of course: it coincides with the focus on computation as the preferred mode of analysis, which is predicated on the ability to collect massive quantities of digital information from multiple sources, information that can only be measured through certain tools.

“Data” is a word choice, not an inevitable nomenclature. This choice has consequences from the micro to the macro, from the cultural to the ontological. This is the case because we’ve transformed life into arbitrarily defined pieces, which replace the flow of lived experience with information bits. Computational analytics makes calculations based on these information bits. This matters, in that such datafication focuses attention on that which exists as data and ignores what is outside this configuration. Indeed, data has become a frame for that which is beyond argument, because it always exists, no matter how it might be interpreted (a point well developed by many, including Daniel Rosenberg in his essay “Data before the fact”).

We can see a possible outcome of such framing in the emerging science and practice of “predictive policing.” This rapidly growing strategy in large metropolitan cities is a powerful example of how computation of tiny variables in huge datasets can link individuals to illegal behaviors. The example grows somewhat terrifying when we realize these algorithms are used to predict what is likely to occur, rather than to simply calculate what has occurred. Such predictions are based on data compiled from local and national databases, focusing attention on only those elements of human behavior that have been captured in these data sets (for more on this, see the work of Sarah Brayne).

We could alternately conceptualize human experience as a river that we can only step in once, because it continually changes as it flows through time-space. In such a Heraclitean characterization, we might then focus more attention on the larger shape and ecology of the river rather than trying to capture the specificities of the moment when we stepped into it.

Likewise, describing behavior in terms of the chemical processes in the brain, or in terms of the encompassing political situation within which it occurs will focus our attention on different aspects of an individual’s behavior or the larger situation to which or within which this behavior responds. Each alternative discourse provokes different ways of seeing and making sense of a situation.

When we stop to think about it, we know these symbolic interactions matter. Gareth Morgan’s classic work about metaphors of organization emphasizes how the frames we use will generate distinctive perspectives and, more importantly, distinctive structures for organizing social and workplace activities. We might reverse-engineer these structures to find a clash of rivaling symbols, only some of which survive to define the moment and create future history. Rhetorical theorist Kenneth Burke would talk about these symbolic frames as myths. In a 1935 speech to the American Writers’ Congress he notes that:

“myth” is the social tool for welding the sense of interrelationship by which [we] can work together for common social ends. In this sense, a myth that works well is as real as food, tools, and shelter are.

These myths do not just function ideologically in the present tense. As they are embedded in our everyday ways of thinking, they can become naturalized principles upon which we base models, prototypes, designs, and interfaces.

Designing better discourses

How might we design discourse to try to intervene in the shape of our future worlds? Of course, we can address this question as critical and engaged citizens. We are all researchers and designers involved in the everyday processes of world-making. Each of us, in our own way, is produsing the ethics that will shape our future.

This is a critical question for interaction and platform designers, software developers, and data scientists. In our academic endeavors, the impact of our efforts may or may not seem consequential on any grand scale. The outcome of our actions may have nothing to do with what we thought or desired from the outset. Surely, the butterfly neither intends nor desires to cause a tsunami.

Butterfly effect comic. Image by J. L. Westover

Still, it’s worth thinking about. What impact do we have on the larger world? And should we be paying closer attention to how we’re ‘world-making’ as we engage in the mundane, the banal, the playful? When we consider the long future impact of our knowledge producing practices, or the way that technological experimentation is actualized, the answer is an obvious yes.  As Laura Watts notes in her work on future archeology:

futures are made and fixed in mundane social and material practice: in timetables, in corporate roadmaps, in designers’ drawings, in standards, in advertising, in conversations, in hope and despair, in imaginaries made flesh.

It is one step to notice these social construction processes. The challenge then shifts to one of considering how we might intervene in our own and others’ processes, anticipate future causality, turn a tide that is not yet apparent, and try to impact what we might become.

Acknowledgments and references

Notably, the position I articulate here is not new or unique, but another variation on a long-running theme of critical scholarship, which is well represented by members of the Social Media Collective. I am also indebted to a long tradition of feminist and critical scholarship. This position statement is based on my recent interests and concerns about social media platform design, the role of self-learning algorithmic logics in digital culture infrastructures, and the ethical gaps emerging from rapid technological development. It derives from my previous work in digital identity, ethnographic inquiry of user interfaces and user perceptions, and recent work training participants to use auto-ethnographic and phenomenological techniques to build reflexive critiques of their lived experience in digital culture. There are, truly, too many sources and references to list here, but here is a short list of what I directly mentioned:

Kenneth L. Burke. 1935. Revolutionary symbolism in America. Speech to the American Writers’ Congress, February 1935. Reprinted in The Legacy of Kenneth Burke. Herbert W. Simons and Trevor Melia (eds). Madison: University of Wisconsin Press, 1989. Retrieved 2 June 2015 from: http://parlormultimedia.com/burke/sites/default/files/Burke-Revolutionary.pdf

Annette N. Markham. Forthcoming. From using to sharing: A story of shifting fault lines in privacy and data protection narratives. In Digital Ethics (2nd ed). Bastiaan Vanacker, Donald Heider (eds). Peter Lang Press, New York. Final draft available in PDF here

Annette N. Markham. 2013. Undermining data: A critical examination of a core term in scientific inquiry. First Monday, 18(10).

Gareth Morgan. 1986. Images of Organization. Sage Publications, Thousand Oaks, CA.

Daniel Rosenberg. 2013. Data before the fact. In ‘Raw data’ is an oxymoron. Lisa Gitelman (ed). Cambridge, Mass.: MIT Press, pp. 15–40.

Laura Watts. 2015. Future archeology: Re-animating innovation in the mobile telecoms industry. In Theories of the mobile internet: Materialities and imaginaries. Andrew Herman, Jan Hadlaw, Thom Swiss (eds). New York: Routledge.

Turn This into That: A Remixing Experiment

Two sides of social production: crowdsourcing and remixing

Networked technologies have facilitated two forms of social production: remixing and crowdsourcing. Remixing has typically been associated with creative, expressive, and unconstrained work, such as the creation of video mashups or the funny image macros we often see on social media websites. Crowdsourcing, on the other hand, has been associated with large-scale mechanical work, like labeling images or transcribing audio, performed as microtasks on services like Amazon Mechanical Turk. So the stereotype is that remixing is playful, creative, and expressive, but undirected and often chaotic, while crowdsourcing gets actual work done but is monotonous and requires (small) financial incentives.

Crowdsourcing Creativity: “Mixsourcing”

The space between remixing and crowdsourcing has been partially explored. For example, one could argue that Wikipedia occupies a unique space between these two ideas, as it relies on some, albeit small, degree of human creativity, requires no financial incentives, and leverages large numbers of contributors who are encouraged to tweak one another’s submissions. However, Wikipedia’s texts are mainly functional, purposely devoid of any personal expressiveness, and constrained by the task at hand.

On the more creative end of the spectrum, artists have explored the use of crowdsourcing, such as the Johnny Cash Project and the Sheep Market, and researchers have evaluated the uses of creative crowdsourcing for design. We wondered, then, if there is a way to create a generic platform for performing creative and artistic work in a more directed, crowdsourcing-like way: some kind of “bounded creativity,” which we called “mixsourcing.”

The mixsourcing of a “Moonicorn”

We decided to play with this idea of mixsourcing through an exercise that involved giving people a creative, yet directed, task. The exercise consisted of first creating a novel piece of content, an image, to serve as a creative seed, and then asking specific people, using plain old e-mail, to turn it into something else, i.e., to remix it. The task was specifically crafted for each individual based on their interests, which we knew through pre-existing personal relationships with them.

The seed content used in this first exercise was an image, hand-drawn by one of the researchers, showing a unicorn with a moon for a head. We sent the email to a group of friends, appealing to their social relationships as a group and with one of the researchers; each person was offered a task: an invitation to turn the “moonicorn” into something based on what we knew they were good at:

“Hello my dear friends! […] I’m writing to see if you can help me with my summer project […] I gave you all top secret assignments below. If you can help, it would mean so so so so so much to me. You don’t have to spend a ton of time on it. And I’ll throw a boozy thanks-you party next week when I’m in town. I couldn’t ask for a better lump of friends. Much love from the west coast.

This is my moonicorn:

Jables: please create a moonicorn cocktail recipe.

Celia: please create an iPhone video documenting the rare, nut-eating moonicorn, played by @Ian

[…]

Jables not only created his moonicorn cocktail, he also prepared one, took a picture of it, and emailed a make-believe recipe to accompany the beverage:

The Sanguine Moonicorn

6oz Fresh Moonicorn Blood.

1oz Pure Moonicorn Tears [hold for virgins]

Topped with Moonicorn Sweetbreads, Moonicorn Gonad, Moon Cheese, and an olive.

Cleansed by fire and served over ice, with a Moonicorn Jerky Moon Dagger.

Celia and Ian also completed their collaborative task and produced a short video. Similarly, we received remixes in the form of a fiction article, a felt toy, and a music mix. Here is a collage of all the remixes people produced:

The moonicorn experiment was quite successful: more than half of the people actually completed their remix. Many of them spent quite a bit of time on it and were as keen on narrating their creative process as they were on sharing their finished work.

Subsequently, we decided to recreate a similar exercise on a larger scale. This failed, however. We used a school mailing list and a group on Facebook, but both failed to attract many participants. In both cases the message did not include personalized tasks, and the groups included many strangers. Hence, we hypothesized that three key attributes explained the success of the first experiment:

  1. Pre-existing personal relationships.
  2. Well-crafted, personalized tasks directed at specific individuals, which avoided the diffusion of responsibility well described in the social psychology literature.
  3. Detailed tasks. The messages to the broader groups were too open, putting the burden of choosing what to make on the remixers.

Turn This Into That

Using the insights gained from the previous exercises, we began to envision a mixsourcing platform to enable people to create and participate in remixing exchanges like the moonicorn one.

We called the platform “Turn This Into That,” as it describes the system’s premise.

To convey the spirit of the social relationships that we thought were instrumental to the success of the first exercise, we decided to build the system on a postcard metaphor.

People can send postcards challenging one another to turn something into something else, and the responses themselves can also be thought of as postcards. Furthermore, given the interest people showed in talking about their process, the submissions provide space and encouragement for people to address the whys and hows of their challenges and their remixes. Also, the challenges are specific in terms of what the remix should be, a photo for example, yet they leave the choice of what to do up to the remixer.

Practically speaking, we aimed for this platform merely to embody a mechanism for providing creative sparks, playfulness, and interactions among people, without actually having to deal with the complexities of creating tools or even repositories for the content itself. Our system would rely completely on the social media ecosystem for all of that; Turn This Into That would create the linkages.
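To make the postcard metaphor concrete, here is a minimal sketch, in Python, of the kind of data model such a system might be built around. The prototype’s actual schema was not published, so every name and field below is our own illustrative assumption.

```python
from dataclasses import dataclass

# Illustrative only: these names and fields are assumptions,
# not the prototype's actual schema.
@dataclass
class Challenge:
    """A 'postcard' asking someone to turn this into that."""
    sender: str
    recipient: str
    this_url: str   # link to the seed content, hosted on existing social media
    into_that: str  # e.g., "a cocktail recipe" or "an iPhone video"
    why: str = ""   # space to explain the challenge, per the design

@dataclass
class Remix:
    """A response postcard linking back to the finished remix."""
    challenge: Challenge
    result_url: str          # the remix itself lives elsewhere on the web
    process_notes: str = ""  # room to narrate the creative process

# A challenge in the spirit of the moonicorn exercise (hypothetical URL):
moonicorn = Challenge(
    sender="Sarah",
    recipient="Jables",
    this_url="https://example.com/moonicorn.png",
    into_that="a moonicorn cocktail recipe",
)
```

Note that such a model stores only links and framing text: the content itself stays out in the social media ecosystem, which is exactly the “linkages” role described above.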

Before implementing the actual system we built a semi-functional prototype, and we would like to invite you to use it and give us feedback.

—-

Turn This into That is a project by FUSE Labs intern Sarah Hallacher, in collaboration with Jenny Rodenhouse and Andrés Monroy-Hernández.

Cross-posted at FUSE Labs blog

Can objects be evil? A review of “Addiction by Design”

Bliss by Sean O’Brien

Schüll, Natasha Dow. (2012) Addiction by Design: Machine Gambling in Las Vegas. Princeton: Princeton University Press.

Addiction by Design is a nonfiction page-turner. A richly detailed account of the particulars of video gaming addiction, worth reading for the excellence of the ethnographic narrative alone, it is also an empirically rigorous examination of users, designers, and objects that deepens practical and philosophical questions about the capacities of players interacting with machines designed to entrance them. Many books that make worthy contributions to the theoretical literature of a particular field are slogs to read. Addiction by Design is as compelling as a horror story—a sad, smart horror story that calls off the Luddite witch hunt (Down with the machines!) in favor of an approach that examines the role of gaming designers within existing social systems of gender and class disparity.

The most popular gaming machines serve up video slots and video poker. They run on paycards because inserting cash and coinage slows down the rate of play, compromising the experience. By the mid-1990s in Las Vegas, Schüll reports, the vast majority of people at Gamblers Anonymous meetings were addicted to machines—not the table games, ponies, or lotteries previously associated with problem gambling. In 2003 it was estimated that 85 percent of industry profits nationally came from video gaming. For the people (mostly women) who become addicts, the draw of the machines has little to do with the possibility of winning big. Problem gamblers are attracted to the machines because they offer portals to an appealing parallel universe in which they can disconnect from the anxieties and pressures of everyday life. One of Schüll’s interviewees, Mollie, explains, “It’s like being in the eye of a storm, is how I’d describe it. Your vision is clear on the machine in front of you but the whole world is spinning around you, and you can’t really hear anything. You aren’t really there—you’re with the machine and that’s all you’re with.”

Woman playing video slots, 2010, Copyright Kate Krueger

Mollie’s experience is typical in at least two ways. First, she has a traumatic past that predisposes her to addictive behaviors. Second, she repeatedly spends all that she has in binges. But before we blame Mollie and other victims and then expound the benefits of 12-step programs with earnest optimism, Schüll asks readers to consider the insidious dependencies that arise between machine designers, casino owners, and gamblers, especially “problem gamblers,” whose struggle to control personal spending generates 30 to 60 percent of casino revenue. Schüll’s Addiction systematically builds on her basic argument that “just as certain individuals are more vulnerable to addiction than others, it is also the case that some objects, by virtue of their unique pharmacologic or structural characteristics, are more likely than others to trigger or accelerate an addiction.”

Schüll describes the progression of changes the industry has introduced in search of higher profits. For a while, ergonomics was economics. Then high-priced animators were hired to design pleasing sounds and animations to reward winners. But some players were annoyed that the animations were too slow, so the animations were dropped. Play sped up. Faster play was great for increasing dopamine delivery to the brain. It also tended to speed players toward the end of their credits, which lowered their loyalty to particular machines and the casinos that housed them. Chip-driven gaming allowed designers to respond to this problem by tweaking the programs so that frequent small wins (often less than the cost of playing a single hand) kept dopamine surging while players’ cash trickled steadily into casino coffers. One player in a gambling support group compared video machines to crack cocaine, a comparison frequently repeated by researchers and psychologists.[1] By some accounts, the recidivism rate is now higher for gambling than for any other addiction.
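To see how “frequent small wins” can coexist with a steady drain, consider a toy simulation; all the probabilities and payouts below are made-up numbers for illustration, not figures from Schüll’s book or any real machine.

```python
import random

BET = 1.00
# Hypothetical paytable: (probability, payout). Note the most common "win"
# pays less than the bet -- a loss dressed up as a win.
PAYTABLE = [
    (0.30, 0.60),   # 30%: flashing lights for a net 40-cent loss
    (0.15, 1.50),   # 15%: a genuine small win
    (0.01, 20.00),  # 1%: a rare larger win
]                   # remaining 54%: lose the whole bet

def spin() -> float:
    """Return the payout for one play."""
    roll = random.random()
    cumulative = 0.0
    for probability, payout in PAYTABLE:
        cumulative += probability
        if roll < cumulative:
            return payout
    return 0.0

random.seed(1)  # reproducible run
bankroll, spins, wins = 100.0, 0, 0
while bankroll >= BET:
    bankroll -= BET
    payout = spin()
    bankroll += payout
    spins += 1
    wins += payout > 0

print(f"{spins} plays, {wins/spins:.0%} ended in a 'win', bankroll exhausted")
```

Under these assumed numbers, nearly half of all plays produce the sights and sounds of winning, yet each play returns only about 60 cents on the dollar, so the bankroll trickles away all the same.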

The demons here are not the machines, though they are manifest in the machines. The demons are not the people who design the machines nor the people who build palaces in which the machines are arrayed in blinking, burbling gardens of vertiginous electronica. The demons are not located in the players’ genes or childhoods. The demons are not the state regulators who now embrace video gaming after corralling it on American Indian reservations for decades. There is no single devil here, and no particular exorcism can right the wrong, but there is something devilish about the way designers’ intentions and users’ neurology meet up to make video gaming so devastating for some and so profitable for others.

[1] Mary Sojourner, She Bets Her Life: A True Story of Gambling Addiction (New York: Seal Books, 2010).

This review is cross-posted at publicbooks.org, a new book review and visual essay website affiliated with Public Culture, a peer-reviewed academic journal.

In Defense of Friction

1903 telephone operator (John McNab on Flickr)

There is no doubt that technology has made my life much easier. I rarely share the romantic view that things were better when human beings used to do the boring tasks that machines now do. For example, I do not think there is much to gain by bringing back the old telephone operators. However, there are reasons to believe social computing systems should not automate social interactions.

In his paper about online trust, Coye Cheshire points out how automated trust systems can undermine trust itself by incentivizing cooperation through fear of punishment rather than through actual trust among people. Cheshire argues that:

strong forms of online security and assurance can supplant, rather than enhance, trust.

This leads to what he calls the trust paradox:

assurance structures designed to make interpersonal trust possible in uncertain environments undermine the need for trust in the first place

My collaborators and I found something similar when trying to automate credit-giving in the context of a creative online community. We found that automatic attribution given by a computer system does not replace the manual credit given by another human being. Attribution, it turns out, is a useful piece of information given by a system, while credit given by a person is a signal of appreciation, one that is expected and that cannot be automated.

Slippery when icy - problems with frictionless spaces (ntr23 on Flickr)

Similarly, others have noted how Facebook’s birthday reminders have “ruined birthdays” by “commoditizing” social interactions and people’s social skills. Furthermore, some have argued that “Facebook is ruining sharing” by making it frictionless.

In many scenarios, automation is quite useful, but with social interactions, removing friction can have a harmful effect on the social bonds established through friction itself. In other cases, as Shauna points out, “social networking sites are good for relationships so tenuous they couldn’t really bear any friction at all.”

I am not sure if sharing has indeed been ruined by Facebook, but perhaps this opens opportunities for new online services that allow people to have “friction-full” interactions.

What kind of friction would you add to existing online social systems?

Do Anonymous Websites Work?

Meme: Not sure if Facebook or 4chan
Hateful content gets posted on "real name" websites too

Some advocates of “real name” policies argue that pseudonymity is far too easy to abuse. They suggest that “real name” policies help reduce spamming and trolling. This might be true; however, you can still find a fair amount of troll-like behavior and hateful discourse on “real name” sites. Just sit on these Facebook searches for a few minutes and you will see the things people are willing to say using their real names. But what about anonymity? Do anonymous websites get run over by spammers and trolls until they collapse?

In a recent paper, my colleagues and I explored how anonymity looks in the wild. First, we mapped out the design choices for social sites. Some recent discussions of “real name” policies might imply there are only two options: real names and pseudonyms. This dichotomy is, to a degree, limiting and inaccurate. So we started by mapping the range of design choices for online identity along two axes:

A. Identity Representation

This refers to the identity metadata of a participant that a system displays when he or she interacts with others. Identity representation ranges from strong identity, such as Google+ and Facebook’s “real name” policies, to pseudonymity, such as Twitter @handles, to anonymity, such as 4chan’s complete lack of user names. It is important to note that the information a system collects or requires from users is not necessarily the same as what it displays to others. For example, most websites collect (at least temporarily) the IP addresses of their users, but few show them to others (Wikipedia does this for users who are not logged in). Even anonymous websites like 4chan ban users based on their IP addresses.

B. Archiving Strategies

This axis refers to the longevity and availability of content associated with a specific person in the system. These strategies range from permanent archival, such as Google’s everlasting logs, to temporary archival, such as Twitter’s limited history, to ephemerality, such as 4chan’s five-minute post lifespan (from our paper).

Design choices for social websites.
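As a schematic summary, the two axes can be read as a grid in which each service occupies a position. Here is a small sketch in Python; the placements are simply our readings of the services named above.

```python
from enum import Enum

# The two design axes, rendered schematically.
class Identity(Enum):
    STRONG = "real names"        # e.g., Facebook, Google+
    PSEUDONYMOUS = "handles"     # e.g., Twitter @handles
    ANONYMOUS = "no user names"  # e.g., 4chan

class Archiving(Enum):
    PERMANENT = "everlasting logs"    # e.g., Google
    TEMPORARY = "limited history"     # e.g., Twitter
    EPHEMERAL = "minutes-long posts"  # e.g., 4chan's /b/

# Each service is a point in the identity x archiving plane.
design_space = {
    "Facebook": (Identity.STRONG, Archiving.PERMANENT),
    "Twitter": (Identity.PSEUDONYMOUS, Archiving.TEMPORARY),
    "4chan /b/": (Identity.ANONYMOUS, Archiving.EPHEMERAL),
}

for site, (identity, archiving) in design_space.items():
    print(f"{site}: {identity.value} / {archiving.value}")
```

Hybrid models, like the Formspring example below, simply occupy several cells of this grid at once.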

Of course, even these two axes are a simplification of the design choices available. Many websites use clever hybrid models. For example, Formspring lets people link their accounts with their “real names” (using Facebook) and post content pseudonymously (with their Formspring user names) or anonymously (without user names at all). Similarly, Canvas uses a unique hybrid model that combines some of the options described above.

I mentioned before that even “real name” websites host a fair amount of inappropriate and offensive content. Pseudonymous websites are no strangers to that either; in fact, they may well be even more likely to host undesirable content. However, pseudonymous websites can also be highly prosocial. Two of my favorite online communities, StackOverflow and Reddit, often display astonishing examples of altruistic and pro-social behavior.

But what about completely anonymous communities? Do they eventually get run over by spammers and trolls until they die? The answer is: not exactly.

In our paper, we analyzed a specific community with anonymous and ephemeral content: 4chan. Say what you will about 4chan, but the website has already outlived Friendster, MySpace, and Digg (OK, these sites are not gone, but you know what I mean). Despite its archaic visual design and its offensive and extremely inappropriate content, 4chan is a thriving community with more than 7 million users and about 400 thousand posts per day on just one of its boards, /b/.

OK, 4chan has been alive for seven years and it is still thriving, but what about its content? The media coverage of 4chan has portrayed the site as “the Internet hate machine”. But the reality is much more nuanced than that. First, 4chan has several discussion boards. Some are more offensive than others, but the one that grabs the headlines is the random board, /b/, because of its “rowdiness and lawlessness”, as 4chan’s creator put it. Indeed, a lot of /b/’s content is pornographic and offensive; sometimes it resembles public bathroom graffiti or even dadaist art, as Amy Bruckman once said to us.

The media have paid a lot of attention to the cases of off-line harassment that originated in /b/. However, our data showed that only 7% of the posts intend to agitate for off-line action. The rest are mainly people sharing funny image macros, themed discussions, links, and personal stories, airing the grievances of everyday life, or even asking for advice. Most of the agitation for action fails to gain any traction; such calls get shut down with responses like “/b/ is not your personal army”. Participants take nothing seriously and are happy to make fun of everything (except violence against cats or puppies). Understanding 4chan is also complicated. Uninitiated users might take the posts at face value, which does not always capture their real intent or meaning. For example, participants often call each other “/b/tards” or some version of the word “fag” (e.g., “newfag” to refer to new users, “eurofag” for Europeans). These terms are clearly offensive, but in the context of 4chan, words and insults are often re-interpreted and co-opted.

Common words used on 4chan's /b/. Word cloud of five million posts.

4chan’s /b/ is probably not the strongest example to argue for the value of anonymity. Protecting activists, victims of abuse, or whistleblowers, to name a few, are much stronger reasons for anonymity. But what I am saying here is that anonymity and ephemerality, even at their worst, do not necessarily lead to a community’s collapse. And in fact, 4chan’s long record as the birthplace of a lot of Internet culture and memes might suggest that anonymity is conducive to social creativity.

Update: interview on Marketplace.


If you liked this, follow me on Twitter or identi.ca.

Designing for Social Norms (or How Not to Create Angry Mobs)

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code. In thinking about social media systems, plenty of folks think about monetization. Likewise, as issues like privacy pop up, we regularly see legal regulation become a factor. And, of course, folks are always thinking about what the code enables or not. But it’s depressing to me how few people think about the power of social norms. In fact, social norms are usually only thought of as a regulatory process when things go terribly wrong. And then they’re out of control and reactionary and confusing to everyone around. We’ve seen this with privacy issues and we’re seeing this with the “real name” policy debates. As I read through the discussion that I provoked on this issue, I couldn’t help but think that we need a more critical conversation about the importance of designing with social norms in mind.

Good UX designers know that they have the power to shape certain kinds of social practices by how they design systems. And engineers often fail to give UX folks credit for the important work that they do. But designing the system itself is only a fraction of the design challenge when thinking about what unfolds. Social norms aren’t designed into the system. They don’t emerge by telling people how they should behave. And they don’t necessarily follow market logic. Social norms emerge as people – dare we say “users” – work out how a technology makes sense and fits into their lives. Social norms take hold as people bring their own personal values and beliefs to a system and help frame how future users can understand the system. And just as “first impressions matter” for social interactions, I cannot overstate the importance of early adopters. Early adopters configure the technology in critical ways and they play a central role in shaping the social norms that surround a particular system.

How a new social media system rolls out is of critical importance. Your understanding of a particular networked system will be heavily shaped by the people who introduce you to that system. When a system unfolds slowly, there’s room for the social norms to slowly bake, for people to work out what the norms should be. When a system unfolds quickly, there’s a whole lot of chaos in terms of social norms. Whenever a networked system unfolds, there are inevitably competing norms that arise from people who are disconnected from one another. (I can’t tell you how much I loved watching Friendster when the gay men, Burners, and bloggers were oblivious to one another.) Yet, the faster things move, the faster those collisions occur, and the more confusing it is for the norms to settle.

The “real name” culture on Facebook didn’t unfold because of the “real name” policy. It unfolded because the norms were set by early adopters and most people saw that and reacted accordingly. Likewise, the handle culture on MySpace unfolded because people saw what others did and reproduced those norms. When social dynamics are allowed to unfold organically, social norms are a stronger regulatory force than any formalized policy. At that point, you can often formalize the dominant social norms without too much pushback, particularly if you leave wiggle room. Yet, when you start with a heavy-handed regulatory policy that is not driven by social norms – as Google Plus did – the backlash is intense.

Think back to Friendster for a moment… Remember Fakester? (I wrote about them here.) Friendster spent ridiculous amounts of time playing whack-a-mole, killing off “fake” accounts and pissing off some of the most influential of its userbase. The “Fakester genocide” prompted an amazing number of people to leave Friendster and head over to MySpace, most notably bands, all because they didn’t want to be configured by the company. The notion of Fakesters died down on MySpace, but the most central practice – the ability for groups (bands) to have recognizable representations – ended up being the most central feature of MySpace.

People don’t like to be configured. They don’t like to be forcibly told how they should use a service. They don’t want to be told to behave like the designers intended them to be. Heavy-handed policies don’t make for good behavior; they make for pissed off users.

This doesn’t mean that you can’t or shouldn’t design to encourage certain behaviors. Of course you should. The whole point of design is to help create an environment where people engage in the most fruitful and healthy way possible. But designing a system to encourage the growth of healthy social norms is fundamentally different than coming in and forcefully telling people how they must behave. No one likes being spanked, especially not a crowd of opinionated adults.

Ironically, most people who were adopting Google Plus early on were using their real names, out of habit, out of understanding how they thought the service should work. A few weren’t. Most of those who weren’t were using a recognizable pseudonym, not even trying to trick anyone. Going after them was just plain stupid. It was an act of force and people felt disempowered. And they got pissed. And at this point, it’s no longer about whether or not the “real names” policy was a good idea in the first place; it’s now an act of oppression. Google Plus would’ve been ten bazillion times better off had they subtly encouraged the policy without making a big deal out of it, had they chosen to only enforce it in the most egregious situations. But now they’re stuck between a rock and a hard place. They either have to stick with their policy and deal with the angry mob or let go of their policy as a peace offering in the hopes that the anger will calm down. It didn’t have to be this way though and it wouldn’t have been had they thought more about encouraging the practices they wanted through design rather than through force.

Of course there’s a legitimate reason to want to encourage civil behavior online. And of course trolls wreak serious havoc on a social media system. But a “real names” policy doesn’t stop an unrepentant troll; it’s just another hurdle that the troll will love mounting. In my work with teens, I see textual abuse (“bullying”) every day among people who know exactly who each other is on Facebook. The identities of many trolls are known. But that doesn’t solve the problem. What matters is how the social situation is configured, the norms about what’s appropriate, and the mechanisms by which people can regulate them (through social shaming and/or technical intervention). A culture where people can build reputation through their online presence (whether “real” names or pseudonyms) goes a long way in combating trolls (although it is by no means a foolproof solution). But you don’t get that culture by force; you get it by encouraging the creation of healthy social norms.

Companies that build systems that people use have power. But they have to be very very very careful about how they assert that power. It’s really easy to come in and try to configure the user through force. It’s a lot harder to work diligently to design and build the ecosystem in which healthy norms emerge. Yet, the latter is of critical importance to the creation of a healthy community. Cuz you can’t get to a healthy community through force.