Discourse Matters: Designing better digital futures

A very similar version of this blog post originally appeared in Culture Digitally on June 5, 2015.

Words Matter. As I write this in June 2015, a United Nations committee in Bonn is occupied in the massive task of editing a document overviewing global climate change. The effort to reduce 90 pages into a short(er), sensible, and readable set of facts and positions is not just a matter of editing but a battle among thousands of stakeholders and political interests, dozens of languages, and competing ideas about what is real and therefore, what should or should not be done in response to this reality.


I think about this as I complete a visiting fellowship at Microsoft Research, where over a thousand researchers worldwide study complex world problems and focus on advancing the state of the art in computing. In such research environments the distance between one's work and the design of the future can feel quite small. Here, I feel like our everyday conversations and playful interactions on whiteboards have the potential to actually impact what counts as the cutting edge and what might get designed at some future point.

But in less overtly “future making” contexts, our everyday talk still matters, in that words construct meanings, which over time and usage become taken for granted ways of thinking about the way the world works. These habits of thought, writ large, shape and delimit social action, organizations, and institutional structures.

In an era of web 2.0, networked sociality, constant connectivity, smart devices, and the internet of things (IoT), how does everyday talk shape our relationship to technology, or our relationships to each other? If the theory of social construction is really a thing, are we constructing the world we really want? Who gets to decide the shape of our future? More importantly, how does everyday talk construct, feed, or resist larger discourses?

rhetoric as world-making

From a discourse-centered perspective, rhetoric is not a label for politically loaded or bombastic communication practices, but rather, a consideration of how persuasion works. Reaching back to the most classic notions of rhetoric from ancient Greek philosopher Aristotle, persuasion involves a mix of logical, emotional, and ethical appeals, which have no necessary connection to anything that might be sensible, desirable, or good to anyone, much less a majority. Persuasion works whether or not we pay attention. Rhetoric can be a product of deliberation or effort, but it can also function without either.

When we represent the techno-human or socio-technical relation through words and images, these representations function rhetorically. World-making is inherently discursive at some level. And if making is about changing, this process inevitably involves some effort to influence how people describe, define, respond to, or interact with/in actual contexts of lived experience.

I have three sisters, each involved as I am in world-making, if such a descriptive phrase can be applied to the everyday acts of inquiry that prompt change in socio-technical contexts. Cathy is an organic gardener who spends considerable time improving techniques for increasing her yield each year.  Louise is a project manager who designs new employee orientation programs for a large IT company. Julie is a biochemist who studies fish in high elevation waterways.

Perhaps they would not describe themselves as researchers, designers, or even makers. They’re busy carrying out their job or avocation. But if I think about what they’re doing from the perspective of world-making, they are all three, plus more. They are researchers, analyzing current phenomena. They are designers, building and testing prototypes for altering future behaviors. They are activists, putting time and energy into making changes that will influence future practices.

Their work is alternately physical and cognitive, applied for distinct purposes, targeted to very different types of stakeholders.  As they go about their everyday work and lives, they are engaged in larger conversations about what matters, what is real, or what should be changed.

Everyday talk is powerful not just because it has remarkable potential to persuade others to think and act differently, but also because it operates in such unremarkable ways. Most of us don’t recognize that we’re shaping social structures when we go about the business of everyday life. Sure, a single person’s actions can become globally notable, but most of the time, any small action such as a butterfly flapping its wings in Michigan is difficult to link to a tsunami halfway around the world. But whether or not direct causality can be identified, there is a tipping point where individual choices become generalized categories. Where a playful word choice becomes a standard term in the OED. Where habitual ways of talking become structured ways of thinking.

The power of discourse: Two examples

I mention two examples that illustrate the power of discourse to shape how we think about social media, our relationship to data, and our role in the larger political economies of internet related activities. These cases are selected because they cut across different domains of digital technological design and development. I develop these cases in more depth here and here.

‘Sharing’ versus ‘surfing’

The case of ‘sharing’ illustrates how a term for describing our use of technology (using, surfing, or sharing) can influence the way we think about the relationship between humans and their data, or the rights and responsibilities of various stakeholders involved in these activities. In this case, regulatory and policy frameworks have shifted the burden of responsibility from governmental or corporate entities to individuals. This may not be directly caused by the rise in the use of the term ‘sharing’ as the primary description of what happens in social media contexts, but this term certainly reinforces a particular framework that defines what happens online. When this term is adopted on a broad scale and taken for granted, it functions invisibly, at deep structures of meaning. It can seem natural to believe that when we decide to share information, we should accept responsibility for our action of sharing it in the first place.

It is easy to accept the burden for protecting our own privacy when we accept the idea that we are ‘sharing’ rather than doing something else. The following comment seems sensible within this structure of meaning: “If you didn’t want your information to be public, you shouldn’t have shared it in the first place.”  This explanation is naturalized, but is not the only way of seeing and describing this event. We could alternately say we place our personal information online like we might place our wallet on the table. When someone else steals it, we’d likely accuse the thief of wrongdoing rather than the innocent victim who trusted that their personal belongings would be safe.

A still different frame might characterize personal information as an extension of the body or even a body part, rather than an object or possession. Within this definition, disconnecting information from the person would be tantamount to cutting off an arm. As with the wallet example above, accountability for the action would likely be placed on the shoulders of the 'attacker' rather than the individual who lost a finger or ear.

‘Data’ and quantification of human experience

With the rise of big data, we have entered (or some would say returned to) an era of quantification. Here, the trend is to describe and conceptualize all human activity as data—discrete units of information that can be collected and analyzed. Such discourse collapses and reduces human experience. Dreams are equalized with body weight; personality is something that can be categorized with a similar statistical clarity as diabetes.

The trouble with using data as the baseline unit of information is that it presents an imaginary of experience that is both impoverished and oversimplified. This conceptualization coincides, of course, with the focus on computation as the preferred mode of analysis, which is predicated on the ability to collect massive quantities of digital information from multiple sources, information that can only be measured through certain tools.

"Data" is a word choice, not an inevitable nomenclature. This choice has consequences from the micro to the macro, from the cultural to the ontological. This is the case because we've transformed life into arbitrarily defined pieces, which replace the flow of lived experience with information bits. Computational analytics makes calculations based on these information bits. This matters, in that such datafication focuses attention on that which exists as data and ignores what is outside this configuration. Indeed, data has become a frame for that which is beyond argument, because it always exists, no matter how it might be interpreted (a point well developed by many, including Daniel Rosenberg in his essay "Data before the fact").

We can see a possible outcome of such framing in the emerging science and practice of "predictive policing." This rapidly growing strategy in large metropolitan cities is a powerful example of how computation of tiny variables in huge datasets can link individuals to illegal behaviors. The example grows somewhat terrifying when we realize these algorithms are used to predict what is likely to occur, rather than to simply calculate what has occurred. Such predictions are based on data compiled from local and national databases, focusing attention on only those elements of human behavior that have been captured in these data sets (for more on this, see the work of Sarah Brayne).

We could alternately conceptualize human experience as a river that we can only step in once, because it continually changes as it flows through time-space. In such a Heraclitian characterization, we might then focus more attention on the larger shape and ecology of the river rather than trying to capture the specificities of the moment when we stepped into it.

Likewise, describing behavior in terms of the chemical processes in the brain, or in terms of the encompassing political situation within which it occurs will focus our attention on different aspects of an individual’s behavior or the larger situation to which or within which this behavior responds. Each alternative discourse provokes different ways of seeing and making sense of a situation.

When we stop to think about it, we know these symbolic interactions matter. Gareth Morgan's classic work on metaphors of organization emphasizes how the frames we use will generate distinctive perspectives and, more importantly, distinctive structures for organizing social and workplace activities. We might reverse engineer these structures to find a clash of rival symbols, only some of which survive to define the moment and create future history. Rhetorical theorist Kenneth Burke would talk about these symbolic frames as myths. In a 1935 speech to the American Writers' Congress he notes that:

“myth” is the social tool for welding the sense of interrelationship by which [we] can work together for common social ends. In this sense, a myth that works well is as real as food, tools, and shelter are.

These myths do not just function ideologically in the present tense. As they are embedded in our everyday ways of thinking, they can become naturalized principles upon which we base models, prototypes, designs, and interfaces.

Designing better discourses

How might we design discourse to try to intervene in the shape of our future worlds? Of course, we can address this question as critical and engaged citizens. We are all researchers and designers involved in the everyday processes of world-making. Each of us, in our own way, is produsing the ethics that will shape our future.

This is a critical question for interaction and platform designers, software developers, and data scientists. In our academic endeavors, the impact of our efforts may or may not seem consequential on any grand scale. The outcome of our actions may have nothing to do with what we thought or desired from the outset. Surely, the butterfly neither intends nor desires to cause a tsunami.

butterfly effect comic
Image by J. L. Westover

Still, it’s worth thinking about. What impact do we have on the larger world? And should we be paying closer attention to how we’re ‘world-making’ as we engage in the mundane, the banal, the playful? When we consider the long future impact of our knowledge producing practices, or the way that technological experimentation is actualized, the answer is an obvious yes.  As Laura Watts notes in her work on future archeology:

futures are made and fixed in mundane social and material practice: in timetables, in corporate roadmaps, in designers’ drawings, in standards, in advertising, in conversations, in hope and despair, in imaginaries made flesh.

It is one step to notice these social construction processes. The challenge then shifts to one of considering how we might intervene in our own and others’ processes, anticipate future causality, turn a tide that is not yet apparent, and try to impact what we might become.

Acknowledgments and references

Notably, the position I articulate here is not new or unique, but another variation on a long running theme of critical scholarship, which is well represented by members of the Social Media Collective. I am also indebted to a long list of feminist and critical scholarship. This position statement is based on my recent interests and concerns about social media platform design, the role of self-learning algorithmic logics in digital culture infrastructures, and the ethical gaps emerging from rapid technological development. It derives from my previous work in digital identity, ethnographic inquiry into user interfaces and user perceptions, and recent work training participants to use auto-ethnographic and phenomenological techniques to build reflexive critiques of their lived experience in digital culture. There are, truly, too many sources and references to list here, but as a short list of what I directly mentioned:

Kenneth Burke. 1935. Revolutionary symbolism in America. Speech to the American Writers' Congress, February 1935. Reprinted in The Legacy of Kenneth Burke. Herbert W. Simons and Trevor Melia (eds). Madison: University of Wisconsin Press, 1989. Retrieved 2 June 2015 from: http://parlormultimedia.com/burke/sites/default/files/Burke-Revolutionary.pdf

Annette N. Markham. Forthcoming. From using to sharing: A story of shifting fault lines in privacy and data protection narratives. In Digital Ethics (2nd ed). Bastiaan Vanacker, Donald Heider (eds). Peter Lang Press, New York. Final draft available in PDF here.

Annette N. Markham. 2013. Undermining data: A critical examination of a core term in scientific inquiry. First Monday, 18(10).

Gareth Morgan. 1986. Images of Organization. Sage Publications, Thousand Oaks, CA.

Daniel Rosenberg. 2013. Data before the fact. In "Raw Data" Is an Oxymoron. Lisa Gitelman (ed). Cambridge, MA: MIT Press, pp. 15–40.

Laura Watts. 2015. Future archeology: Re-animating innovation in the mobile telecoms industry. In Theories of the mobile internet: Materialities and imaginaries. Andrew Herman, Jan Hadlaw, Thom Swiss (eds). Routledge.

Tumblr, NSFW porn blogging, and the challenge of checkpoints

After Yahoo’s high-profile purchase of Tumblr, when Yahoo CEO Marissa Mayer said that she would “promise not to screw it up,” this is probably not what she had in mind. Devoted users of Tumblr have been watching closely, worried that the cool, web 2.0 image blogging tool would be tamed by the nearly two-decade-old search giant. One population of Tumblr users, in particular, worried a great deal: those that used Tumblr to collect and share their favorite porn. This is a distinctly large part of the Tumblr crowd: according to one analysis, somewhere near or above 10% of Tumblr is “adult fare.”

Now that group is angry. And Tumblr's new policies, the ones that made them so angry, are a bit of a mess. Two paragraphs from now, I'm going to say that the real story is not the Tumblr/Yahoo incident, or how it was handled, or even why it's happening. But here's the quick run-down, and it's confusing if you're not a regular Tumblr user. Tumblr had a self-rating system: blogs with "occasional" nudity should self-rate as "NSFW". Blogs with "substantial" nudity should rate themselves as "adult." About two months ago, some Tumblr users noticed that blogs rated "adult" were no longer being listed with the major search engines. Then in June, Tumblr began taking both "NSFW" and "adult" blogs out of their internal search results — meaning, if you search in Tumblr for posts tagged with a particular word, sexual or otherwise, the dirty stuff won't come up. Unless the searcher already follows your blog, in which case the "NSFW" posts will appear, but not the "adult" ones. Akk, here, this is how Tumblr tried to explain it:

What this meant is that existing followers of a blog could largely still see its "NSFW" posts, but it would be very difficult for anyone new to find it. David Karp, founder and CEO of Tumblr, dodged questions about it on the Colbert Report, saying only that Tumblr doesn't want to be responsible for drawing the lines between artistic nudity, casual nudity, and hardcore porn.

Then a new outrage emerged when some users discovered that, in the mobile version of Tumblr, some tag searches turn up no results, dirty or otherwise — and not just for obvious porn terms, like "porn," but also for broader terms, like "gay". Tumblr issued a quasi-explanation on their blog, which some commentators and users found frustratingly vague and unapologetic.

Ok. The real story is not the Tumblr/Yahoo incident, or how it was handled, or even why it’s happening. Certainly, Tumblr could have been more transparent about the details of their original policy, or the move in May or earlier to de-list adult Tumblr blogs in major search engines, or the decision to block certain tag results. Certainly, there’ve been some delicate conversations going on at Yahoo/Tumblr headquarters, for some time now, on how to “let Tumblr be Tumblr” (Mayer’s words) and also deal with all this NSFW blogging “even though it may not be as brand safe as what’s on our site” (also Mayer). Tumblr puts ads in its Dashboard, where only logged-in users see them, so arguably the ads are never “with” the porn — but maybe Yahoo is looking to change that, so that the “two companies will also work together to create advertising opportunities that are seamless and enhance the user experience.”

What’s ironic is that, I suspect, Tumblr and Yahoo are actually trying to find ways to remain permissive when it comes to NSFW content. They are certainly (so far) more permissive than some of their competitors, including Instagram, Blogger, Vine, and Pinterest, all of whom have moved in the last year to remove adult content, make it systematically less visible to their users, or prevent users from pairing advertising with it. The problem here is their tactics.

Media companies, be they broadcast or social, have fundamentally two ways to handle content that some but not all of their users find inappropriate.

First, they can remove some of it, either by editorial fiat or at the behest of the community. This means writing up policies that draw those tricky lines in the sand (no nudity? what kind of nudity? what was meant by the nudity?), and then either taking on the mantle (and sometimes the flak) of making those judgments themselves, or having to decide which users to listen to on which occasions for which reasons.

Second, and this is what Tumblr is trying, is what I’ll call the “checkpoint” approach. It’s by no means exclusive to new media: putting the X-rated movies in the back room at the video store, putting the magazines on the shelf behind the counter, wrapped in brown paper, scheduling the softcore stuff on Cinemax after bedtime, or scrambling the adult cable channel, all depend on the same logic. Somehow the provider needs to keep some content from some people and deliver it to others. (All the while, of course, they need to maintain their reputation as defender of free expression, and not appear to be “full of porn,” and keep their advertisers happy. Tricky.)

To run such a checkpoint requires (1) knowing something about the content, (2) knowing something about the people, and (3) having a defensible line between them.

First, the content. That difficult decision, about what is artistic nudity, what's casual nudity, and what's pornographic? It doesn't go away, but the provider can shift the burden of making that decision to someone else — not just to get it off their shoulders, but sometimes to hand it to someone more capable of making it. Adult movie producers or magazine publishers can self-rate their content as pornographic. An MPAA-sponsored board can rate films. There are problems, of course: either the "who are these people?" problem, as in the mysterious MPAA ratings board, or the "these people are self-interested" problem, as when TV production houses rate their own programs. Still, this self-interest can often be congruent with the interests of the provider: X-rated movie producers know that their options may be the back room or not at all, and gain little in pretending that they're something they're not.

Next, the people. It may seem like a simple thing, just keeping the dirty stuff on the top shelf and carding people who want to buy it. Any bodega shopkeep can manage to do it. But it is simple only because it depends on a massive knowledge architecture, the driver’s license, that it didn’t have to generate itself. This is a government sponsored, institutional mechanism that, in part, happens to be engaged in age verification. It requires a massive infrastructure for record keeping, offices throughout the country, staff, bureaucracy, printing services, government authorization, and legal consequences for cases of fraud. All that so that someone can show a card and prove they’re of a certain age. (That kind of certified, high-quality data is otherwise hard to come by, as we’ll see in a moment.)

Finally, a defensible line. The bodega has two: the upper shelf and the cash register. The kids can’t reach, and even the tall ones can’t slip away uncarded, unless they’re also interested in theft. Cable services use encryption: the signal is scrambled unless the cable company authorizes it to be unscrambled. This line is in fact not simple to defend: the descrambler used to be in the box itself, which was in the home and, with the right tools and expertise, openable by those who might want to solder the right tab and get that channel unscrambled. This meant there had to be laws against tampering, another external apparatus necessary to make this tactic stick.

Tumblr? Well. All of this changes a bit when we bring it into the world of digital, networked, and social media. The challenges are much the same, and if we notice that the necessary components of the checkpoint are data, we can see how this begins to take on the shape that it does.

The content? Tumblr asked its users to self-rate, marking their blog as "NSFW" or "adult." Smart, given that bloggers sharing porn may share some of Tumblr's interest in putting it behind the checkpoint: many would rather flag their site as pornographic and get to stay on Tumblr, than be forbidden to put it up at all. Even flagged, Tumblr provides them what they need: the platform on which to collect content, a way to gain and keep interested viewers. The categories are a little ambiguous — where is the line between "occasional" and "substantial" nudity to be drawn? Why are the criteria only about amount, rather than degree (hard core vs soft core), category (posed nudity vs sexual act), or intent (artistic vs unseemly)? But then again, these categories are always ambiguous, and must always privilege some criteria over others.

The people? Here it gets trickier. Tumblr is not imposing an age barrier, they’re imposing a checkpoint based on desire, dividing those who want adult content from those who don’t. This is not the kind of data that’s kept on a card in your wallet, backed by the government, subject to laws of perjury. Instead, Tumblr has two ways to try to know what a user wants: their search settings, and what they search for. If users have managed to correctly classify themselves into “Safe Mode,” indicating in the settings that they do not want to see anything flagged as adult, and people posting content have correctly marked their content as adult or not, this should be an easy algorithmic equation: “safe” searcher is never shown “NSFW” content. The only problems would be user error: searchers who do not set their search settings correctly, and posters who do not flag their adult content correctly. Reasonable problems, and the kind of leakage that any system of regulation inevitably faces. Flagging at the blog level (as opposed to flagging each post as adult or not) is a bit of a dull instrument: all posts from my “NSFW” blog are being withheld from safe searchers, even the ones that have no questionable content — despite the fact that by their own definition a “NSFW” tumblr blog only has “occasional” nudity. Still, getting people to rate every post is a major barrier, few will do so diligently, and it doesn’t fit into simple “web button” interfaces.

Defending the dividing line? Since the content is digital, and the information about content and users is data, it should not be surprising that the line here is algorithmic. Unlike the top shelf or the back room, the adult content on Tumblr lives amidst the rest of the archive. And there’s no cash register, which means that there’s no unavoidable point at which use can be checked. There is the login, which explains why non-logged-in users are treated as only wanting “safe” content. But, theoretically, an “algorithmic checkpoint” should work based on search settings and blog ratings. As a search happens, compare the searcher’s setting with the content’s rating, and don’t deliver the dirty to the safe.
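To make that comparison concrete, here is a minimal sketch (in Python, with hypothetical names — not Tumblr's actual code) of the kind of decision such an algorithmic checkpoint would make, following the rules described above: non-logged-in or "safe mode" searchers see only unflagged blogs, "adult" blogs are withheld from internal search entirely, and "NSFW" blogs surface only for their existing followers.

```python
# A sketch of the "algorithmic checkpoint" described above. Hypothetical,
# not Tumblr's implementation; function and variable names are my own.

def show_in_search(blog_rating, logged_in, safe_mode, follows_blog):
    """Decide whether a post from a blog self-rated 'none', 'nsfw', or 'adult'
    should appear in a given searcher's internal search results."""
    # Non-logged-in visitors are treated as wanting only "safe" content.
    if not logged_in or safe_mode:
        return blog_rating == "none"
    # Per the policy described above: "adult" blogs vanish from internal
    # search altogether; "NSFW" blogs appear only to existing followers.
    if blog_rating == "adult":
        return False
    if blog_rating == "nsfw":
        return follows_blog
    return True

# Example: a logged-in searcher who has not set safe mode, but does not
# already follow the blog, still won't find a self-rated "NSFW" blog.
print(show_in_search("nsfw", logged_in=True, safe_mode=False, follows_blog=False))  # False
```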

But here’s where Tumblr took two additional steps, the ones that I think raise the biggest problem for the checkpoint approach in the digital context.

Tumblr wanted to extend the checkpoint past the customer who walks into the store and brings adult content to the cash register, out to the person walking by the shop window. And those passersby aren't always logged in; they come to Tumblr in any number of ways. Because here's the rub with the checkpoint approach: it does, inevitably, remind the population of possible users that you do allow the dirty stuff. The new customer who walks into the video store and sees that there is a back room, even if they never go in, may reject your establishment for even offering it. Can the checkpoint be extended, to decide whether to even reveal to someone that there's porn available inside? If not in the physical world, maybe in the digital?

When Tumblr delisted its adult blogs from the major search engines, they wanted to keep Google users from seeing that Tumblr has porn. This, of course, runs counter to the fundamental promise of Tumblr, as a publishing platform, that Tumblr users (NSFW and otherwise) count on. And users fumed: “Removal from search in every way possible is the closest thing Tumblr could do to deleting the blogs altogether, without actually removing 10% of its user base.” Here is where we may see the fundamental tension at the Yahoo/Tumblr partnership: they may want to allow porn, but do they want to be known for allowing porn?

Tumblr also apparently wanted to extend the checkpoint in the mobile environment — or perhaps were required to, by Apple. Many services, especially those spurred or required by Apple to do so, aim to prevent the “accidental porn” situation: if I’m searching for something innocuous, can they prevent a blast of unexpected porn in response to my query? To some degree, the “NSFW” rating and the “safe” setting should handle this, but of course content that a blogger failed (or refused) to flag still slips through. So Tumblr (and other sites)  institute a second checkpoint: if the search term might bring back adult content, block all the results for that term. In Tumblr, this is based on tags: bloggers add tags that describe what they’ve posted, and search queries seek matches in those tags.
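A similarly minimal sketch (again hypothetical, and deliberately crude) shows why this second, tag-level checkpoint is such a blunt instrument: if the query term itself is on a blocked list, nothing comes back at all, regardless of how the individual posts are rated.

```python
# Hypothetical sketch of a tag-level checkpoint; the blocked terms are
# illustrative only, drawn from the examples discussed in this post.

BLOCKED_TERMS = {"porn", "sex", "gay"}

def tag_search(query, posts):
    """Return posts whose tags match the query, unless the term is blocked outright."""
    term = query.strip().lower()
    if term in BLOCKED_TERMS:
        return []  # no results, dirty or otherwise
    return [post for post in posts if term in post.get("tags", [])]
```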

When you try to choreograph users based on search terms and tags, you’ve doubled your problem. This is not clean, assured data like a self-rating of adult content or the age on a driver’s license. You’re ascertaining what the producer meant when they tagged a post using a certain term, and what the searcher meant when they use the same term as a search query. If I search for the word “gay,” I may be looking for a gay couple celebrating the recent DOMA decision on the steps of the Supreme Court — or “celebrating” bent over the arm of the couch. Very hard for Tumblr to know which I wanted, until I click or complain.

Sometimes these terms line up quite well, either by accident, or on purpose: for instance when users of Instagram indicated pornographic images by tagging them "pornstagram," a made-up word that would likely mean nothing else. (This search term no longer returns any results, although — whoa! — it does on Tumblr!) But in just as many cases, when you use the word gay to indicate a photo of your two best friends in a loving embrace, and I use the word gay in my search query to find X-rated pornography, it becomes extremely difficult for the search algorithm to understand what to do about all of those meanings converging on a single word.

Blocking all results to the query "gay," or "sex", or even "porn" may seem, from one vantage point (Yahoo's?), to solve the NSFW problem. Tumblr is not alone in this regard: Vine and Instagram return no results to the search term "sex," though that does not mean that no one's using it as a tag — and while Instagram returns millions of results for "gay," Vine, like Tumblr, returns none. Pinterest goes further, using the search for "porn" as a teaching moment: it pops up a reminder that nudity is not permitted on the site, then returns results which, because of the policy, are not pornographic. By blocking search terms/tags, no porn accidentally makes it to the mobile platform or to the eyes of its gentle user. But this approach fails miserably at getting adult content to those who want it, and more importantly, in Tumblr's case, it relegates a broadly used and politically vital term like "gay" to the smut pile.

Tumblr's semi-apology has begun to make amends. The two categories, "NSFW" and "adult," are now just "NSFW," and the blogs marked as such are now available in Tumblr's internal search and in the major search engines. Tumblr has promised to work on a more intelligent filtering system. But any checkpoint that depends on data that's expressive rather than systemic — what we say, as opposed to what we say we are — is going to step clumsily both on the sharing of adult content and on the ability to talk about subjects that have some sexual connotations, and could architect the spirit and promise out of Tumblr's publishing platform.

This was originally posted at Culture Digitally.

Free Speech, Context, and Visibility: Protesting Racist Ads

On Tuesday, Egyptian-American activist Mona Eltahawy was arrested for "criminal mischief" – or "the willful damaging of property" – when she responded with spray paint to disturbingly racist ads posted in the New York City subway system. Her act of political resistance went beyond spray paint, however. In some ways, it was intentionally designed to get the attention of the internet. When she encountered resistance from a person defending the ads – who clearly knew Mona and kept responding to her by name – Eltahawy chose to create a challenge over her right to engage in what she called "freedom of expression." This altercation escalates as the two argue on camera over whether or not Eltahawy is violating free speech or "making an expression on free speech." (The video can be seen here.) As this encounter unfolds, Eltahawy regularly turns to the video and speaks to "the internet," indicating that she knew full well that this video would be made available online. In constructing her audience, Eltahawy also switches between talking to Americans ("see this America") and to a broader international public, presumably of people who are angry at the perceived hypocrisy of how America constructs free speech in light of the video mocking Islam's prophet that sparked riots around the globe.

As I watch this video and try to untangle the dynamics going on, I can’t help but reflect on the cultural collision course underway as the notion of “free speech” gets decontextualized in light of heightened visibility. But before I get there, I need to offer some more context.

Free Speech in the United States

In the United States, the First Amendment to the Constitution states: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” This is the foundation of the “free speech” clause that is one of the most unique aspects of American political life. It means that people have the right to speak their mind, even if their speech is unpopular, blasphemous, or critical.

Over the last 200+ years, there have been interesting cases that pit free speech against other issues that result in what may be perceived to be special carve-outs. For example, “hate speech” is not protected under civil rights clauses when it constitutes a form of harassment. Child pornography is not considered free speech but, rather, photographic evidence of a crime against a child. And speech that incites violence is not considered free speech if it serves to create an imminent threat of violence. (Of course, the edge cases on this are often dicey.) But content that depicts many things that are deemed offensive – including grotesque imagery, obscene pornography, and extreme violence – is often protected by free speech, even if public display of it is limited.

Of course, what taketh also giveth. Many European countries have begun banning women from wearing the hijab, seeing it as an oppressive dress. In the United States, the same first amendment that permits racist and blasphemous content also protects Muslim women in their choice of clothing. Even when people are racist shits, Muslims have a tremendous amount of freedom afforded to them because of US laws that forbid discrimination on the basis of religion. Does that make it easy to be Muslim in the US? No. But being Muslim in the US is a hell of a lot more protected than being Jewish in any Arab state.

As offensive (and, frankly, dreadfully awful) as the pseudo-pornographic film “Innocence of Muslims” is, it’s protected under free speech in the United States. This is not the first film to depict religious figures in problematic ways, nor will it be the last. As The Onion satirically reminds us, there are plenty of sexualized images out there depicting religious figures in all sorts of upsetting ways.

Yet, this video spread far beyond the walls of the United States, into other regions where the very notion of “free speech” is absent. Many Muslims were outraged at the idea that their prophet might be depicted in such an offensive manner and some took to the streets in anger. Some interpreted the video as hateful and couldn’t understand why such content would ever be allowed. Meanwhile, many Americans failed to understand why such a video would be uniquely provocative in Muslim communities. On more than one occasion, I heard Americans ask questions like: Why should it be illegal to represent a religious figure in a negative light when it’s so common in Muslim societies to be so hateful to people of other religions? Or to be hateful towards women or LGBT people? Or to depict women in negative ways? Needless to say, all of this rests on a fundamental moral disconnect around what values can and should shape a society.

Meanwhile, in the United States, a lawsuit was moving through the courts concerning a deeply racist advertisement that “The American Freedom Defense Initiative” wanted to pay to have displayed by New York City’s subway system (MTA). The MTA went to the courts in an effort to block the advertisement which implicitly linked Muslims to savages. The MTA lost their court battle when a judge argued that this racist ad was protected speech, thereby forcing the MTA to accept and post the advertisements. Begrudgingly, they did. And this is where we get to Mona.

While posting racist images is covered under free speech law, not just any act is covered under the freedom of expression. When Eltahawy chose to express her dissent by spray painting the ads, she did commit a crime, just as anyone who graffitis any public property is committing a crime. Freedom of speech does not permit anyone to damage property and, as horrid as those ads are, they were the property of the MTA. Unfortunately for Eltahawy, her act is also not non-violent protest because she committed a crime. [Update: Eltahawy uses non-violent protest as her justification to the police officers for why she should not be arrested. I’m not suggesting that her act was violent, but rather, that she can’t claim that she’s simply engaged in non-violent protest and assume that this overrides the illegal nature of her actions. If she knows her actions are illegal, she can claim she’s engaged in civil disobedience, but civil disobedience and non-violent protest are not synonymous.] Had she chosen to stand in front of the ad and said whatever was on her mind, she would’ve been fully within her rights (provided that it did not escalate to “disturbing the peace”). Now, we might not like that vandalism is a crime – and we might recognize that most graffiti these days goes unpunished – but the fact is that spray painting public property is unquestionably illegal.

Of course, the whole thing reaches a new level of disgusting today when Pamela Hall – the anti-Islam activist behind “Stop the Islamization of America” – sues Eltahawy for damage to _her_ property. While I don’t believe that Eltahawy was in the right when she vandalized MTA property, the video makes it very clear that Hall actively provokes Eltahawy and because of Hall’s aggressions, Hall’s property is damaged. I hope that the courts throw this one out entirely.

Making Protests Visible

By circulating the video of Eltahawy getting arrested, activists are asking viewers to have sympathy with Eltahawy. In some ways, this isn’t hard. That poster is disgusting and I’m embarrassed by it. But her choice to consistently exclaim that she’s engaged in freedom of expression and non-violent protest is misleading and inaccurate. What she did, whether she knew it or not, was illegal and not within the dominion of either free speech or non-violent protest. Interestingly, her aggressive interlocutor accepts her frame and just keeps trying to negate it by saying that she’s “violating free speech.” This too is inaccurate. Free speech is not the issue at play in the altercation between Eltahawy and Hall or when Eltahawy vandalizes the poster. Free speech only matters in that that stupid poster was posted in the first place.

The legal details of this will get worked out in the court, but I'm bothered by the way in which the circulation of this video and the discussion around it polarizes the conversation without shedding light on the murky realities of how free speech operates, of what is and is not free speech, and of what is and is not illegal in the United States when it comes to protesting. Let me be clear: I think that we should all be protesting those racist ads. And I'm fully aware that some acts of protest can and must blur the lines between what is legal and illegal, because law enforcement regularly suppresses protesters' rights and arrests people in oppressive ways that undermine important acts of resistance. And I also realize that one of the reasons activists engage in acts that get them arrested is that, when they do, news media cover it, and bringing attention to an issue is often a desired end-goal for many activists. But what concerns me is that there's a huge international disconnect brewing over American free speech, and our failure to publicly untangle these issues undermines any effort to promote its value.

I’m deeply committed to the value of free speech. I understand its costs and I despise when it’s used as a tool to degrade and demean people or groups. I hate when it’s used to justify unhealthy behavior or reinforce norms that disgust me. But I tolerate these things because I believe that it’s one of the most critical tools of freedom. I firmly believe that censoring speech erodes a society more than allowing icky speech does. I also firmly believe that efforts to hamper free speech do a greater disservice to oppressed people than permitting disgusting speech. It’s a trade-off and it’s a trade-off that I accept. Yet, it’s also a trade-off that cannot be taken for granted, especially in a global society.

Through the internet, content spreads across boundaries and cultural contexts. It’s sooo easy to take things out of context or not understand the context in which they are produced or disseminated. Or why they are tolerated. Contexts collapse and people get upset because their local norms and rules don’t seem to apply when things slip over the borders and can’t be controlled. Thus, we see a serious battle brewing over who controls the internet. What norms? What laws? What cultural contexts? Settling this is really bloody hard because many of the issues at stake are so deeply conflicting as to appear to be irresolvable.

I genuinely don’t know what’s going to happen to freedom of speech as we enter into a networked world, but I suspect it’s going to spark many more ugly confrontations. Rather, it’s not the freedom of speech itself that will, but the visibility of the resultant expressions, good, bad, and ugly. For this reason, I think that we need to start having a serious conversation about what freedom of speech means in a networked world where jurisdictions blur, norms collide, and contexts collapse. This isn’t going to be worked out by enacting global laws nor is it going to be easily solved through technology. This is, above all else, a social issue that has scaled to new levels, creating serious socio-cultural governance questions. How do we understand the boundaries and freedoms of expression in a networked world?

Is Twitter us or them? #twitterfail and living somewhere between public commitment and private investment

This is about the fourth Olympics that’s been trumpeted as the first one to embrace social media and the Internet — just as, depending on how you figure it, it’s about the fourth U.S. election in a row that’s the first to go digital. It may be in the nature of new technologies that we appear perpetually, or at least for a very long time, to be just on the cusp of something. NBC has proudly trumpeted its online video streaming, its smartphone and tablet apps, and most importantly its partnership with microblogging platform Twitter. NBC regularly displays the #Olympics hashtag on the broadcasts, their coverage includes tweets and twit pics from athletes, and their website has made room for sport-specific Twitter streams.

It feels like an odd corporate pairing, at least from one angle. Twitter users have tweeted about past Olympics, for sure. But from a user's perspective, it's not clear what we need or get from a partnership with the broadcast network that's providing exclusive coverage of the event. Isn't Twitter supposed to be the place we talk about the things out there, the things we experience or watch or care about? But from another angle, it makes perfect sense. Twitter needs to reinforce the perception that it is the platform where chatter and commentary about what's important to us should occur, and convince a broader audience to try it; it gets to do so here as "official narrator" of the Games. NBC needs ways to connect its coverage to the realm of social media, but without allowing anything digital to pre-empt its broadcasts. From a corporate perspective, interdependence is a successful economic strategy; from the users' perspective, we want more independence between the two.

This makes the recent dustup about Twitter’s suspension of the account of Guy Adams, correspondent for The Independent (so perfect!), so troubling to so many. Adams had spent the first days of the Olympics criticizing NBC’s coverage of the games, particularly for time-delaying events to suit the U.S. prime time schedule, trimming the opening ceremony, and for some of the more inane commentary from NBC’s hosts. When Adams suggested that people should complain to Gary Zenkel, executive VP at NBC Sports and director of their Olympics coverage, and included Zenkel’s NBC email address, Twitter suspended his account.

Just to play out the details of the case, from the coverage that has developed thus far, we can say a couple of things. Twitter told Adams that his account had been suspended for “posting an individual’s private information such as private email address, physical address, telephone number, or financial documents.” Twitter asserts that it only considers rule violations if there is a complaint filed about them, suggesting that NBC had complained; in response, NBC says that Twitter brought the tweet (or tweets?) to NBC’s attention, who then submitted a complaint. Twitter has since reinstated Adams’ account, and reaffirmed the care and impartiality it takes in enforcing its rules.

Much of the conversation online, including on Twitter, has focused on two things: expressions of disappointment in Twitter for the perceived crime of shutting down a journalist's account for criticizing a corporate partner, and a debate about whether Zenkel's email should be considered public or private, and as such, whether Twitter's decision (despite its motivation) was a legitimate or illegitimate interpretation of their own rules. This second question is an interesting one: Twitter's rules do not clarify the difference between the "private email addresses" they prohibit, and whatever the opposite is. Is Zenkel's email address public because he's a professional acting in a professional capacity? Because it has appeared before on the web? Because it can be easily figured out, given the common firstname.lastname structure of NBC's email addresses? (Alexis Madrigal at The Atlantic has a typically well-informed take on the issue.)

But I think this question of whether Twitter was appropriately acting on its own rules, and even the broader charge of whether its actions were motivated by their economic partnership with NBC, are both founded on a deeper question: what do we expect Twitter to be? This can be posed in naïve terms, as it often is in the heat of debate: are they an honorable supporter of free speech, or are they craven corporate shills? We may know these are exaggerated or untenable positions, both of them, but they’re still so appealing they continue to frame our debates. For example, in a widely circulated critique of Twitter’s decision, Jeff Jarvis proclaims that

For this incident itself is trivial, the fight frivolous. What difference does it make to the world if we complain about NBC’s tape delays and commentators’ ignorance? But Twitter is more than that. It is a platform. It is a platform that has been used by revolutionaries to communicate and coordinate and conspire and change the world. It is a platform that is used by journalists to learn and spread the news. If it is a platform it should be used by anyone for any purpose, none prescribed or prohibited by Twitter. That is the definition of a platform.

Adams himself titled his column for The Independent about the incident, “I thought the internet age had ended this kind of censorship.”

I want Jarvis and Adams to be right, here. But the reality is not so inspiring. We know that Twitter is neither a militant guardian of free speech nor a glorified corporate billboard, that Twitter's relationship to NBC and other commercial partners matters but does not determine, that Twitter is attempting to be a space for contentious speech and have rules of conduct that balance many communities, values, and legal obligations. But exactly what we expect of Twitter in real contexts is imprecise, yet it matters for how we use it and how we grapple with a decision like the suspension of Adams' account for the comments he made. And what these expectations are helps to reveal, may even constitute, our experience of digital culture as a space for public, critical, political speech.

What if we put these possible expectations on a spectrum, if only so we can step away from the extremes on either end:

  • Social media are private services; we sign up for them. Their rules can be arbitrary, capricious, and self-serving if they choose. They can partner with content providers, including privileging that content and protecting them from criticism. Users can take a walk if they don't like it.
  • Social media are private services; we sign up for them. Their rules can be arbitrary and self-serving, but they should be fairly enforced. They can partner with content providers, including privileging that content and protecting them from criticism, but they should be transparent about that promotion.
  • Social media are private services used by the public; their rules are up to them, but should be justifiable and necessary; they should be fairly enforced, though taking into account the logistical challenges. They can partner with content providers, including privileging that content, but they should demarcate that content from what users produce.
  • Social media are private services used by the public; because of that public trust, those rules should balance honoring the public's fair use of the network and protecting the service's ability to function and profit; they should be fairly enforced, despite the logistical challenges. They can partner with content providers, including privileging that content; they should demarcate that content from what users produce.
  • Social media are private services and public platforms; because of that public trust, those rules should impartially honor the public's fair use of the network; they should be fairly enforced, despite the logistical challenges. They can partner with sponsors that support this public forum through advertising, but they have a journalistic commitment to allow speech, even if it's critical of their partners or of themselves.
  • Social media are private but have become public platforms; the only rules they can set should be in the service of adhering to the law, and protecting the public forum itself from the harm users can do to it (such as hate speech). They can partner with sponsors that support this public forum through advertising, but they have a journalistic commitment to allow speech, even if it's critical of their partners or of themselves.
  • Social media are public platforms, and as such must have a deep commitment to free speech. While they can curtail the most egregious content under legal obligations, they should otherwise err on the side of allowing and protecting all speech, even when it is unruly, disrespectful, politically contentious, or critical of the platform itself. Sponsors and other corporate partnerships are nearly anathema to this mission, and should be constrained to only the most cordoned-off forms of advertising.
  • Social media should facilitate all speech and block none, no matter how reprehensible, offensive, dangerous, or illegal. Any commercial partnership is a suspicious distortion of this commitment. Users can take a walk if they don't like it.

While the possibilities on the extreme ends of this spectrum may sound theoretically defensible to some, they are easily cast aside by test cases. Even the most ardent defender of free speech would pause if a platform allowed or defended the circulation of child pornography. And even the most ardent free market capitalist would recognize that a platform solely and capriciously in the service of its advertisers would undoubtedly fail as a public medium. What we're left with, then, is the messier negotiations and compromises in the middle. Publicly, Twitter has leaned towards the public half of this spectrum: many celebrated when the company appealed court orders requiring them to reveal the identity of users involved in the Occupy protests, and Twitter has regularly celebrated itself for its role in protests and revolutions around the world. At the same time, they do have an array of rules that govern the use of their platform, rules that range from forbidding inappropriate content and limiting harassing or abusive behavior to prohibiting technical tricks that can garner more followers, establishing best practices for automated responders, and spelling out privacy violations. Despite their nominal (and in practice substantive) commitment to protecting speech, they are a private provider that retains the rights and responsibilities to curate their user content according to rules they choose. This is the reality of platforms that we are reluctant to, but in the end must, accept.

What may be most uncharacteristic in the Adams case, and most troubling to Twitter's critics, is not that Twitter enforced a vague rule, or did so when Adams was criticizing their corporate partner, in a way that, while scurrilous, was not illegal. It was that Twitter proactively identified Adams as a trouble spot for NBC — whether for his specific posting of Zenkel's email or for the whole stream of criticism — and brought it to NBC's attention. What Twitter did was to think like a corporate partner, not like a public platform. Of course it was within Twitter's right to do so, and to suspend Adams' account in response. And yes, there is some risk of lost good will and public trust. But the suspension is an indication that, while Twitter's rhetoric leans towards the claim of a public forum, their mindset about who they are and what purpose they serve remains more enmeshed with their private status and their private investments than users might hope.

This is the tension lurking in Twitter's apology about the incident, where they acknowledge that they had in fact alerted NBC about Adams' post and encouraged them to complain, then acted on that complaint. "This behavior is not acceptable and undermines the trust our users have in us. We should not and cannot be in the business of proactively monitoring and flagging content, no matter who the user is — whether a business partner, celebrity or friend." Twitter can do its best to reinstate that sense of quasi-journalistic commitment to the public. But the fact that the alert even happened suggests that this promise of public commitment, and the expectations we have of Twitter to hold to it, may not be a particularly accurate grasp of the way their public commitment is entangled with their private investment.

Cross posted at Culture Digitally.

The dirty job of keeping Facebook clean

Last week, Gawker received a curious document. Turned over by an aggrieved worker from the online freelance employment site oDesk, the document iterated, over the course of several pages and in unsettling detail, exactly what kinds of content should be deleted from the social networking site that had outsourced its content moderation to oDesk’s team. The social networking site, as it turned out, was Facebook.

The document, antiseptically titled "Abuse Standards 6.1: Operation Manual for Live Content Moderators" (along with an updated version 6.2 subsequently shared with Gawker, presumably by Facebook), is still available from Gawker. It represents the implementation of Facebook's Community Standards, which present Facebook's priorities around acceptable content, but stay miles back from actually spelling them out. In the Community Standards, Facebook reminds users that "We have a strict 'no nudity or pornography' policy. Any content that is inappropriately sexual will be removed. Before posting questionable content, be mindful of the consequences for you and your environment." But an oDesk freelancer looking at hundreds of pieces of content every hour needs more specific instructions on what exactly is "inappropriately sexual" — such as removing "Any OBVIOUS sexual activity, even if naked parts are hidden from view by hands, clothes or other objects. Cartoons / art included. Foreplay allowed (Kissing, groping, etc.). even for same sex (man-man / woman-woman". The document offers a tantalizing look into a process that Facebook and other content platforms generally want to keep under wraps, and a mundane look at what actually doing this work must require.

It’s tempting, and a little easy, to focus on the more bizarre edicts that Facebook offers here (“blatant depictions of camel toes” as well as “images of drunk or unconscious people, or sleeping people with things drawn on their faces” must be removed; pictures of marijuana are OK, as long as it’s not being offered for sale). But the absurdity here is really an artifact of having to draw this many lines in this much sand. Any time we play the game of determining what is and is not appropriate for public view, in advance and across an enormous and wide-ranging amount of content, the specifics are always going to sound sillier than the general guidelines. (It was not so long ago that “American Pie’s” filmmakers got their NC-17 rating knocked down to an R after cutting the scene in which the protagonist has sex with a pie from four thrusts to two.)

Lines in the sand are like that. But there are other ways to understand this document: for what it reveals about the kind of content being posted to Facebook, the position in which Facebook and other content platforms find themselves, and the system they’ve put into place for enforcing the content moderation they now promise.

On Facebook or anywhere else, it’s hard not to be struck by the depravity of some of the stuff that content moderators are reviewing. It’s a bit disingenuous of me to start with camel toes and man-man foreplay, when what most of this document deals with is so, so much more reprehensible: child pornography, rape, bestiality, graphic obscenities, animal torture, racial and ethnic hatred, self-mutilation, suicide. There is something deeply unsettling about this document in the way it must, with all the delicacy of a badly written training manual, explain and sometimes show the kinds of things that fall into these categories. In 2010, the New York Times reported on the psychological toll that content moderators, having to look at this “sewer channel” of content reported to them by users, often experience. It’s a moment when Supreme Court Justice Potter Stewart’s old saw about pornography, “I know it when I see it,” though so problematic as a legal standard, does feel viscerally true. It’s a disheartening glimpse into the darker side of the “participatory web”: no worse and no better than the depths that humankind has always been capable of sinking to, though perhaps boosted by the ability to put these coarse images and violent words in front of the gleeful eyes of co-conspirators, the unsuspecting eyes of others, and sometimes the fearful eyes of victims.

This outpouring of obscenity is by no means caused by Facebook, and it is certainly reasonable for Facebook to take a position on the kinds of content it believes many of its users will find reprehensible. But, that does not let Facebook off the hook for the kind of position it takes: not just where it draws the lines, but the fact that it draws lines at all, the kind of custodial role it takes on for itself, and the manner in which it goes about performing that role. We may not find it difficult to abhor child pornography or ethnic hatred, but we should not let that abhorrence obscure the fact that sites like Facebook are taking on this custodial role — and that while goofy frat pranks and cartoon poop may seem irrelevant, this is still public discourse. Facebook is now in the position of determining, or helping to determine, what is acceptable as public speech — on a site in which 800 million people across the globe talk to each other every day, about all manner of subjects.

This is not a new concern. The most prominent controversy has been about the removal of images of women breastfeeding, which has been a perennial thorn in Facebook’s side; but similar dustups have occurred around artistic nudity on Facebook, political caricature on Apple’s iPhone, gay-themed books on Amazon, and fundamentalist Islamic videos on YouTube. The leaked document, while listing all the things that should be removed, is marked with the residue of these past controversies, if you know how to look for it. The document clarifies the breastfeeding rule, a bit, by prohibiting “Breastfeeding photos showing other nudity, or nipple clearly exposed.” Any commentary that denies the existence of the Holocaust must be escalated for further review, which is not surprising after years of criticism. Concerns about cyberbullying, which have been taken up so vehemently over the last two years, appear repeatedly in the manual. And under the heading “international compliance” are a number of decidedly specific prohibitions, most involving Turkey’s objection to its Kurdish separatist movement, including prohibitions on maps of Kurdistan, images of the Turkish flag being burned, and any support for the PKK (the Kurdistan Workers’ Party) or its imprisoned founder Abdullah Ocalan.

Facebook and its removal policies, and other major content platforms and their policies, are the new terrain for longstanding debates about the content and character of public discourse. That images of women breastfeeding have proven a controversial policy for Facebook should not be surprising, since the issue of women breastfeeding in public remains a contested cultural sore spot. That our dilemmas about terrorism and Islamic fundamentalism, so heightened over the last decade, should erupt here too is also not surprising. The dilemmas these sites face can be seen as a barometer of our society’s pressing concerns about public discourse more broadly: how much is too much; where are the lines drawn and who has the right to draw them; how do we balance freedom of speech with the values of the community, with the safety of individuals, with the aspirations of art and the wants of commerce.

But a barometer simply measures where there is pressure. When Facebook steps into these controversial issues, decides to authorize itself as custodian of content that some of its users find egregious, establishes both general guidelines and precise instructions for removing that content, and then does so, it is not merely responding to cultural pressures, it is intervening in them, reifying the very distinctions it applies. Whether breastfeeding is made more visible or less, whether Holocaust deniers can use this social network to make their case or not, whether sexual fetishes can or cannot be depicted, matters for the acceptability or marginalization of these topics. If, as is the case here, there are “no exceptions for news or awareness-related content” to the rules against graphic imagery and speech, well, that’s a very different decision, with different public ramifications, than if news and public service did enjoy such an exception.

But the most intriguing revelation here may not be the rules, but how the process of moderating content is handled. Sites like Facebook have been relatively circumspect about how they manage this task: they generally do not want to draw attention to the presence of so much obscene content on their sites, or to the fact that they regularly engage in “censorship” to deal with it. So the process by which content is assessed and moderated is also opaque. This little document brings into focus a complex chain of people and activities required for Facebook to play custodian.

The moderator using this leaked manual would be looking at content already reported or “flagged” by a Facebook user. The moderator would either “confirm” the report (thereby deleting the content), “unconfirm” it (the content stays), or “escalate” it, which moves it to Facebook for further or heightened review. Facebook has dozens of its own employees playing much the same role; contracting out to oDesk freelancers, and to companies like Caleris and Telecommunications On Demand, serves as merely a first pass. Facebook also acknowledges that it looks proactively at content that has not yet been reported by users (unlike sites like YouTube, which claim to wait for their users to flag before they weigh in). Within Facebook, there is not only a layer of employees looking at content much as the oDesk workers do, but also a team charged with discussing truly gray-area cases, empowered both to remove content and to revise the rules themselves.
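The triage chain described above is easier to see laid out as a simple pipeline. What follows is only a sketch of my own, not anything drawn from the leaked manual or from Facebook’s actual tooling; every name in it (FlaggedItem, first_pass_review, the sample categories) is invented for illustration. It models the confirm / unconfirm / escalate decision and the way unresolved cases move from outsourced first-pass reviewers to an internal team:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    CONFIRM = auto()    # report upheld: the content is deleted
    UNCONFIRM = auto()  # report rejected: the content stays up
    ESCALATE = auto()   # gray area: passed up to internal review

@dataclass
class FlaggedItem:
    content_id: str
    category: str       # the rubric the reviewer assigns from the manual
    reporter_id: str

def first_pass_review(item: FlaggedItem, manual: dict) -> Decision:
    """Outsourced first pass (oDesk-style): apply the manual's
    category-by-category instructions; anything the manual does not
    cover gets escalated rather than decided here."""
    return manual.get(item.category, Decision.ESCALATE)

def internal_review(item: FlaggedItem) -> Decision:
    """Internal team: handles escalations and (not modeled here) is the
    only layer empowered to revise the manual itself."""
    return Decision.UNCONFIRM  # placeholder: real judgment happens off-script

# Illustrative use: a manual that deletes one category outright and
# leaves another for internal reviewers to argue over.
manual = {"graphic_violence": Decision.CONFIRM,
          "holocaust_denial": Decision.ESCALATE}
item = FlaggedItem("photo-123", "holocaust_denial", "user-456")
decision = first_pass_review(item, manual)
if decision is Decision.ESCALATE:
    decision = internal_review(item)
```

Even in this toy form, the design choice is visible: everything the manual does not anticipate defaults to escalation, which is exactly where the questions below about context and judgment pile up.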

At each level, we might want to ask: What kind of content gets reported, confirmed, and escalated? How are the criteria for judging determined? Who is empowered to rethink these criteria? How are general guidelines translated into specific rules, and how well do these rules fit the content being uploaded day in and day out? How do those involved, from the policy setter down to the freelance clickworker, manage the tension between the rules handed to them and their own moral compass? What kind of contextual and background knowledge is necessary to make informed decisions, and how is the context retained or lost as the reported content passes from point to point along the chain? What kind of valuable speech gets caught in this net? What never gets posted at all, that perhaps should?

Keeping our Facebook streets clean is a monumental task, involving multiple teams of people flipping through countless photos and comments, making quick judgments based on regularly changing proscriptions translated from vague guidelines, in the face of an ever-changing, global, highly contested, and relentless flood of public expression. And this happens at every site, though implemented in different ways. From one vantage point, content moderation is one of those undertakings where it’s amazing that it works at all, and as well as it does. But from another vantage point, we should see that we are playing a dangerous game: the private determination of the appropriate boundaries of public speech. That’s a whole lot of cultural power, in the hands of a select few who have a lot of skin in the game, and it’s being exercised in an oblique way that makes it difficult for anyone else to inspect or challenge. As users, we certainly cannot allow ourselves to remain naive, believing that the search engine shows all relevant results, the social networking site welcomes all posts, the video platform merely hosts what users generate. Our information landscape is a curated one. What is important, then, is that we understand the ways in which it is curated, by whom and to what ends, and engage in a sober, public conversation about the kind of public discourse we want and need, and how we’re willing to get it.

This article first appeared on Salon.com, and is cross-posted at Culture Digitally.

How Parents Normalized Teen Password Sharing

In 2005, I started asking teenagers about their password habits. My original set of questions focused on teens’ attitudes about giving their password to their parents, but I quickly became enamored with teens’ stories of sharing passwords with friends and significant others. So I was ecstatic when the Pew Internet & American Life Project decided to survey teens about their password sharing habits. Pew found that one third of online 12-17 year-olds share their password with a friend or significant other, and that almost half of those 14-17 do. I love when data gets reinforced.

Last week, Matt Richtel at the New York Times did a fantastic job of covering one aspect of why teens share passwords: as a show of affection. Indeed, I have lots of fun data that supports Richtel’s narrative — and complicates it. Consider Meixing’s explanation for why she shares her password with her boyfriend:

Meixing, 17, TN: It made me feel safer just because someone was there to help me out and stuff. It made me feel more connected and less lonely. Because I feel like Facebook sometimes it kind of like a lonely sport, I feel, because you’re kind of sitting there and you’re looking at people by yourself. But if someone else knows your password and stuff it just feels better.

For Meixing, sharing her password with her boyfriend is a way of being connected. But it’s precisely these kinds of narratives that have prompted all sorts of horror from adults over the last week since that NYTimes article came out. I can’t count the number of people who have gasped “How could they!?!” at me. For this reason, I feel the need to pick up on an issue that the NYTimes left out.

The idea of teens sharing passwords didn’t come out of thin air. In fact, it was normalized by adults. And not just any adult. This practice is the product of parental online safety norms. In most households, it’s quite common for young children to give their parents their passwords. With elementary and middle school youth, this is often a practical matter: children lose their passwords pretty quickly. Furthermore, most parents reasonably believe that young children should be supervised online. As tweens turn into teens, the narrative shifts. Some parents continue to require passwords be forked over, using explanations like “because I’m your mother.” But many parents use the language of “trust” to explain why teens should share their passwords with them.

There are different ways that parents address the password issue, but they almost always build on the narrative of trust. (Tangent: My favorite strategy is when parents ask children to put passwords into a piggy bank that must be broken for the paper with the password to be retrieved. Such parents often explain that they don’t want to access their teens’ accounts, but they want to have the ability to do so “in case of emergency.” A piggy bank allows a social contract to take a physical form.)

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.

There’s another thread here that’s important. Think back to the days in which you had a locker. If you were anything like me and my friends, you gave out your locker combination to your friends and significant others. There were varied reasons for doing so. You wanted your friends to pick up a book for you when you left early because you were sick. You were involved in a club or team where locker decorating was common. You were hoping that your significant other would leave something special for you. Or – to be completely and inappropriately honest – you left alcohol in your locker and your friends stopped by for a swig. (One of my close friends was expelled for that one.) We shared our locker combinations because they served all sorts of social purposes, from the practical to the risqué.

How are Facebook passwords significantly different than locker combos? Truth be told, for most teenagers, they’re not. Teens share their passwords so that their friends can check their messages for them when they can’t get access to a computer. They share their passwords so their friends can post the cute photos. And they share their passwords because it’s a way of signaling an intimate relationship. Just like with locker combos.

Can password sharing be abused? Of course. I’ve heard countless stories of friends “punking” one another by leveraging password access. And I’ve witnessed all sorts of teen relationship violence where mandatory password sharing is a form of surveillance and abuse. But, for most teens, password sharing is as risky as locker combo sharing. This is why, even though 1/3 of all teens share their passwords, we only hear of scattered horror stories.

I know that this practice strikes adults as seriously peculiar, but it irks me when adults get all judgmental about this teen practice, as though it’s “proof” that teens can’t properly judge how trustworthy a relationship is. First, it’s through these kinds of situations that they learn. Second, adults are dreadful at judging their own relationships (see: divorce rate), so I don’t have a lot of patience for the high-and-mighty approach. Third, I’m much happier with teens sharing passwords as a form of intimacy than sharing many other things.

There’s no reason to be aghast at teen password sharing. Richtel’s story is dead-on. It’s pretty darn pervasive. But it also makes complete sense given how notions of trust have been constructed for many teens.

(Image Credit: Darwin Bell)

Designing for Social Norms (or How Not to Create Angry Mobs)

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code. In thinking about social media systems, plenty of folks think about monetization. Likewise, as issues like privacy pop up, we regularly see legal regulation become a factor. And, of course, folks are always thinking about what the code enables or not. But it’s depressing to me how few people think about the power of social norms. In fact, social norms are usually only thought of as a regulatory process when things go terribly wrong. And then they’re out of control and reactionary and confusing to everyone around. We’ve seen this with privacy issues and we’re seeing this with the “real name” policy debates. As I read through the discussion that I provoked on this issue, I couldn’t help but think that we need a more critical conversation about the importance of designing with social norms in mind.

Good UX designers know that they have the power to shape certain kinds of social practices by how they design systems. And engineers often fail to give UX folks credit for the important work that they do. But designing the system itself is only a fraction of the design challenge when thinking about what unfolds. Social norms aren’t designed into the system. They don’t emerge by telling people how they should behave. And they don’t necessarily follow market logic. Social norms emerge as people – dare we say “users” – work out how a technology makes sense and fits into their lives. Social norms take hold as people bring their own personal values and beliefs to a system and help frame how future users can understand the system. And just as “first impressions matter” for social interactions, I cannot overstate the importance of early adopters. Early adopters configure the technology in critical ways and they play a central role in shaping the social norms that surround a particular system.

How a new social media system rolls out is of critical importance. Your understanding of a particular networked system will be heavily shaped by the people who introduce you to that system. When a system unfolds slowly, there’s room for the social norms to slowly bake, for people to work out what the norms should be. When a system unfolds quickly, there’s a whole lot of chaos in terms of social norms. Whenever a networked system unfolds, there are inevitably competing norms that arise from people who are disconnected from one another. (I can’t tell you how much I loved watching Friendster when the gay men, Burners, and bloggers were oblivious to one another.) Yet, the faster things move, the faster those collisions occur, and the more confusion there is before the norms can settle.

The “real name” culture on Facebook didn’t unfold because of the “real name” policy. It unfolded because the norms were set by early adopters and most people saw that and reacted accordingly. Likewise, the handle culture on MySpace unfolded because people saw what others did and reproduced those norms. When social dynamics are allowed to unfold organically, social norms are a stronger regulatory force than any formalized policy. At that point, you can often formalize the dominant social norms without too much pushback, particularly if you leave wiggle room. Yet, when you start with a heavy-handed regulatory policy that is not driven by social norms – as Google Plus did – the backlash is intense.

Think back to Friendster for a moment… Remember Fakesters? (I wrote about them here.) Friendster spent ridiculous amounts of time playing whack-a-mole, killing off “fake” accounts and pissing off some of the most influential of its userbase. The “Fakester genocide” prompted an amazing number of people to leave Friendster and head over to MySpace, most notably bands, all because they didn’t want to be configured by the company. The notion of Fakesters died down on MySpace, but the practice at their core – the ability for groups (bands) to have recognizable representations – ended up being MySpace’s most central feature.

People don’t like to be configured. They don’t like to be forcibly told how they should use a service. They don’t want to be told to behave the way the designers intended. Heavy-handed policies don’t make for good behavior; they make for pissed off users.

This doesn’t mean that you can’t or shouldn’t design to encourage certain behaviors. Of course you should. The whole point of design is to help create an environment where people engage in the most fruitful and healthy way possible. But designing a system to encourage the growth of healthy social norms is fundamentally different than coming in and forcefully telling people how they must behave. No one likes being spanked, especially not a crowd of opinionated adults.

Ironically, most people who adopted Google Plus early on were using their real names, out of habit and out of their own sense of how the service should work. A few weren’t. Most of those who weren’t were using a recognizable pseudonym, not even trying to trick anyone. Going after them was just plain stupid. It was an act of force and people felt disempowered. And they got pissed. And at this point, it’s no longer about whether or not the “real names” policy was a good idea in the first place; it’s now an act of oppression. Google Plus would’ve been ten bazillion times better off had they subtly encouraged the policy without making a big deal out of it, had they chosen to only enforce it in the most egregious situations. But now they’re stuck between a rock and a hard place. They either have to stick with their policy and deal with the angry mob or let go of their policy as a peace offering in the hopes that the anger will calm down. It didn’t have to be this way, though, and it wouldn’t have been had they thought more about encouraging the practices they wanted through design rather than through force.

Of course there’s a legitimate reason to want to encourage civil behavior online. And of course trolls wreak serious havoc on a social media system. But a “real names” policy doesn’t stop an unrepentant troll; it’s just another hurdle that the troll will love mounting. In my work with teens, I see textual abuse (“bullying”) every day among people who know exactly who each other is on Facebook. The identities of many trolls are known. But that doesn’t solve the problem. What matters is how the social situation is configured, the norms about what’s appropriate, and the mechanisms by which people can regulate them (through social shaming and/or technical intervention). A culture where people can build reputation through their online presence (whether “real” names or pseudonyms) goes a long way in combating trolls (although it is by no means a foolproof solution). But you don’t get that culture by force; you get it by encouraging the creation of healthy social norms.

Companies that build systems that people use have power. But they have to be very very very careful about how they assert that power. It’s really easy to come in and try to configure the user through force. It’s a lot harder to work diligently to design and build the ecosystem in which healthy norms emerge. Yet, the latter is of critical importance to the creation of a healthy community. Cuz you can’t get to a healthy community through force.