Is Facebook Destroying the American College Experience?

When I sat with a group of graduating high school seniors last summer, the conversation turned to college roommates. Although headed off to different schools, they had a similar experience of learning their roommate assignment and immediately turning to Facebook to investigate that person. Some had already begun developing deep, mediated friendships while others had already asked for roommate transfers. Beyond roommates, all had used Facebook to find other newly minted freshmen, building relationships long before they set foot on campus.

At first blush, this seems like a win for students. Going off to college can be a scary proposition, full of uncertainty, particularly about social matters. Why not get a head start on making friends from the safety of your parents’ house?

What most students (and parents) fail to realize is that the success of the American college system has less to do with the quality of the formal education than it does with the social engineering project that is quietly enacted behind the scenes each year. Roommate assignments are structured to connect incoming students with students of different backgrounds. Dorms are organized to cross-pollinate the cultural diversity that exists on campus. Early campus activities are designed to help students encounter people whose approach to the world is different from their own. This process has a lot of value because it means that students develop an appreciation for difference and build meaningful relationships that will play a significant role for years to come. The friendships and connections that form on campuses shape future job opportunities and help create communities that change the future. We hear about famous college roommates as exemplars. Heck, Facebook itself was created by a group of Harvard roommates. But the more basic story is how people learn to appreciate difference, often by suffering through the challenges of entering college together.

When pre-frosh turn to Facebook before arriving on campus, they do so to find other people who share their interests, values, and background. As such, they begin a self-segregation process that results in increased “homophily” on campuses. Homophily is a sociological concept that refers to the notion that birds of a feather stick together. In other words, teens inadvertently undermine the collegiate social engineering project of creating diverse connections through common experiences. Furthermore, because Facebook enables them to keep in touch with friends from high school, college freshmen spend extensive time maintaining old ties rather than building new ones. They lose out on one of the most glorious benefits of the American collegiate system: the ability to diversify their networks.

Facebook is not itself the problem. The issue stems from how youth use Facebook and the desire that many youth have to focus on building connections to people who think like they do. Building friendships with people who have different political, cultural, or religious beliefs is hard. Getting to know people whose life stories seem so foreign is hard. And yet, such relationship building across lines of difference can also be tremendously transformative.

To complicate matters more, parents and high school teachers have drilled into today’s teens’ heads that internet strangers are dangerous. As such, even when teens turn to Facebook or other services to find future college friends, they are skittish about people who make them uncomfortable because they’ve been socialized to be wary of anyone they talk with online. The fear-mongering around strangers plays a subtle but powerful role in discouraging teens from doing the disorienting work of getting to know someone truly unfamiliar.

It’s high time we recognize that college isn’t just about formalized learning and skills training, but is also a socialization process with significant implications for the future. The social networks that youth build in college have long-lasting implications for their future prospects. One of the reasons that the American college experience is so valuable is that it often produces diverse networks that enable future opportunities. This is also precisely what makes elite colleges elite; the networks that are built through these institutions end up shaping many aspects of power. When less privileged youth get to know children of powerful families, new pathways of opportunity and tolerance are created. But when youth use Facebook to maintain existing insular networks, the potential for increased structural inequity is great.

Photo by Daniel Borman

This post was originally written for LinkedIn. Visit there for additional comments.

The dirty job of keeping Facebook clean

Last week, Gawker received a curious document. Turned over by an aggrieved worker from the online freelance employment site oDesk, the document enumerated, over the course of several pages and in unsettling detail, exactly what kinds of content should be deleted from the social networking site that had outsourced its content moderation to oDesk’s team. The social networking site, as it turned out, was Facebook.

The document, antiseptically titled “Abuse Standards 6.1: Operation Manual for Live Content Moderators” (along with an updated version 6.2 subsequently shared with Gawker, presumably by Facebook), is still available from Gawker. It represents the implementation of Facebook’s Community Standards, which present Facebook’s priorities around acceptable content, but stay miles back from actually spelling them out. In the Community Standards, Facebook reminds users that “We have a strict ‘no nudity or pornography’ policy. Any content that is inappropriately sexual will be removed. Before posting questionable content, be mindful of the consequences for you and your environment.” But an oDesk freelancer looking at hundreds of pieces of content every hour needs more specific instructions on what exactly is “inappropriately sexual” — such as removing “Any OBVIOUS sexual activity, even if naked parts are hidden from view by hands, clothes or other objects. Cartoons / art included. Foreplay allowed (Kissing, groping, etc.). even for same sex (man-man / woman-woman”. The document offers a tantalizing look into a process that Facebook and other content platforms generally want to keep under wraps, and a mundane look at what actually doing this work must require.

It’s tempting, and a little easy, to focus on the more bizarre edicts that Facebook offers here (“blatant depictions of camel toes” as well as “images of drunk or unconscious people, or sleeping people with things drawn on their faces” must be removed; pictures of marijuana are OK, as long as it’s not being offered for sale). But the absurdity here is really an artifact of having to draw this many lines in this much sand. Any time we play the game of determining what is and is not appropriate for public view, in advance and across an enormous and wide-ranging amount of content, the specifics are always going to sound sillier than the general guidelines. (It was not so long ago that “American Pie’s” filmmakers got their NC-17 rating knocked down to an R after cutting the scene in which the protagonist has sex with a pie from four thrusts to two.)

Lines in the sand are like that. But there are other ways to understand this document: for what it reveals about the kind of content being posted to Facebook, the position in which Facebook and other content platforms find themselves, and the system they’ve put into place for enforcing the content moderation they now promise.

On Facebook or elsewhere, it’s hard not to be struck by the depravity of some of the stuff that content moderators are reviewing. It’s a bit disingenuous of me to start with camel toes and man-man foreplay, when what most of this document deals with is so, so much more reprehensible: child pornography, rape, bestiality, graphic obscenities, animal torture, racial and ethnic hatred, self-mutilation, suicide. There is something deeply unsettling about this document in the way it must, with all the delicacy of a badly written training manual, explain and sometimes show the kinds of things that fall into these categories. In 2010, the New York Times reported on the psychological toll that content moderators, having to look at this “sewer channel” of content reported to them by users, often experience. It’s a moment when Supreme Court Justice Potter Stewart’s old saw about pornography, “I know it when I see it,” though so problematic as a legal standard, does feel viscerally true. It’s a disheartening glimpse into the darker side of the “participatory web”: no worse and no better than the depths that humankind has always been capable of sinking to, though perhaps boosted by the ability to put these coarse images and violent words in front of the gleeful eyes of co-conspirators, the unsuspecting eyes of others, and sometimes the fearful eyes of victims.

This outpouring of obscenity is by no means caused by Facebook, and it is certainly reasonable for Facebook to take a position on the kinds of content it believes many of its users will find reprehensible. But, that does not let Facebook off the hook for the kind of position it takes: not just where it draws the lines, but the fact that it draws lines at all, the kind of custodial role it takes on for itself, and the manner in which it goes about performing that role. We may not find it difficult to abhor child pornography or ethnic hatred, but we should not let that abhorrence obscure the fact that sites like Facebook are taking on this custodial role — and that while goofy frat pranks and cartoon poop may seem irrelevant, this is still public discourse. Facebook is now in the position of determining, or helping to determine, what is acceptable as public speech — on a site in which 800 million people across the globe talk to each other every day, about all manner of subjects.

This is not a new concern. The most prominent controversy has been about the removal of images of women breastfeeding, which has been a perennial thorn in Facebook’s side; but similar dustups have occurred around artistic nudity on Facebook, political caricature on Apple’s iPhone, gay-themed books on Amazon, and fundamentalist Islamic videos on YouTube. The leaked document, while listing all the things that should be removed, is marked with the residue of these past controversies, if you know how to look for it. The document clarifies the breastfeeding rule, a bit, by prohibiting “Breastfeeding photos showing other nudity, or nipple clearly exposed.” Any commentary that denies the existence of the Holocaust must be escalated for further review, not surprising after years of criticism. Concerns about cyber-bullying, which have been taken up so vehemently over the last two years, appear repeatedly in the manual. And under the heading “international compliance” are a number of decidedly specific prohibitions, most involving Turkey’s objection to its Kurdish separatist movement, including prohibitions on maps of Kurdistan, images of the Turkish flag being burned, and any support for the PKK (the Kurdistan Workers’ Party) or its imprisoned founder, Abdullah Ocalan.

Facebook and its removal policies, and other major content platforms and their policies, are the new terrain for longstanding debates about the content and character of public discourse. That images of women breastfeeding have proven a controversial policy for Facebook should not be surprising, since the issue of women breastfeeding in public remains a contested cultural sore spot. That our dilemmas about terrorism and Islamic fundamentalism, so heightened over the last decade, should erupt here too is also not surprising. The dilemmas these sites face can be seen as a barometer of our society’s pressing concerns about public discourse more broadly: how much is too much; where are the lines drawn and who has the right to draw them; how do we balance freedom of speech with the values of the community, with the safety of individuals, with the aspirations of art and the wants of commerce.

But a barometer simply measures where there is pressure. When Facebook steps into these controversial issues, decides to authorize itself as custodian of content that some of its users find egregious, establishes both general guidelines and precise instructions for removing that content, and then does so, it is not merely responding to cultural pressures, it is intervening in them, reifying the very distinctions it applies. Whether breastfeeding is made more visible or less, whether Holocaust deniers can use this social network to make their case or not, whether sexual fetishes can or cannot be depicted, matters for the acceptability or marginalization of these topics. If, as is the case here, there are “no exceptions for news or awareness-related content” to the rules against graphic imagery and speech, well, that’s a very different decision, with different public ramifications, than if news and public service did enjoy such an exception.

But the most intriguing revelation here may not be the rules, but how the process of moderating content is handled. Sites like Facebook have been relatively circumspect about how they manage this task: they generally do not want to draw attention to the presence of so much obscene content on their sites, or that they regularly engage in “censorship” to deal with it. So the process by which content is assessed and moderated is also opaque. This little document brings into focus a complex chain of people and activities required for Facebook to play custodian.

The moderator using this leaked manual would be looking at content already reported or “flagged” by a Facebook user. The moderator would either “confirm” the report (thereby deleting the content), “unconfirm” it (the content stays), or “escalate” it, which moves it to Facebook for further or heightened review. Facebook has dozens of its own employees playing much the same role; contracting out to oDesk freelancers, and to companies like Caleris and Telecommunications On Demand, serves as merely a first pass. Facebook also acknowledges that it looks proactively at content that has not yet been reported by users (unlike sites like YouTube that claim to wait for their users to flag before they weigh in). Within Facebook, there is not only a layer of employees looking at content much as the oDesk workers do, but also a team charged with discussing truly gray area cases, empowered both to remove content and to revise the rules themselves.
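
To make that chain concrete, here is a minimal sketch of the triage logic described above, written in Python purely as an illustration. All of the names, categories, and messages are hypothetical assumptions of mine; nothing here is drawn from Facebook’s or oDesk’s actual tooling, which is not public.

    # A toy model of the flag -> confirm / unconfirm / escalate triage described above.
    # All names and categories are hypothetical; the real tooling is not public.
    from dataclasses import dataclass
    from enum import Enum

    class Decision(Enum):
        CONFIRM = "confirm"      # report upheld: the content is deleted
        UNCONFIRM = "unconfirm"  # report rejected: the content stays up
        ESCALATE = "escalate"    # passed up to an internal team for heightened review

    @dataclass
    class FlaggedItem:
        content_id: str
        reported_by: str
        reason: str  # e.g. "nudity", "hate speech", "graphic violence"

    def triage(item: FlaggedItem, decision: Decision) -> str:
        """Route a user-flagged item according to a first-pass moderator's decision."""
        if decision is Decision.CONFIRM:
            return f"delete {item.content_id}"
        if decision is Decision.UNCONFIRM:
            return f"keep {item.content_id}"
        # Gray-area cases move up the chain, where an internal team can both rule on
        # the item and revise the rules themselves.
        return f"queue {item.content_id} for internal review"

    # Example: a freelance moderator escalates a borderline report.
    print(triage(FlaggedItem("photo-123", "user-456", "graphic violence"), Decision.ESCALATE))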

At each level, we might want to ask: What kind of content gets reported, confirmed, and escalated? How are the criteria for judging determined? Who is empowered to rethink these criteria? How are general guidelines translated into specific rules, and how well do these rules fit the content being uploaded day in and day out? How do those involved, from the policy setter down to the freelance clickworker, manage the tension between the rules handed to them and their own moral compass? What kind of contextual and background knowledge is necessary to make informed decisions, and how is the context retained or lost as the reported content passes from point to point along the chain? What kind of valuable speech gets caught in this net? What never gets posted at all, that perhaps should?

Keeping our Facebook streets clean is a monumental task, involving multiple teams of people, flipping through countless photos and comments, making quick judgments, based on regularly changing proscriptions translated from vague guidelines, in the face of an ever-changing, global, highly contested, and relentless flood of public expression. And this happens at every site, though implemented in different ways. Content moderation is one of those undertakings where, from one vantage point, it’s amazing that it works at all, and as well as it does. But from another vantage point, we should see that we are playing a dangerous game: the private determination of the appropriate boundaries of public speech. That’s a whole lot of cultural power, in the hands of a select few who have a lot of skin in the game, and it’s being exercised in an oblique way that makes it difficult for anyone else to inspect or challenge. As users, we certainly cannot allow ourselves to remain naive, believing that the search engine shows all relevant results, the social networking site welcomes all posts, the video platform merely hosts what users generate. Our information landscape is a curated one. What is important, then, is that we understand the ways in which it is curated, by whom and to what ends, and engage in a sober, public conversation about the kind of public discourse we want and need, and how we’re willing to get it.

This article first appeared on Salon.com, and is cross-posted at Culture Digitally.

In Defense of Friction

1903 telephone operator (John McNab on Flickr)

There is no doubt that technology has made my life much easier. I rarely share the romantic view that things were better when human beings used to do the boring tasks that machines now do. For example, I do not think there is much to gain by bringing back the old telephone operators. However, there are reasons to believe social computing systems should not automate social interactions.

In his paper about online trust, Coye Cheshire points out how automated trust systems undermine trust itself by incentivizing cooperation out of fear of punishment rather than out of actual trust among people. Cheshire argues that:

strong forms of online security and assurance can supplant, rather than enhance, trust.

This leads to what he calls the trust paradox:

assurance structures designed to make interpersonal trust possible in uncertain environments undermine the need for trust in the first place

My collaborators and I found something similar when trying to automate credit-giving in the context of a creative online community. We found that automatic attribution given by a computer system does not replace the manual credit given by another human being. Attribution, it turns out, is a useful piece of information given by a system, while credit given by a person is a signal of appreciation, one that is expected and that cannot be automated.

Slippery when icy - problems with frictionless spaces (ntr23 on Flickr)

Similarly, others have noted how Facebook’s birthday reminders have “ruined birthdays” by “commoditizing” social interactions and people’s social skills. Furthermore, some have argued that “Facebook is ruining sharing” by making it frictionless.

In many scenarios, automation is quite useful, but with social interactions, removing friction can have a harmful effect on the social bonds established through friction itself. In other cases, as Shauna points out, “social networking sites are good for relationships so tenuous they couldn’t really bear any friction at all.”

I am not sure if sharing has indeed been ruined by Facebook, but perhaps this opens opportunities for new online services that allow people to have “friction-full” interactions.

What kind of friction would you add to existing online social systems?

Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, it’s followed by the following: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents are struggling to determine what social media sites are appropriate for their children, the government tries to help by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to only those 13+ in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g. Pew’s) that revealed that millions of U.S. children have violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitudes towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face-value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? In order to do so, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today, in a new paper published at First Monday called “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national survey, conducted July 5-14, 2011, of 1,007 U.S. parents with children between the ages of 10 and 14 living with them, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • Over half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Half (53%) of parents surveyed think Facebook has a minimum age and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict access to children on their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often by excluding children whose parents lack resources to pay for the service, those who lack credit cards, and those who refuse to provide extra data about their children in order to offer permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children, and its implementation has made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe

Designing for Social Norms (or How Not to Create Angry Mobs)

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code. In thinking about social media systems, plenty of folks think about monetization. Likewise, as issues like privacy pop up, we regularly see legal regulation become a factor. And, of course, folks are always thinking about what the code enables or not. But it’s depressing to me how few people think about the power of social norms. In fact, social norms are usually only thought of as a regulatory process when things go terribly wrong. And then they’re out of control and reactionary and confusing to everyone around. We’ve seen this with privacy issues and we’re seeing this with the “real name” policy debates. As I read through the discussion that I provoked on this issue, I couldn’t help but think that we need a more critical conversation about the importance of designing with social norms in mind.

Good UX designers know that they have the power to shape certain kinds of social practices by how they design systems. And engineers often fail to give UX folks credit for the important work that they do. But designing the system itself is only a fraction of the design challenge when thinking about what unfolds. Social norms aren’t designed into the system. They don’t emerge by telling people how they should behave. And they don’t necessarily follow market logic. Social norms emerge as people – dare we say “users” – work out how a technology makes sense and fits into their lives. Social norms take hold as people bring their own personal values and beliefs to a system and help frame how future users can understand the system. And just as “first impressions matter” for social interactions, I cannot overstate the importance of early adopters. Early adopters configure the technology in critical ways and they play a central role in shaping the social norms that surround a particular system.

How a new social media system rolls out is of critical importance. Your understanding of a particular networked system will be heavily shaped by the people who introduce you to that system. When a system unfolds slowly, there’s room for the social norms to bake slowly, for people to work out what the norms should be. When a system unfolds quickly, there’s a whole lot of chaos in terms of social norms. Whenever a networked system unfolds, there are inevitably competing norms that arise from people who are disconnected from one another. (I can’t tell you how much I loved watching Friendster when the gay men, Burners, and bloggers were oblivious to one another.) Yet, the faster things move, the faster those collisions occur, and the more confusing it is for the norms to settle.

The “real name” culture on Facebook didn’t unfold because of the “real name” policy. It unfolded because the norms were set by early adopters and most people saw that and reacted accordingly. Likewise, the handle culture on MySpace unfolded because people saw what others did and reproduced those norms. When social dynamics are allowed to unfold organically, social norms are a stronger regulatory force than any formalized policy. At that point, you can often formalize the dominant social norms without too much pushback, particularly if you leave wiggle room. Yet, when you start with a heavy-handed regulatory policy that is not driven by social norms – as Google Plus did – the backlash is intense.

Think back to Friendster for a moment… Remember Fakesters? (I wrote about them here.) Friendster spent ridiculous amounts of time playing whack-a-mole, killing off “fake” accounts and pissing off some of the most influential members of its userbase. The “Fakester genocide” prompted an amazing number of people to leave Friendster and head over to MySpace, most notably bands, all because they didn’t want to be configured by the company. The notion of Fakesters died down on MySpace, but the core practice – the ability for groups (bands) to have recognizable representations – ended up being one of MySpace’s most central features.

People don’t like to be configured. They don’t like to be forcibly told how they should use a service. They don’t want to be told to behave like the designers intended them to be. Heavy-handed policies don’t make for good behavior; they make for pissed off users.

This doesn’t mean that you can’t or shouldn’t design to encourage certain behaviors. Of course you should. The whole point of design is to help create an environment where people engage in the most fruitful and healthy way possible. But designing a system to encourage the growth of healthy social norms is fundamentally different than coming in and forcefully telling people how they must behave. No one likes being spanked, especially not a crowd of opinionated adults.

Ironically, most people who were adopting Google Plus early on were using their real names, out of habit and out of a sense of how they thought the service should work. A few weren’t. Most of those who weren’t were using a recognizable pseudonym, not even trying to trick anyone. Going after them was just plain stupid. It was an act of force and people felt disempowered. And they got pissed. And at this point, it’s no longer about whether or not the “real names” policy was a good idea in the first place; it’s now an act of oppression. Google Plus would’ve been ten bazillion times better off had they subtly encouraged the policy without making a big deal out of it, had they chosen to only enforce it in the most egregious situations. But now they’re stuck between a rock and a hard place. They either have to stick with their policy and deal with the angry mob or let go of their policy as a peace offering in the hopes that the anger will calm down. It didn’t have to be this way, though, and it wouldn’t have been had they thought more about encouraging the practices they wanted through design rather than through force.

Of course there’s a legitimate reason to want to encourage civil behavior online. And of course trolls wreak serious havoc on a social media system. But a “real names” policy doesn’t stop an unrepentant troll; it’s just another hurdle that the troll will love mounting. In my work with teens, I see textual abuse (“bullying”) every day among people who know exactly who they are dealing with on Facebook. The identities of many trolls are known. But that doesn’t solve the problem. What matters is how the social situation is configured, the norms about what’s appropriate, and the mechanisms by which people can regulate them (through social shaming and/or technical intervention). A culture where people can build reputation through their online presence (whether “real” names or pseudonyms) goes a long way in combating trolls (although it is by no means a foolproof solution). But you don’t get that culture by force; you get it by encouraging the creation of healthy social norms.

Companies that build systems that people use have power. But they have to be very very very careful about how they assert that power. It’s really easy to come in and try to configure the user through force. It’s a lot harder to work diligently to design and build the ecosystem in which healthy norms emerge. Yet, the latter is of critical importance to the creation of a healthy community. Cuz you can’t get to a healthy community through force.