Is blocking pro-ED content the right way to solve eating disorders?

Warning: This post deals with eating disorder and self-harm content and is potentially triggering.

Following up on Tarleton’s terrific post on moderating Facebook comes Tumblr’s announcement that it will no longer allow pro-eating disorder (pro-ED) or pro-self-harm blogs on the site.

Active Promotion of Self-Harm. Don’t post content that actively promotes or glorifies self-injury or self-harm. This includes content that urges or encourages readers to cut or mutilate themselves; embrace anorexia, bulimia, or other eating disorders; or commit suicide rather than, e.g., seek counseling or treatment for depression or other disorders. Online dialogue about these acts and conditions is incredibly important; this prohibition is intended to reach only those blogs that cross the line into active promotion or glorification. For example, joking that you need to starve yourself after Thanksgiving or that you wanted to kill yourself after a humiliating date is fine, but recommending techniques for self-starvation or self-mutilation is not.

(The remainder of this post focuses on eating disorder content, because it’s what I know the most about. I’d love to hear more from people familiar with self-harm communities.)

Pro-ED content has existed on the internet for many years and has been studied extensively. It is primarily created and consumed by girls and young women, ages 13-25. There is evidence that viewing pro-ED websites (pro-ana, for anorexia, and pro-mia, for bulimia) produces negative effects in college-age women — lower self-esteem and a perception of oneself as “heavier” (Bardone-Cone & Cass, 2007). But pro-ED websites have been sensationalized in the media as cults that encourage young women to kill themselves, even ending up as the case-of-the-week on Boston Legal.

At the same time, the cultural pressure on young women to conform to normative body types is intense. In Am I Thin Enough Yet? The Cult of Thinness and the Commercialization of Identity, feminist sociologist Sharlene Hesse-Biber looks at the complex interactions between media, schools, peers, family, and the health and fitness industry that systemically undermine young women’s self-confidence, send the message that appearance is more important than intelligence or personality, and emphasize the importance of thinness overall. Often, the messages found on pro-ana or pro-mia sites, such as “nothing tastes as good as thin feels” (attributed to Kate Moss but actually a Weight Watchers slogan that has been around for decades), are extraordinarily similar to those found in magazines like Self and Women’s Health, or on websites like My Fitness Pal or Sparkpeople that promote weight loss in a “healthy” way. These media emphasize different weight loss techniques, but the message is the same: it is very important to be thin and conform to an attractive, normative body ideal.

Pro-ED websites are a female subculture, with their own vocabulary, customs, and norms. Moreover, the women who frequent these sites are well aware that their practices are stigmatized. In general, women with eating disorders go to great lengths to hide them from friends and family, primarily for two reasons: one, they want to keep losing weight and worry that they may be forced into treatment, and two, they are afraid of being ridiculed or called out by others. The anonymous or pseudonymous nature of pro-ED sites gives these participants a respite from their social isolation, and (to a certain extent) emotional support from others going through the same experiences.

Jeannine Gailey, a sociologist of deviance, wrote a paper on pro-ED websites using ethnographic methods. She concludes:

They need a place where they can share their stories and fears with others who are similarly minded and have had comparable experiences. They, as Dias put it [another ethnographic researcher of pro-ana sites, paper here], are seeking a sanctuary. The internet provides the women with both a sanctuary and a medium in which to express the sensations and intense emotions they experience as they struggle to maintain control over their bodies and lives…. The women’s narratives I explored indicate that they participate in the central features of edgework, namely pushing oneself to the edge, testing the limits of both their bodies and minds, exercising particular skills that require ‘innate talent’ and mental toughness, and feelings of self-actualization or omnipotence.

Gailey frames EDs as “edgework,” a concept from criminology and the sociology of deviance that describes practices of voluntary risk-taking, like skydiving, rock climbing, ‘extreme sports’, stock trading, unprotected sex, and illegal graffiti. The skills Gailey describes as part of edgework are similar to those emphasized by other body-related extreme communities, such as those devoted to bodybuilding, CrossFit, veganism, and paleo dieting. In such communities, members swap tips, ask for support, show progress, and share information, vocabularies, and normative practices.

Obviously, Tumblr isn’t focusing on any of these communities. To be clear, I’m not arguing that eating disorders aren’t dangerous, or that they’re somehow empowering; they are dangerous, and they are not empowering. But the focus on young women’s online practice as deviant, pathological, and quasi-illegal is in line with a long history. Young women and their bodies are often the locus of social panics, from teen pregnancy to virginity to obesity to dressing “slutty”.

More importantly, Tumblr banning this content won’t do anything to make it go away. It does let Tumblr off the hook, but even the quickest search for self-harm or thinspo (serious trigger warning) finds thousands of posts, many heartbreaking in their raw honesty. One Tumblr user writes:

if tumblr blocks all our blogs then things will be worse. off than they were before, we’ll feel alone again, outcasts! Who can we share our problems with if our blogs have been taken off us? We share our deepest and most darkest secrets on here and if our blogs are taken where are we supposed to put our feelings? They will build up inside of us and things will get worse and worse. Well done tumblr you bunch of arseholes, you’re going to make things worse.

Pragmatically, much of the thinspo content has simply migrated to Pinterest. Other users have password-protected their blogs and spread the password to people in the community.

Eating disorder prevention needs to be structural as well as medical. Realistically, eating disorders aren’t going anywhere as long as we have a complex set of mediated images and discursive tropes that tie young women’s worth to their bodies. These issues exist on a continuum that includes everything from Shape magazine and The Biggest Loser to well-meaning anti-childhood-obesity initiatives. Young women participating in pro-ED communities are acting upon messages they get from many other places in their lives. While there is no agreed-upon way of dealing with pro-ED communities (and it’s great that Tumblr is going to implement PSA-type ads that appear on searches for these terms), there are more productive interventions that can be made. We must understand the reasons these young women are in such pain and, more importantly, be willing to engage with these communities, rather than painting them as horrific or abhorrent.

Stop the Cycle of Bullying

[John Palfrey and I originally wrote this as an op-ed for the Huffington Post. See HuffPo for more comments.]

On 22 September 2010, the wallet of Tyler Clementi – a gay freshman at Rutgers University – was found on the George Washington Bridge; his body was found in the Hudson River the following week. His roommate, Dharun Ravi, was charged with 15 criminal counts, including invasion of privacy, bias intimidation, witness tampering, and evidence tampering. Ravi pleaded not guilty.

Ravi’s trial officially begins this week, but in the court of public opinion, he has already been convicted. This is a terrible irony, since the case itself is about bullying.

Wading through the news reports, it’s hard to tell exactly what happened in the hours leading up to Clementi’s suicide. Some facts are unknown. What seems apparent is that Clementi asked Ravi to have his dorm room to himself on two occasions – September 19 and 21 – so that he could have alone time with an older gay man. On the first occasion, Ravi appears to have jiggered his computer so that he could watch the encounter from a remote computer. Ravi announced on Twitter that he had done so. When Clementi asked Ravi for a second night in the room, Ravi used Twitter to invite others to watch. It appears as though Clementi read this and unplugged Ravi’s computer, thereby preventing Ravi from watching. What happened after this incident on September 21 is unclear. A day later, Clementi took his own life.

The media-driven narrative quickly blamed Ravi and his friend Molly Wei, from whose room Ravi watched Clementi. Amidst a series of other highly publicized LGBT suicides, Clementi’s suicide was labeled a tragic product of homophobic bullying. Ravi has been portrayed as a malicious young man, hellbent on making his roommate miserable. Technology was blamed for providing a new mechanism by which Ravi could spy on and torment his roommate. The overwhelming presumption: Ravi is guilty of causing Clementi’s death. Ravi may well be guilty of these crimes, but we have trials for a reason.

As information emerged from the legal discovery process, the story became more complicated. It appears as though Clementi turned to online forums and friends to get advice; his messages conveyed a desire for support, but they didn’t suggest a pending suicide attempt. In one document submitted to the court, Clementi appears to have written to a friend that he was not particularly upset by Ravi’s invasion. Older digital traces left by Clementi – specifically those produced after he came out to and was rejected by those close to him – exhibited terrible emotional pain. At Rutgers, Clementi appears to have been handling his frustrations with his roommate reasonably well. After the events of September 19 and 21, Clementi appears to have notified both his resident assistant and university officials and asked for a new room; the school appears to have responded properly, and Clementi appeared pleased.

The process of discovery in a lawsuit is an essential fact-finding exercise. The presumption of innocence is an essential American legal principle. Unfortunately, in highly publicized cases, this doesn’t stop people from jumping to conclusions based on snippets of information. Media speculation and hype surrounding Clementi’s suicide have been damning for Ravi, but the incident has also prompted all sorts of other outcomes. Public policy wheels have turned, with calls for new state and federal cyberbullying prevention laws. Well-meaning advocates have called for bullying to be declared a hate crime.

As researchers, we know that bullying is a serious, urgent issue. We favor aggressive and meaningful intervention programs to address it and to prevent young people from taking their lives. These programs should especially support LGBT youth, themselves more likely to be the targets of bullying. Yet, it’s also critical that we pay attention to the messages that researchers have been trying to communicate for years. “Bullies” are often themselves victims of other forms of cruelty and pressure. Zero-tolerance approaches to bullying don’t work; they often increase bullying. Focusing on punishment alone does little to address the underlying issues. Addressing bullying requires a serious social, economic, and time-based commitment to educating both young people and adults. Research shows that curricula and outreach programs can work. We are badly underfunding youth empowerment programs that could help enormously. Legislative moves that focus on punishment instead of education only make the situation worse.

Not only are most young people ill-equipped to recognize how their meanness, cruelty, and pranking might cause pain, but most adults are themselves ill-equipped to help young people in a productive way. Worse, many adults perpetuate the idea that being cruel is socially acceptable. Not only have cruelty and deception become the status quo on TV talk shows; they play a central role in televised entertainment and political debates. In contemporary culture, it has become acceptable to be outright cruel to any public figure, whether they’re a celebrity, a reality TV contestant, or a teenager awaiting trial.

Tyler Clementi’s suicide is a tragedy. We should all be horrified that a teenager in our society felt the need to take his life. But in our frustration, we must not convict Dharun Ravi before he has had his day in court. We must not be bullies ourselves. Ravi’s life has already been destroyed by what he may or may not have done. The way we, the public, have treated him, even before his trial, has only made things worse.

To combat bullying, we need to stop the cycle of violence. We need to take the high road; we must refrain from acting like a mob, in Clementi’s name or otherwise. Every day, there are young people who are being tormented by their peers and by adults in their lives. If we want to make this stop, we need to get to the root of the problem. We should start by looking to ourselves.

danah boyd is a senior researcher at Microsoft Research and a research assistant professor at New York University. John Palfrey is a professor of law at Harvard Law School.

The dirty job of keeping Facebook clean

Last week, Gawker received a curious document. Turned over by an aggrieved worker from the online freelance employment site oDesk, the document enumerated, over the course of several pages and in unsettling detail, exactly what kinds of content should be deleted from the social networking site that had outsourced its content moderation to oDesk’s team. The social networking site, as it turned out, was Facebook.

The document, antiseptically titled “Abuse Standards 6.1: Operation Manual for Live Content Moderators” (along with an updated version 6.2 subsequently shared with Gawker, presumably by Facebook), is still available from Gawker. It represents the implementation of Facebook’s Community Standards, which present Facebook’s priorities around acceptable content but stay miles back from actually spelling them out. In the Community Standards, Facebook reminds users that “We have a strict ‘no nudity or pornography’ policy. Any content that is inappropriately sexual will be removed. Before posting questionable content, be mindful of the consequences for you and your environment.” But an oDesk freelancer looking at hundreds of pieces of content every hour needs more specific instructions on what exactly is “inappropriately sexual” — such as removing “Any OBVIOUS sexual activity, even if naked parts are hidden from view by hands, clothes or other objects. Cartoons / art included. Foreplay allowed (Kissing, groping, etc.). even for same sex (man-man / woman-woman”. The document offers a tantalizing look into a process that Facebook and other content platforms generally want to keep under wraps, and a mundane look at what actually doing this work must require.

It’s tempting, and a little easy, to focus on the more bizarre edicts that Facebook offers here (“blatant depictions of camel toes” as well as “images of drunk or unconscious people, or sleeping people with things drawn on their faces” must be removed; pictures of marijuana are OK, as long as it’s not being offered for sale). But the absurdity here is really an artifact of having to draw this many lines in this much sand. Any time we play the game of determining what is and is not appropriate for public view, in advance and across an enormous and wide-ranging amount of content, the specifics are always going to sound sillier than the general guidelines. (It was not so long ago that “American Pie’s” filmmakers got their NC-17 rating knocked down to an R after cutting the scene in which the protagonist has sex with a pie from four thrusts to two.)

Lines in the sand are like that. But there are other ways to understand this document: for what it reveals about the kind of content being posted to Facebook, the position in which Facebook and other content platforms find themselves, and the system they’ve put into place for enforcing the content moderation they now promise.

Facebook or otherwise, it’s hard not to be struck by the depravity of some of the stuff that content moderators are reviewing. It’s a bit disingenuous of me to start with camel toes and man-man foreplay, when what most of this document deals with is so, so much more reprehensible: child pornography, rape, bestiality, graphic obscenities, animal torture, racial and ethnic hatred, self-mutilation, suicide. There is something deeply unsettling about this document in the way it must, with all the delicacy of a badly written training manual, explain and sometimes show the kinds of things that fall into these categories. In 2010, the New York Times reported on the psychological toll that content moderators, having to look at this “sewer channel” of content reported to them by users, often experience. It’s a moment when Supreme Court Justice Potter Stewart’s old saw about pornography, “I know it when I see it,” though so problematic as a legal standard, does feel viscerally true. It’s a disheartening glimpse into the darker side of the “participatory web”: no worse or no better than the depths that humankind has always been capable of sinking to, though perhaps boosted by the ability to put these coarse images and violent words in front of the gleeful eyes of co-conspirators, the unsuspecting eyes of others, and sometimes the fearful eyes of victims.

This outpouring of obscenity is by no means caused by Facebook, and it is certainly reasonable for Facebook to take a position on the kinds of content it believes many of its users will find reprehensible. But, that does not let Facebook off the hook for the kind of position it takes: not just where it draws the lines, but the fact that it draws lines at all, the kind of custodial role it takes on for itself, and the manner in which it goes about performing that role. We may not find it difficult to abhor child pornography or ethnic hatred, but we should not let that abhorrence obscure the fact that sites like Facebook are taking on this custodial role — and that while goofy frat pranks and cartoon poop may seem irrelevant, this is still public discourse. Facebook is now in the position of determining, or helping to determine, what is acceptable as public speech — on a site in which 800 million people across the globe talk to each other every day, about all manner of subjects.

This is not a new concern. The most prominent controversy has been about the removal of images of women breastfeeding, which has been a perennial thorn in Facebook’s side; but similar dustups have occurred around artistic nudity on Facebook, political caricature on Apple’s iPhone, gay-themed books on Amazon, and fundamentalist Islamic videos on YouTube. The leaked document, while listing all the things that should be removed, is marked with the residue of these past controversies, if you know how to look for them. The document clarifies the breastfeeding rule, a bit, by prohibiting “Breastfeeding photos showing other nudity, or nipple clearly exposed.” Any commentary that denies the existence of the Holocaust must be escalated for further review, which is not surprising after years of criticism. Concerns about cyberbullying, which have been taken up so vehemently over the last two years, appear repeatedly in the manual. And under the heading “international compliance” are a number of decidedly specific prohibitions, most involving Turkey’s objection to its Kurdish separatist movement, including prohibitions on maps of Kurdistan, images of the Turkish flag being burned, and any support for the PKK (the Kurdistan Workers’ Party) or its imprisoned founder Abdullah Ocalan.

Facebook and its removal policies, and other major content platforms and their policies, are the new terrain for longstanding debates about the content and character of public discourse. That images of women breastfeeding have proven a controversial policy for Facebook should not be surprising, since the issue of women breastfeeding in public remains a contested cultural sore spot. That our dilemmas about terrorism and Islamic fundamentalism, so heightened over the last decade, should erupt here too is also not surprising. The dilemmas these sites face can be seen as a barometer of our society’s pressing concerns about public discourse more broadly: how much is too much; where are the lines drawn and who has the right to draw them; how do we balance freedom of speech with the values of the community, with the safety of individuals, with the aspirations of art and the wants of commerce.

But a barometer simply measures where there is pressure. When Facebook steps into these controversial issues, decides to authorize itself as custodian of content that some of its users find egregious, establishes both general guidelines and precise instructions for removing that content, and then does so, it is not merely responding to cultural pressures, it is intervening in them, reifying the very distinctions it applies. Whether breastfeeding is made more visible or less, whether Holocaust deniers can use this social network to make their case or not, whether sexual fetishes can or cannot be depicted, matters for the acceptability or marginalization of these topics. If, as is the case here, there are “no exceptions for news or awareness-related content” to the rules against graphic imagery and speech, well, that’s a very different decision, with different public ramifications, than if news and public service did enjoy such an exception.

But the most intriguing revelation here may not be the rules, but how the process of moderating content is handled. Sites like Facebook have been relatively circumspect about how they manage this task: they generally do not want to draw attention to the presence of so much obscene content on their sites, or to the fact that they regularly engage in “censorship” to deal with it. So the process by which content is assessed and moderated is also opaque. This little document brings into focus a complex chain of people and activities required for Facebook to play custodian.

The moderator using this leaked manual would be looking at content already reported, or “flagged,” by a Facebook user. The moderator would either “confirm” the report (thereby deleting the content), “unconfirm” it (the content stays), or “escalate” it, which moves it to Facebook for further or heightened review. Facebook has dozens of its own employees playing much the same role; contracting out to oDesk freelancers, and to companies like Caleris and Telecommunications On Demand, serves as merely a first pass. Facebook also acknowledges that it looks proactively at content that has not yet been reported by users (unlike sites like YouTube, which claim to wait for their users to flag content before they weigh in). Within Facebook, there is not only a layer of employees looking at content much as the oDesk workers do, but also a team charged with discussing truly gray-area cases, empowered both to remove content and to revise the rules themselves.
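To make that triage concrete, here is a minimal illustrative sketch, in Python, of the three-way decision a first-pass moderator makes. All of the names here are hypothetical; Facebook’s and oDesk’s actual tooling is not public, so this is only a reading of the workflow the manual implies, not the real system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    CONFIRM = auto()    # report upheld: the content is deleted
    UNCONFIRM = auto()  # report rejected: the content stays up
    ESCALATE = auto()   # gray area: passed to an internal team for heightened review

@dataclass
class FlaggedItem:
    content_id: str
    reason: str          # the category the reporting user selected
    visible: bool = True

def apply_decision(item: FlaggedItem, decision: Decision) -> str:
    """Apply a first-pass moderator's decision to a user-reported item."""
    if decision is Decision.CONFIRM:
        item.visible = False
        return "deleted"
    if decision is Decision.UNCONFIRM:
        return "kept"
    # ESCALATE: the item is held for internal review, where a separate team
    # may remove it, restore it, or even revise the rules themselves.
    return "escalated to internal review"

# Example: a reported photo is sent up the chain rather than decided here.
item = FlaggedItem(content_id="photo_123", reason="graphic content")
print(apply_decision(item, Decision.ESCALATE))
```

The point of the sketch is simply that the freelance clickworker’s job is reduced to this narrow choice; everything contested or contextual gets pushed up the chain.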

At each level, we might want to ask: What kind of content gets reported, confirmed, and escalated? How are the criteria for judging determined? Who is empowered to rethink these criteria? How are general guidelines translated into specific rules, and how well do these rules fit the content being uploaded day in and day out? How do those involved, from the policy setter down to the freelance clickworker, manage the tension between the rules handed to them and their own moral compass? What kind of contextual and background knowledge is necessary to make informed decisions, and how is the context retained or lost as the reported content passes from point to point along the chain? What kind of valuable speech gets caught in this net? What never gets posted at all, that perhaps should?

Keeping our Facebook streets clean is a monumental task, involving multiple teams of people flipping through countless photos and comments, making quick judgments based on regularly changing proscriptions translated from vague guidelines, in the face of an ever-changing, global, highly contested, and relentless flood of public expression. And this happens at every site, though implemented in different ways. Content moderation is one of those undertakings about which, from one vantage point, we might say it’s amazing that it works at all, and as well as it does. But from another vantage point, we should see that we are playing a dangerous game: the private determination of the appropriate boundaries of public speech. That’s a whole lot of cultural power, in the hands of a select few who have a lot of skin in the game, and it’s being done in an oblique way that makes it difficult for anyone else to inspect or challenge. As users, we certainly cannot allow ourselves to remain naive, believing that the search engine shows all relevant results, the social networking site welcomes all posts, or the video platform merely hosts what users generate. Our information landscape is a curated one. What is important, then, is that we understand the ways in which it is curated, by whom and to what ends, and engage in a sober, public conversation about the kind of public discourse we want and need, and how we’re willing to get it.

This article first appeared on Salon.com, and is cross-posted at Culture Digitally.

The life and death of our research data

At the 2012 iConference, I sat in on a fishbowl about human values and data collection. Hearing a vibrant discussion about research ethics related to the life of data was incredibly timely for me, since lately I’ve been thinking a lot about the ethics of data gathering. In particular, I recently came across this research project while perusing a blog on body modification. Spearheaded by the Centre for Anatomy and Human Identification (CAHID) at the University of Dundee, Scotland, UK, the project intends to collect “images of body modifications to establish a database which may aid in the identification of victims and missing persons, for example in a disaster. By collecting a large number of images of tattoos, piercings and other body modifications, not only can we develop a more uniform way of describing those modifications but also establish how individualistic certain body modifications are within a population, social group or age group.” Essentially, people with body modifications are being asked to submit images of their modifications as well as some personal information in order to generate statistical measures for the prevalence of various body modifications. In the blog post I read, the researcher emphasizes that “none of the images will be used for policing purposes simply because we don’t have permission to do so.” Presumably, the researcher felt it was important to emphasize this because one of the partners in the project is Interpol. Interestingly, in Interpol’s description of the project, there is no explicit mention of the fact that data will not be used to assist law enforcement.

During the conference fishbowl, I raised this project as a case study for thinking about the ethical tensions surrounding informed consent, risk/benefit analysis, and the preservation of data gathered in social science research. My main question is: how do we explain the issues of data privacy to participants? I don’t mean this in a pedantic way, where researchers are instructing hapless laypeople on the complexities of data curation. I mean, how do we balance a need to gather data from people with a concern for the life of that data? Can these researchers ensure that the information provided by participants won’t be used for purposes other than identifying bodies after a disaster? If the researchers conclude their involvement with a project, what influence do they have over the database they’ve created and the parties who have access to that database? IRB forms typically require researchers to outline how they will manage the destruction of data and require consent forms to address issues of privacy. The statement that the researchers are prohibited from sharing data with police simply because they don’t ask for that kind of consent from participants does little to quell my concerns about asking people for personal data (and, in this case, documentation of their bodies) that could then be used in nefarious ways by an international policing body.

To be fair, I’ve relied on the body modification community to conduct research on secrecy and stigmatized behavior, and even with consent forms and explanations of privacy issues, I can’t guarantee that all of my participants thought through every possible contingency of sharing information with me. Yet to me, there is a qualitative difference between asking participants to share personal experiences with body modification and creating a database of images that is then shared with an agency like Interpol.

My objective isn’t to slam this research project as ethically vacuous. My objective is to think about this research project as a case that illustrates concerns I have about privacy in the mass collection of information. Last fall, danah boyd and Kate Crawford wrote a terrific piece on provocations for big data and addressed ethical issues of large data sets. In addition to their concerns about the ethics of gathering and analyzing “public” data from Facebook or Twitter, boyd and Crawford ask, “Should someone be included as a part of a large aggregate of data? What if someone’s ‘public’ blog post is taken out of context and analyzed in a way that the author never imagined? What does it mean for someone to be spotlighted or to be analyzed without knowing it? Who is responsible for making certain that individuals and communities are not hurt by the research process? What does consent look like?” These are questions that I would also apply to building repositories of private information that people submit willingly and with consent.

One suggestion that came out of the iConference talk was to think about the metaphors we use to describe data (Is it a mirror? Is it a window?) and to use them as a lens for thinking through some of the issues surrounding the ethics of data collection. What are the consequences of adhering to a particular set of metaphors about data in terms of how we talk to participants? These issues also suggest to me that researchers should take a proactive stance with IRBs, proposing ways of holding ourselves accountable for the privacy and well-being of participants. I know I’ve been guilty of being a little vague in filling out IRB forms when it comes to the benefits my project offers to my participants (I often say something kind of lame like, “It is hoped that participants will benefit from increased understanding of XYZ.”). For my own work, one thing that comes out of working through some of the issues raised by the University of Dundee project is a more rigorous consideration of what risks and benefits truly mean for participants in my projects, not only in the process of conducting research, but over the long term of acquiring and sharing information about participants’ lives.