Custodians

I’m thrilled to say that my new book, Custodians of the Internet, is now available for purchase from Yale University Press and your favorite book retailer. Those of you who know me know that I’ve been working on this book for a long time, and have cared about the issues it addresses for even longer. So I’m particularly excited that it is now no longer mine, but yours if you want it. I hope it’ll be of some value to those of you who are interested in interrogating and transforming the information landscape in which we find ourselves.

By way of introduction, I thought I would explain the book’s title, particularly my choice of the word “custodians.” This title came unnervingly late in the writing process, and after many, many conversations with my extremely patient friend and colleague Dylan Mulvin. “Custodians of the Internet” captured, better than the many alternatives, the aspirations of social media platforms, the position they find themselves in, and my sense of how they should move forward.

moderators are the web’s “custodians,” quietly cleaning up the mess: The book begins with a quote from one of my earliest interviews, with a member of YouTube’s content policy team. As they put it, “In the ideal world, I think that our job in terms of a moderating function would be really to be able to just turn the lights on and off and sweep the floors . . . but there are always the edge cases, that are gray.” The image invoked is a custodian in the janitorial sense, doing the simple, mundane, and uncontroversial work of sweeping the floors. In this turn of phrase, content moderation is offered up as simple maintenance: knowing what needs scrubbing is not imagined to be difficult, and the process is routine. There is labor involved, but it is largely invisible, just as actual janitorial staff are often instructed to “disappear,” working at night or with as little intrusion as possible. Yet even then, years before Gamergate or ISIS beheadings or white nationalists or fake news, it was clear that moderation is not so simple.

platforms have taken “custody” of the Internet: Content moderation at the major platforms matters because those platforms have achieved such prominence in the intervening years. As I was writing the book, one news item from 2015 stuck with me: in a survey on people’s new media use, more people said that they used Facebook than said they used the Internet. Facebook, which by then had become one of the most popular online destinations in the world and had expanded to the mobile environment, did not “seem” like the Internet anymore. Rather than being part of the Internet, it had somehow surpassed it. This was not true, of course; Facebook and the other major platforms had in fact woven themselves deeper into the Internet, by distributing cookies, offering secure login mechanisms for other sites and platforms, expanding advertising networks, collecting reams of user data from third-party sites, and even exploring Internet architecture projects. In both the perception of users and in material ways, Facebook and the major social media platforms have taken “custody” of the Internet. This should change our calculus as to whether platform moderation is or is not “censorship,” and the responsibilities platforms bear when they decide what to remove and whom to exclude.

platforms should be better “custodians,” committed guardians of our struggles over value: In the book, I propose that these responsibilities have expanded. Users have become more acutely aware of both the harms they encounter on these platforms and the costs of being wronged by content moderation decisions. What’s more, social media platforms have become the place where a variety of speech coalitions do battle: activists, trolls, white nationalists, advertisers, abusers, even the President. And the implications of content moderation have expanded, from individual concerns to public ones. If a platform fails to moderate, everyone can be affected, even those who aren’t party to the circulation of the offensive, the fraudulent, or the hateful — even those who aren’t on social media at all.

What would it mean for platforms to play host not just to our content, but to our best intentions? The major platforms I discuss here have, for years, tried to position themselves as open and impartial conduits of information, defenders of their users’ right to speak, and legally shielded from any obligations for how they police their sites. As most platform managers see it, moderation should be theirs to do, conducted on their own terms, on our behalf, and behind the scenes. But that arrangement is crumbling, as critics begin to examine the responsibilities social media platforms have to the public they serve.

In the book, I propose that platforms become “custodians” of the public discourse they facilitate — not in the janitorial sense, but something more akin to legal guardianship. The custodian, given charge over a property, a company, a person, or a valuable resource, does not take it for their own or impose their will over it; they accept responsibility for ensuring that it is governed properly. This is akin to Jack Balkin’s suggestion that platforms act as “information fiduciaries,” with a greater obligation to protect our data. But I don’t just mean that platforms should be custodians of our content; platforms should be custodians of the deliberative process we all must engage in, the process that makes us a functioning public. Users need to be more accountable for making the hard decisions about what does and does not belong; platforms could facilitate that deliberation, and then faithfully enact the conclusions users reach. Safeguarding public discourse means ensuring that it is governed by those to whom it belongs, that it survives, and that its value is sustained in a fair and equitable way. Platforms could be not the police of our reckless chatter, but the trusted agents of our own interest in forming more democratic publics.

If you end up reading the book, you have my gratitude. And I’m eager to hear from anyone who has thoughts, comments, praise, criticism, and suggestions. You can find me on Twitter at @TarletonG.

The accountability of social media platforms, in the age of Trump

Pundits and commentators are just starting to pick through the rubble of this election and piece together what happened and what it means. In such cases, it is often easier to grab hold of one explanation — Twitter! racism! Brexit! James Comey! — and use it as a clothesline to hang the election on and shake it into some semblance of sense. But as scholars, we do a disservice if we allow for simple or single explanations. “Perfect storm” has become a cliché, but I can see a set of elements that all had to be true, and that came together, to produce the election we just witnessed: globalization, economic precarity, and fundamentalist reactionary responses; the rise of the conservative right and its targeted tactics, especially against the Clintons; backlashes to multiculturalism, diversity, and the election of President Obama; the undoing of the workings and cultural authority of journalism; the alt-right and the undercurrents of social media; the residual fear and anxiety in America after 9/11. It is all of these things, and they were all already connected, before candidate Trump emerged.

Yet at the same time, my expertise does not stretch across all of these areas. I have to admit that I have trained myself right down to a fine point: social media, public discourse, technology, control, law. I have that hammer, and can only hit those nails. If I find myself being particularly concerned about social media and harassment, or want to draw links between Trump’s dog whistle politics, Steve Bannon and Breitbart, the tactics of the alt-right, and the failings of Twitter to consider the space of discourse it has made possible, I risk making it seem like I think there’s one explanation: that technology produces social problems. I do not mean this. In the end, I have to have faith that, as I try to step up and say something useful about this one aspect, some other scholar is similarly stepping up and saying something about fundamentalist reactions to globalization, and someone else is stepping up to speak about the divisiveness of the conservative movement.

The book I’m working on now, nearing completion, is about social media platforms and the way they have (and have not) stepped into the role of arbiters of public discourse. The focus is on the platforms: their ambivalent combination of neutrality and intervention, the actual ways in which they go about policing offensive content and behavior, and the implications those tactics and arrangements have for how we think about the private curation of public discourse. But the book is framed in terms of the rise and now, for lack of a better word, adolescence of social media platforms. The initial optimism and enthusiasm that fueled the rise of the web, an optimism that overshadowed the darker aspects already emergent there and spurred the rise of the first social media platforms, seems to have given way to a set of concerns about how social media platforms work and how they are used — sometimes against people, and toward very different ends than were originally imagined. Those platforms did not at first imagine, and have not thoroughly thought through, how they now support (among many other things) a targeted project of racial animosity and a cold gamesmanship about public engagement. In the context of the election, my new goal is to boost that part of the argument: to highlight the opportunities that social media platforms offer to forms of public discourse that are not only harassing, racist, or criminal, but that can also take advantage of the dynamics of social media to create affirming circles of misinformation, to sip the poison of partisanship, to spur leaderless movements ripe for demagoguery — and to show how the social media platforms that now host this discourse have embraced a woefully insufficient sense of accountability, and must rethink how they have become mechanisms of social and political discourse, for good and ill.

This specific project is too late in the game for a radical shift. But as I think beyond it, I feel an imperative to be sure that my choices of research topics are driven more by cultural and political imperative than by my own curiosity alone. Or, ideally, by the perfect meeting point of the two. The logical outcome of my interest in platforms and content moderation is to shift how we think of platforms: away from treating them as mere intermediaries between speakers (if they ever were, they are no longer) and toward understanding them as constitutive of public discourse. If we understand them as constituting discourse — through the choreography they install in their design, the moderation they conduct as a form of policy, and the algorithmic selection of which raw material becomes “my feed” — then we expand their sense of responsibility. Moreover, we might ask what it would mean to hold them accountable for making the political arena we want, and need. These questions will only grow in importance and complexity as these information systems depend more and more on algorithmic, machine learning, and other automated techniques; more regularly include bots that are difficult to discern from human participants; and continue to extend their global reach to new consumers, extending into and entangling with the very shifts of globalization and tribalization we will continue to grapple with.

These comments were part of a longer post at Culture Digitally that I helped organize, in which a dozen scholars of media and information reflected on the election and the future directions of their own work, and our field, in light of the political realities we woke up to Wednesday morning. My specific scholarly community cannot address every issue that’s likely on the horizon, but our work does touch a surprising number of them. The kinds of questions that motivate our scholarship — from fairness and equity, to labor and precarity, to harassment and misogyny, to globalism and fear, to systems and control, to journalism and ignorance — all seem so much more pressing today than they did even yesterday.

How Do Users Take Collective Action Against Online Platforms? CHI Honorable Mention

What factors lead users in an online platform to join together in mass collective action to influence those who run the platform? Today, I’m excited to share that my CHI paper on the reddit blackout has received a Best Paper Honorable Mention! (Read the pre-print version of my paper here)

When users of online platforms complain, we’re often told to leave if we don’t like how a platform is run. Beyond exit or loyalty, digital citizens sometimes take a third option, organizing to pressure companies for change. But how does that come about?

I’m seeking reddit moderators to collaborate on the next stage of my research: running experiments together with subreddits to test theories of moderation. If you’re interested, you can read more here. Also, I’m presenting this work as part of larger talks at the Berkman Center on Feb 23 and the Oxford Internet Institute on March 16. I would love to see you there!

Having a formalized voice with online platforms is rare, though it has happened with San Francisco drag queens, the newly announced Twitter Trust and Safety Council, and the EVE player council, where users are consulted about issues a platform faces. These efforts typically keep users in positions of minimal power on the ladder of citizen participation, but they do give some users some kind of voice.

Another option is collective action: leveraging the power of users as a group to pressure a platform to change how it works. To my knowledge, this has only happened four times on major U.S. platforms: when AOL community leaders settled a $15 million class action lawsuit for unpaid wages, when DailyKos writers went on strike in 2008, in the recent Uber class action lawsuit, and in the reddit blackout of July 2015, when moderators of 2,278 subreddits shut down their communities to pressure the company for better coordination and better moderation tools. They succeeded.

What factors lead communities to participate in such a large scale collective action? That’s the question that my paper set out to answer, combining statistics with the “thick data” of qualitative research.

The story of how I answered this question is also a story about finding ways to do large-scale research that includes the voices and critiques of the people whose lives we study as researchers. In the turmoil of the blackout, amidst volatile and harmful controversies around hate speech, harassment, censorship, and the blackout itself, I made a special effort to do research that included redditors themselves.

Theories of Social Movement Mobilization

Social movement researchers have been asking how movements come together for many decades, and there are two common schools, responding to early work to quantify collective action (see Olson, Coleman):

Political Opportunity Theories argue that social movements need the right people and the right moment. According to these theories, a movement happens when grievances are high, when social structure among potential participants is right, and when the right opportunity for change arises. For more on political opportunity theory, see my Atlantic article on the Facebook Equality Meme this past summer.

Resource Mobilization Theories argue that successful movements are explained less by grievances and opportunities and more by the resources available to movement actors. In their view, collective action is something that groups create out of their resources rather than something that arises out of grievances. They’re also interested in social structure, often between groups that are trying to mobilize people (read more).

A third voice in these discussions belongs to the people who participate in movements themselves, voices that I wanted to have a primary role in shaping my research.

How Do You Study a Strike As It Unfolds?

I was lucky enough to be working with moderators and collecting data before the blackout happened. That gave me a special vantage for combining interviews and content analysis with statistical analysis of the reddit blackout.

Together with redditors, I developed an approach of “participatory hypothesis testing,” where I posed ideas for statistics on public reddit threads and worked together with redditors to come up with models that they agreed were a fair and accurate analysis of their experience. Grounding that statistical work involved a whole lot of qualitative research as well.

If you like that kind of thing, here are the details:

In the CHI paper, I analyzed 90 published interviews with moderators from before the blackout, over 250 articles outside reddit about the blackout, discussions in over 50 subreddits that declined to join the blackout, public statements by over 200 subreddits that joined the blackout, and over 150 discussions in blacked-out subreddits after their communities were restored. I also read over 100 discussions in communities that chose not to join. Finally, I conducted 90-minute interviews with 13 moderators of subreddits of all sizes, including moderators who joined and who declined to join the blackout.

To test hypotheses developed with redditors, I collected data from 52,735 non-corporate subreddits that received at least one comment in June 2015, alongside a list of blacked-out subreddits. I also collected data on moderators and comment participation for the period surrounding the blackout.
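
If you’re curious what this kind of hypothesis testing looks like in practice, here is a minimal sketch in Python with pandas and statsmodels. It is not the paper’s actual model specification (that’s in the pre-print); the file name and column names are hypothetical stand-ins for the subreddit-level measures described above.

```python
# Minimal sketch of a logistic regression predicting blackout participation.
# NOT the paper's actual specification; the file and column names below are
# hypothetical stand-ins for the subreddit-level measures discussed here.
import pandas as pd
import statsmodels.formula.api as smf

# One row per subreddit; joined_blackout is 1 if the subreddit went dark.
df = pd.read_csv("subreddit_features.csv")  # hypothetical dataset

model = smf.logit(
    "joined_blackout ~ log_subscribers + n_moderators + is_default"
    " + mod_workload + metareddit_participation + mod_comment_share",
    data=df,
).fit()

# The fitted coefficients indicate which factors predict joining,
# holding the other factors constant.
print(model.summary())
```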

So What’s The Answer? What Factors Predict Participation in Action Against Platforms?

In the paper, I outline the major explanations offered by moderators and translate them into a statistical model that corresponds to major social movement theories. I found evidence confirming many of redditors’ explanations across all subreddits, including aspects of classic social movement theories. These findings are as much about why people choose *not* to participate as about what factors are involved in joining:

    • Moderator Grievances were important predictors of participation. Subreddits with greater amounts of work, and whose work was riskier, were more likely to join the blackout.
    • Subreddit Resources were also important factors. Subreddits with more moderators were more likely to join the blackout. Although “default” subreddits played an important role in organizing and negotiating in the blackout, they were no more or less likely to participate, holding all else constant.
    • Relations Among Moderators were also important predictors, and I observed several cases where “networks” of closely-allied subreddits declined to participate.
    • Subreddit Isolation was also an important factor, with more isolated subreddits less likely to join, and moderators who participate in “metareddits” more likely to join.
    • Moderators’ Relations Within Their Groups were also important; subreddits whose moderators participated more in their groups were less likely to join the blackout.

Many of my findings draw on details from my interviews and observations, going well beyond a single statistical model; I encourage you to read the pre-print version of my paper.

What’s Next For My reddit Research?

The reddit blackout took me by surprise as much as anyone, so now I’m back to asking the questions that brought me to moderators in the first place.

THANK YOU REDDIT! & Acknowledgments


First of all, THANK YOU REDDIT! This research would not have been possible without generous contributions from hundreds of reddit users. You have been generous all throughout, and I deeply appreciate the time you invested in my work.

Many other people have made this work possible; I did this research during a wonderful summer internship at the Microsoft Research Social Media Collective, mentored by Tarleton Gillespie and Mary Gray. Mako Hill introduced me to social movement theory as part of my general exams. Molly Sauter, Aaron Shaw, Alex Leavitt, and Katherine Lo offered helpful early feedback on this paper. My advisor Ethan Zuckerman remains a profoundly important mentor and guide through the world of research and social action.

Finally, I am deeply grateful for family members who let me ruin our Fourth of July weekend to follow the reddit blackout closely and set up data collection for this paper. I was literally sitting at an isolated picnic table ignoring everyone and archiving data as the weekend unfolded. I’m glad we were able to take the next weekend off! ❤

SMC media roundup

This is a collection of some of our researchers’ quotes, mentions, and writings in mainstream media. Topics include Facebook’s supposedly neutral community standards, sharing-economy workers uniting to protest, living under surveillance, and relational labor in music.

Tarleton Gillespie in the Washington Post –> The Big Myth Facebook needs everyone to believe

And yet, observers remain deeply skeptical of Facebook’s claims that it is somehow value-neutral or globally inclusive, or that its guiding principles are solely “respect” and “safety.” There’s no doubt, said Tarleton Gillespie, a principal researcher at Microsoft Research in New England, that the company advances a specific moral framework — one that is less of the world than of the United States, and less of the United States than of Silicon Valley.

Mary Gray in The New York Times –> Uber drivers and others in the gig economy take a stand

“There’s a sense of workplace identity and group consciousness despite the insistence from many of these platforms that they are simply open ‘marketplaces’ or ‘malls’ for digital labor,” said Mary L. Gray, a researcher at Microsoft Research and professor in the Media School at Indiana University who studies gig economy workers.

Kate Crawford’s (and others’) collaboration with Laura Poitras (the Academy Award-winning documentary film director and privacy advocate) on a book about living under surveillance, covered in Boing Boing.

Poitras has a show, Astro Noise, at NYC’s Whitney Museum, accompanied by a book in which Poitras exposes, for the first time, her intimate notes on her life in the targeting reticule of the US government at its most petty and vengeful. The book includes accompanying work by Ai Weiwei, Edward Snowden, Dave Eggers, former Guantanamo Bay detainee Lakhdar Boumediene, Kate Crawford, and Cory Doctorow.

(More on the upcoming book and Whitney museum event on Wired)

Canadian Songwriter’s Association interview with Nancy Baym –> Sound Advice: How to use social media in 2016

When discussing the use of social media by songwriters, Baym prefers to present a big-picture view rather than focusing on a “Top Ten Tips” approach, or on one platform or means of engagement. Practicality is key: “I’d love for 2016 to be the year of people getting realistic about what social media can and can’t do for you, of understanding that it’s a mode of relationship building, not a mode of broadcast,” says Baym.


Presentation: Between Platforms and Community: Moderators on Reddit

Presentation by intern Nathan Matias on the project he worked on during his summer at the SMC. He has continued to work on this research, so in case you have not read it, here is a more recent post on his work:

Followup: 10 Factors Predicting Participation in the Reddit Blackout. Building Statistical Models of Online Behavior through Qualitative Research

Below is the presentation he did for MSR earlier this month:

(Part 1)

(Part 2)

(Part 3)

(Part 4)