Custodians

I’m thrilled to say that my new book, Custodians of the Internet, is now available for purchase from Yale University Press, and your favorite book retailer. Those of you who know me know that I’ve been working on this book for a long time, and have cared about the issues it addresses for a while now. So I’m particularly excited that it is now no longer mine, but yours if you want it. I hope it’ll be of some value to those of you who are interested in interrogating and transforming the information landscape in which we find ourselves.

By way of introduction, I thought I would explain the book’s title, particularly my choice of the word “custodians.” This title came unnervingly late in the writing process, and after many, many conversations with my extremely patient friend and colleague Dylan Mulvin. “Custodians of the Internet” captured, better than many, many alternatives, the aspirations of social media platforms, the position they find themselves in, and my notion for how they should move forward.

moderators are the web’s “custodians,” quietly cleaning up the mess: The book begins with a quote from one of my earliest interviews, with a member of YouTube’s content policy team. As they put it, “In the ideal world, I think that our job in terms of a moderating function would be really to be able to just turn the lights on and off and sweep the floors . . . but there are always the edge cases, that are gray.” The image invoked is a custodian in the janitorial sense, doing the simple, mundane, and uncontroversial work of sweeping the floors. In this turn of phrase, content moderation was offered up as simple maintenance: knowing what needs scrubbing is not imagined to be difficult, and the process is routine. As with janitorial work, there is labor involved, but it is largely invisible — actual janitorial staff are often instructed to “disappear,” working at night or with as little intrusion as possible. Yet even then, years before Gamergate or ISIS beheadings or white nationalists or fake news, it was clear that moderation is not so simple.

platforms have taken “custody” of the Internet: Content moderation at the major platforms matters because those platforms have achieved such prominence in the intervening years. As I was writing the book, one news item in 2015 stuck with me: in a survey on people’s new media use, more people said that they used Facebook than said they used the Internet. Facebook, which by then had become one of the most popular online destinations in the world and had expanded to the mobile environment, did not “seem” like the Internet anymore. Rather than being part of the Internet, it had somehow surpassed it. This was not true, of course; Facebook and the other major platforms had in fact woven themselves deeper into the Internet, by distributing cookies, offering secure login mechanisms for other sites and platforms, expanding advertising networks, collecting reams of user data from third-party sites, and even exploring Internet architecture projects. In both the perception of users and in material ways, Facebook and the major social media platforms have taken “custody” of the Internet. This should change our calculus as to whether platform moderation is or is not “censorship,” and the responsibilities platforms bear when they decide what to remove and whom to exclude.

platforms should be better “custodians,” committed guardians of our struggles over value: In the book, I propose that these responsibilities have expanded. Users have become more acutely aware of both the harms they encounter on these platforms and the costs of being wronged by content moderation decisions. What’s more, social media platforms have become the place where a variety of speech coalitions do battle: activists, trolls, white nationalists, advertisers, abusers, even the President. And the implications of content moderation have expanded, from individual concerns to public ones. If a platform fails to moderate, everyone can be affected, even those who aren’t party to the circulation of the offensive, the fraudulent, or the hateful — even those who aren’t on social media at all.

What would it mean for platforms to play host not just to our content, but to our best intentions? The major platforms I discuss here have, for years, tried to position themselves as open and impartial conduits of information, defenders of their users’ right to speak, and legally shielded from any obligations for how they police their sites. As most platform managers see it, moderation should be theirs to do, conducted on their own terms, on our behalf, and behind the scenes. But that arrangement is crumbling, as critics begin to examine the responsibilities social media platforms have to the public they serve.

In the book, I propose that platforms become “custodians” of the public discourse they facilitate — not in the janitorial sense, but something more akin to legal guardianship. The custodian, given charge over a property, a company, a person, or a valuable resource, does not take it for their own or impose their will over it; they accept responsibility for ensuring that it is governed properly. This is akin to Jack Balkin’s suggestion that platforms act as “information fiduciaries,” with a greater obligation to protect our data. But I don’t just mean that platforms should be custodians of our content; platforms should be custodians of the deliberative process we all must engage in, the process that makes us a functioning public. Users need to be more accountable for making the hard decisions about what does and does not belong; platforms could facilitate that deliberation, and then faithfully enact the conclusions users reach. Safeguarding public discourse means ensuring that it is governed by those to whom it belongs, that it survives, and that its value is sustained in a fair and equitable way. Platforms could be not the police of our reckless chatter, but the trusted agents of our own interest in forming more democratic publics.

If you end up reading the book, you have my gratitude. And I’m eager to hear from anyone who has thoughts, comments, praise, criticism, and suggestions. You can find me on Twitter at @TarletonG.

Night modes and the new hue of our screens

Information & Culture just published (paywall; or free pre-print) an article I wrote about “night modes,” in which I try to untangle the history of light, screens, sleep loss, and circadian research. If we navigate our lives enmeshed with technologies and their attendant harms, I wanted to know how we make sense of our orientation to the things that prevent harm. To think, in other words, of the constellation of people and things that are meant to ward off, prevent, stave off, or otherwise mitigate the endemic effects of using technology.

If you’re not familiar with “night modes”: in recent years, hardware manufacturers and software companies have introduced new device modes that shift the color temperature of screens during evening hours. To put it another way: your phone turns orange at night now. Perhaps you already use f.lux, or Apple’s “Night Shift,” or “Twilight” for Android.

All of these software interventions respond to the belief that untimely light exposure close to bedtime will result in less sleep, or less restful sleep. Research into human circadian rhythms has had a powerful influence on how we think and talk about healthy technology use. And recent discoveries in the human response to light, as you’ll learn in the article, are based on a tiny subset of blind persons who lack rods and cones. As such, it’s part of a longer history of using research on persons with disabilities to shape and optimize communication technologies – a historical pattern that the media and disability studies scholar Mara Mills has documented throughout her career.

[image: Apple’s Night Shift]

Continue reading “Night modes and the new hue of our screens”

The Senate Talks to Zuck

(or: “I will get back to you on that, Senator.”)

Here is an early response to Zuckerberg’s testimony before the US Senate today. If you want my overall score, as of 3:30 ET I think Zuckerberg is doing quite well, but some of the things being discussed need a lot of unpacking.

Those Poor Fools “whose privacy settings allowed it.”

In the beginning of his testimony, Zuck described what people are so upset about:

Zuckerberg: [The Kogan personality quiz app that shared data with Cambridge Analytica] was installed by around 300,000 people who agreed to share some of their Facebook information as well as some information from their friends whose privacy settings allowed it.

[Emphasis mine.]

Huh — this phrasing is carefully constructed to be technically accurate, but it presses right up against the limit of truth. And then, I think, it goes past it. I looked up what the Facebook privacy settings screen looked like in 2015. It looked like this:

[screenshot: Facebook privacy settings as of 2015]

If we follow the research findings in the security area, most users probably never saw this screen at all: people tend not to know about their own privacy and security settings.

But if you did find this screen, anyone who clicked “Friends” for any column surely could not have taken this to mean that their “privacy settings allowed” (Zuckerberg’s phrase) the harvesting of their data by an app that they never authorized and were not aware of.

This is presumably why Facebook disallowed this use of third-party data by apps well before this scandal. There is no third-party consent. So Zuckerberg’s claim that the Kogan/Cambridge Analytica app took information from people “whose privacy settings allowed it” seems a bridge too far.

The American Dream

A closing thought: It is hyperbole, I know, but I was struck by Sen. John Thune’s (R-SD) remark that “Facebook represents the American dream.” Didn’t The Social Network cover this ground? I don’t remember the plot that way. Did Thune just mean that Zuck got rich?

Zuck’s the Scorpion and We Are The Frog?

Zuckerberg: [investments] in security…will significantly impact our profitability going forward. But I want to be clear about what our priority is: protecting our community is more important than maximizing our profits.

This is a nice quote by Zuck because it highlights the key problem with Facebook’s position. The issue isn’t really “security” though. It’s the fact that Facebook is fundamentally in the business of harvesting user data and that negative, polarizing (and even inaccurate) ads and status updates are good for the platform. They promote engagement through outrage.

To crib from Marshall McLuhan, what is in the public interest is not necessarily what the public is interested in. Gory road accidents turn heads. But is that what our media should be showing?

Zuck’s comment also highlights that by asking Facebook to fix these problems, we are asking advertising-supported media to behave in a way that makes no sense for them and is contrary to their nature.

Let’s take a look at some of the ads placed by fake accounts controlled by the Internet Research Agency (Агентство интернет-исследований); they are extremely polarizing (american.veterans is a beard, or sock-puppet, account):

 

[screenshot: examples of the polarizing ads]

And it is now very clear, from both common sense and the Trump/Clinton Facebook CPM fracas, that polarizing ads on Facebook are much more likely to gain clicks.

The Trump campaign’s official ads were quintessentially negative and polarizing ads: they did things like try to associate the phrase “Crooked Hillary” with a winged bag of money.

Political ad spending is also a windfall that old media (radio and television stations) depended on, but there was no “click engagement” dimension with old media; old media left people with little to do. It seems possible that the new media political ad environment can create feedback loops with negative ads that are much more significant than the old ways of doing things.

Pay-For-Privacy

Another thing that struck me in early Q&A is the concern raised about a paid Facebook model. This was floated yesterday in a media interview and now Sen. Bill Nelson (D-FL) is asking Zuckerberg if it is true users would have to, as he put it, “pay for privacy.”

Nelson seems outraged. On the one hand, this outrage makes no sense: if Facebook switched completely away from an advertising model, it would be great for users, as it would completely redefine the company’s incentives.

However, I think what is being proposed is a half-pay, half-free system (or opt-in payments). If that’s the plan, the outrage is justified. Pay-for-privacy makes social media even more regressive.

Privacy is already regressive in the sense that only those people who have time to learn about risks and fiddle with endless (and endlessly changing) settings pages have any hope of protecting themselves. The current system rewards computer skill and free time. And even with those things users still may not be able to protect themselves, because the options just aren’t available.

But an opt-in payment system makes privacy even worse by taking these intangible regressive dimensions and putting a payment step on top of them. It’s not that people in an opt-in system will use either time or money to obtain privacy; rather, privacy will belong only to people who both have the time to follow this topic closely enough to know they need privacy in the first place and can afford to pay for it.

 

Big Social Won’t Be Fixed

I need to sign off because I can’t spend the day watching this. My summary so far: “Big Social” won’t be fixed by anything that was said here. The business models, institutions, and habits are too well-established and have too much inertia for a meaningful reconfiguration to come from the things I’ve heard so far.

Congratulations to the incoming SMC interns for summer 2018!

Another stellar crop of applicants poured in for the SMC internships this year, and another three emerged as the best of the best. Thanks to everyone who applied, it was painful not to accept more of you! For summer 2018, we’re thrilled to have these three remarkable students joining us in the Microsoft Research lab in New England, to conduct their own original research and to be part of the SMC community. (Remember that we offer these internships every summer: if you’re an advanced graduate student in the areas of communication, the anthropology or sociology of new media, information science, and related fields, watch this page for the necessary information.)

 

Robyn Caplan is a doctoral candidate at Rutgers University’s School of Communication and Information under the supervision of Professor Philip Napoli. For the last three years, she has also been a Researcher at the Data & Society Research Institute, working on projects related to platform accountability, media manipulation, and data and civil rights. Her most recent research explores how platforms and news media associations navigate content moderation decisions regarding trustworthy and credible content, and how current concerns regarding the rise of disinformation across borders are impacting platform governance, and national media and information policy. Previously she was a Fellow at the GovLab at NYU, where she worked on issues related to open data policy and use. She holds an MA from New York University in Media, Culture, and Communication, and a Bachelor of Science from the University of Toronto.

 

Michaelanne Dye is a Ph.D. candidate in Human-Centered Computing in the School of Interactive Computing at Georgia Tech. She also holds an M.A. in Cultural Anthropology. Michaelanne uses ethnographic methods to explore human-computer interaction and development (HCID) issues within social computing systems, paying attention to the complex factors that afford and constrain meaningful engagements with the internet in resource-constrained communities. Through fieldwork in Havana, Cuba, Michaelanne’s dissertation work examines how new internet infrastructures interact with cultural values and local constraints. Moreover, her research explores community-led information networks that have evolved in the absence of access to the world wide web – in order to explore ways to design more meaningful and sustainable engagements for users in both “developing” and “developed” contexts. Michaelanne’s work has been published in the conference proceedings of Human Factors in Computing Systems (CHI) and Computer-Supported Cooperative Work and Social Computing (CSCW).

 

Penny Trieu is a PhD candidate in the School of Information at the University of Michigan. She is a member of the Social Media Research Lab, where she is primarily advised by Nicole Ellison. Her research concerns how people can use communication technologies, particularly social media, to better support their interpersonal relationships. She also looks at identity processes, notably self-presentation and impression management, on social media. Her research has appeared in venues such as Information, Communication & Society and Social Media + Society, and at the International Communication Association conference. At the Social Media Collective, she will work on the dynamics of interpersonal feedback and self-presentation around ephemeral sharing via Instagram and Snapchat Stories.

Content moderation is not a panacea: Logan Paul, YouTube, and what we should expect from platforms

What do we expect of content moderation? And what do we expect of platforms?

There is an undeniable need, now more than ever, to reconsider the public responsibilities of social media platforms. For too long, platforms have enjoyed generous legal protections and an equally generous cultural allowance to be “mere conduits,” not liable for what users post to them. In the shadow of this protection, they have constructed baroque moderation mechanisms: flagging, review teams, crowdworkers, automatic detection tools, age barriers, suspensions, verification status, external consultants, blocking tools. They all engage in content moderation, but are not obligated to; they do it largely out of sight of public scrutiny, and are held to no official standards as to how they do so. This needs to change, and it is beginning to.

But in this crucial moment, one that affords such a clear opportunity to fundamentally reimagine how platforms work and what we can expect of them, we might want to get our stories straight about what those expectations should be.

The latest controversy involves Logan Paul, a twenty-two-year-old YouTube star with more than 15 million subscribers. His videos, a relentless barrage of boasts, pranks, and stunts, have garnered him legions of adoring fans. But he faced public backlash this week after posting a video in which he and his buddies ventured into the Aokigahara forest of Japan, only to find the body of a young man who had recently committed suicide. Rather than turning off the camera, Logan continued his antics, pinballing between awe and irreverence, showing the body up close and then turning the attention back to his own reaction. The video lingered on the body, including close-ups of his swollen hand, and Paul’s reactions were self-centered and cruel. After a blistering wave of criticism in the video comments and on Twitter, Paul removed the video and issued a written apology, which was itself criticized for not striking the right tone. A somewhat more heartfelt video apology followed. He later announced he would be taking a break from YouTube.

There is no question that Paul’s video was profoundly insensitive, an abject lapse in judgment. But amidst the reaction, I am struck by the press coverage of and commentary about the incident: the willingness to lump this controversy in with an array of other concerns about what’s online, as somehow all part of the “content moderation” problem, paired with a persistent and unjustified optimism about what content moderation should be able to handle.

YouTube has weathered a series of controversies over the course of the last year, many of which had to do with children, both their exploitation and their vulnerability as audiences. There was the controversy about popular vlogger PewDiePie, condemned for including anti-Semitic humor and Nazi imagery in his videos. Then there were the videos that slipped past the stricter standards YouTube has for its Kids app: amateur versions of cartoons featuring well-known characters with weirdly upsetting narrative third acts. That was quickly followed by the revelation of entire YouTube channels of videos in which children were being mistreated, frightened and exploited, that seem designed to skirt YouTube’s rules against violence and child exploitation. And just days later, Buzzfeed also reported that YouTube’s autocomplete displayed results that seemed to point to child sexual exploitation. YouTube representatives have apologized for all of these, promised to increase the number of moderators reviewing their videos, aggressively pursue better artificial intelligence solutions, and remove advertising from some of the questionable channels.

Content moderation, and different kinds of responsibility

But what do these incidents have in common, besides the platform? Journalists and commentators are eager to lump them together: part of a single condemnation of YouTube, its failure to moderate effectively, and its complicity with the profits made by producers of salacious or reprehensible content. But these incidents represent different kinds of problems; they implicate YouTube and content moderation in different ways — and, when lumped together, they suggest a contradictory set of expectations we have for platforms and their public responsibility.

Platforms assert a set of normative standards, guidelines by which users are expected to comport themselves. It is difficult to convince every user to honor these standards, in part because the platforms have spent years promising users an open and unfettered playing field, inviting users to do or say whatever they want. And it is difficult to enforce these standards, in part because the platforms have few of the traditional mechanisms of governance: they can’t fire us; we are not salaried producers. All they have are the terms of service and the right to delete content and suspend users. And there are competing economic incentives for platforms to be more permissive than they claim to be, and to treat high-value producers differently than the rest.

Incidents like the exploitative videos of children, or the misleading amateur cartoons, take advantage of this system. They live amidst this enormous range of videos, some subset of which YouTube must remove. Some come from users who don’t know or care about the rules, or find what they’re making perfectly acceptable. Others are deliberately designed to slip past moderators, either by going unnoticed or by walking right up to but not across the community guidelines. They sometimes require hard decisions about speech, community, norms, and the right to intervene.

Logan Paul’s video, like PewDiePie’s racist outbursts, is of a different sort. As was clear in the news coverage and the public outrage, critics were troubled by Logan Paul’s failure to consider his responsibility to his audience, to show more dignity as a videomaker, to choose sensitivity over sensationalism. The fact that he has 15 million subscribers, many of them young, was reason for many to claim that he (and by implication, YouTube) has a greater responsibility. These sound more like traditional media concerns: the effects on audiences, the responsibilities of producers, the liability of providers. This could just as easily be a discussion about Ashton Kutcher and an episode of Punk’d. What would Kutcher’s, his production team’s, and MTV’s responsibility be if he had similarly crossed the line with one of his pranks?

But MTV was in a structurally different position than YouTube. We expect MTV to be accountable for a number of reasons: they had the opportunity to review the episode before broadcasting it; they employed Kutcher and his team, affording them specific power to impose standards; and they chose to hand him the megaphone in the first place. While YouTube also affords Logan Paul a way to reach millions, and he and YouTube share advertising revenue from popular videos, these offers are in principle made to all YouTube users. YouTube is a distribution platform, not a distribution bottleneck — or it is a bottleneck of a very different shape. This does not mean we cannot or should not hold YouTube accountable. We could decide as a society that we want YouTube to meet exactly the same responsibilities as MTV, or more. But we must take into account that these structural differences change not only what YouTube can do, but how and why we can expect it of them.

Moreover, is content moderation the right mechanism to manage this responsibility? Or to put it another way, what would the critics of Logan’s video have wanted YouTube to do? Some argued that YouTube should have removed the video before Paul did. (It seems the video was reviewed and was not removed, but that Paul received a “strike” on his account, a kind of warning — we know this only based on this evidence. If you want to see the true range of disagreement about what YouTube should have done, just read down the lengthy thread of comments that followed this tweet.) In its PR response to the incident, a YouTube representative said it should have taken the video down for being “shocking, sensational or disrespectful.” But it is not self-evident that Paul’s video violates YouTube’s policies. And judging from critics’ comments, it was Paul’s blithe, self-absorbed commentary, the tenor he took about the suicide victim he found, as much as the showing of the body itself, that was so troubling. Showing the body, lingering on its details, was part of Paul’s casual indifference, but so were his thoughtless jokes and exaggerated reactions. Is it so certain that YouTube should have removed this video on our behalf? I do not mean to imply that the answer is no, or that it is yes. I am only noting that this is not an easy case to adjudicate — which is precisely why we shouldn’t expect YouTube to already have a clean and settled policy toward it.

There’s no simple answer as to where such lines should be drawn. Every bright-line rule YouTube might draw will be plagued with “what abouts.” Is it that corpses should not be shown in a video? What about news footage from a battlefield? What about public funerals? Should the prohibition be specific to suicide victims, out of respect? It would be reasonable to argue that YouTube should allow a tasteful documentary about the Aokigahara forest, concerned with the high rates of suicide among Japanese men. Such a video might even, for educational or provocative reasons, include images of the body of a suicide victim, or evidence of their death. In fact, YouTube already has some, of a variety of qualities (see 1, 2, 3, 4).

So what we critics may be implying is that YouTube should be responsible for distinguishing the insensitive versions from the sensitive ones. Again, this sounds more like the kind of expectation we had for television networks — which is fine if that’s what we want, but we should admit that this would be asking much more of YouTube than we might think.

As a society, we’ve already struggled with this very question in traditional media: Should the news show the coffins of U.S. soldiers as they are returned from war? Should the news show the grisly details of crime scenes? When is typically-too-graphic video acceptable because it is newsworthy, educational, or historically relevant? Not only is the answer far from clear, it differs across cultures and periods. As a society, we need to engage in the debate; it cannot be answered for us by YouTube alone.

These moments of violation serve as the spark for that debate. It may be that all this condemnation of Logan Paul, in the comment threads on YouTube, on Twitter, and in the press coverage, is the closest we get to a real, public consideration of what’s appropriate for public consumption. And maybe the focus among critics on Paul’s irresponsibility, as opposed to YouTube’s, indicates that this is not a moderation question, or that there is a growing public sense that we cannot rely on YouTube’s moderation, that we need to cultivate a clearer sensibility of what public culture should look like, and teach creators to take their public responsibility more seriously. (Though even if so, there will always be a new wave of twenty-year-olds waiting in the wings who will jump at the chance social media offers to show off for a crowd, well before they ever grapple with whatever social norms we may have worked out. This is why we need to keep having this debate.)

How exactly YouTube is complicit in the choices of its stars

This is not to suggest that platforms bear no responsibility for the content that they help circulate. Far from it. YouTube is implicated, in that they afford the opportunity for Logan to broadcast his tasteless video, help him gather millions of viewers who will have it instantly delivered to their feed, design and tune the recommendation algorithms that amplify its circulation, and profit enormously from the advertising revenue it accrues.

Some critics are doing the important work of putting platforms under scrutiny, to better understand the way producers and platforms are intertwined. But it is awfully tempting to draw too simple a line between the phenomenon and the provider, to paint platforms with too broad a brush. The press loves villains, and YouTube is one right now. But we err when we draw these lines of complicity too cleanly. Yes, YouTube benefits financially from Logan Paul’s success. That by itself does not prove complicity; it needs to be a feature of our discussion about complicity. We might want revenue sharing to come with greater obligations on the part of the platform; or, we might want platforms to be shielded from liability or obligation no matter what the financial arrangement; or, we might want equal obligations whether there is revenue shared or not; or we might want obligations to attend to popularity rather than revenue. These are all possible structures of accountability.

It is also easy to say that YouTube drives vloggers like Logan Paul to be more and more outrageous. If video makers are rewarded based on the number of views, whether that reward is financial or just reputational, it stands to reason that some videomakers will look for ways to increase those numbers, including going bigger. But it is not clear that metrics of popularity necessarily or only lead to ever more outrageousness, and there’s nothing about this tactic that is unique to social media. Media scholars have long noted that being outrageous is one tactic producers use to cut through the clutter and grab viewers, whether it’s blaring newspaper headlines, trashy daytime talk shows, or sexualized pop star performances. That is hardly unique to YouTube. And YouTube videomakers are pursuing a number of strategies to seek popularity and the rewards therein, outrageousness being just one. Many more seem to depend on repetition, building a sense of community or following, interacting with individual subscribers, and the attempt to be first. While over-caffeinated pranksters like Logan Paul might try to one-up themselves and their fellow vloggers, that is not the primary tactic for unboxing vidders or Minecraft world builders or fashion advisers or lip syncers or television recappers or music remixers. Others see Paul as part of a “toxic YouTube prank culture” that migrated from Vine, which is another way to frame YouTube’s responsibility. But a genre may develop, and a provider profiting from it may look the other way or even encourage it; that does not answer the question of what responsibility they have for it, it only opens it.

To draw too straight a line between YouTube’s financial arrangements and Logan Paul’s increasingly outrageous shenanigans misunderstands both the economic pressures of media and the complexity of popular culture. It ignores the lessons of media sociology, which make clear that the relationship between the pressures imposed by industry and the creative choices of producers is much more complex and dynamic. And it does not prove that content moderation is the right way to address this complicity.

*   *   *

Let me say again: Paul’s video was in poor, poor taste, and he deserves all of the criticism he received. And I find this genre of boffo, entitled, show-off masculinity morally problematic and just plain tiresome. And while it may sound like I am defending YouTube, I am definitely not. Along with the other major social media platforms, YouTube has a greater responsibility for the content it circulates than it has thus far acknowledged; it has built a content moderation mechanism that is too reactive, too dismissive, and too opaque, and it is due for a public reckoning. In the last few years, the workings of content moderation and its fundamental limitations have come to light, and this is good news. Content moderation should be more transparent, and platforms should be more accountable, not only for what traverses their systems but also for the ways in which they are complicit in its production, circulation, and impact. But it also seems we are too eager to blame all things on content moderation, and to expect platforms to maintain a perfectly honed moral outlook every time we are troubled by something we find there. Acknowledging that YouTube is not a mere conduit does not imply that it is exclusively responsible for everything available there.

As Davey Alba at Buzzfeed argued, “YouTube, after a decade of being the pioneer of internet video, is at an inflection point as it struggles to control the vast stream of content flowing across its platform, balancing the need for moderation with an aversion toward censorship.” This is true. But we are also at an inflection point of our own. After a decade of embracing social media platforms as key venues for entertainment, news, and public exchange, and in light of our growing disappointment with the harassment, hate, and obscenity that proliferate there, we too are struggling: to modulate exactly what we expect of platforms and why, and to balance how to improve the public sphere with what role intermediaries can reasonably be asked to take.

This essay is cross-posted at Culture Digitally. Many thanks to Dylan Mulvin for helping me think this through.

Call for applications! 2018 summer internship, MSR Social Media Collective

APPLICATION DEADLINE: JANUARY 19, 2018

Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week internship program. The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdocs Dan Greene and Dylan Mulvin) brings together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Learn more about us here.

MSRNE internships are 12-week paid stays in our lab in Cambridge, Massachusetts. During their stay, SMC interns are expected to devise and execute their own research project, distinct from the focus of their dissertation (see the project requirements below). The expected outcome is a draft of a publishable scholarly paper for an academic journal or conference of the intern’s choosing. Our goal is to help the intern advance their own career; interns are strongly encouraged to work towards a creative outcome that will help them on the academic job market.

The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

Primary mentors for this year will be Nancy Baym and Tarleton Gillespie, with additional guidance offered by other members of the SMC. We are looking for applicants working in one or more of the following areas:

  1. Personal relationships and digital media
  2. Audiences and the shifting landscapes of producer/consumer relations
  3. Affective, immaterial, and other frameworks for understanding digital labor
  4. How platforms, through their design and policies, shape public discourse
  5. The politics of algorithms, metrics, and big data for a computational culture
  6. The interactional dynamics, cultural understanding, or public impact of AI chatbots or intelligent agents

Interns are also expected to give short presentations on their project, contribute to the SMC blog, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. There are also natural opportunities for collaboration with SMC researchers and visitors, and with others currently working at MSRNE, including computer scientists, economists, and mathematicians. PhD interns are expected to be on-site for the duration of their internship.

Applicants must have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective comprises full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

  • How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
  • How are social media platforms, through their algorithmic design and user policies, taking up the role of custodians of public discourse? (Tarleton Gillespie)
  • What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
  • How do public institutions like schools and libraries prepare workers for the information economy, and how are they changed in the process? (Dan Greene)
  • How are media standards made, and what do their histories tell us about the kinds of things we can represent? (Dylan Mulvin)

SMC PhD interns may also have the opportunity to connect with our sister Social Media Collective members in New York City. Related projects in New York City include:

  • What are the politics, ethics, and policy implications of artificial intelligence and data science? (Kate Crawford, MSR-NYC)
  • What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

For more information about the Social Media Collective, and a list of past interns, visit the About page of our blog. For a complete list of all permanent researchers and current postdocs based at the New England lab, see: http://research.microsoft.com/en-us/labs/newengland/people/bios.aspx

 

COMPENSATION, RELOCATION, AND BENEFITS:

  • highly competitive salary
  • travel to/from internship location from your university location (including the intern and all eligible dependents)
  • housing costs: interns can select one of two housing options
    • fully furnished corporate housing covered by Microsoft
    • a lump sum for finding and securing your own housing
  • local transportation allowance for commuting
  • health insurance is not provided by default; most interns stay covered under their university insurance, but interns are eligible to enroll in a Microsoft-sponsored medical plan
  • internship events and activities

 

APPLICATION PROCESS

To apply for a PhD internship with the Social Media Collective, fill out the online application form: https://careers.research.microsoft.com/

On the application website, please indicate that your research area of interest is “Anthropology, Communication, Media Studies, and Sociology” and that your location preference is “New England, MA, U.S.” in the pull-down menus. Also enter the name of a mentor (Nancy Baym or Tarleton Gillespie) whose work most directly relates to your own in the “Microsoft Research Contact” field. IF YOU DO NOT MARK THESE PREFERENCES WE WILL NOT RECEIVE YOUR APPLICATION. So, please, make sure to follow these detailed instructions.

Your application needs to include:

  1. A short description (no more than 2 pages, single spaced) of 1 or 2 projects that you propose to do while interning at MSRNE, independently and/or in collaboration with current SMC researchers. The project proposals can be related to, but must be distinct from, your dissertation research. Be specific and tell us:
    • What is the research question animating your proposed project?
    • What methods would you use to address your question?
    • How does your research question speak to the interests of the SMC?
    • Who do you hope to reach (who are you engaging) with this proposed research?
  2. A brief description of your dissertation project.
  3. An academic article-length manuscript (~7,000 words or more) that you have authored or co-authored (published or unpublished) that demonstrates your writing skills.
  4. A copy of your CV.
  5. The names and contact information for 3 references (one must be your dissertation advisor).
  6. If available, pointers to your website or other online presence (this is not required).

A request for letters will be sent directly to your list of referees, on your behalf. IMPORTANT: THE APPLICATION SYSTEM WILL NOT REQUEST THOSE REFERENCE LETTERS UNTIL AFTER YOU HAVE SUBMITTED YOUR APPLICATION! Please warn your letter writers in advance so that they will be ready to submit them when they receive the prompt. The email they receive will automatically tell them they have two weeks to respond. Please ensure that they expect this email (tell them to check their spam folders, too!) and are prepared to submit your letter by our application deadline.  You can check the progress on individual reference requests at any time by clicking the status tab within your application page. Note that a complete application must include three submitted letters of reference.

If you have any questions about the application process, please contact Tarleton Gillespie at tarleton@microsoft.com and include “SMC PhD Internship” in the subject line.

 

TIMELINE

Due to the volume of applications, late submissions (including submissions with late letters of reference) will not be considered. We will not be able to provide specific feedback on individual applications. Finalists will be contacted in early February to arrange a Skype interview. Applicants chosen for the internship will be informed in March and announced on the socialmediacollective.org blog.

 

 

PREVIOUS INTERN TESTIMONIALS

“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.”
— Laura Noren, Sociology, New York University

“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.”
— Jessica Lingel, Library and Information Science, Rutgers University

“My internship experience at MSRNE was eye-opening, mind-expanding and happy-making. If you are looking to level up as a scholar – reach new depth in your focus area, while broadening your scope in directions you would never dream up on your own; and you’d like to do that with the brightest, most inspiring and supportive group of scholars and humans – then you definitely want to apply.”
— Kat Tiidenberg, Sociology, Tallinn University, Estonia

“The Microsoft Internship is a life-changing experience. The program offers structure and space for emerging scholars to find their own voice while also engaging in interdisciplinary conversations. For social scientists especially the exposure to various forms of thinking, measuring, and problem-solving is unparalleled. I continue to call on the relationships I made at MSRNE and always make space to talk to a former or current intern. Those kinds of relationships have a long tail.”
— Tressie McMillan Cottom, Sociology, Emory University

“My summer at MSR New England has been an important part of my development as a researcher. Coming right after the exhausting, enriching ordeal of general/qualifying exams, it was exactly what I needed to step back, plunge my hands into a research project, and set the stage for my dissertation… PhD interns are given substantial intellectual freedom to pursue the questions they care about. As a consequence, the onus is mostly on the intern to develop their research project, justify it to their mentors, and do the work. While my mentors asked me good, supportive, and often helpfully hard, critical questions, my relationship with them was not the relationship of an RA to a PI; instead it was the relationship of a junior colleague to senior ones.”
— J. Nathan Matias, Media Lab, MIT (read more here)

“This internship provided me with the opportunity to challenge myself beyond what I thought was possible within three months. With the SMC’s guidance, support, and encouragement, I was able to reflect deeply about my work while also exploring broader research possibilities by learning about the SMC’s diverse projects and exchanging ideas with visiting scholars. This experience will shape my research career and, indeed, my life for years to come.”
— Stefanie Duguay, Communication, Queensland University of Technology

“There are four main reasons why I consider the summer I spent as an intern with the Social Media Collective to be a formative experience in my career. 1. was the opportunity to work one-on-one with the senior scholars on my own project, and the chance to see “behind the scenes” on how they approach their own work. 2. The environment created by the SMC is one of openness and kindness, where scholars encourage and help each other do their best work. 3. hearing from the interdisciplinary members of the larger MSR community, and presenting work to them, required learning how to engage people in other fields. And finally, 4. the lasting effect: Between senior scholars and fellow interns, you become a part of a community of researchers and create friendships that extend well beyond the period of your internship.”
— Stacy Blasiola, Communication, University of Illinois Chicago

“My internship with Microsoft Research was a crash course in what a thriving academic career looks like. The weekly meetings with the research group provided structure and accountability, the stream of interdisciplinary lectures sparked intellectual stimulation, and the social activities built community. I forged relationships with peers and mentors that I would never have met in my graduate training.”
— Kate Zyskowski, Anthropology, University of Washington

“It has been an extraordinary experience for me to be an intern at Social Media Collective. Coming from a computer science background, communicating and collaborating with so many renowned social science and media scholars teaches me, as a researcher and designer of socio-technical systems, to always think of these systems in their cultural, political and economic context and consider the ethical and policy challenges they raise. Being surrounded by these smart, open and insightful people who are always willing to discuss with me when I met problems in the project, provide unique perspectives to think through the problems and share the excitements when I got promising results is simply fascinating. And being able to conduct a mixed-method research that combines qualitative insights with quantitative methodology makes the internship just the kind of research experience that I have dreamed for.”
— Ming Yin, Computer Science, Harvard University

“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.”
— Germaine Halegoua, Communication Arts, University of Wisconsin-Madison

“Not only was I able to work with so many smart people, but the thoughtfulness and care they took when they engaged with my research can’t be stressed enough. The ability to truly listen to someone is so important. You have these researchers doing multiple, fascinating projects, but they still make time to help out interns in whatever way they can. I always felt I had everyone’s attention when I spoke about my project or other issues I had, and everyone was always willing to discuss any questions I had, or even if I just wanted clarification on a comment someone had made at an earlier point. Another favorite aspect of mine was learning about other interns’ projects and connecting with people outside my discipline.”
— Jolie Matthews, Education, Stanford University

We are hiring a Postdoc

The Social Media Collective at Microsoft Research New England (MSRNE) is looking for a social media postdoctoral researcher (start date: July, 2018). This position is an ideal opportunity for a scholar whose work draws on anthropology, communication, media studies, sociology, and/or science and technology studies to bring empirical and critical perspectives to complex socio-technical issues. Application deadline: 1 December 2017. This year, we will also consider applications for a possible candidate slot, based in SMC, bridging SMC and one or more areas of the MSRNE lab, including machine learning, bioinformatics, cryptography, algorithmic game theory, and economics.

Microsoft Research provides a vibrant multidisciplinary research environment, with an open publications policy and close links to top academic institutions around the world. Postdoctoral researcher positions provide emerging scholars (PhDs received late 2017 or to be conferred by July 2018) an opportunity to develop their research career and to interact with some of the top minds in the research community. Postdoctoral researchers define their own research agenda. Successful candidates will have a well-established research track record as demonstrated by journal publications and conference papers, as well as participation on program committees, editorial boards, and advisory panels.

While each of the Microsoft Research labs has openings in a variety of different disciplines, this position with the Social Media Collective at Microsoft Research New England specifically seeks social science/humanities candidates with critical approaches to their topics. Qualifications include a strong academic record in anthropology, communication, media studies, sociology, science and technology studies, or a related field. The ideal candidate may be trained in any number of disciplines, but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to technology or the internet and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

The Social Media Collective comprises full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

– How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)

– How are social media platforms, through algorithmic design and user policies, adopting the role of intermediaries for public discourse? (Tarleton Gillespie)

– What are the cultural, political, and economic implications of on-demand contract work as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)

– How do standards, defaults, and infrastructures encode our assumptions about human behavior and perception? (Dylan Mulvin)

– How are public and private institutions training people for the future of work, and deciding who should be included in that future? (Dan Greene)

SMC postdocs may have the opportunity to visit and collaborate with our sister Social Media Collective members in New York City. Related projects in New York City include:

– What are the politics, ethics, and policy implications of big data science? (Kate Crawford, MSR-NYC, AI Now)

– What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

Postdoctoral researchers receive a competitive salary and benefits package, and are eligible for relocation expenses.  Postdoctoral researchers are hired for a two-year term appointment following the academic calendar, starting in July 2018. Applicants must have completed the requirements for a PhD, including submission of their dissertation, prior to joining Microsoft Research. We encourage those with tenure-track job offers from other institutions to apply, so long as they can defer their start date to accept our position.

Microsoft does not discriminate against any applicant on the basis of age, ancestry, color, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.

To apply for a postdoc position at MSRNE:

Submit an online application here.

– On the application website, indicate that your research area of interest is “Anthropology, Communication, Media Studies, and Sociology” and that your location preference is “New England, MA, U.S.” in the pull down menus. IF YOU DO NOT MARK THESE PREFERENCES WE WILL NOT RECEIVE YOUR APPLICATION. 

– In addition to your CV and names of three referees (including your dissertation advisor) that the online application requires, upload the following 3 attachments with your online application:

  1. two journal articles, book chapters, or equivalent writing samples (uploaded as two separate attachments);
  2. a single research statement (four page maximum length) that does the following: outlines the questions and methodologies central to your research agenda (~two pages); provides an abstract and chapter outline of your dissertation (~one page); offers a description of how your research agenda relates to research conducted by the Social Media Collective (~one page)

After you submit your application, a request for letters will be sent to your list of referees on your behalf. NOTE: THE APPLICATION SYSTEM WILL NOT REQUEST REFERENCE LETTERS UNTIL AFTER YOU HAVE SUBMITTED YOUR APPLICATION! Please warn your letter writers in advance so that they will be ready to submit them when they receive the prompt. The email they receive will automatically tell them they have two weeks to respond but that an individual call for applicants may have an earlier deadline. Please ensure that they expect this and are prepared to submit your letter by our application deadline of December 1, 2017. Please make sure to check back with your referees if you have any questions about the status of your requested letters of recommendation. You can check the progress on individual reference requests at any time by clicking the status tab within your application page. Note that a complete application must include three submitted letters of reference.

For more information, see here.

Feel free to ask questions about the position in the comments below.

 

 

Heading to the Courthouse for Sandvig v. Sessions

E. Barrett Prettyman Federal Courthouse, Washington, DC

(or: Research Online Should Not Be Illegal)

I’m a college professor. But on Friday morning I won’t be in the classroom, I’ll be in courtroom 30 in the US District Courthouse on Constitution Avenue in Washington DC. The occasion? Oral arguments on the first motion in Sandvig v. Sessions.

You may recall that the ACLU, academic researchers (including me), and journalists are bringing suit against the government to challenge the constitutionality of “The Worst Law in Technology” — the US law that criminalizes most online research. Our hopes are simple: Researchers and reporters should not fear prosecution or lawsuits when we seek to obtain information that would otherwise be available to anyone, by visiting a Web site, recording the information we see there, and then publishing research results based on what we find.

As things stand, the misguided US anti-hacking law, called the Computer Fraud and Abuse Act (CFAA), makes it a crime if a computer user “exceeds authorized access.” What is authorized access to a Web site? Previous court decisions and the federal government have defined it as violating the site’s own stated “Terms of Service” (ToS), but that’s ridiculous. The ToS is a wish-list of what corporate lawyers dream about, written by corporate lawyers. (Crazy example, example, example.) ToS sometimes prohibit people from using Web sites for research; they prohibit users from saying bad things about the corporation that runs the Web site; they prohibit users from writing things down. They should not be made into criminal violations of the law.

In the latest developments of our case, the government has argued that Web servers are private property, and that anyone who exceeds authorized access is trespassing “on” them. (“In” them? “With” them? It’s a difficult metaphor.) In other cases the CFAA was used to say that because Web servers are private, users are also wasting capacity on these servers, effectively stealing a server’s processing cycles that the owner would rather use for other things. I visualize a cartoon thief with a bag of electrons.

Are Internet researchers and data journalists “trespassing” and “stealing”? These are the wrong metaphors. Lately I’ve been imagining what would have happened in the world of print if the CFAA’s metaphors had been our guide back when the printing press was invented.

If you picked up a printed free newspaper like Express, the Metro, or the Chicago Reader at a street corner and the CFAA applied to it, there would be a lengthy “Terms of Readership” printed on an inside page in very small type. Since these are advertising-supported publications, it would say that people who belong to undesirable demographics are trespassing on the printed page if they attempt to read it. After all, the newspaper makes no money from readers who are not part of a saleable advertising audience. In fact, since the printing presses are private property, unwanted readers are stealing valuable ink and newsprint that should be reserved for the paper’s intended readers. To cover all the bases, readers would be forbidden from writing anything based on what they read in the paper if the paper’s owners wouldn’t like it. And readers could be sued by the newspaper or prosecuted by the federal government if they did any of these things. The scenario sounds foolish and overblown, but it’s the way that Web sites work now under the CFAA.

Another major government argument has been that we researchers and journalists have nothing to be concerned about because prosecutors will use this law with the appropriate discretion. Any vagueness is OK because we can trust them. Concern by researchers and reporters is groundless.

Yet federal prosecutors have a terrible record when it comes to the CFAA. And the idea that online platforms want to silence research and journalism is not speculative. After our lawsuit was filed, the Streaming Heritage research team funded by the Swedish Research Council (similar to the US National Science Foundation) received shocking news: Spotify’s lawyers had contacted the Research Council and asked the council to take “resolute action” against the project, suggesting it had violated “applicable law.” Professors Snickars, Vonderau, and others were studying the Spotify platform. What “law” did Spotify claim was being violated? The site’s own Terms of Service. (Here’s a description of what happened. Note: It’s in Swedish.)

This demand occurred just after a member of the research team appeared in a news story that characterized Spotify in a way that Spotify apparently did not like. Luckily, Sweden does not have the CFAA, and terms of service there do not hold the force of law. The Research Council repudiated Spotify’s claim that research studying private platforms was unethical and illegal if it violated the terms of service. Researchers and journalists in other countries need the same protection.

More Information

The full text of the motions in the case is available on the ACLU Web site. In our most recent filing there is an excellent summary of the case and the issues, starting on p. 6. You do not need to read the earlier filings for this to make sense.

There was a burst of news coverage when our lawsuit was filed. Standout pieces include the New Yorker’s “How an Old Hacking Law Hampers the Fight Against Online Discrimination” and “When Should Hacking Be Legal?” in The Atlantic.

The ACLU’s Rachel Goodman has recently published a short summary of how to do research under the shadow of the CFAA. It is titled as a tipsheet for “Data Journalism” but it applies equally well to academic researchers. A longer version co-authored with Esha Bhandari is also available.

(Note that I filed this lawsuit as a private citizen and it does not involve my university.)

IMAGE CREDIT: AgnosticPreachersKid via Wikimedia Commons

We’re Hiring a Research Assistant

The Social Media Collective is looking for a Research Assistant to work with us at Microsoft Research New England in Cambridge, Massachusetts.

The MSR Social Media Collective currently consists of Nancy Baym, Tarleton Gillespie, Mary L. Gray, Dan Greene, and Dylan Mulvin in Cambridge, Kate Crawford and danah boyd in New York City, as well as faculty visitors and Ph.D. interns affiliated with MSR New England. The RA will take over from current RA Sarah Hamid and will work directly with Nancy Baym, Tarleton Gillespie, and Mary L. Gray.

An appropriate candidate will be a self-starter who is passionate and knowledgeable about the social and cultural implications of technology. Strong skills in writing, organization, and academic research are essential, as are time management and multitasking. Minimal qualifications are a BA or equivalent degree in a humanities or social science discipline and some qualitative research training. A master’s degree is preferred.

Job responsibilities will include:

– Sourcing and curating relevant literature and research materials
– Developing literature reviews and/or annotated bibliographies
– Coding ethnographic and interview data
– Copyediting manuscripts
– Working with academic journals on themed sections
– Assisting with research project data management and event organization

The RA will also have opportunities to collaborate on ongoing projects. While publication is not guaranteed, the RA will be encouraged to co-author papers while at MSR. The RAship will require 40 hours per week on site in Cambridge, MA. It is a 6-month contractor position, which we expect to extend for an additional 6–12 months. The position pays hourly, with flexible daytime hours. The start date will ideally be January 9, although there may be some flexibility for the right candidate.

This position is perfect for emerging scholars planning to apply to PhD programs in Communication, Media Studies, Sociology, Anthropology, Information Studies, History, Philosophy, STS and Critical Data Studies, and related fields who want to develop their research skills and area expertise before entering a graduate program. Current New England-based MA/PhD students are welcome to apply provided they can commit to 40 hours of on-site work per week.

To apply, please send an email to Nancy Baym (baym@microsoft.com) with the subject “RA Application” and include the following attachments:

– One-page (single-spaced) personal statement, including a description of research experience and training, interests, and professional goals
– CV or resume
– Writing sample (preferably a literature review or a scholarly-styled article)
– Links to online presence (e.g., blog, homepage, Twitter, journalistic endeavors, etc.)
– The names and email addresses of two recommenders

Be sure to include your last name in file names of all documents you attach.

We will begin reviewing applications on October 15. We hope to make a hiring decision in early November.

We regret that because this is a time-limited contract position, we can only consider candidates who are already legally authorized to work in the United States.

Please feel free to ask questions about the position in the blog comments.

Big Data Surveillance: The Case of Policing

Former SMC Postdoctoral Researcher, Sarah Brayne (University of Texas at Austin), has recently published a piece in the American Sociological Review about police use of big data.

The article draws on over two and a half years of fieldwork with the Los Angeles Police Department, including observations from ride-alongs in patrol cars and interviews at the Joint Regional Intelligence Center (the “fusion center”) in Southern California.

Abstract: This article examines the intersection of two structural developments: the growth of surveillance and the rise of “big data.” Drawing on observations and interviews conducted within the Los Angeles Police Department, I offer an empirical account of how the adoption of big data analytics does—and does not—transform police surveillance practices. I argue that the adoption of big data analytics facilitates amplifications of prior surveillance practices and fundamental transformations in surveillance activities. First, discretionary assessments of risk are supplemented and quantified using risk scores. Second, data are used for predictive, rather than reactive or explanatory, purposes. Third, the proliferation of automatic alert systems makes it possible to systematically surveil an unprecedentedly large number of people. Fourth, the threshold for inclusion in law enforcement databases is lower, now including individuals who have not had direct police contact. Fifth, previously separate data systems are merged, facilitating the spread of surveillance into a wide range of institutions. Based on these findings, I develop a theoretical model of big data surveillance that can be applied to institutional domains beyond the criminal justice system. Finally, I highlight the social consequences of big data surveillance for law and social inequality.

You can read the full article here.