Welcome our new SMC postdoc, Elena Maris!

We’re thrilled to announce our newest postdoc in the Social Media Collective, based in the New England lab of Microsoft Research!

Elena Maris, University of Pennsylvania

Elena received her Ph.D. in Communication from the Annenberg School for Communication at the University of Pennsylvania. Her research examines the ways media/tech industries and audiences work to influence each other, with a focus on their technological tactics and the roles of gender and sexuality in their interactions. She also studies how identity is represented and experienced in popular culture and online. Her dissertation explored how online audience groups construct media industries and opportunities for influencing media content, a concept she called the “imagined industry.” Elena returns to MSRNE after interning with the SMC in 2017, and will continue working on the project she began then, on industries’ use of metrics to measure fandom. She is also starting a new project about the increased demand for qualitative understandings of technology, big data, metrics, and analytics in the tech industries, and the gendering of such ‘soft’ data work. Elena’s work has been published in Critical Studies in Media Communication, the European Journal of Cultural Studies, and Feminist Media Studies.

It’s of course hard to celebrate the choice of one, when we also had to say no to so many superb candidates. We are so honored and humbled by the quality and range of scholars who want to come work with us, and offer our best wishes to those we couldn’t bring in as well.

Read an excerpt from Mike Ananny’s new book, Networked Press Freedom

In my new book Networked Press Freedom: Creating Infrastructures for a Public Right to Hear [MIT Press | Amazon] I critically examine what press freedom means today.  I argue that, as news production, circulation, and interpretation are increasingly distributed across a new and unstable set of humans and nonhumans—from journalists and algorithms to platform designers and bots—it is increasingly difficult to say exactly what press freedom means.  What is the press trying to be free from?  To what ends and for which versions of the public?  How do we recognize a free versus an unfree press?

I define networked press freedom as a system of separations and dependencies among humans and nonhumans that helps to ensure not only journalists’ right to speak but publics’ rights to hear.  Engaging with a wide range of literature and analyzing a 7-year corpus of digital news examples, I argue that the networked press earns its freedom to the extent that it creates defensible publics.  Instead of only seeing press freedom as journalists’ right to pursue their visions of the public free from governments, markets, and technologies, the book tells a nuanced and historically grounded story that helps readers ask: what kind of public, what kind of freedom, and what kind of press?  Below is an excerpt. (This excerpt was first posted at the Nieman Lab.)


 

What, exactly, is press freedom, and why does it matter? In the popular discourse of the United States, we do not ask this question very often or very deeply. The answers are obvious and almost cliché: the public has a right to know, journalists are the people’s watchdogs, they afflict the comfortable and comfort the afflicted, democracy dies in darkness, and voters need objective information to be good citizens. Popular histories of modern U.S. journalism celebrate heroes who spoke truth to power and brought down institutions—Ida B. Wells, Nellie Bly, Ida Tarbell, Edward R. Murrow, I. F. Stone, Bob Woodward, Carl Bernstein, Walter Cronkite. They often are remembered as most effective when they were left alone to pursue their visions of what they thought the public needed. These virtuous, creative, public-spirited, hard-working storytellers occupy powerful positions within the modern mythology of press freedom. If we just get out of the way of good journalists and let them tell truth to power, they will produce the information that vibrant democracies need.

This myth is somewhat true, and these heroes were indeed expert storytellers who challenged each era’s norms. But when we think about press freedom only or even mostly as the freedom of journalists from constraints, it becomes a narrow and almost magical phenomenon that depends on individuals and heroism. It says that journalists already know what the public needs, and just need freedom from the state, marketplaces, and audiences to pursue self-evident things like truth and the public interest. These brave journalists and publishers show their commitment to the public and the power of their independence by going to court and sometimes jail to protect sources and fight censorship. If journalists and publishers can get truth to the public, then individual readers and viewers will be able to make informed decisions about how to think and vote. Ultimately, the press wants to be left alone so that you can be left alone. The kind of democracy that dominates this common image of press freedom relies on a lot of independences—a lot of freedoms from.

This book tries to challenge this mythology. I want to complicate the idea of press freedom and show that it emerges not from individual heroes but from social, technological, institutional, and normative forces that vie for power, imagine publics, and implicitly fight for visions of democracy. I see press freedom as a concept to think with—a generative and constructive tool for looking at any given era of the press and public life and asking, “Is this version of press freedom giving us the kind of publics we need? If not, how do we revise the institutional arrangements underpinning press freedom and make a different thing that we agree to call ‘the press’?” Alternatively, how do we adjust our normative expectations about what publics should be, creating a different image of freedom that we then might demand from institutions that make up the press? If we see press freedom not as heroic isolations—journalists breaking free to tell truths to the publics they imagine—but as a subtler system of separations and dependencies that make publics, then we might see each era’s types of press freedom as bellwethers for particular visions of the public. Ideas of press freedom become evidence of thinking about publics. Rethinking press freedom can be a way to see how press power flows, a prompt to ask which flows produce which publics, and a challenge: what types of news, publics, or presses are we not seeing because our vision of press freedom is so narrow?

If you think press freedom is a particular thing, you will likely look for that thing when you want to see whether a democracy is healthy or whether journalists are doing their jobs. Assumptions about press freedom can shut down conversations about the press and democracy: “We have a free press, so the election result is what it should be” or “We have a free press, and corruption is still rampant!” or “If we had a free press, then we’d have a different government” or “A free marketplace is a free press because truth comes from competing viewpoints.” Statements like these—coming from journalists, audiences, politicians, advertisers, publishers—assume that we already know what we mean by a free press and that our problem is just implementing it.

But if we can liberate the idea of press freedom from these assumptions and assumptions that equate it with whatever journalists say publics need, then press freedom becomes a generative and expansive tool—a way to think about publics, self-governance, and democracy. Because, as Edwin Baker puts it, different democracies need different media, we can complicate democracy by thinking more creatively about press freedom.

Given this moment, when media systems are in fundamental flux, this book offers a way to think about press freedom as a sociotechnical system of separations and dependencies that helps to make publics. I aim to engage with and use this moment of fundamental change to show what press freedom could mean. Contrary to the dominant historical myth in the United States, I argue that press freedom should not be seen simply as journalists’ freedom to write and publish. Rather, press freedom is a normative and institutional product of any given era: it is what people think press freedom should mean and how people have arranged people and power to achieve that vision.

Most simply, press freedom is the right and responsibility to create separations and dependencies that enable democratic self-governance. It is the power and obligation to know and defend the publics that its separations and dependencies create. Today these separations and dependencies live in distributed, technological infrastructures with new actors and often invisible forces, so for the networked press to claim its autonomy, it needs to show how and why it arranges people and machines in particular ways. It needs to understand how its humans and nonhumans align or clash to create some publics but not others. It needs to be able to defend why it creates such meetings, and when necessary for a particular image of the public, it needs to develop new types of sociotechnical power that let it make new types of publics.

Rather than abandoning or collapsing the idea of press freedom—seeing it as naive or anachronistic—my aim is to revive and redeploy it. I trace the idea of press freedom through theories of democratic self-governance, situate it within the press’s institutional history, argue that each era of sociotechnical change creates a particular meaning of press freedom, and ask how the contemporary, networked press might claim its freedom and make new publics. Instead of being seen as a holdover from a time that no longer exists, press freedom could be viewed as a powerful framework for arguing why and how the networked press could change.

Interspersed with this tour of institutional forces, I try to deploy my framework and use this new notion of press freedom to argue for a particular normative value—a public right to hear. I claim that the dominant, historical, professionalized image of press freedom—as whatever journalists say they need to be free from to pursue self-evident public interest—privileges an individual right to speak over a public right to hear. It confuses journalists’ freedom to publish with publics’ rights to hear what they need to hear in order to sustain themselves as publics—to realize the inextricably shared conditions under which they live, discover and debate their similarities and differences, devise solutions to predicaments, insulate themselves from harmful forces and nurture contrarian viewpoints, recognize the resources that hold them together, and reinvent themselves through means other than the rational, informational models of citizenship that dominate the traditional mythology of U.S. press freedom. For publics to be anything other than what unconstrained journalists imagine them to be, press freedom can be defensible only if it can be shown that the press’s institutional arrangements produce expansive, dynamic, diverse publics.

In an era when many assumptions about communication and information are being reconsidered, it is difficult to say exactly what journalists can or should be free from. A better question to ask might be, “How is the networked press—journalists, software engineers, algorithms, relational databases, social media platforms, and quantified audiences—creating separations and dependencies that enable a public right to hear, make some publics more likely than others, and move beyond an image of the public as whatever journalists assume it to be?”

Three stories can help illustrate the phenomenon. First, in September 2008, high in Google News’s list of results for a search on “United Airlines” was a story in the South Florida Sun Sentinel on United’s recent bankruptcy filing. The story detailed how United had lost significant revenue, could not meet market forecasts, and needed protection from creditors and time to restructure. A Miami investment adviser responsible for publishing news alerts through Bloomberg News Service saw the story and added it to Bloomberg’s newsletter; United’s stock dropped 75 percent in one day before trading was halted. Unfortunately for United, the Sentinel’s website displayed the current date (2008) at the top of its page; it did not include the story’s original date of publication (2002). Google’s Web crawler mistook the old story for a current story, creating a perfect storm of misinformation: the Sentinel displayed dates in a confusing manner; Google’s crawler read the only date it saw and made an assumption; the investment adviser assumed that Google highly ranked recent information; Bloomberg subscribers and high-frequency traders assumed that the newsletter contained timely and actionable information; and the stock market assumed that its behavior was rational and based on true information. This is a story of networked press freedom because although the Sentinel may have tipped the first domino, the failure is the fault of no single actor. A sociotechnical failure of data, algorithms, individuals, and institutions together led to the creation of false news that drove action.
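The date-confusion failure at the heart of this first story is easy to reproduce. Below is a minimal sketch in Python of a naive date extractor that grabs the first date-looking string on a page. It is an illustrative assumption, not Google’s actual crawler logic, and the page text is invented; but it shows how a masthead displaying the crawl-time date gets stamped onto an undated 2002 story:

```python
import re
from typing import Optional

# Toy illustration (not Google's actual crawler logic): a naive extractor
# that takes the first date-looking string on a page will mistake the
# masthead's "today" date for the publication date of a story that
# carries no date of its own.

DATE_PATTERN = re.compile(r"\b(\w+ \d{1,2}, \d{4})\b")

def guess_publication_date(page_text: str) -> Optional[str]:
    """Return the first date-like string found on the page, if any."""
    match = DATE_PATTERN.search(page_text)
    return match.group(1) if match else None

# The masthead shows the crawl-time date; the 2002 story text has no date.
page = """
South Florida Sun Sentinel -- September 8, 2008
UAL Files for Bankruptcy
United Airlines sought protection from creditors on Monday...
"""

print(guess_publication_date(page))  # -> September 8, 2008 (wrong for a 2002 story)
```

Note that no single line of this sketch is a bug in isolation: the page really did display that date, and the extractor really did find a date. The failure only emerges from the combination, which is the point of the story.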

Second, in 2008, the Pocono Record published an online story about Brenda Enterline’s sexual harassment lawsuit against Pocono Medical Center. In comments left by readers under the story, several people anonymously said that they had personal knowledge of incidents relevant to the lawsuit. When Enterline’s attorneys subpoenaed the newspaper for access to the commenters, the paper refused, claiming that it had a right and obligation to protect the commenters’ First Amendment rights to anonymity. The Pennsylvania district court agreed, essentially extending a de facto shield law around the Pocono Record’s reporters and commenters. In contrast, also in September 2008, a grand jury in Illinois successfully subpoenaed the Alton Telegraph for the names, home addresses, and IP addresses of anonymous commenters who left responses to an online story the paper had run about a murder investigation. The paper argued that “the Illinois reporter’s shield law protects the identities of the anonymous commenters as ‘sources,’” but the court disagreed, saying that such a shield covers only reporters and not commenters. Such cases have continued, with an Idaho judge ruling in 2012 that the Spokesman-Review had to reveal the identity of an anonymous commenter accused of libel, and a 2014 U.S. federal court ruling that the NOLA Media Group had to reveal names, addresses, and phone numbers of its anonymous commenters. Even though the First Amendment protects Americans’ right to speak anonymously  and several states have shield laws designed to protect newspapers from releasing information against their will (Digital Media Law Project, 2013), it is unclear exactly where newspapers stop and audiences begin. The press may sometimes be free from compelled testimony, but there is little clarity on what exactly the press is and therefore who can claim its freedoms.

Finally, in 2016, Norwegian writer Tom Egeland posted to his Facebook account a story that included Nick Ut’s Pulitzer Prize–winning photo of Vietnamese children running away from a U.S. military napalm attack. One nine-year-old victim was a naked girl. Facebook removed the post because it contained “fully nude genitalia” and “fully nude female breast,” in violation of the company’s community standards. When Egeland appealed the removal, his account was suspended. The Norwegian newspaper Aftenposten then posted the image and a story on the censorship to its company’s Facebook site—and its post also was censored. The leader of Norway’s conservative party then posted the image and a protest against the censorship—and her post was censored. Facebook initially defended its decisions, saying that although it recognized the photo’s iconic status, “it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” It relented only after the Norwegian prime minister also posted the image with her own protest. Facebook eventually stated: “Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image.”

This is a story of networked press freedom. A Facebook user posts an image that has been recognized with one of journalism’s highest awards. It triggers a review by Facebook’s vast content-moderation operation tasked with policing millions of pieces of media in near real time. The user is suspended for appealing the decision. The incident attracts the attention of a news organization, political elites, and worldwide audiences. Eventually, Facebook relents after deciding for itself that the image is iconic, historically important, and worthy of sharing. In this incident, the journalist’s right to publish and the public right to hear are not housed within any one organization or profession. They instead are distributed across an image with agreed-on historical significance, platform algorithms surfacing content, social media companies with proprietary community standards, vast populations of piecework censors implementing standards quickly, editorial protests of professional journalists and elite politicians, and an eventual reversal by a private corporation only after it thinks that an image should be shared. Here, press autonomy is not just the freedom of Nick Ut, Tom Egeland, or the Aftenposten to publish. It is the product of a network of humans and nonhumans that make it more or less likely that a public will encounter media and debate its meaning and significance.

There are many more such stories. This book is about putting them in context—to show how these seemingly idiosyncratic incidents are indicative of the larger challenge of figuring out what democratic self-governance requires, what kind of free press should help to secure it, and how such freedom is distributed across a network of humans and machines that together create publics. If nothing else, my hope is that readers will take away from this book both a skepticism about the idea of press freedom and a sense of its promise as a tool for interrogating the networked press. If someone says “We need a free press,” my hope is that this book will nudge you to ask, “What kind of freedom, what kind of press, and for what kind of public?” Inspired by Michael Schudson’s question “autonomy from what?,” I try to ask “autonomy of what and for what?”


 

My aim in this book is not to dismiss earlier theories of press freedom but to argue that they tell only part of the story. That the press is a product of multiple forces and many different kinds of power is nothing new. But if we want to understand the networked press’s potential to create new publics, we might use the idea of networked press freedom as a kind of diagnostic. If we do not like the publics the networked press creates, we should examine its infrastructure and make changes. If we do not like the networked press’s infrastructure, we need to show why it leads to unacceptable publics. If a new element of the networked press appears, we need to be able to say quickly and thoughtfully what its relationships are and how they create new publics. And if we have an idea for a new element that we think should be part of the networked press, we must be able to say why we need the new public it might help create.

Custodians

I’m thrilled to say that my new book, Custodians of the Internet, is now available for purchase from Yale University Press and your favorite book retailer. Those of you who know me know that I’ve been working on this book for a long time, and have cared about the issues it addresses for a while now. So I’m particularly excited that it is now no longer mine, but yours if you want it. I hope it’ll be of some value to those of you who are interested in interrogating and transforming the information landscape in which we find ourselves.

By way of introduction, I thought I would explain the book’s title, particularly my choice of the word “custodians.” This title came unnervingly late in the writing process, and after many, many conversations with my extremely patient friend and colleague Dylan Mulvin. “Custodians of the Internet” captured, better than many, many alternatives, the aspirations of social media platforms, the position they find themselves in, and my notion for how they should move forward.

Moderators are the web’s “custodians,” quietly cleaning up the mess: The book begins with a quote from one of my earliest interviews, with a member of YouTube’s content policy team. As they put it, “In the ideal world, I think that our job in terms of a moderating function would be really to be able to just turn the lights on and off and sweep the floors . . . but there are always the edge cases, that are gray.” The image invoked is a custodian in the janitorial sense, doing the simple, mundane, and uncontroversial task of sweeping the floors. In this turn of phrase, content moderation was offered up as simple maintenance: it is not imagined to be difficult to know what needs scrubbing, and the process is routine. As with janitorial work, there is labor involved in moderation, but it is largely invisible; actual janitorial staff are often instructed to “disappear,” working at night or with as little intrusion as possible. Yet even then, years before Gamergate or ISIS beheadings or white nationalists or fake news, it was clear that moderation is not so simple.

Platforms have taken “custody” of the Internet: Content moderation at the major platforms matters because those platforms have achieved such prominence in the intervening years. As I was writing the book, one news item in 2015 stuck with me: in a survey on people’s new media use, more people said that they used Facebook than said they used the Internet. Facebook, which by then had become one of the most popular online destinations in the world and had expanded to the mobile environment, did not “seem” like the Internet anymore. Rather than being part of the Internet, it had somehow surpassed it. This was not true, of course; Facebook and the other major platforms had in fact woven themselves deeper into the Internet, by distributing cookies, offering secure login mechanisms for other sites and platforms, expanding advertising networks, collecting reams of user data from third-party sites, and even exploring Internet architecture projects. In both the perception of users and in material ways, Facebook and the major social media platforms have taken “custody” of the Internet. This should change our calculus as to whether platform moderation is or is not “censorship,” and the responsibilities platforms bear when they decide what to remove and whom to exclude.

Platforms should be better “custodians,” committed guardians of our struggles over value: In the book, I propose that these responsibilities have expanded. Users have become more acutely aware of both the harms they encounter on these platforms and the costs of being wronged by content moderation decisions. What’s more, social media platforms have become the place where a variety of speech coalitions do battle: activists, trolls, white nationalists, advertisers, abusers, even the President. And the implications of content moderation have expanded, from individual concerns to public ones. If a platform fails to moderate, everyone can be affected, even those who aren’t party to the circulation of the offensive, the fraudulent, or the hateful — even those who aren’t on social media at all.

What would it mean for platforms to play host not just to our content, but to our best intentions? The major platforms I discuss here have, for years, tried to position themselves as open and impartial conduits of information, defenders of their users’ right to speak, and legally shielded from any obligations for how they police their sites. As most platform managers see it, moderation should be theirs to do, conducted on their own terms, on our behalf, and behind the scenes. But that arrangement is crumbling, as critics begin to examine the responsibilities social media platforms have to the public they serve.

In the book, I propose that platforms become “custodians” of the public discourse they facilitate — not in the janitorial sense, but something more akin to legal guardianship. The custodian, given charge over a property, a company, a person, or a valuable resource, does not take it for their own or impose their will over it; they accept responsibility for ensuring that it is governed properly. This is akin to Jack Balkin’s suggestion that platforms act as “information fiduciaries,” with a greater obligation to protect our data. But I don’t just mean that platforms should be custodians of our content; platforms should be custodians of the deliberative process we all must engage in, that makes us a functioning public. Users need to be more accountable for making the hard decisions about what does and does not belong; platforms could facilitate that deliberation, and then faithfully enact the conclusions users reach. Safeguarding public discourse requires ensuring that it is governed by those to whom it belongs, that it survives, and that its value is sustained in a fair and equitable way. Platforms could be not the police of our reckless chatter, but the trusted agents of our own interest in forming more democratic publics.

If you end up reading the book, you have my gratitude. And I’m eager to hear from anyone who has thoughts, comments, praise, criticism, and suggestions. You can find me on Twitter at @TarletonG.

Night modes and the new hue of our screens

Information & Culture just published (paywall; or free pre-print) an article I wrote about “night modes,” in which I try to untangle the history of light, screens, sleep loss, and circadian research. If we navigate our lives enmeshed with technologies and their attendant harms, I wanted to know how we make sense of our orientation to the things that prevent harm. To think, in other words, of the constellation of people and things that are meant to ward off, prevent, stave off, or otherwise mitigate the endemic effects of using technology.

If you’re not familiar with “night modes”: in recent years, hardware manufacturers and software companies have introduced new device modes that shift the color temperature of screens during evening hours. To put it another way: your phone turns orange at night now. Perhaps you already use f.lux, or Apple’s “Night Shift,” or “Twilight” for Android.
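Mechanically, these modes boil down to a schedule that warms the display’s white point in the evening. The Python sketch below is a toy illustration with made-up hours and Kelvin values, not f.lux’s or Apple’s actual parameters or algorithm:

```python
# A minimal sketch of a "night mode" schedule: full daylight white (6500K)
# during the day, a warmer orange (2700K) at night, with a linear ramp in
# the evening. The hours and Kelvin values here are illustrative
# assumptions, not any vendor's actual settings.

DAY_TEMP_K = 6500    # typical daytime display white point
NIGHT_TEMP_K = 2700  # warm, incandescent-like night target
RAMP_START = 20      # 8 pm: begin warming
RAMP_END = 22        # 10 pm: fully warm
NIGHT_END = 6        # 6 am: return to daytime white

def target_color_temperature(hour: float) -> float:
    """Return a target screen color temperature in Kelvin for an hour (0-24)."""
    if RAMP_START <= hour < RAMP_END:
        # Linearly interpolate between the day and night white points.
        fraction = (hour - RAMP_START) / (RAMP_END - RAMP_START)
        return DAY_TEMP_K + fraction * (NIGHT_TEMP_K - DAY_TEMP_K)
    if hour >= RAMP_END or hour < NIGHT_END:
        return NIGHT_TEMP_K
    return DAY_TEMP_K
```

At 9 pm this schedule would return 4600K, halfway through the ramp; the real products differ in their curves and triggers (sunset times, manual schedules), but the lower-the-temperature-at-night logic is the shared core.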

All of these software interventions come as responses to the belief that untimely light exposure close to bedtime will result in less sleep or less restful sleep. Research into human circadian rhythms has had a powerful influence on how we think and talk about healthy technology use. And recent discoveries in the human response to light, as you’ll learn in the article, are based on a tiny subset of blind persons who lack rods and cones. As such, it’s part of a longer history of using research on persons with disabilities to shape and optimize communication technologies – a historical pattern that the media and disability studies scholar Mara Mills has documented throughout her career.

[Screenshot: Apple’s Night Shift settings]

Continue reading “Night modes and the new hue of our screens”

The Senate Talks to Zuck

(or: “I will get back to you on that, Senator.”)

Here is an early response to Zuckerberg’s testimony before the US Senate today. If you want my overall score, as of 3:30 ET I think Zuckerberg is doing quite well, but some of the things being discussed need a lot of unpacking.

Those Poor Fools “whose privacy settings allowed it.”

In the beginning of his testimony, Zuck described what people are so upset about:

Zuckerberg: [The Kogan personality quiz app that shared data with Cambridge Analytica] was installed by around 300,000 people who agreed to share some of their Facebook information as well as some information from their friends whose privacy settings allowed it.

[Emphasis mine.]

Huh — this phrasing is so careful to be technically accurate but it is right up against the limit of truth. Then I think it goes past that. I looked up what the Facebook privacy settings screen looked like in 2015. It looked like this:

[Screenshot: Facebook privacy settings as of 2015]

If we follow the research findings in the security area, most users probably never saw this screen at all: people tend not to know about their own security settings.

But if you did find this screen, anyone who clicked “Friends” for any column surely could not have taken this to mean that their “privacy settings allowed” (Zuckerberg’s phrase) the harvesting of their data by an app that they never authorized and were not aware of.

This is presumably why Facebook disallowed this use of third-party data by apps well before this scandal. There is no third-party consent. So Zuckerberg’s claim that the Kogan/Cambridge Analytica app took information from people “whose privacy settings allowed it” seems a bridge too far.

The American Dream

A closing thought:  It is hyperbole, I know, but I was struck by Sen. John Thune’s (R-SD) remark that “Facebook represents the American dream.” Didn’t The Social Network cover this ground? I don’t remember the plot that way. Did Thune just mean that Zuck got rich?

Zuck’s the Scorpion and We Are The Frog?

Zuckerberg: [investments] in security…will significantly impact our profitability going forward. But I want to be clear about what our priority is: protecting our community is more important than maximizing our profits.

This is a nice quote by Zuck because it highlights the key problem with Facebook’s position. The issue isn’t really “security” though. It’s the fact that Facebook is fundamentally in the business of harvesting user data and that negative, polarizing (and even inaccurate) ads and status updates are good for the platform. They promote engagement through outrage.

To crib from Marshall McLuhan, what is in the public interest is not necessarily what the public is interested in. Gory road accidents turn heads. But is that what our media should be showing?

Zuck’s comment also highlights that by asking Facebook to fix these problems, we are asking advertising-supported media to behave in a way that makes no sense for them and is opposite to their nature.

Let’s take a look at some of the ads placed by fake accounts controlled by the Internet Research Agency (Агентство интернет-исследований); they are extremely polarizing (american.veterans is a beard or sock puppet account):

 

[Screenshot: Facebook ads placed by Internet Research Agency accounts]

And it is now very clear from both common sense and the Trump/Clinton Facebook CPM fracas that polarizing ads on Facebook are much more likely to gain clicks.

The Trump campaign’s official ads were quintessentially negative and polarizing ads: they did things like try to associate the phrase “Crooked Hillary” with a winged bag of money.

Political ad spending is also a windfall that old media (radio and television stations) depended on, but there was no “click engagement” dimension with old media: it left people with little to do. It seems possible that the new media political ad environment can create feedback loops with negative ads that might be much more significant than the old ways of doing things.

Pay-For-Privacy

Another thing that struck me in early Q&A is the concern raised about a paid Facebook model. This was floated yesterday in a media interview and now Sen. Bill Nelson (D-FL) is asking Zuckerberg if it is true users would have to, as he put it, “pay for privacy.”

Nelson seems outraged. On the one hand, this outrage makes no sense. If Facebook were switched completely away from an advertising model, it would be great for users as it would redefine the company’s incentives completely.

However, I think what is being proposed is a half-pay, half-free system (or opt-in payments). If that’s the plan, the outrage is justified. Pay-for-privacy makes social media even more regressive.

Privacy is already regressive in the sense that only those people who have time to learn about risks and fiddle with endless (and endlessly changing) settings pages have any hope of protecting themselves. The current system rewards computer skill and free time. And even with those things users still may not be able to protect themselves, because the options just aren’t available.

But an opt-in payment system makes privacy even worse: it takes these intangible regressive dimensions and puts a payment step on top of them. Under opt-in privacy, it is not that people use either time or money to obtain privacy; rather, only people who both have the time to follow this topic closely enough to know they need privacy in the first place and can afford to pay for it will have it.

 

Big Social Won’t Be Fixed

I need to sign off because I can’t spend the day watching this. My summary so far: “Big Social” won’t be fixed by anything that was said here. The business models, institutions, and habits are too well-established and have too much inertia for a meaningful reconfiguration to come from the things I’ve heard so far.

Congratulations to the incoming SMC interns for summer 2018!

Another stellar crop of applicants poured in for the SMC internships this year, and another three emerged as the best of the best. Thanks to everyone who applied, it was painful not to accept more of you! For summer 2018, we’re thrilled to have these three remarkable students joining us in the Microsoft Research lab in New England, to conduct their own original research and to be part of the SMC community. (Remember that we offer these internships every summer: if you’re an advanced graduate student in the areas of communication, the anthropology or sociology of new media, information science, and related fields, watch this page for the necessary information.)

 

Robyn Caplan is a doctoral candidate at Rutgers University’s School of Communication and Information under the supervision of Professor Philip Napoli. For the last three years, she has also been a Researcher at the Data & Society Research Institute, working on projects related to platform accountability, media manipulation, and data and civil rights. Her most recent research explores how platforms and news media associations navigate content moderation decisions regarding trustworthy and credible content, and how current concerns regarding the rise of disinformation across borders are impacting platform governance, and national media and information policy. Previously she was a Fellow at the GovLab at NYU, where she worked on issues related to open data policy and use. She holds an MA from New York University in Media, Culture, and Communication, and a Bachelor of Science from the University of Toronto.

 

Michaelanne Dye is a Ph.D. candidate in Human-Centered Computing in the School of Interactive Computing at Georgia Tech. She also holds an M.A. in Cultural Anthropology. Michaelanne uses ethnographic methods to explore human-computer interaction and development (HCID) issues within social computing systems, paying attention to the complex factors that afford and constrain meaningful engagements with the internet in resource-constrained communities. Through fieldwork in Havana, Cuba, Michaelanne’s dissertation examines how new internet infrastructures interact with cultural values and local constraints. Her research also explores community-led information networks that have evolved in the absence of access to the world wide web, in order to identify ways to design more meaningful and sustainable engagements for users in both “developing” and “developed” contexts. Michaelanne’s work has been published in the conference proceedings of Human Factors in Computing Systems (CHI) and Computer-Supported Cooperative Work and Social Computing (CSCW).

 

Penny Trieu is a PhD candidate in the School of Information at the University of Michigan. She is a member of the Social Media Research Lab, where she is primarily advised by Nicole Ellison. Her research concerns how people can use communication technologies, particularly social media, to better support their interpersonal relationships. She also looks at identity processes, notably self-presentation and impression management, on social media. Her research has appeared in venues such as Information, Communication & Society and Social Media + Society, and at the International Communication Association conference. At the Social Media Collective, she will work on the dynamics of interpersonal feedback and self-presentation around ephemeral sharing via Instagram and Snapchat Stories.

Content moderation is not a panacea: Logan Paul, YouTube, and what we should expect from platforms

What do we expect of content moderation? And what do we expect of platforms?

There is an undeniable need, now more than ever, to reconsider the public responsibilities of social media platforms. For too long, platforms have enjoyed generous legal protections and an equally generous cultural allowance, to be “mere conduits” not liable for what users post to them. In the shadow of this protection, they have constructed baroque moderation mechanisms: flagging, review teams, crowdworkers, automatic detection tools, age barriers, suspensions, verification status, external consultants, blocking tools. They all engage in content moderation, but are not obligated to; they do it largely out of sight of public scrutiny, and are held to no official standards as to how they do so. This needs to change, and it is beginning to.

But in this crucial moment, one that affords such a clear opportunity to fundamentally reimagine how platforms work and what we can expect of them, we might want to get our stories straight about what those expectations should be.

The latest controversy involves Logan Paul, a twenty-two-year-old YouTube star with more than 15 million subscribers. His videos, a relentless barrage of boasts, pranks, and stunts, have garnered him legions of adoring fans. But he faced public backlash this week after posting a video in which he and his buddies ventured into the Aokigahara forest of Japan, only to find the body of a young man who had recently committed suicide. Rather than turning off the camera, Logan continued his antics, pinballing between awe and irreverence, showing the body up close and then turning the attention back to his own reaction. The video lingers on the body, including close-ups of the victim’s swollen hand, and Paul’s reactions were self-centered and cruel. After a blistering wave of criticism in the video comments and on Twitter, Paul removed the video and issued a written apology, which was itself criticized for not striking the right tone. A somewhat more heartfelt video apology followed. He later announced he would be taking a break from YouTube.

There is no question that Paul’s video was profoundly insensitive, an abject lapse in judgment. But amidst the reaction, I am struck by the press coverage of and commentary about the incident: the willingness to lump this controversy in with an array of other concerns about what’s online, as somehow all part of the “content moderation” problem, paired with a persistent and unjustified optimism about what content moderation should be able to handle.

YouTube has weathered a series of controversies over the course of the last year, many of which had to do with children, both their exploitation and their vulnerability as audiences. There was the controversy about popular vlogger PewDiePie, condemned for including anti-Semitic humor and Nazi imagery in his videos. Then there were the videos that slipped past the stricter standards YouTube has for its Kids app: amateur versions of cartoons featuring well-known characters with weirdly upsetting narrative third acts. That was quickly followed by the revelation of entire YouTube channels of videos in which children were being mistreated, frightened and exploited, that seem designed to skirt YouTube’s rules against violence and child exploitation. And just days later, Buzzfeed also reported that YouTube’s autocomplete displayed results that seemed to point to child sexual exploitation. YouTube representatives have apologized for all of these, promised to increase the number of moderators reviewing their videos, aggressively pursue better artificial intelligence solutions, and remove advertising from some of the questionable channels.

Content moderation, and different kinds of responsibility

But what do these incidents have in common, besides the platform? Journalists and commentators are eager to lump them together: part of a single condemnation of YouTube, its failure to moderate effectively, and its complicity with the profits made by producers of salacious or reprehensible content. But these incidents represent different kinds of problems; they implicate YouTube and content moderation in different ways — and, when lumped together, they suggest a contradictory set of expectations we have for platforms and their public responsibility.

Platforms assert a set of normative standards, guidelines by which users are expected to comport themselves. It is difficult to convince every user to honor these standards, in part because the platforms have spent years promising users an open and unfettered playing field, inviting users to do or say whatever they want. And it is difficult to enforce these standards, in part because the platforms have few of the traditional mechanisms of governance: they can’t fire us; we are not salaried producers. All they have are the terms of service and the right to delete content and suspend users. And there are competing economic incentives for platforms to be more permissive than they claim to be, and to treat high-value producers differently than the rest.

Incidents like the exploitative videos of children, or the misleading amateur cartoons, take advantage of this system. They live amidst this enormous range of videos, some subset of which YouTube must remove. Some come from users who don’t know or care about the rules, or find what they’re making perfectly acceptable. Others are deliberately designed to slip past moderators, either by going unnoticed or by walking right up to but not across the community guidelines. They sometimes require hard decisions about speech, community, norms, and the right to intervene.

Logan Paul’s video, or PewDiePie’s racist outbursts, are of a different sort. As was clear in the news coverage and the public outrage, critics were troubled by Logan Paul’s failure to consider his responsibility to his audience, to show more dignity as a videomaker, to choose sensitivity over sensationalism. The fact that he has 15 million subscribers, many of them young, was reason for many to claim that he (and by implication, YouTube) have a greater responsibility. These sound more like traditional media concerns: the effects on audiences, the responsibilities of producers, the liability of providers. This could just as easily be a discussion about Ashton Kutcher and an episode of Punk’d. What would Kutcher’s, his production team’s, and MTV’s responsibility be if he had similarly crossed the line with one of his pranks?

But MTV was in a structurally different position than YouTube. We expect MTV to be accountable for a number of reasons: they had the opportunity to review the episode before broadcasting it; they employed Kutcher and his team, affording them specific power to impose standards; and they chose to hand him the megaphone in the first place. While YouTube also affords Logan Paul a way to reach millions, and he and YouTube share advertising revenue from popular videos, these offers are in principle made to all YouTube users. YouTube is a distribution platform, not a distribution bottleneck — or it is a bottleneck of a very different shape. This does not mean we cannot or should not hold YouTube accountable. We could decide as a society that we want YouTube to meet exactly the same responsibilities as MTV, or more. But we must take into account that these structural differences change not only what YouTube can do, but how and why we can expect it of them.

Moreover, is content moderation the right mechanism to manage this responsibility? Or to put it another way, what would the critics of Logan’s video have wanted YouTube to do? Some argued that YouTube should have removed the video, before Paul did. (It seems the video was reviewed and was not removed, but Paul received a “strike” on his account, a kind of warning — we know this only based on this evidence. If you want to see the true range of disagreement about what YouTube should have done, just read down the lengthy thread of comments that followed this tweet.) In its PR response to the incident, a YouTube representative said it should have taken the video down, for being “shocking, sensational or disrespectful”. But it is not self-evident that Paul’s video violates YouTube’s policies. And judging from the comments from critics, it was Paul’s blithe, self-absorbed commentary, the tenor he took about the suicide victim he found, as much as showing the body itself, that was so troubling. Showing the body, lingering on its details, was part of Paul’s casual indifference, but so were his thoughtless jokes and exaggerated reactions. Is it so certain that YouTube should have removed this video on our behalf? I do not mean to imply that the answer is no, or that it is yes. I’m only noting that this is not an easy case to adjudicate — which is precisely why we shouldn’t expect YouTube to already have a clean and settled policy toward it.

There’s no simple answer as to where such lines should be drawn. Every bright line rule YouTube might draw will be plagued with “what abouts”. Is it that corpses should not be shown in a video? What about news footage from a battlefield? What about public funerals? Should the prohibition be specific to suicide victims, out of respect? It would be reasonable to argue that YouTube should allow a tasteful documentary about the Aokigahara forest, concerned about the high rates of suicide among Japanese men. Such a video might even, for educational or provocative reasons, include images of the body of a suicide victim, or evidence of their deaths. In fact, YouTube already has some, of a variety of qualities (see 1, 2, 3, 4).

So what we critics may be implying is that YouTube should be responsible for distinguishing the insensitive versions from the sensitive ones. Again, this sounds more like the kinds of expectations we had for television networks — which is fine if that’s what we want, but we should admit that this would be asking much more of YouTube than we might think.

As a society, we’ve already struggled with this very question in traditional media: should the news show the coffins of U.S. soldiers as they are returned from war? Should the news show the grisly details of crime scenes? When is the typically too-graphic video acceptable because it is newsworthy, educational, or historically relevant? The answer is far from clear, and it differs across cultures and periods. As a society, we need to engage in the debate; it cannot be answered for us by YouTube alone.

These moments of violation serve as the spark for that debate. It may be that all this condemnation of Logan Paul, in the comment threads on YouTube, on Twitter, and in the press coverage, is the closest we get to a real, public consideration of what’s appropriate for public consumption. And maybe the focus among critics on Paul’s irresponsibility, as opposed to YouTube’s, is indicative that this is not a moderation question, or of a growing public sense that we cannot rely on YouTube’s moderation, that we need to cultivate a clearer sensibility of what public culture should look like, and teach creators to take their public responsibility more seriously. (Though even if it is, there will always be a new wave of twenty-year-olds waiting in the wings, who will jump at the chance social media offers to show off for a crowd, well before they ever grapple with the social norms we may have worked out. This is why we need to keep having this debate.)

How exactly YouTube is complicit in the choices of its stars

This is not to suggest that platforms bear no responsibility for the content that they help circulate. Far from it. YouTube is implicated, in that they afford the opportunity for Logan to broadcast his tasteless video, help him gather millions of viewers who will have it instantly delivered to their feed, design and tune the recommendation algorithms that amplify its circulation, and profit enormously from the advertising revenue it accrues.

Some critics are doing the important work of putting platforms under scrutiny, to better understand the way producers and platforms are intertwined. But it is awfully tempting to draw too simple a line between the phenomenon and the provider, to paint platforms with too broad a brush. The press loves villains, and YouTube is one right now. But we err when we draw these lines of complicity too cleanly. Yes, YouTube benefits financially from Logan Paul’s success. That by itself does not prove complicity; it needs to be a feature of our discussion about complicity. We might want revenue sharing to come with greater obligations on the part of the platform; or, we might want platforms to be shielded from liability or obligation no matter what the financial arrangement; or, we might want equal obligations whether there is revenue shared or not; or we might want obligations to attend to popularity rather than revenue. These are all possible structures of accountability.

It is also easy to say that YouTube drives vloggers like Logan Paul to be more and more outrageous. If video makers are rewarded based on the number of views, whether that reward is financial or just reputational, it stands to reason that some videomakers will look for ways to increase those numbers, including going bigger. But it is not clear that metrics of popularity necessarily or only lead to being ever more outrageous, and there’s nothing about this tactic that is unique to social media. Media scholars have long noted that being outrageous is one tactic producers use to cut through the clutter and grab viewers, whether it’s blaring newspaper headlines, trashy daytime talk shows, or sexualized pop star performances. That is hardly unique to YouTube. And YouTube videomakers are pursuing a number of strategies to seek popularity and the rewards therein, outrageousness being just one. Many more seem to depend on repetition, building a sense of community or following, interacting with individual subscribers, and the attempt to be first. While over-caffeinated pranksters like Logan Paul might try to one-up themselves and their fellow vloggers, that is not the primary tactic for unboxing vidders or Minecraft world builders or fashion advisers or lip syncers or television recappers or music remixers. Others see Paul as part of a “toxic YouTube prank culture” that migrated from Vine, which is another way to frame YouTube’s responsibility. But a genre may develop, and a provider profiting from it may look the other way or even encourage it; that does not answer the question of what responsibility they have for it, it only opens it.

To draw too straight a line between YouTube’s financial arrangements and Logan Paul’s increasingly outrageous shenanigans misunderstands both the economic pressures of media and the complexity of popular culture. It ignores the lessons of media sociology, which makes clear that the relationship between the pressures imposed by industry and the creative choices of producers is much more complex and dynamic. Nor does it prove that content moderation is the right way to address this complicity.

*   *   *

Let me say again: Paul’s video was in poor, poor taste, and he deserves all of the criticism he received. And I find this genre of boffo, entitled, show-off masculinity morally problematic and just plain tiresome. And while it may sound like I am defending YouTube, I am definitely not. Along with the other major social media platforms, YouTube has a greater responsibility for the content they circulate than they have thus far acknowledged; they have built a content moderation mechanism that is too reactive, too dismissive, and too opaque, and they are due for a public reckoning. In the last few years, the workings of content moderation and its fundamental limitations have come to light, and this is good news. Content moderation should be more transparent, and platforms should be more accountable, not only for what traverses their systems, but for the ways in which they are complicit in its production, circulation, and impact. But it also seems we are too eager to blame all things on content moderation, and to expect platforms to maintain a perfectly honed moral outlook every time we are troubled by something we find there. Acknowledging that YouTube is not a mere conduit does not imply that it is exclusively responsible for everything available there.

As Davey Alba at Buzzfeed argued, “YouTube, after a decade of being the pioneer of internet video, is at an inflection point as it struggles to control the vast stream of content flowing across its platform, balancing the need for moderation with an aversion toward censorship.” This is true. But we are also at an inflection point of our own. After a decade of embracing social media platforms as key venues for entertainment, news, and public exchange, and in light of our growing disappointment in their preponderance of harassment, hate, and obscenity, we too are struggling: to modulate exactly what we expect of them and why, to balance how to improve the public sphere with what role intermediaries can reasonably be asked to take.

This essay is cross-posted at Culture Digitally. Many thanks to Dylan Mulvin for helping me think this through.

Call for applications! 2018 summer internship, MSR Social Media Collective

APPLICATION DEADLINE: JANUARY 19, 2018

Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week internship program. The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdocs Dan Greene and Dylan Mulvin) brings together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Learn more about us here.

MSRNE internships are 12-week paid stays in our lab in Cambridge, Massachusetts. During their stay, SMC interns are expected to devise and execute their own research project, distinct from the focus of their dissertation (see the project requirements below). The expected outcome is a draft of a publishable scholarly paper for an academic journal or conference of the intern’s choosing. Our goal is to help the intern advance their own career; interns are strongly encouraged to work towards a creative outcome that will help them on the academic job market.

The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

Primary mentors for this year will be Nancy Baym and Tarleton Gillespie, with additional guidance offered by other members of the SMC. We are looking for applicants working in one or more of the following areas:

  1. Personal relationships and digital media
  2. Audiences and the shifting landscapes of producer/consumer relations
  3. Affective, immaterial, and other frameworks for understanding digital labor
  4. How platforms, through their design and policies, shape public discourse
  5. The politics of algorithms, metrics, and big data for a computational culture
  6. The interactional dynamics, cultural understanding, or public impact of AI chatbots or intelligent agents

Interns are also expected to give short presentations on their project, contribute to the SMC blog, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. There are also natural opportunities for collaboration with SMC researchers and visitors, and with others currently working at MSRNE, including computer scientists, economists, and mathematicians. PhD interns are expected to be on-site for the duration of their internship.

Applicants must have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective comprises full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

  • How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
  • How are social media platforms, through their algorithmic design and user policies, taking up the role of custodians of public discourse? (Tarleton Gillespie)
  • What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
  • How do public institutions like schools and libraries prepare workers for the information economy, and how are they changed in the process? (Dan Greene)
  • How are media standards made, and what do their histories tell us about the kinds of things we can represent? (Dylan Mulvin)

SMC PhD interns may also have the opportunity to connect with our sister Social Media Collective members in New York City. Related projects in New York City include:

  • What are the politics, ethics, and policy implications of artificial intelligence and data science? (Kate Crawford, MSR-NYC)
  • What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

For more information about the Social Media Collective, and a list of past interns, visit the About page of our blog. For a complete list of all permanent researchers and current postdocs based at the New England lab, see: http://research.microsoft.com/en-us/labs/newengland/people/bios.aspx

 

COMPENSATION, RELOCATION, AND BENEFITS:

  • highly competitive salary
  • travel to/from internship location from your university location (including the intern and all eligible dependents)
  • housing costs: interns can select one of two housing options
    • fully furnished corporate housing covered by Microsoft
    • a lump sum for finding and securing your own housing
  • local transportation allowance for commuting
  • health insurance is not provided; most interns stay covered under their university insurance, but interns are eligible to enroll in a Microsoft sponsored medical plan
  • internship events and activities

 

APPLICATION PROCESS

To apply for a PhD internship with the Social Media Collective, fill out the online application form: https://careers.research.microsoft.com/

On the application website, please indicate that your research area of interest is “Anthropology, Communication, Media Studies, and Sociology” and that your location preference is “New England, MA, U.S.” in the pull down menus. Also enter the name of a mentor (Nancy Baym or Tarleton Gillespie) whose work most directly relates to your own in the “Microsoft Research Contact” field. IF YOU DO NOT MARK THESE PREFERENCES WE WILL NOT RECEIVE YOUR APPLICATION. So, please, make sure to follow these detailed instructions.

Your application needs to include:

  1. A short description (no more than 2 pages, single spaced) of 1 or 2 projects that you propose to do while interning at MSRNE, independently and/or in collaboration with current SMC researchers. The project proposals can be related to, but must be distinct from your dissertation research. Be specific and tell us:
    • What is the research question animating your proposed project?
    • What methods would you use to address your question?
    • How does your research question speak to the interests of the SMC?
    • Who do you hope to reach (who are you engaging) with this proposed research?
  2. A brief description of your dissertation project.
  3. An academic article-length manuscript (~7,000 words or more) that you have authored or co-authored (published or unpublished) that demonstrates your writing skills.
  4. A copy of your CV.
  5. The names and contact information for 3 references (one must be your dissertation advisor).
  6. If available, pointers to your website or other online presence (this is not required).

A request for letters will be sent directly to your list of referees, on your behalf. IMPORTANT: THE APPLICATION SYSTEM WILL NOT REQUEST THOSE REFERENCE LETTERS UNTIL AFTER YOU HAVE SUBMITTED YOUR APPLICATION! Please warn your letter writers in advance so that they will be ready to submit them when they receive the prompt. The email they receive will automatically tell them they have two weeks to respond. Please ensure that they expect this email (tell them to check their spam folders, too!) and are prepared to submit your letter by our application deadline.  You can check the progress on individual reference requests at any time by clicking the status tab within your application page. Note that a complete application must include three submitted letters of reference.

If you have any questions about the application process, please contact Tarleton Gillespie at tarleton@microsoft.com and include “SMC PhD Internship” in the subject line.

 

TIMELINE

Due to the volume of applications, late submissions (including submissions with late letters of reference) will not be considered. We will not be able to provide specific feedback on individual applications. Finalists will be contacted in early February to arrange a Skype interview. Applicants chosen for the internship will be informed in March and announced on the socialmediacollective.org blog.

 

 

PREVIOUS INTERN TESTIMONIALS

“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.”
— Laura Noren, Sociology, New York University

“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.”
— Jessica Lingel, Library and Information Science, Rutgers University

“My internship experience at MSRNE was eye-opening, mind-expanding and happy-making. If you are looking to level up as a scholar – reach new depth in your focus area, while broadening your scope in directions you would never dream up on your own; and you’d like to do that with the brightest, most inspiring and supportive group of scholars and humans – then you definitely want to apply.”
— Kat Tiidenberg, Sociology, Tallinn University, Estonia

“The Microsoft Internship is a life-changing experience. The program offers structure and space for emerging scholars to find their own voice while also engaging in interdisciplinary conversations. For social scientists especially, the exposure to various forms of thinking, measuring, and problem-solving is unparalleled. I continue to call on the relationships I made at MSRNE and always make space to talk to a former or current intern. Those kinds of relationships have a long tail.”
— Tressie McMillan Cottom, Sociology, Emory University

“My summer at MSR New England has been an important part of my development as a researcher. Coming right after the exhausting, enriching ordeal of general/qualifying exams, it was exactly what I needed to step back, plunge my hands into a research project, and set the stage for my dissertation… PhD interns are given substantial intellectual freedom to pursue the questions they care about. As a consequence, the onus is mostly on the intern to develop their research project, justify it to their mentors, and do the work. While my mentors asked me good, supportive, and often helpfully hard, critical questions, my relationship with them was not the relationship of an RA to a PI – instead it was the relationship of a junior colleague to senior ones.”
— J. Nathan Matias, Media Lab, MIT (read more here)

“This internship provided me with the opportunity to challenge myself beyond what I thought was possible within three months. With the SMC’s guidance, support, and encouragement, I was able to reflect deeply about my work while also exploring broader research possibilities by learning about the SMC’s diverse projects and exchanging ideas with visiting scholars. This experience will shape my research career and, indeed, my life for years to come.”
— Stefanie Duguay, Communication, Queensland University of Technology

“There are four main reasons why I consider the summer I spent as an intern with the Social Media Collective to be a formative experience in my career. First was the opportunity to work one-on-one with the senior scholars on my own project, and the chance to see “behind the scenes” how they approach their own work. Second, the environment created by the SMC is one of openness and kindness, where scholars encourage and help each other do their best work. Third, hearing from the interdisciplinary members of the larger MSR community, and presenting work to them, required learning how to engage people in other fields. And finally, fourth, the lasting effect: Between senior scholars and fellow interns, you become a part of a community of researchers and create friendships that extend well beyond the period of your internship.”
— Stacy Blasiola, Communication, University of Illinois Chicago

“My internship with Microsoft Research was a crash course in what a thriving academic career looks like. The weekly meetings with the research group provided structure and accountability, the stream of interdisciplinary lectures sparked intellectual stimulation, and the social activities built community. I forged relationships with peers and mentors that I would never have met in my graduate training.”
— Kate Zyskowski, Anthropology, University of Washington

“It has been an extraordinary experience for me to be an intern at the Social Media Collective. Coming from a computer science background, communicating and collaborating with so many renowned social science and media scholars has taught me, as a researcher and designer of socio-technical systems, to always think of these systems in their cultural, political, and economic context and to consider the ethical and policy challenges they raise. Being surrounded by these smart, open, and insightful people, who were always willing to discuss the problems I encountered in my project, provide unique perspectives for thinking through those problems, and share the excitement when I got promising results, was simply fascinating. And being able to conduct mixed-method research that combines qualitative insights with quantitative methodology made the internship just the kind of research experience I had dreamed of.”
— Ming Yin, Computer Science, Harvard University

“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.”
— Germaine Halegoua, Communication Arts, University of Wisconsin-Madison

“Not only was I able to work with so many smart people, but the thoughtfulness and care they took when they engaged with my research can’t be stressed enough. The ability to truly listen to someone is so important. You have these researchers doing multiple, fascinating projects, but they still make time to help out interns in whatever way they can. I always felt I had everyone’s attention when I spoke about my project or other issues I had, and everyone was always willing to discuss any questions I had, or even if I just wanted clarification on a comment someone had made at an earlier point. Another favorite aspect of mine was learning about other interns’ projects and connecting with people outside my discipline.”
— Jolie Matthews, Education, Stanford University

We are hiring a Postdoc

The Social Media Collective at Microsoft Research New England (MSRNE) is looking for a social media postdoctoral researcher (start date: July 2018). This position is an ideal opportunity for a scholar whose work draws on anthropology, communication, media studies, sociology, and/or science and technology studies to bring empirical and critical perspectives to complex socio-technical issues. Application deadline: 1 December 2017. This year, we will also consider candidates for a possible position, based in the SMC, that bridges the SMC and one or more other areas of the MSRNE lab, including machine learning, bioinformatics, cryptography, algorithmic game theory, and economics.

Microsoft Research provides a vibrant multidisciplinary research environment, with an open publications policy and close links to top academic institutions around the world. Postdoctoral researcher positions provide emerging scholars (PhDs received late 2017 or to be conferred by July 2018) an opportunity to develop their research career and to interact with some of the top minds in the research community. Postdoctoral researchers define their own research agenda. Successful candidates will have a well-established research track record as demonstrated by journal publications and conference papers, as well as participation on program committees, editorial boards, and advisory panels.

While each of the Microsoft Research labs has openings in a variety of different disciplines, this position with the Social Media Collective at Microsoft Research New England specifically seeks social science/humanities candidates with critical approaches to their topics. Qualifications include a strong academic record in anthropology, communication, media studies, sociology, science and technology studies, or a related field. The ideal candidate may be trained in any number of disciplines, but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to technology or the internet and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

The Social Media Collective comprises full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

– How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)

– How are social media platforms, through algorithmic design and user policies, adopting the role of intermediaries for public discourse? (Tarleton Gillespie)

– What are the cultural, political, and economic implications of on-demand contract work as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)

– How do standards, defaults, and infrastructures encode our assumptions about human behavior and perception? (Dylan Mulvin)

– How are public and private institutions training people for the future of work, and deciding who should be included in that future? (Dan Greene)

SMC postdocs may have the opportunity to visit and collaborate with our sister Social Media Collective members in New York City. Related projects in New York City include:

– What are the politics, ethics, and policy implications of big data science? (Kate Crawford, MSR-NYC, AI Now)

– What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

Postdoctoral researchers receive a competitive salary and benefits package, and are eligible for relocation expenses.  Postdoctoral researchers are hired for a two-year term appointment following the academic calendar, starting in July 2018. Applicants must have completed the requirements for a PhD, including submission of their dissertation, prior to joining Microsoft Research. We encourage those with tenure-track job offers from other institutions to apply, so long as they can defer their start date to accept our position.

Microsoft does not discriminate against any applicant on the basis of age, ancestry, color, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.

To apply for a postdoc position at MSRNE:

Submit an online application here.

– On the application website, indicate in the pull-down menus that your research area of interest is “Anthropology, Communication, Media Studies, and Sociology” and that your location preference is “New England, MA, U.S.” IF YOU DO NOT MARK THESE PREFERENCES WE WILL NOT RECEIVE YOUR APPLICATION.

– In addition to the CV and the names of three referees (including your dissertation advisor) that the online application requires, upload the following three attachments with your online application:

  1. two journal articles, book chapters, or equivalent writing samples (uploaded as two separate attachments);
  2. a single research statement (four-page maximum) that does the following: outlines the questions and methodologies central to your research agenda (~two pages); provides an abstract and chapter outline of your dissertation (~one page); and describes how your research agenda relates to research conducted by the Social Media Collective (~one page).

After you submit your application, a request for letters will be sent to your referees on your behalf. NOTE: THE APPLICATION SYSTEM WILL NOT REQUEST REFERENCE LETTERS UNTIL AFTER YOU HAVE SUBMITTED YOUR APPLICATION! Please warn your letter writers in advance so that they will be ready to submit their letters when they receive the prompt. The email they receive will automatically tell them they have two weeks to respond, but an individual call for applicants may have an earlier deadline. Please ensure that they expect this email and are prepared to submit your letter by our application deadline of December 1, 2017. Please check back with your referees if you have any questions about the status of your requested letters of recommendation. You can check the progress of individual reference requests at any time by clicking the status tab within your application page. Note that a complete application must include three submitted letters of reference.

For more information, see here.

Feel free to ask questions about the position in the comments below.

Heading to the Courthouse for Sandvig v. Sessions

E. Barrett Prettyman Federal Courthouse, Washington, DC

(or: Research Online Should Not Be Illegal)

I’m a college professor. But on Friday morning I won’t be in the classroom, I’ll be in courtroom 30 in the US District Courthouse on Constitution Avenue in Washington DC. The occasion? Oral arguments on the first motion in Sandvig v. Sessions.

You may recall that the ACLU, academic researchers (including me), and journalists are bringing suit against the government to challenge the constitutionality of “The Worst Law in Technology” — the US law that criminalizes most online research. Our hopes are simple: Researchers and reporters should not fear prosecution or lawsuits when we seek to obtain information that would otherwise be available to anyone, by visiting a Web site, recording the information we see there, and then publishing research results based on what we find.

As things stand, the misguided US anti-hacking law, called the Computer Fraud and Abuse Act (CFAA), makes it a crime if a computer user “exceeds authorized access.” What is authorized access to a Web site? Previous court decisions and the federal government have defined it as violating the site’s own stated Terms of Service (ToS), but that’s ridiculous. The ToS is a wish-list of what corporate lawyers dream about, written by corporate lawyers. (Crazy example, example, example.) ToS sometimes prohibit people from using Web sites for research; they prohibit users from saying bad things about the corporation that runs the Web site; they prohibit users from writing things down. They should not be made into criminal violations of the law.

In the latest developments of our case, the government has argued that Web servers are private property, and that anyone who exceeds authorized access is trespassing “on” them. (“In” them? “With” them? It’s a difficult metaphor.) In other cases the CFAA was used to say that because Web servers are private, users are also wasting capacity on these servers, effectively stealing a server’s processing cycles that the owner would rather use for other things. I visualize a cartoon thief with a bag of electrons.

Are Internet researchers and data journalists “trespassing” and “stealing”? These are the wrong metaphors. Lately I’ve been imagining what would have happened in the world of print if the CFAA’s metaphors had been our guide back when the printing press was invented.

If you picked up a printed free newspaper like Express, the Metro, or the Chicago Reader at a street corner and the CFAA applied to it, there would be a lengthy “Terms of Readership” printed on an inside page in very small type. Since these are advertising-supported publications, it would say that people who belong to undesirable demographics are trespassing on the printed page if they attempt to read it. After all, the newspaper makes no money from readers who are not part of a saleable advertising audience. In fact, since the printing presses are private property, unwanted readers are stealing valuable ink and newsprint that should be reserved for the paper’s intended readers. To cover all the bases, readers would be forbidden from writing anything based on what they read in the paper if the paper’s owners wouldn’t like it. And readers could be sued by the newspaper or prosecuted by the federal government if they did any of these things. The scenario sounds foolish and overblown, but it’s the way that Web sites work now under the CFAA.

Another major government argument has been that we researchers and journalists have nothing to be concerned about because prosecutors will use this law with the appropriate discretion. Any vagueness is OK because we can trust them. Concern by researchers and reporters is groundless.

Yet federal prosecutors have a terrible record when it comes to the CFAA. And the idea that online platforms want to silence research and journalism is not speculative. After our lawsuit was filed, the Streaming Heritage research team funded by the Swedish Research Council (similar to the US National Science Foundation) received shocking news: Spotify’s lawyers had contacted the Research Council and asked the council to take “resolute action” against the project, suggesting it had violated “applicable law.” Professors Snickars, Vonderau, and others were studying the Spotify platform. What “law” did Spotify claim was being violated? The site’s own Terms of Service. (Here’s a description of what happened. Note: It’s in Swedish.)

This demand occurred just after a member of the research team appeared in a news story that characterized Spotify in a way that Spotify apparently did not like. Luckily, Sweden does not have the CFAA, and terms of service there do not hold the force of law. The Research Council repudiated Spotify’s claim that research studying private platforms was unethical and illegal if it violated the terms of service. Researchers and journalists in other countries need the same protection.

More Information

The full text of the motions in the case is available on the ACLU Web site. In our most recent filing there is an excellent summary of the case and the issues, starting on p. 6. You do not need to read the earlier filings for this to make sense.

There was a burst of news coverage when our lawsuit was filed. Standout pieces include the New Yorker’s “How an Old Hacking Law Hampers the Fight Against Online Discrimination” and “When Should Hacking Be Legal?” in The Atlantic.

The ACLU’s Rachel Goodman has recently published a short summary of how to do research under the shadow of the CFAA. It is titled as a tipsheet for “Data Journalism” but it applies equally well to academic researchers. A longer version co-authored with Esha Bhandari is also available.

(Note that I filed this lawsuit as a private citizen and it does not involve my university.)

IMAGE CREDIT: AgnosticPreachersKid via Wikimedia Commons