Custodians

I’m thrilled to say that my new book, Custodians of the Internet, is now available for purchase from Yale University Press, and your favorite book retailer. Those of you who know me know that I’ve been working on this book for a long time, and have cared about the issues it addresses for a while now. So I’m particularly excited that it is now no longer mine, but yours if you want it. I hope it’ll be of some value to those of you who are interested in interrogating and transforming the information landscape in which we find ourselves.

By way of introduction, I thought I would explain the book’s title, particularly my choice of the word “custodians.” This title came unnervingly late in the writing process, and after many, many conversations with my extremely patient friend and colleague Dylan Mulvin. “Custodians of the Internet” captured, better than many, many alternatives, the aspirations of social media platforms, the position they find themselves in, and my notion for how they should move forward.

moderators are the web’s “custodians,” quietly cleaning up the mess: The book begins with a quote from one of my earliest interviews, with a member of YouTube’s content policy team. As they put it, “In the ideal world, I think that our job in terms of a moderating function would be really to be able to just turn the lights on and off and sweep the floors . . . but there are always the edge cases, that are gray.” The image invoked is a custodian in the janitorial sense, doing the simple, mundane, and uncontroversial task of sweeping the floors. In this turn of phrase, content moderation was offered up as simple maintenance: it is not imagined to be difficult to know what needs scrubbing, and the process is routine. There is labor involved, but it is largely invisible, just as actual janitorial staff are often instructed to “disappear,” working at night or with as little intrusion as possible. Yet even then, years before Gamergate or ISIS beheadings or white nationalists or fake news, it was clear that moderation is not so simple.

platforms have taken “custody” of the Internet: Content moderation at the major platforms matters because those platforms have achieved such prominence in the intervening years. As I was writing the book, one news item in 2015 stuck with me: in a survey on people’s new media use, more people said that they used Facebook than said they used the Internet. Facebook, which by then had become one of the most popular online destinations in the world and had expanded to the mobile environment, did not “seem” like the Internet anymore. Rather than being part of the Internet, it had somehow surpassed it. This was not true, of course; Facebook and the other major platforms had in fact woven themselves deeper into the Internet, by distributing cookies, offering secure login mechanisms for other sites and platforms, expanding advertising networks, collecting reams of user data from third-party sites, and even exploring Internet architecture projects. In both the perception of users and in material ways, Facebook and the major social media platforms have taken “custody” of the Internet. This should change our calculus as to whether platform moderation is or is not “censorship,” and the responsibilities platforms bear when they decide what to remove and who to exclude.

platforms should be better “custodians,” committed guardians of our struggles over value: In the book, I propose that these responsibilities have expanded. Users have become more acutely aware of both the harms they encounter on these platforms and the costs of being wronged by content moderation decisions. What’s more, social media platforms have become the place where a variety of speech coalitions do battle: activists, trolls, white nationalists, advertisers, abusers, even the President. And the implications of content moderation have expanded, from individual concerns to public ones. If a platform fails to moderate, everyone can be affected, even those who aren’t party to the circulation of the offensive, the fraudulent, or the hateful — even those who aren’t on social media at all.

What would it mean for platforms to play host not just to our content, but to our best intentions? The major platforms I discuss here have, for years, tried to position themselves as open and impartial conduits of information, defenders of their users’ right to speak, and legally shielded from any obligations for how they police their sites. As most platform managers see it, moderation should be theirs to do, conducted on their own terms, on our behalf, and behind the scenes. But that arrangement is crumbling, as critics begin to examine the responsibilities social media platforms have to the public they serve.

In the book, I propose that platforms become “custodians” of the public discourse they facilitate — not in the janitorial sense, but something more akin to legal guardianship. The custodian, given charge over a property, a company, a person, or a valuable resource, does not take it for their own or impose their will over it; they accept responsibility for ensuring that it is governed properly. This is akin to Jack Balkin’s suggestion that platforms act as “information fiduciaries,” with a greater obligation to protect our data. But I don’t just mean that platforms should be custodians of our content; platforms should be custodians of the deliberative process we all must engage in, that makes us a functioning public. Users need to be more accountable for making the hard decisions about what does and does not belong; platforms could facilitate that deliberation, and then faithfully enact the conclusions users reach. Safeguarding public discourse requires ensuring that it is governed by those to whom it belongs, making sure it survives, that its value is sustained in a fair and equitable way. Platforms could be not the police of our reckless chatter, but the trusted agents of our own interest in forming more democratic publics.

If you end up reading the book, you have my gratitude. And I’m eager to hear from anyone who has thoughts, comments, praise, criticism, and suggestions. You can find me on Twitter at @TarletonG.

Content moderation is not a panacea: Logan Paul, YouTube, and what we should expect from platforms

What do we expect of content moderation? And what do we expect of platforms?

There is an undeniable need, now more than ever, to reconsider the public responsibilities of social media platforms. For too long, platforms have enjoyed generous legal protections and an equally generous cultural allowance, to be “mere conduits” not liable for what users post to them. In the shadow of this protection, they have constructed baroque moderation mechanisms: flagging, review teams, crowdworkers, automatic detection tools, age barriers, suspensions, verification status, external consultants, blocking tools. They all engage in content moderation, but are not obligated to; they do it largely out of sight of public scrutiny, and are held to no official standards as to how they do so. This needs to change, and it is beginning to.

But in this crucial moment, one that affords such a clear opportunity to fundamentally reimagine how platforms work and what we can expect of them, we might want to get our stories straight about what those expectations should be.

The latest controversy involves Logan Paul, a twenty-two-year-old YouTube star with more than 15 million subscribers. His videos, a relentless barrage of boasts, pranks, and stunts, have garnered him legions of adoring fans. But he faced public backlash this week after posting a video in which he and his buddies ventured into the Aokigahara forest of Japan, only to find the body of a young man who had recently committed suicide. Rather than turning off the camera, Logan continued his antics, pinballing between awe and irreverence, showing the body up close and then turning the attention back to his own reaction. The video lingers on the body, including close-ups of his swollen hand, and Paul’s reactions were self-centered and cruel. After a blistering wave of criticism in the video comments and on Twitter, Paul removed the video and issued a written apology, which was itself criticized for not striking the right tone. A somewhat more heartfelt video apology followed. He later announced he would be taking a break from YouTube.

There is no question that Paul’s video was profoundly insensitive, an abject lapse in judgment. But amidst the reaction, I am struck by the press coverage of and commentary about the incident: the willingness to lump this controversy in with an array of other concerns about what’s online, as if all were somehow part of the “content moderation” problem, paired with a persistent and unjustified optimism about what content moderation should be able to handle.

YouTube has weathered a series of controversies over the course of the last year, many of which had to do with children, both their exploitation and their vulnerability as audiences. There was the controversy about popular vlogger PewDiePie, condemned for including anti-Semitic humor and Nazi imagery in his videos. Then there were the videos that slipped past the stricter standards YouTube has for its Kids app: amateur versions of cartoons featuring well-known characters with weirdly upsetting narrative third acts. That was quickly followed by the revelation of entire YouTube channels of videos in which children were being mistreated, frightened, and exploited, videos that seemed designed to skirt YouTube’s rules against violence and child exploitation. And just days later, Buzzfeed also reported that YouTube’s autocomplete displayed results that seemed to point to child sexual exploitation. YouTube representatives have apologized for all of these, promised to increase the number of moderators reviewing their videos, to pursue better artificial intelligence solutions more aggressively, and to remove advertising from some of the questionable channels.

Content moderation, and different kinds of responsibility

But what do these incidents have in common, besides the platform? Journalists and commentators are eager to lump them together: part of a single condemnation of YouTube, its failure to moderate effectively, and its complicity with the profits made by producers of salacious or reprehensible content. But these incidents represent different kinds of problems; they implicate YouTube and content moderation in different ways — and, when lumped together, they suggest a contradictory set of expectations we have for platforms and their public responsibility.

Platforms assert a set of normative standards, guidelines by which users are expected to comport themselves. It is difficult to convince every user to honor these standards, in part because the platforms have spent years promising users an open and unfettered playing field, inviting users to do or say whatever they want. And it is difficult to enforce these standards, in part because the platforms have few of the traditional mechanisms of governance: they can’t fire us; we are not salaried producers. All they have are the terms of service and the right to delete content and suspend users. And there are competing economic incentives for platforms to be more permissive than they claim to be, and to treat high-value producers differently than the rest.

Incidents like the exploitative videos of children, or the misleading amateur cartoons, take advantage of this system. They live amidst this enormous range of videos, some subset of which YouTube must remove. Some come from users who don’t know or care about the rules, or find what they’re making perfectly acceptable. Others are deliberately designed to slip past moderators, either by going unnoticed or by walking right up to but not across the community guidelines. They sometimes require hard decisions about speech, community, norms, and the right to intervene.

Logan Paul’s video, or PewDiePie’s racist outbursts, are of a different sort. As was clear in the news coverage and the public outrage, critics were troubled by Logan Paul’s failure to consider his responsibility to his audience, to show more dignity as a videomaker, to choose sensitivity over sensationalism. The fact that he has 15 million subscribers, many of them young, was reason for many to claim that he (and by implication, YouTube) has a greater responsibility. These sound more like traditional media concerns: the effects on audiences, the responsibilities of producers, the liability of providers. This could just as easily be a discussion about Ashton Kutcher and an episode of Punk’d. What would Kutcher’s, his production team’s, and MTV’s responsibility be if he had similarly crossed the line with one of his pranks?

But MTV was in a structurally different position than YouTube. We expect MTV to be accountable for a number of reasons: they had the opportunity to review the episode before broadcasting it; they employed Kutcher and his team, affording them specific power to impose standards; and they chose to hand him the megaphone in the first place. While YouTube also affords Logan Paul a way to reach millions, and he and YouTube share advertising revenue from popular videos, these offers are in principle made to all YouTube users. YouTube is a distribution platform, not a distribution bottleneck — or it is a bottleneck of a very different shape. This does not mean we cannot or should not hold YouTube accountable. We could decide as a society that we want YouTube to meet exactly the same responsibilities as MTV, or more. But we must take into account that these structural differences change not only what YouTube can do, but how and why we can expect it of them.

Moreover, is content moderation the right mechanism to manage this responsibility? Or to put it another way, what would the critics of Logan’s video have wanted YouTube to do? Some argued that YouTube should have removed the video, before Paul did. (It seems the video was reviewed, and was not removed, but Paul received a “strike” on his account, a kind of warning — we know this only based on this evidence. If you want to see the true range of disagreement about what YouTube should have done, just read down the lengthy thread of comments that followed this tweet.) In its PR response to the incident, a YouTube representative said it should have taken the video down, for being “shocking, sensational or disrespectful.” But it is not self-evident that Paul’s video violates YouTube’s policies. And judging from the critics’ comments, it was Paul’s blithe, self-absorbed commentary, the tenor he took about the suicide victim he found, as much as showing the body itself, that was so troubling. Showing the body, lingering on its details, was part of Paul’s casual indifference, but so were his thoughtless jokes and exaggerated reactions. Is it so certain that YouTube should have removed this video on our behalf? I do not mean to imply that the answer is no, or that it is yes. I’m only noting that this is not an easy case to adjudicate — which is precisely why we shouldn’t expect YouTube to already have a clean and settled policy towards it.

There’s no simple answer as to where such lines should be drawn. Every bright-line rule YouTube might draw will be plagued with “what abouts.” Is it that corpses should not be shown in a video? What about news footage from a battlefield? What about public funerals? Should the prohibition be specific to suicide victims, out of respect? It would be reasonable to argue that YouTube should allow a tasteful documentary about the Aokigahara forest, concerned with the high rates of suicide among Japanese men. Such a video might even, for educational or provocative reasons, include images of the body of a suicide victim, or evidence of their death. In fact, YouTube already has some, of a variety of qualities (see 1, 2, 3, 4).

So what we critics may be implying is that YouTube should be responsible for distinguishing the insensitive versions from the sensitive ones. Again, this sounds more like the kinds of expectations we had for television networks — which is fine if that’s what we want, but we should admit that this would be asking much more from YouTube than we might think.

As a society, we’ve already struggled with this very question, in traditional media: should the news show the coffins of U.S. soldiers as they’re returned from war? Should the news show the grisly details of crime scenes? When is the typically too-graphic video acceptable because it is newsworthy, educational, or historically relevant? Not only is the answer far from clear, it differs across cultures and periods. As a society, we need to engage in the debate; it cannot be answered for us by YouTube alone.

These moments of violation serve as the spark for that debate. It may be that all this condemnation of Logan Paul, in the comment threads on YouTube, on Twitter, and in the press coverage, is the closest we get to a real, public consideration of what’s appropriate for public consumption. And maybe the focus among critics on Paul’s irresponsibility, as opposed to YouTube’s, is indicative that this is not a moderation question, or of a growing public sense that we cannot rely on YouTube’s moderation, that we need to cultivate a clearer sensibility of what public culture should look like, and teach creators to take their public responsibility more seriously. (Though even if it is, there will always be a new wave of twenty-year-olds waiting in the wings, who will jump at the chance social media offers to show off for a crowd, long before they ever grapple with the social norms we may have worked out. This is why we need to keep having this debate.)

How exactly YouTube is complicit in the choices of its stars

This is not to suggest that platforms bear no responsibility for the content that they help circulate. Far from it. YouTube is implicated, in that it affords Logan the opportunity to broadcast his tasteless video, helps him gather millions of viewers who will have it instantly delivered to their feeds, designs and tunes the recommendation algorithms that amplify its circulation, and profits enormously from the advertising revenue it accrues.

Some critics are doing the important work of putting platforms under scrutiny, to better understand the way producers and platforms are intertwined. But it is awfully tempting to draw too simple a line between the phenomenon and the provider, to paint platforms with too broad a brush. The press loves villains, and YouTube is one right now. But we err when we draw these lines of complicity too cleanly. Yes, YouTube benefits financially from Logan Paul’s success. That by itself does not prove complicity; it needs to be a feature of our discussion about complicity. We might want revenue sharing to come with greater obligations on the part of the platform; or, we might want platforms to be shielded from liability or obligation no matter what the financial arrangement; or, we might want equal obligations whether there is revenue shared or not; or we might want obligations to attend to popularity rather than revenue. These are all possible structures of accountability.

It is also easy to say that YouTube drives vloggers like Logan Paul to be more and more outrageous. If video makers are rewarded based on the number of views, whether that reward is financial or just reputational, it stands to reason that some videomakers will look for ways to increase those numbers, including going bigger. But it is not clear that metrics of popularity necessarily or only lead to being ever more outrageous, and there’s nothing about this tactic that is unique to social media. Media scholars have long noted that being outrageous is one tactic producers use to cut through the clutter and grab viewers, whether it’s blaring newspaper headlines, trashy daytime talk shows, or sexualized pop star performances. That is hardly unique to YouTube. And YouTube videomakers are pursuing a number of strategies to seek popularity and the rewards therein, outrageousness being just one. Many more seem to depend on repetition, building a sense of community or following, interacting with individual subscribers, and the attempt to be first. While over-caffeinated pranksters like Logan Paul might try to one-up themselves and their fellow vloggers, that is not the primary tactic for unboxing vidders or Minecraft world builders or fashion advisers or lip syncers or television recappers or music remixers. Others see Paul as part of a “toxic YouTube prank culture” that migrated from Vine, which is another way to frame YouTube’s responsibility. But a genre may develop, and a provider profiting from it may look the other way or even encourage it; that does not answer the question of what responsibility they have for it, it only opens it.

To draw too straight a line between YouTube’s financial arrangements and Logan Paul’s increasingly outrageous shenanigans misunderstands both the economic pressures of media and the complexity of popular culture. It ignores the lessons of media sociology, which makes clear that the relationship between the pressures imposed by industry and the creative choices of producers is much more complex and dynamic. Nor does it prove that content moderation is the right way to address this complicity.

*   *   *

Let me say again: Paul’s video was in poor, poor taste, and he deserves all of the criticism he received. And I find this genre of boffo, entitled, show-off masculinity morally problematic and just plain tiresome. And while it may sound like I am defending YouTube, I am definitely not. Along with the other major social media platforms, YouTube has a greater responsibility for the content they circulate than they have thus far acknowledged; they have built a content moderation mechanism that is too reactive, too dismissive, and too opaque, and they are due for a public reckoning. In the last few years, the workings of content moderation and its fundamental limitations have come to light, and this is good news. Content moderation should be more transparent, and platforms should be more accountable, not only for what traverses their system, but also for the ways in which they are complicit in its production, circulation, and impact. But it also seems we are too eager to blame all things on content moderation, and to expect platforms to maintain a perfectly honed moral outlook every time we are troubled by something we find there. Acknowledging that YouTube is not a mere conduit does not imply that it is exclusively responsible for everything available there.

As Davey Alba at Buzzfeed argued, “YouTube, after a decade of being the pioneer of internet video, is at an inflection point as it struggles to control the vast stream of content flowing across its platform, balancing the need for moderation with an aversion toward censorship.” This is true. But we are also at an inflection point of our own. After a decade of embracing social media platforms as key venues for entertainment, news, and public exchange, and in light of our growing disappointment in their preponderance of harassment, hate, and obscenity, we too are struggling: to modulate exactly what we expect of them and why, to balance how to improve the public sphere with what role intermediaries can reasonably be asked to take.

This essay is cross-posted at Culture Digitally. Many thanks to Dylan Mulvin for helping me think this through.

The platform metaphor, revisited

This is cross-posted from the HIIG Science Blog, and is part of a series on metaphors and digital society hosted by Christian Katzenbach and Stefan Larsson. I recommend the other essays as well: Nik John on sharing, Noam Tirosh on revolution, and Christian Djeffal on artificial intelligence.

Sometimes a metaphor settles into everyday use so comfortably, it can be picked back up to extend its meaning away from what it now describes, a metaphor doing metaphorical service. Platform has certainly done that. When I first wrote about the term in 2010, social media companies like YouTube and Facebook were beginning to use the term to describe their web 2.0 services, to their users, to advertisers and investors, and to themselves. Now social media companies have embraced the term fully, and have extended it to services that broker the exchange not just of content or sociality but rides (Uber), apartments (AirBnB), and labor (Taskrabbit). The term so comfortably describes these services that critics and commentators can draw on the word, extending it for the purposes of argument. The past few years have witnessed a “platform revolution” (Parker, van Alstyne, and Choudary), the rise of “platform capitalism” (Srnicek) driven by “platform strategy” (Reillier and Reillier), with the possibility of “platform cooperativism” (Scholz), all part of “the platform society” (van Dijck, Poell, and DeWaal). These books need not even be referring to the same platforms (they all have their favorite examples, somewhat overlapping); their readers know what they’re referring to.

From programmability to opportunity

As platform first took root in the lexicon of social media, it was both leaning on and jettisoning a more specific computational meaning: a programmable infrastructure upon which other software can be built and run, like the operating systems in our computers and gaming consoles, or information services that provide APIs so developers can design additional layers of functionality. The new use shed the sense of programmability, instead drawing on older meanings of the word (which the computational definition itself had drawn on): an architecture from which to speak or act, like a train platform or a political stage. Now Twitter or Instagram could be a platform simply by providing an opportunity from which to speak, socialize, and participate.

At the time, some suggested that the term should be constrained to its computational meaning, but it’s too late: platform has been widely accepted in this new sense – by users, by the press, by regulators, and by the platform providers themselves. I argued then that the term was particularly useful because it helped social media companies appeal to several different stakeholders of interest to them. Calling themselves platforms promised users an open playing field for free and unencumbered participation, promised advertisers a wide space in which to link their products to popular content, and promised regulators that they were a fair and impartial conduit for user activity, not needing further regulation.

This is what metaphors do. They propose a way of understanding something in the terms of another; the analogy distorts the phenomenon being described, by highlighting those features most aligned with what it is being compared to. Platform lent social media services a particular form, highlighted certain features, naturalized certain presumed relations, and set expectations for their use, impact, and responsibility. Figuratively, a platform is flat, open, sturdy. In its connotations, a platform offers the opportunity to act, connect, or speak in ways that are powerful and effective: catching the train, drilling for oil, proclaiming one’s beliefs. And a platform lifts that person above everything else, gives them a vantage point from which to act powerfully, a raised place to stand.

What metaphors hide

Metaphors don’t only highlight; they also downplay aspects that are not captured by the metaphor. “A metaphorical concept can keep us from focusing on other aspects of the concept that are inconsistent with that metaphor.” (Lakoff and Johnson, 10) We might think of this as incidental or unavoidable, in that any comparison highlights some aspects and thereby leaves others aside. Or we could think of it as strategic, in that those deploying a metaphor have something to gain in the comparison it makes, presumably over other comparisons that might highlight different aspects.

By highlighting similarities – social media services are like platforms – metaphors can have a structural impact on the way we think about and act upon the world. At the same time, metaphor cannot be only about similarity – otherwise the ideal metaphor would be tautological, “X is like X.” Metaphor also depends on the difference between the two phenomena; the construction of similarity is powerful only if it bridges a significant semantic gap. Steven Johnson points out that “the crucial element in this formula is the difference that exists between ‘the thing’ and the ‘something else.’ What makes a metaphor powerful is the gap between the two poles of the equation.” (58-59) Phil Agre goes further, suggesting that “metaphors operate as a ‘medium of exchange’” (37) between distinct semantic fields, negotiating a tension between elements that are, at least in some ways, incompatible. This structural bridge constructed by metaphor depends on choosing aspects of comparison that will be salient and rendering others insignificant. The platform metaphor does a great deal of work, not only in what it emphasizes, but in what it hides:

  1. Platform downplays the fact that these services are not flat. Their central service is to organize, structure, and channel information, according both to arrangements established by the platform (news feed algorithms, featured partner arrangements, front pages, categories) and arrangements built by the user, though structured or measured by the platform (friend or follower networks, trending lists). Platforms are not flat, open spaces where people speak or exchange, they are intricate and multi-layered landscapes, with complex features above and dense warrens below. Information moves in and around them, shaped both by the contours provided by the platform and by the accretions of users and their activity – all of which can change at the whim of the designers. The metaphor of platform captures none of this, implying that all activity is equally and meritocratically available, visible, public, and potentially viral. It does not prepare us, for example, for the ability of trolls to organize in private spaces and then swoop together as a brigade to harass users in a coordinated way, in places where the suddenness and publicness of the attack is a further form of harm.
  2. The platform metaphor also obscures the fact that platforms are populated by many, diverse, sometimes overlapping, and sometimes contentious communities. It is absurd to talk about Facebook users, as if two billion people can be a single group of anything; talk about the Twitter community only papers over the tension and conflict that has been fundamental, and sometimes destructive to how Twitter is actually used. As Jessa Lingel argues, social media platforms are in fact full of communities that turn to social media for specific purposes, often with ambivalent or competing needs around visibility, pseudonymity, and collectivity; then they struggle with how the platforms actually work and their sometimes ill fit with the aims of that community. When we think not of ‘Facebook users’ but a group of Brooklyn drag queens, the relationship between users and platform is not an abstract one of opportunity, but a contentious one about identity and purpose.
  3. Platform also helps elide questions about platforms’ responsibility for their public footprint. Train platforms are not responsible for the passengers. Like other metaphors – conduit, media, network – platform suggests an impartial between-ness that policymakers in the U.S. are eager to preserve – unlike in Europe, where there is more political will to push responsibility onto platforms, though in a variety of untested ways. When, as Napoli and Caplan point out, Facebook refuses to call itself a media company, they are disavowing the kind of public and policy expectations imposed on media. They’re merely a platform. In the meantime, the major platforms have each built up a complex apparatus of content moderation and user governance to enforce their own guidelines, yet these interventions are opaque and overlooked.
  4. Finally, platform hides all of the labor necessary to produce and maintain these services. The audience is not supposed to see the director or the set decorators or the stagehands, only the actors in the spotlight. Underneath a platform is an empty, dusty space – it’s just there. Social media platforms are in fact the product of an immense amount of human labor, whether it be designing the algorithms or policing away prohibited content. When we do get a glimpse of the work and the workers involved, it is culturally unexpected and contentious: the revelation, for example, that Facebook’s Trending Topics might have been curated by a team of journalism school grads, working like machines. (1, 2) What if they make mistakes? What if they are politically biased? How are humans involved, and why does that matter? Platform discourages us from asking these questions, by leaving the labor out of the picture.

We need not discard the term just to swap in another metaphor in its place. It is not as if it’s impossible to think about these obscured aspects of platforms; the metaphor can downplay them, but cannot erase them. But we have to either struggle upstream against the discursive power of the term, or playfully subvert it. A platform may hide the labor it requires, but in a different framework it could be asked to shelter that labor, protect it. If a platform lifts up its users, then there may be some manner of responsibility for lifting some people up over others. We might also play with other metaphors: are platforms also shopping malls, or bazaars? amusement parks, or vending machines? nests, or hives? pyramids, or human pyramids? But mostly, we can scrutinize the metaphor in order to identify what it fails to highlight, how that may serve the interest of the metaphor’s practitioners, and what design interventions and obligations might best attend to these gaps and obscurities. And, as Kuhn notes about scientific paradigms, any frame of understanding works to coalesce the phenomenon by leaving off aspects that do not fit – and these discarded aspects can return to challenge that frame, and sometimes tear it down. Platforms downplay these aspects at their own peril.

Introducing our SMC interns for summer 2017!

We get the sharpest, most impressive crop of applicants for our Social Media Collective internship; it is no easy task to turn away so many extremely promising PhD students. But it is a pleasure to introduce those we did select. (Keep in mind that we offer these internships every summer; if you will be an advanced graduate student in our field in the summer of 2018, keep an eye on this blog, or on updates to this page, for the next deadline.) For 2017, we are proud to have the following young scholars joining us:

At Microsoft Research New England

  Ysabel Gerrard is a PhD Candidate in the School of Media and Communication, University of Leeds. Her doctoral thesis examines teen drama fans’ negotiations of their (guilty) pleasures in an age of social media. In addition to her research and teaching, Ysabel is the Young Scholars’ Representative for ECREA’s Digital Culture and Communication section, and is currently co-organising the Data Power Conference 2017 (along with two others). She has published in the Journal of Communication Inquiry and has presented her work at numerous international conferences, such as ECREA (European Communication Research and Education Association) and Console-ing Passions. Ysabel will be investigating Instagram and Tumblr’s responses to public discourses about eating disorders.

 

Elena Maris is a PhD Candidate at the Annenberg School for Communication at the University of Pennsylvania. Her research examines the ways media industries and audiences work to influence one another, with a focus on technological strategies and the roles of gender and sexuality. She also studies the ways identity is represented and experienced in popular culture, often writing about race, gender and sexuality in television, fandom and Hip-Hop. Her work has been published in Critical Studies in Media Communication and the European Journal of Cultural Studies.

 

At Microsoft Research New York City:

Aaron Shapiro is a PhD candidate at the University of Pennsylvania’s Annenberg School for Communication. He also holds an M.A. in Anthropology and a Graduate Certificate in Urban Studies. Aaron previously worked as a field researcher and supervisor at NO/AIDS Task Force in New Orleans, conducting social research with communities at high risk for HIV. His current research addresses the cultural politics of urban data infrastructures, focusing on issues of surveillance and control, labor subjectivities, and design imaginaries. His work has been published in Nature, Space & Culture, Media, Culture & Society, and New Media & Society. He will be working on a study about bias in machine learning.

Reminder, the application deadline for 2017 SMC internships is fast approaching…

Just a reminder, January 1 is the deadline for applications for the summer 2017 internship program with the Social Media Collective, at Microsoft Research New England. All the information you need, about the internship, the necessary qualifications, and how to apply, can be found here. During their twelve-week stay, SMC interns devise and execute their own research project, distinct from the focus of their dissertation. The expected outcome is a draft of a publishable scholarly paper for an academic journal or conference. Our goal is to help interns advance their own careers.

The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdocs Dan Greene and Dylan Mulvin) bring together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Primary mentors for this year will be Nancy Baym and Tarleton Gillespie, with additional guidance offered by other members of SMC.

 

The accountability of social media platforms, in the age of Trump

Pundits and commentators are just starting to pick through the rubble of this election and piece together what happened and what it means. In such cases, it is often easier to grab hold of one explanation — Twitter! racism! Brexit! James Comey! — and use it as a clothesline to hang the election on and shake it into some semblance of sense. But as scholars, we do a disservice to allow for simple or single explanations. “Perfect storm” has become a cliche, but I can see a set of elements that all had to be true, and that came together, to produce the election we just witnessed: globalization, economic precarity, and fundamentalist reactionary responses; the rise of the conservative right and its targeted tactics, especially against the Clintons; backlashes to multiculturalism, diversity, and the election of President Obama; the undoing of the workings and cultural authority of journalism; the alt-right and the undercurrents of social media; the residual fear and anxiety in America after 9/11. It is all of these things, and they were all already connected, before candidate Trump emerged.

Yet at the same time, my expertise does not stretch across all of these areas. I have to admit that I have trained myself right down to a fine point: social media, public discourse, technology, control, law. I have that hammer, and can only hit those nails. If I find myself being particularly concerned about social media and harassment, or want to draw links between Trump’s dog whistle politics, Steve Bannon and Breitbart, the tactics of the alt-right, and the failings of Twitter to consider the space of discourse it has made possible, I risk making it seem like I think there’s one explanation, that technology produces social problems. I do not mean this. In the end, I have to have faith that, as I try to step up and say something useful about this one aspect, some other scholar is similarly stepping up and saying something about fundamentalist reactions to globalization, and someone else is stepping up to speak about the divisiveness of the conservative movement.

The book I’m working on now, nearing completion, is about social media platforms and the way they have (and have not) stepped into the role of arbiters of public discourse. The focus is on the platforms, their ambivalent combination of neutrality and intervention, the actual ways in which they go about policing offensive content and behavior, and the implications those tactics and arrangements have for how we think about the private curation of public discourse. But the book is framed in terms of the rise and now, for lack of a better word, adolescence of social media platforms, and how the initial optimism and enthusiasm that fueled the rise of the web — optimism that overshadowed the darker aspects already emergent there, and that spurred the rise of the first social media platforms — seems to have given way to a set of concerns about how social media platforms work and how they are used — sometimes against people, and towards very different ends than were originally imagined. Those platforms did not at first imagine, and have not thoroughly thought through, how they now support (among many other things) a targeted project of racial animosity and a cold gamesmanship about public engagement. In the context of the election, my new goal is to boost that part of the argument, to highlight the opportunities that social media platforms offer to forms of public discourse that are not only harassing, racist, or criminal, but that can also take advantage of the dynamics of social media to create affirming circles of misinformation, to sip the poison of partisanship, to spur leaderless movements ripe for demagoguery — and how the social media platforms that now host this discourse have embraced a woefully insufficient sense of accountability, and must rethink how they have become mechanisms of social and political discourse, good and ill.

This specific project is too late in the game for a radical shift. But as I think beyond it, I feel an imperative to be sure that my choices of research topics are driven more by cultural and political imperative than merely my own curiosity. Or, ideally, the perfect meeting point of the two. It seems like the logical outcome of my interest in platforms and content moderation is to shift how we think of platforms: not as mere intermediaries between speakers (if they ever were, they are no longer) but as constitutive of public discourse. If we understand them as constituting discourse — by the choreography they install in their design, the moderation they conduct as a form of policy, and the algorithmic selection of which raw material becomes “my feed” — then we expand their sense of responsibility. Moreover, we might ask what it would mean to hold them accountable for making the political arena we want, we need. These questions will only grow in importance and complexity as these information systems depend more and more on algorithmic, machine learning, and other automated techniques; more regularly include bots that are difficult to discern from the human participants; and continue to extend their global reach for new consumers, extending into and entangling with the very shifts of globalization and tribalization we will continue to grapple with.

These comments were part of a longer post at Culture Digitally that I helped organize, in which a dozen scholars of media and information reflected on the election and the future directions of their own work, and our field, in light of the political realities we woke up to Wednesday morning. My specific scholarly community cannot address every issue that’s likely on the horizon, but our work does touch a surprising number of them. The kinds of questions that motivate our scholarship — from fairness and equity, to labor and precarity, to harassment and misogyny, to globalism and fear, to systems and control, to journalism and ignorance — all of these seem so much more pressing today than they even did yesterday.

Call for applications! MSR Social Media Collective PhD interns, for summer 2017

APPLICATION DEADLINE: JANUARY 1, 2017

Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week Internship program. The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdocs Dan Greene and Dylan Mulvin) bring together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Learn more about us here.

MSRNE internships are 12-week paid stays in our lab in Cambridge, Massachusetts. During their stay, SMC interns are expected to devise and execute their own research project, distinct from the focus of their dissertation (see the project requirements below). The expected outcome is a draft of a publishable scholarly paper for an academic journal or conference of the intern’s choosing. Our goal is to help the intern advance their own career; interns are strongly encouraged to work towards a creative outcome that will help them on the academic job market.

The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

Primary mentors for this year will be Nancy Baym and Tarleton Gillespie, with additional guidance offered by other members of SMC. We are looking for applicants working in one or more of the following areas:

  • Personal relationships and digital media
  • Audiences and the shifting landscapes of producer/consumer relations
  • Affective, immaterial, and other frameworks for understanding digital labor
  • How platforms, through their design and policies, shape public discourse
  • The politics of algorithms, metrics, and big data for a computational culture
  • The interactional dynamics, cultural understanding, or public impact of AI chatbots or intelligent agents

Interns are also expected to give short presentations on their project, contribute to the SMC blog, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. There are also natural opportunities for collaboration with SMC researchers and visitors, and with others currently working at MSRNE, including computer scientists, economists, and mathematicians. PhD interns are expected to be on-site for the duration of their internship.

Applicants must have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.

 

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective is comprised of full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

  • How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
  • How are social media platforms, through their algorithmic design and user policies, taking up the role of intermediaries for public discourse? (Tarleton Gillespie)
  • What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
  • How do public institutions like schools and libraries prepare workers for the information economy, and how are they changed in the process? (Dan Greene)
  • How are media standards made, and what do their histories tell us about the kinds of things we can represent? (Dylan Mulvin)

SMC PhD interns may also have the opportunity to connect with our sister Social Media Collective members in New York City. Related projects in New York City include:

  • What are the politics, ethics, and policy implications of artificial intelligence and data science? (Kate Crawford, MSR-NYC)
  • What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

For more information about the Social Media Collective, and a list of past interns, visit the About page of our blog. For a complete list of all permanent researchers and current postdocs based at the New England lab, see: http://research.microsoft.com/en-us/labs/newengland/people/bios.aspx

 

APPLICATION PROCESS

To apply for a PhD internship with the Social Media Collective, fill out the online application form: https://careers.research.microsoft.com/

On the application website, please indicate that your research area of interest is “Anthropology, Communication, Media Studies, and Sociology” and that your location preference is “New England, MA, U.S.” in the pull down menus. Also enter the name of a mentor (Nancy Baym or Tarleton Gillespie) whose work most directly relates to your own in the “Microsoft Research Contact” field. IF YOU DO NOT MARK THESE PREFERENCES WE WILL NOT RECEIVE YOUR APPLICATION. So, please, make sure to follow these detailed instructions.

Your application needs to include:

  1. A short description (no more than 2 pages, single spaced) of 1 or 2 projects that you propose to do while interning at MSRNE, independently and/or in collaboration with current SMC researchers. The project proposals can be related to, but must be distinct from your dissertation research. Be specific and tell us:
    • What is the research question animating your proposed project?
    • What methods would you use to address your question?
    • How does your research question speak to the interests of the SMC?
    • Who do you hope to reach (who are you engaging) with this proposed research?
  2. A brief description of your dissertation project.
  3. An academic article-length manuscript (~7,000 words or more) that you have authored or co-authored (published or unpublished) that demonstrates your writing skills.
  4. A copy of your CV.
  5. The names and contact information for 3 references (one must be your dissertation advisor).
  6. A pointer to your website or other online presence (if available; this is not required).

A request for letters will be sent directly to your list of referees, on your behalf. IMPORTANT: THE APPLICATION SYSTEM WILL NOT REQUEST THOSE REFERENCE LETTERS UNTIL AFTER YOU HAVE SUBMITTED YOUR APPLICATION! Please warn your letter writers in advance so that they will be ready to submit them when they receive the prompt. The email they receive will automatically tell them they have two weeks to respond. Please ensure that they expect this email (tell them to check their spam folders, too!) and are prepared to submit your letter by our application deadline.  You can check the progress on individual reference requests at any time by clicking the status tab within your application page. Note that a complete application must include three submitted letters of reference.

If you have any questions about the application process, please contact Tarleton Gillespie at tarleton@microsoft.com and include “SMC PhD Internship” in the subject line.

 

TIMELINE

Due to the volume of applications, late submissions (including submissions with late letters of reference) will not be considered. We will not be able to provide specific feedback on individual applications. Finalists will be contacted in January to arrange a Skype interview. Applicants chosen for the internship will be informed in February and announced on the socialmediacollective.org blog.

 


 

PREVIOUS INTERN TESTIMONIALS

“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.”
— Laura Noren, Sociology, New York University

“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.”
— Jessica Lingel, Library and Information Science, Rutgers University

“My internship experience at MSRNE was eye-opening, mind-expanding and happy-making. If you are looking to level up as a scholar – reach new depth in your focus area, while broadening your scope in directions you would never dream up on your own; and you’d like to do that with the brightest, most inspiring and supportive group of scholars and humans – then you definitely want to apply.”
— Kat Tiidenberg, Sociology, Tallinn University, Estonia

“The Microsoft Internship is a life-changing experience. The program offers structure and space for emerging scholars to find their own voice while also engaging in interdisciplinary conversations. For social scientists especially the exposure to various forms of thinking, measuring, and problem-solving is unparalleled. I continue to call on the relationships I made at MSRE and always make space to talk to a former or current intern. Those kinds of relationships have a long tail.”
— Tressie McMillan Cottom, Sociology, Emory University

“My summer at MSR New England has been an important part of my development as a researcher. Coming right after the exhausting, enriching ordeal of general/qualifying exams, it was exactly what I needed to step back, plunge my hands into a research project, and set the stage for my dissertation… PhD interns are given substantial intellectual freedom to pursue the questions they care about. As a consequence, the onus is mostly on the intern to develop their research project, justify it to their mentors, and do the work. My mentors asked me good, supportive, and often helpfully hard, critical questions, but my relationship with them was not the relationship of an RA to a PI — instead it was the relationship of a junior colleague to senior ones.”
— J. Nathan Matias, Media Lab, MIT (read more here)

“This internship provided me with the opportunity to challenge myself beyond what I thought was possible within three months. With the SMC’s guidance, support, and encouragement, I was able to reflect deeply about my work while also exploring broader research possibilities by learning about the SMC’s diverse projects and exchanging ideas with visiting scholars. This experience will shape my research career and, indeed, my life for years to come.”
— Stefanie Duguay, Communication, Queensland University of Technology

“There are four main reasons why I consider the summer I spent as an intern with the Social Media Collective to be a formative experience in my career. 1. was the opportunity to work one-on-one with the senior scholars on my own project, and the chance to see “behind the scenes” on how they approach their own work. 2. The environment created by the SMC is one of openness and kindness, where scholars encourage and help each other do their best work. 3. hearing from the interdisciplinary members of the larger MSR community, and presenting work to them, required learning how to engage people in other fields. And finally, 4. the lasting effect: Between senior scholars and fellow interns, you become a part of a community of researchers and create friendships that extend well beyond the period of your internship.”
— Stacy Blasiola, Communication, University of Illinois Chicago

“My internship with Microsoft Research was a crash course in what a thriving academic career looks like. The weekly meetings with the research group provided structure and accountability, the stream of interdisciplinary lectures sparked intellectual stimulation, and the social activities built community. I forged relationships with peers and mentors that I would never have met in my graduate training.”
— Kate Zyskowski, Anthropology, University of Washington

“It has been an extraordinary experience for me to be an intern at the Social Media Collective. Coming from a computer science background, communicating and collaborating with so many renowned social science and media scholars has taught me, as a researcher and designer of socio-technical systems, to always think of these systems in their cultural, political, and economic context and to consider the ethical and policy challenges they raise. Being surrounded by these smart, open, and insightful people, who are always willing to talk through problems with me when I hit them in my project, offer unique perspectives, and share the excitement when I get promising results, is simply fascinating. And being able to conduct mixed-method research that combines qualitative insights with quantitative methodology makes the internship just the kind of research experience I had dreamed of.”
— Ming Yin, Computer Science, Harvard University

“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.”
— Germaine Halegoua, Communication Arts, University of Wisconsin-Madison

“Not only was I able to work with so many smart people, but the thoughtfulness and care they took when they engaged with my research can’t be stressed enough. The ability to truly listen to someone is so important. You have these researchers doing multiple, fascinating projects, but they still make time to help out interns in whatever way they can. I always felt I had everyone’s attention when I spoke about my project or other issues I had, and everyone was always willing to discuss any questions I had, or even if I just wanted clarification on a comment someone had made at an earlier point. Another favorite aspect of mine was learning about other interns’ projects and connecting with people outside my discipline.”
– Jolie Matthews, Education, Stanford University

FREQUENTLY ASKED QUESTIONS
How much is the salary/stipend? How is it disbursed?
The exact amount changes year to year and depends on a student’s degree status and any past internships with MSR, but it’s somewhere above $2,000/month (after taxes). Interns are paid every 2 weeks. Be aware that the first paycheck doesn’t arrive until about week 3 or 4 (it takes a while for the paperwork to process), so you’d need to make sure you have resources to cover your transition to Cambridge, MA.
Is housing included? Is there assistance finding housing?
The internship comes with funds for travel to/from the area, a small relocation budget, and either a housing stipend or assigned housing.
Are other living expenses included, such as healthcare?
Commuting is covered through either a voucher to get a bike, parking at the building, or a commuter pass. Healthcare is *not* provided, though there is a (pricey) policy that students can purchase while here. The assumption is that interns are covered by their home institution’s healthcare policies, as you would be if you were on summer break.
Are there any provisions for dependents traveling with the intern?
There are, but they can change, so feel free to ask about the specifics that pertain to you. Dependents can be covered with housing (i.e. interns with families receive housing assignments that accommodate their children and partners). Interns with families have definitely been able to make the visit work.
Please note: This internship is *intense* – even with the pretty good pay and the sweet view, it’s not worth applying unless you’re ready to work as hard as (or harder than) you have in any grad seminar before.

Algorithms, clickworkers, and the befuddled fury around Facebook Trends

The controversy about the human curators behind Facebook Trends has grown since the allegations made last week by Gizmodo. Besides being a major headache for Facebook, it has helped prod a growing discussion about the power of Facebook to shape the information we see and what we take to be most important. But we continue to struggle to find the right words to describe what algorithmic systems are, who generates them, and what they should do for users and for the public. We have to get this clear.

Here’s the case so far: Gizmodo says that Facebook hired human curators to decide which of the topics identified by algorithms would be listed as trending, and how they should be named and summarized; one former curator alleged that his fellow curators often overlooked or suppressed conservative topics. This came close on the heels of a report a few weeks back that Facebook employees had asked internally if the company had a responsibility to slow Donald Trump’s momentum. Angry critics have noted that Zuckerberg, Search VP Tom Stocky, and other FB execs are liberals. Facebook has vigorously disputed the allegation: saying that they have guidelines in place to ensure consistency and neutrality, asserting that there’s no evidence it happened, distributing their guidelines for how Trending topics are selected and summarized after they were leaked, inviting conservative leaders in for a discussion, and pointing out their conservative bona fides. The Senate’s Commerce Committee, chaired by Republican Senator John Thune, issued a letter demanding answers from Facebook about it. Some wonder if the charges may have been overstated. Other Facebook news curators have spoken up, some to downplay the allegations and defend the process that was in place, others to highlight the sexist and toxic work environment they endured.

Commentators have used the controversy to express a range of broader concerns about Facebook’s power and prominence. Some argue it is unprecedented: “When a digital media network has one billion people connected to entertainment companies, news publications, brands, and each other, the right historical analogy isn’t television, or telephones, or radio, or newspapers. The right historical analogy doesn’t exist.” Others have made the case that Facebook is now as powerful as the media corporations, which have been regulated for their influence; that their power over news organizations and how they publish is growing; that they could potentially and knowingly engage in political manipulation; that they are not transparent about their choices; that they have become an information monopoly.

This is an important public reckoning about Facebook, and about social media platforms more generally, and it should continue. We clearly don’t yet have the language to capture the kind of power we think Facebook now holds. But it would be great if, along the way, we could finally mothball some foundational and deeply misleading assumptions about Facebook and social media platforms, assumptions that have clouded our understanding of their role and responsibility. Starting with the big one:

Algorithms are not neutral. Algorithms do not function apart from people.

 

We prefer the idea that algorithms run on their own, free of the messy bias, subjectivity, and political aims of people. It’s a seductive and persistent myth, one Facebook has enjoyed and propagated. But it’s simply false.

I’ve already commented on this, and many of those who study the social implications of information technology have made this point abundantly clear (including Pasquale, Crawford, Ananny, Tufekci, boyd, Seaver, McKelvey, Sandvig, Bucher, and nearly every essay on this list). But it persists: in statements made by Facebook, in the explanations offered by journalists, even in the words of Facebook’s critics.

If you still think algorithms are neutral because they’re not people, here’s a list, not even an exhaustive one, of the human decisions that have to be made to produce something like Facebook’s Trending Topics (which, keep in mind, pales in scope and importance next to Facebook’s larger algorithmic endeavor, the “news feed” listing your friends’ activity). Some are made by the engineers designing the algorithm, others by the curators who turn the output of the algorithm into something presentable. If your eyes start to glaze over, that’s the point; read any three points and then move on, they’re enough to dispel the myth. A short sketch after the list shows how choices like these end up as literal lines of code. Ready?

(determining what activity might potentially be seen as a trend)
– what data should be counted in this initial calculation of what’s being talked about (all Facebook users, or subset? English language only? private posts too, or just public ones?)
– what time frame should be used in this calculation — both for the amount of activity happening “now” (one minute, one hour, one day?) and to get a baseline measure of what’s typical (a week ago? a different day at the same time, or a different time on the same day? one point of comparison or several?)
– should Facebook emphasize novelty? longevity? recurrence? (e.g., if it has trended before, should it be easier or harder for it to trend again?)
– how much of a drop in activity is sufficient for a trending topic to die out?
– which posts actually represent a single topic (e.g., when do two hashtags refer to the same topic?)
– what other signals should be taken into account? what do they mean? (should Facebook measure posts only, or take into account likes? how heavily should they be weighed?)
– should certain contributors enjoy some privileged position in the count? (corporate partners, advertisers, high-value users? pay-for-play?)

(from all possible trends, choosing which should be displayed)
– should some topics be dropped, like obscenity or hate speech?
– if so, who decides what counts as obscene or hateful enough to leave off?
– what topics should be left off because they’re too generic? (Facebook noted that it didn’t include “junk topics” that do not correlate to a real world event. What counts as junk, case by case?)

(designing how trends are displayed to the users)
– who should do this work? what expertise should they have? who hires them?
– how should a trend be presented? (word? title? summary?)
– what should clicking on a trend headline lead to? (some form of activity on Facebook? some collection of relevant posts? an article off the platform, and if so, which one?)
– should trends be presented in a single list, or broken into categories? if so, can the same topic appear in more than one category?
– what are the boundaries of those categories (i.e. what is or isn’t “politics”?)
– should trends be grouped regionally or not? if so, what are the boundaries of each region?
– should trends lists be personalized, or not? If so, what criteria about the user are used to make that decision?

(what to do if the list is deemed to be broken or problematic in particular ways)
– who looks at this project to assess how it’s doing? how often, and with what power to change it?
– what counts as the list being broken, or off the mark, or failing to meet the needs of users or of Facebook?
– what is the list being judged against, to know when it’s off (as tested against other measures of Facebook activity? as compared to Twitter? to major news sites?)
– should they re-balance a Trends list that appears unbalanced, or leave it? (e.g. what if all the items in the list at this moment are all sports, or all celebrity scandal, or all sound “liberal”?)
– should they inject topics that aren’t trending, but seem timely and important?
– if so, according to what criteria? (news organizations? which ones? how many? US v. international? partisan vs not? online vs off?)
– should topics about Facebook itself be included?
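
To make this concrete, here is a minimal, entirely hypothetical sketch of how a handful of those choices might surface as ordinary code. Nothing below describes Facebook’s actual system; the function, the parameters, and the thresholds are invented for illustration, but each constant corresponds to a question from the list that a person had to answer.

```python
# A toy illustration (not Facebook's actual code) of design choices as parameters.
from collections import Counter

# These constants answer questions from the list above; none of them is "neutral."
WINDOW_MINUTES = 60      # what counts as "now" (governs how recent_posts is built)
BASELINE_DAYS_AGO = 7    # what counts as "typical" (governs baseline_posts)
MIN_SURGE_RATIO = 3.0    # how much of a spike is enough to trend?
LIKE_WEIGHT = 0.5        # do likes count, and how much relative to posts?
JUNK_TOPICS = {"lunch"}  # someone decides what counts as a "junk topic"

def trending_topics(recent_posts: Counter, baseline_posts: Counter,
                    recent_likes: Counter) -> list:
    """Rank topics by how sharply activity has risen over the baseline."""
    scores = {}
    for topic, now_count in recent_posts.items():
        if topic in JUNK_TOPICS:                  # a human-made exclusion
            continue
        weighted_now = now_count + LIKE_WEIGHT * recent_likes.get(topic, 0)
        baseline = max(baseline_posts.get(topic, 0), 1)
        ratio = weighted_now / baseline
        if ratio >= MIN_SURGE_RATIO:              # a human-picked threshold
            scores[topic] = ratio
    # Even the ranking rule is a choice: surge ratio here, rather than raw volume.
    return sorted(scores, key=scores.get, reverse=True)
```

Change any constant and a different set of topics “trends.” None of these values is discovered by the algorithm; each is chosen by someone.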

These are all human choices. Sometimes they’re made in the design of the algorithm, sometimes around it. The result we see, a changing list of topics, is not the output of “an algorithm” by itself, but rather of an effort that combines human activity and computational analysis to produce it.

So algorithms are in fact full of people and the decisions they make. When we let ourselves believe that they’re not, we let everyone — Zuckerberg, his software engineers, regulators, and the rest of us — off the hook for actually thinking through how they should work, leaving us all unprepared when they end up in the tall grass of public contention. “Any algorithm that has to make choices has criteria that are specified by its designers. And those criteria are expressions of human values. Engineers may think they are “neutral”, but long experience has shown us they are babes in the woods of politics, economics and ideology.” Calls for more public accountability, like this one from my colleague danah boyd, can only proceed once we completely jettison the idea that algorithms are neutral — and replace it with a different language that can assess the work that people and systems do together.

The problem is not algorithms, it’s that Facebook is trying to clickwork the news.

 

It is certainly in Facebook’s interest to obscure all the people involved, so users can keep believing that a computer program is fairly and faithfully hard at work. Dismantling this myth raises the kind of hard questions Facebook is fielding. But once we jettison this myth, what’s left? It’s easy to despair: with so many human decisions involved, how could we ever get a fair and impartial measure of what matters? And forget the handful of people who designed the algorithm and the handful of people who select and summarize from it: Trends are themselves a measure of the activity of Facebook users. These trending topics aren’t produced by dozens of people but by millions. Their judgment of what’s worth talking about, in each case and in the aggregate, may be so distressingly incomplete, biased, skewed, and vulnerable to manipulation that it’s absurd to pretend it can tell us anything at all.

But political bias doesn’t come from the mere presence of people. It comes from how those people are organized to do what they’re asked to do. Along with our assumption that algorithms are neutral is a matching and equally misleading assumption that people are always and irretrievably biased. But human endeavors are organized affairs, and they can be organized to work against bias. Journalism is full of people too, making decisions that are just as opaque, limited, and self-interested. What we hope keeps journalism from slipping into bias and error is its well-established professional norms and thoughtful oversight.

The real problem here is not the liberal leanings of Facebook’s news curators. If conservative news topics were overlooked, it’s only a symptom of the underlying problem. Facebook wanted to take surges of activity that its algorithms could identify and turn them into news-like headlines. But it treated this as an information processing problem, not an editorial one. They’re “clickworking” the news.

Clickwork begins with the recognition that computers are good at some kinds of tasks, and humans at others. The answer, it suggests, is to break the task at hand into components and parcel them out to each accordingly. For Facebook’s trending topics, the algorithm is good at scanning an immense amount of data and identifying surges of activity, but not at giving those surges a name and a coherent description. That is handled by people — in industry parlance, this is the “human computation” part. The identified surges of activity are delivered to a team of curators, each one tasked with following a set of procedures to identify and summarize them. The work is segmented into simple and repetitive tasks, and governed by a set of procedures such that, even though different people are doing it, their output will look the same. In effect, the humans are given tasks that only humans can do, but they are not invited to do them in a human way: they are “programmed” by the modularized work flow and the detailed procedures so that they do the work like computers would. As Lilly Irani put it, clickwork “reorganizes digital workers to fit them both materially and symbolically within existing cultures of new media work.”
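
Schematically, the clickwork division of labor looks something like the sketch below. This is an illustration of the pattern, not Facebook’s actual pipeline; every name, threshold, and instruction in it is invented.

```python
# A schematic sketch of clickwork (invented, not Facebook's workflow):
# the machine detects surges, then emits rigid, templated micro-tasks for people.
from dataclasses import dataclass

@dataclass
class CurationTask:
    raw_topic: str
    instructions: str      # the "programming" of the human worker

def detect_surges(activity: dict, baseline: dict, threshold: float = 3.0) -> list:
    """Machine side: flag topics whose activity exceeds their baseline."""
    return [t for t, n in activity.items()
            if n / max(baseline.get(t, 0), 1) >= threshold]

def make_tasks(surging_topics: list) -> list:
    """Human side: each surge becomes a templated task, identical in form."""
    template = ("Name the topic in up style. Do not copy another outlet's "
                "headline. Avoid spoilers. Select the best-fitting keyword.")
    return [CurationTask(raw_topic=t, instructions=template) for t in surging_topics]

# One invented surge flows from detection to a queue of identical-looking tasks.
tasks = make_tasks(detect_surges({"#example": 120}, {"#example": 10}))
```

The point of the pattern is visible even in this toy version: the human judgment is real, but it is boxed into a template so that its output looks as though a machine produced it.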

This is apparent in the guidelines that Facebook gives to their Trends curators. The documents, leaked to The Guardian and then released by Facebook, did not reveal some bombshell about political manipulation, nor did they do much to demonstrate careful guidance on the part of Facebook around the issue of political bias. What’s most striking is that they are mind-numbingly banal: “Write the description in up style, capitalizing the first letter of all major words…” “Do not copy another outlet’s headline…” “Avoid all spoilers for descriptions of scripted shows…” “After identifying the correct angle for a topic, click into the dropdown menu underneath the Unique Keyword field and select the Unique Keyword that best fits the topic…” “Mark a topic as ‘National Story’ importance if it is among the 1-3 top stories of the day. We measure this by checking if it is leading at least 5 of the following 10 news websites…”  “Sports games: rewrite the topic name to include both teams…” This is not the newsroom; it’s the secretarial pool.

Moreover, these workers were kept separate from the rest of the full-time employees and worked under quotas for how many trends to identify and summarize, quotas that were increased as the project went on. As one curator noted, “The team prioritizes scale over editorial quality and forces contractors to work under stressful conditions of meeting aggressive numbers coupled with poor scheduling and miscommunication. If a curator is underperforming, they’ll receive an email from a supervisor comparing their numbers to another curator.” All were hourly contractors, kept under non-disclosure agreements and asked not to mention that they worked for Facebook. “’It was degrading as a human being,’ said another. ‘We weren’t treated as individuals. We were treated in this robot way.’” A new piece in The Guardian from one such news curator insists that it was also a toxic work environment, especially for women. These “data janitors” are rendered so invisible in the images of Silicon Valley and how tech works that, when we suddenly hear from one, we’re surprised.

Their work was organized to quickly produce capsule descriptions of bits of information, all styled the same — as if they were produced by an algorithm. (This lines up with other concerns about the use of algorithms and clickworkers to produce cheap journalism at scale, and about the increasing influence of audience metrics on news judgment.) It was not, however, organized to thoughtfully assemble a vital information resource that some users treat as the headlines of the day. It was not organized to help these news curators develop experience together on how to do this work well, or handle contentious topics, or reflect on the possible political biases in their choices. It was not likely to foster a sense of community and shared ambition with Facebook, an absence that might lead frustrated and overworked news curators to indulge their own political preferences. And I suspect it was not likely to funnel any insights they had about trending topics back to the designers of the algorithms they depended on.

Trends are not the same as news, but Facebook kinda wants them to be.

 

Part of why charges of bias are so compelling is that we have a longstanding concern about the problem of bias in news. For more than a century we’ve fretted about the individual bias of reporters, the slant of news organizations, and the limits of objectivity [http://jou.sagepub.com/content/2/2/149.abstract]. But is a list of trending topics a form of news? Are the concerns we have about balance and bias in the news relevant for trends?

“Trends” is a great word, the best word to have emerged amidst the social media zeitgeist. In a cultural moment obsessed with quantification, defended as being the product of an algorithm, “trends” is a powerfully and deliberately vague term that does not reveal what it measures. Commentators poke at Facebook for clearer explanations of how they choose trends, but “trends” could mean such a wide array of things, from the most activity to the most rapidly rising to a completely subjective judgment about what’s popular.

But however they are measured and curated, Facebook’s Trends are, at their core, measures of activity on the site. So, at least in principle, they are not news; they are expressions of interest. Facebook users are talking about some things, a lot, for some reason. This has little to do with “news,” which implies an attention to events in the world and some judgment of importance. Of course, many things Facebook users talk about, though not all, are public events. And it seems reasonable to assume that talking about a topic represents some judgment of its importance, however minimal. Facebook takes these identifiable surges of activity as proxies for importance. Facebook users “surface” the news… approximately. The extra step of “injecting” stories drawn from the news that were, for whatever reason, not surging among Facebook users goes further, turning their proxy of the news into a simulation of it. Clearly this was an attempt to best Twitter, and it may also have played into their effort to persuade news organizations to partner with them and take advantage of their platform as a means of distribution. But it also encouraged us to hold Trends accountable for news-like concerns, like liberal bias.

We could think about Trends differently, not as approximating the news but as taking the public’s pulse. If Trends were designed to strictly represent “what are Facebook users talking about a lot,” presumably there is some scientific value, or at least cultural interest, in knowing what (that many) people are actually talking about. If that were its understood value, we might still worry about the intervention of human curators and their political preferences, not because their choices would shape users’ political knowledge or attitudes, but because we’d want this scientific glimpse to be unvarnished by misrepresentation.

But that is not how Facebook has invited us to think about its Trending topics, and it couldn’t do so if it wanted to: its interest in Trending topics is neither as a form of news production nor as a pulse of the public, but as a means to keep users on the site and involved. The proof of this, and the detail that so often gets forgotten in these debates, is that the Trending Topics are personalized. Here’s Facebook’s own explanation: “Trending shows you a list of topics and hashtags that have recently spiked in popularity on Facebook. This list is personalized based on a number of factors, including Pages you’ve liked, your location and what’s trending across Facebook.” Knowing what has “spiked in popularity” is not the same as news; a list “personalized based on… Pages you’ve liked” is no longer a site-wide measure of popular activity; an injected topic is no longer just what an algorithm identified.
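
To see what personalization does mechanically, here is a deliberately toy sketch; the signals and weights are invented, but it shows why a personalized list stops being a single site-wide measure of popularity.

```python
# An invented illustration of personalization: the same site-wide surges
# produce different "Trending" lists for different users.

def personalize(global_trends: list, liked_categories: set, location: str) -> list:
    """Re-rank a shared trends list using made-up per-user signals."""
    def score(topic: dict) -> float:
        s = -topic["global_rank"]                 # start from site-wide popularity
        if topic["category"] in liked_categories:
            s += 2                                # boost what this user already likes
        if topic.get("region") == location:
            s += 1                                # boost what is nearby
        return s
    return sorted(global_trends, key=score, reverse=True)

trends = [
    {"name": "Election", "category": "politics", "region": "US", "global_rank": 1},
    {"name": "Playoffs", "category": "sports",   "region": "US", "global_rank": 2},
    {"name": "Festival", "category": "music",    "region": "EU", "global_rank": 3},
]
# Two users, same underlying surges, different lists.
user_a = personalize(trends, {"sports"}, "US")   # Playoffs rises to the top
user_b = personalize(trends, {"music"}, "EU")    # Festival rises to the top
```

Two users looking at Trending at the same moment can see different lists assembled from the same underlying activity.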

As I’ve said elsewhere, “trends” are not a barometer of popular activity but a hieroglyph, making provocative but oblique and fleeting claims about “us” but invariably open to interpretation. Today’s frustration with Facebook, focused for the moment on the role their news curators might have played in producing these Trends, is really a discomfort with the power Facebook seems to exert — a kind of power that’s hard to put a finger on, a kind of power that our traditional vocabulary fails to capture. But across the controversies that seem to flare again and again, a connecting thread is Facebook’s insistence on colonizing more and more components of social life (friendship, community, sharing, memory, journalism), and turning the production of shared meaning so vital to sociality into the processing of information so essential to their own aims.

CFP: Studying Social Media and Digital Infrastructures: a workshop-within-a-conference

 

part of the 50th Hawaii International Conference on System Sciences (HICSS-50)

paper submission deadline: June 15, 2016, 11:59pm HST.

  

For fifty years, the Hawaii International Conference on System Sciences (HICSS) has been a home for researchers in the information, computer, and system sciences (http://www.hicss.org/). The 50th anniversary event will be held January 4-7, 2017, at the Hilton Waikoloa Village. With an eye to the exponential growth of digitalization and information networks in all aspects of human activity, HICSS has continued to expand its track on Digital and Social Media (http://www.hicss.org/#!track3/c1xcj).

This year, among the Digital and Social Media track’s numerous offerings, we offer two minitracks meant to work in concert. Designed to sequence together into a single day-long workshop-within-a-conference, they will host the best emerging scholarship from sociology, anthropology, communication, information studies, and science & technology studies that addresses the most pressing concerns around digital and social media. In addition, we have developed a pre-conference workshop on digital research methods that will inform and complement the work presented in these minitracks.

 

Minitrack 1: Critical and Ethical Studies of Digital and Social Media

http://www.hicss.org/#!critical-ethical-studies-of-dsm/c24u6

Organizers: Tarleton Gillespie, Mary Gray, and Robert Mason

The minitrack will critically interrogate the role of DSM in supporting existing power structures or realigning power for underrepresented or socially marginalized groups, and raise awareness of or illustrate the ethical issues associated with doing research on DSM. Conceptual papers would address foundational theories of critical studies of media or ethical conduct in periods of rapid sociotechnical change—e.g., new ways of thinking about information exchange in communities and societies. Empirical papers would draw on studies of social media data that illustrate the critical or ethical dimensions of the use of such data. We welcome papers considering topics such as (but not limited to):

*   the power and responsibility of digital platforms

*   bias and discrimination in the collection and use of social data

*   political economies and labor conditions of paid and unpaid information work

*   values embedded in search engines and social media algorithms

*   changes in societal institutions driven by social media and data-intensive techniques

*   alternative forms of digital and social media

*   the ethical dynamics of studying human subjects through their online data

*   challenges in studying the flow of information and misinformation

*   barriers to and professional obligations around accessing and studying proprietary DSM data

 

Minitrack 2: Values, Power, and Politics in Digital Infrastructures

http://www.hicss.org/#!values-power-and-politics-in-digital-i/c19uj

Organizers: Katie Shilton, Jaime Snyder, and Matthew Bietz

This minitrack will explore the themes of values, power, and politics in relation to the infrastructures that support digital data, documents, and interactions. By considering how infrastructures – the underlying material properties, policy decisions, and mechanisms of interoperability that support digital platforms – are designed, maintained, and dismantled, the work presented in this mini-track will contribute to debates about sociotechnical aspects of digital and social media, with a focus on data, knowledge production, and information access. This session will focus on research that employs techniques such as infrastructural inversion, trace ethnography or design research (among other methods) to explore factors that influence the development of infrastructures and their use in practice. We welcome papers considering topics such as (but not limited to):

*  politics and ethics in digital platforms and infrastructures

*  values of stakeholders in digital infrastructures

*  materiality of values, power, or politics in digital infrastructures

*  tensions between commercial infrastructures and the needs of communities of practice

*  maintenance, repair, deletion, decay of digital and social media infrastructures

*  resistance, adoption and adaptation of digital infrastructures

*  alternative perspectives on what comprises infrastructures

 

Pre-conference workshop: Digital Methods “Best Practices”

http://shawnw.io/workshops/HICSS-digitalmethods

Organizers: Shawn Walker, Mary Gray, and Robert Mason

While the study of digital and social media and its impact on society has exploded, discussion of the best methods for doing so remains thin. Academic researchers and practitioners have deployed traditional techniques, from ethnography to social network analysis; but digital and social media challenge and even defy these techniques in a number of ways that must be examined. At the same time, digital and social media may benefit from more organic and unorthodox methods that get at aspects that cannot be examined otherwise. This intensive half day workshop will focus on approaches and best practices for studying digital and social media. We aim to go beyond the application of existing methods into online environments and collect innovative methods that break new ground while producing rigorous insights. This workshop will draw on invited and other participants’ research, teaching, classroom, and business experiences to think through “mixed methods” for qualitative and quantitative studies of digital and social media systems.

Through a series of roundtables and guided discussions, the workshop will focus on best practices for studying digital and social media. As part of these discussions, we will also highlight technical and ethical challenges that arise from studying cross-platform digital and social media phenomena. The output of this workshop will be an open, “co-authored” syllabus for a seminar offering what we might call a mixed-method, “from causal to complicated” approach to digital and social media research, applicable to researchers and practitioners alike.

 

How to apply

April 1, 2016: Paper submission opens.

June 15, 2016: Paper submission ends, 11:59pm HST.

Submission to one of the minitracks requires a complete paper. Instructions for submission requirements are available here: http://www.hicss.org/#!author-instructions/c1dsb

Though the two minitracks are designed to work together, you must choose one to submit your paper to. Feel free to contact the minitrack organizers if you have questions about which is a better fit for your work. For the pre-conference workshop, application instructions, updates, materials, and a group syllabus will be posted on the workshop website.

#trendingistrending: when algorithms become culture

I wanted to share a new essay, “#Trendingistrending: When Algorithms Become Culture,” that I’ve just completed for a forthcoming Routledge anthology called Algorithmic Cultures: Essays on Meaning, Performance and New Technologies, edited by Robert Seyfert and Jonathan Roberge. My aim is to focus on the various “trending algorithms” that populate social media platforms, consider what they do as a set, and then connect them to a broader history of metrics used in popular media, both to assess audience tastes and to portray them back to that audience, as a cultural claim in its own right and as a form of advertising.

The essay is meant to extend the idea of “calculated publics” I first discussed here and the concerns that animated this paper. But more broadly I hope it pushes us to think about algorithms not as external forces on the flow of popular culture, but increasingly as elements of popular culture themselves, something we discuss as culturally relevant, something we turn to face so as to participate in culture in particular ways. It also has a bit more to say about how we tend to think and talk about “algorithms” in this scholarly discussion, something I say more about here.

I hope it’s interesting, and I really welcome your feedback. I already see places where I’ve not done the issue justice: I should connect the argument more to discussions of financial metrics, like credit ratings, as another moment when institutions have reason to turn such measures back as meaningful claims. I found the excellent essay (journal; academia.edu) in which Jeremy Morris writes about what he calls “infomediaries” late in my process, so while I do gesture to it, it could have informed my thinking even more. There are a dozen other things I wanted to say, and the essay is already a little overstuffed.

I do have some opportunity to make specific changes before it goes to press, so I’d love to hear any suggestions, if you’re inclined to read it.