What should people who are interested in accountability and algorithms be thinking about? Here is one answer: My eleven-minute remarks are now online from a recent event at NYU. I’ve edited them to intersperse my slides.
This talk was partly motivated by the ethics work being done in the machine learning community. That is very exciting and interesting work and I love, love, love it. My remarks are an attempt to think through the other things we might also need to do. Let me know how to replace the “??” in my slides with something more meaningful!
Preview: My remarks contain a minor attempt at a Michael Jackson joke.
A number of fantastic Social Media Collective people were at this conference — you can hear Kate Crawford in the opening remarks. For more videos from the conference, see:
Algorithms and Accountability
Thanks to Joris van Hoboken, Helen Nissenbaum and Elana Zeide for organizing such a fab event.
If you bought this 11-minute presentation you might also buy: Auditing Algorithms, a forthcoming workshop at Oxford.
(This was cross-posted to multicast.)
Microsoft Research (MSR) is looking for a Research Assistant to work with the Social Media Collective in the New England lab, based in Cambridge, Massachusetts. The MSR Social Media Collective consists of Nancy Baym, Sarah Brayne, Kevin Driscoll, Tarleton Gillespie, Mary L. Gray, and Lana Schwartz in Cambridge, and Kate Crawford and danah boyd in New York City, as well as faculty visitors and Ph.D. interns affiliated with MSR New England. The RA will work directly with Nancy Baym, Kate Crawford, Tarleton Gillespie, and Mary L. Gray. Unfortunately, because this is a time-limited contract position, we can only consider candidates who are already legally eligible to work in the United States.
An appropriate candidate will be a self-starter who is passionate and knowledgeable about the social and cultural implications of technology. Strong skills in writing, organization, and academic research are essential, as are time management and multitasking. Minimum qualifications are a BA or equivalent degree in a humanities or social science discipline and some qualitative research training.
Job responsibilities will include:
– Sourcing and curating relevant literature and research materials
– Developing literature reviews and/or annotated bibliographies
– Coding ethnographic and interview data
– Copyediting manuscripts
– Working with academic journals on themed sections
– Assisting with research project data management and event organization
The RA will also have opportunities to collaborate on ongoing projects. While publication is not guaranteed, the RA will be encouraged to co-author papers while at MSR. The RAship will require 40 hours per week on site in Cambridge, MA, and remote coordination with New York City-based researchers. It is a 12-month contractor position, with the opportunity to extend the contract an additional 6 months. The position pays hourly with flexible daytime hours. The start date will ideally be July 15, although flexibility is possible for the right candidate.
This position is perfect for emerging scholars planning to apply to PhD programs in Communication, Media Studies, Sociology, Anthropology, Information Studies, and related fields who want to develop their research skills before entering a graduate program. Current New England-based MA/PhD students are welcome to apply provided they can commit to 40 hours of on-site work per week.
To apply, please send an email to Mary Gray (mLg@microsoft.com) with the subject “RA Application” and include the following attachments:
– One-page (single-spaced) personal statement, including a description of research experience and training, interests, and professional goals
– CV or resume
– Writing sample (preferably a literature review or a scholarly-styled article)
– Links to online presence (e.g., blog, homepage, Twitter, journalistic endeavors, etc.)
– The names and emails of two recommenders
We will begin reviewing applications on May 15 and will continue to do so until we find an appropriate candidate. We will post to the blog when the position is filled.
Please feel free to ask questions about the position in the blog comments!
The Social Media Collective is thrilled to announce that Tarleton Gillespie has joined Microsoft Research New England as a Principal Researcher. He joins Nancy Baym and Mary Gray in New England and danah boyd and Kate Crawford in New York City in forming the permanent core of the SMC. Tarleton is known for his influential work on the cultural politics of algorithms and platforms. His most recent book is the co-edited collection Media Technologies: Essays on Communication, Materiality, and Society (2014). He has also written about copyright in his book Wired Shut: Copyright and the Shape of Digital Culture (2007). He’s been at the forefront of bringing researchers together to think through issues of digital culture through the scholarly blog he co-founded, culturedigitally.org, which any regular reader of this site should be reading as well.
Prior to joining MSR, Tarleton was an Associate Professor in both Communication and Information Science at Cornell. He remains affiliated with Cornell as an Adjunct Associate Professor.
Those lucky enough to work with Tarleton know that in addition to being wicked smart (or, as they would say here in Boston, wicked smaht), he is a remarkably generous scholar and thinker who always makes the work of those around him better. Also, he’s an incredibly nice guy.
(Or, Should I Stay or Should I Go?)
Is it time to boycott “traditional” scholarly publishing? Perhaps you are an academic researcher, just like me. Perhaps, just like me, you think that there are a lot of exciting developments in scholarly publishing thanks to the Internet. And you want to support them. And you also want people to read your research. But you also still need to be sure that your publication venues are held in high regard.
Or maybe you just receive research funding that is subject to new open access requirements.
Academia is a funny place. We are supposedly self-governing. So if we don’t like how our scholarly communications are organized we should be able to fix this ourselves. If we are dissatisfied with the journal system, we’re going to have to do something about it. The question of whether or not it is now time to eschew closed access journals is something that comes up a fair amount among my peers.
It comes up often enough that a group of us at Michigan decided to write an article on the topic. Here’s the article. It just came out yesterday (open access, of course):
Carl Lagoze, Paul Edwards, Christian Sandvig, & Jean-Christophe Plantin. (2015). Should I Stay or Should I Go? Alternative Infrastructures in Scholarly Publishing. International Journal of Communication 9: 1072-1081.
The article is intended for those who want some help figuring out the answer to the question the article title poses: Should I stay or should I go? It’s meant to help you decipher the unstable landscape of scholarly publishing these days. (Note that we restrict our topic to journal publishing.)
Researching it was a lot of fun, and I learned quite a bit about how scholarly communication works.
- It contains a mention of the first journal. Yes, the first one that we would recognize as a journal in today’s terms. It’s Philosophical Transactions published by the Royal Society of London. It’s on Volume 373.
- It should teach you about some of the recent goings-on in this area. Do you know what a green repository is? What about an overlay journal? Or the “serials crisis”?
- It addresses a question I’ve had for a while: What the heck are those arXiv people up to? If it’s so great, why hasn’t it spread to all disciplines?
- There’s some fun discussion of influential experiments in scholarly publishing. Remember the daring foundation of the Electronic Journal of Communication? Vectors? Were you around way-back-in-the-day when the pioneering, Web-based JCMC looked like this hot mess below? Little did we know that we were actually looking at the future.(*)
(JCMC circa 1995)
(*): Unless we were looking at the Gopher version, then in that case we were not looking at the future.
Ultimately, we adapt a framework from Hirschman that helped us think about what is going on today in scholarly communication. Feel free to play the following song on a loop as you read it.
(This post has been cross-posted on multicast.)
Well, after a truly exciting spell of reviewing an AMAZING set of applications for our 2015 PhD Internship Program, we had the absolutely excruciating task of selecting just a few from the pool (note: this is our Collective’s least favorite part of the process).
Without further ado, we are pleased to announce our 2015 Microsoft Research SMC PhD interns:
At MSR New England:
Aleena Chia is a Ph.D. Candidate in Communication and Culture at Indiana University. Her ethnographic research investigates the affective politics and moral economics of participatory culture, in the context of digital and live-action game worlds. She is a recipient of the Wenner-Gren Dissertation Fieldwork grant and has published work in American Behavioral Scientist. Aleena will be working with Mary L. Gray, researching connections between consumer protests, modularity of consumer labor, and portability of compensatory assets in digital and live-action gaming communities.
Stacy Blasiola is a Ph.D. Candidate in the Department of Communication at the University of Illinois at Chicago and also holds an M.A. in Media Studies from the University of Wisconsin at Milwaukee. Stacy uses a mixed methods approach to study the social impacts of algorithms. Using the methods of big data she examines how news events appear in newsfeeds, and using qualitative methods she investigates how the people that use digital technologies understand, negotiate, and challenge the algorithms that present digital information. As a recipient of a National Science Foundation IGERT Fellowship in Electronic Security and Privacy, her work includes approaching algorithms and the databases that enable them from a privacy perspective. Stacy will be working with Nancy Baym and Tarleton Gillespie on a project that analyzes the discursive work of Facebook in regards to its social newsfeed algorithm.
J. Nathan Matias
Nathan Matias is a Ph.D. Student at the MIT Media Lab Center for Civic Media, a fellow at the Berkman Center for Internet and Society, and a DERP Institute fellow. Nathan researches technology for civic cooperation, activism, and expression through qualitative action research with communities, data analysis, software design, and field experiments. Most recently, Nathan has been conducting large-scale studies and interventions on the effects of gender bias, online harassment, gratitude, and peer thanks in social media, corporations, and creative communities like Wikipedia. Nathan was a MSR Fuse Labs intern in 2013 with Andrés Monroy Hernández, where he designed NewsPad, a collaborative technology for neighborhood blogging. Winner of the ACM’s Nelson Prize, Nathan has published data journalism, technology criticism, and literary writing for the Atlantic, the Guardian, and PBS. Before MIT, he worked at technology startups Texperts and SwiftKey, whose products have reached over a hundred million people worldwide. At MSR, Nathan will be working with Tarleton Gillespie and Mary L. Gray, studying the professionalization of digital labor among community managers and safety teams in civic, microwork, and peer economy platforms. He will also be writing about ways that marginalized communities use data and code to respond and reshape their experience of harassment and hate speech online.
At MSR New York City:
Ifeoma Ajunwa is a Paul F. Lazarsfeld Fellow in the Sociology Department at Columbia University. She received her MPhil in Sociology from Columbia University in 2012. She was the recipient of the AAUW Selected Professions Fellowship in law school, after which she practiced business law, international law, and intellectual property law. She has also conducted research for such organizations as the NAACP, the United Nations Human Rights Council, the ACLU of NY (the NYCLU), and UNESCO. Her prior independent research before graduate school included a pilot study at Stanford Law School where she interrogated the link between stereotype threat and the intersecting dynamics of gender, race, and economic class in relation to Bar exam preparation and passage. Ifeoma’s writing has also been published in the NY Times and the Huffington Post, and she has been interviewed for Uptown Radio in NYC. She will be working with Kate Crawford at MSR-NYC on data discrimination.
On March 16, Facebook updated its “Community Standards,” in ways that were both cosmetic and substantive. The version it replaced, though it had enjoyed minor updates, had been largely the same since at least 2011. The change comes on the heels of several other sites making similar adjustments to their own policies, including Twitter, YouTube, Blogger, and Reddit – and after months, even years, of growing frustration and criticism on the part of social media users about platforms and their policies. This frustration and criticism cuts in two directions: sometimes, criticism of overly conservative, picky, vague, or unclear restrictions; but also, criticism that these policies fall far short of protecting users, particularly from harassment, threats, and hate speech.
“Guidelines” documents like this one are an important part of the governance of social media platforms; though the “terms of service” are a legal contract meant to spell out the rights and obligations of both the users and the company — often to impose rules on users and indemnify the company against any liability for their actions — it is the “guidelines” that are more likely to be read by users who have a question about the proper use of the site, or find themselves facing content or other users that trouble them. More than that, they serve a broader rhetorical purpose: they announce the platform’s principles and gesture toward the site’s underlying approach to governance.
Facebook described the update as a mere clarification: “While our policies and standards themselves are not changing, we have heard from people that it would be helpful to provide more clarity and examples, so we are doing so with today’s update.” Most of the coverage among the technology press embraced this idea (like here, here, here, here, here, and here). But while Facebook’s policies may not have changed dramatically, so much is revealed in even the most minor adjustments.
First, it’s revealing to look not just at what the rules say and how they’re explained, but how the entire thing is framed. While these documents are now ubiquitous across social media platforms, it is still a curiosity that these platforms so readily embrace and celebrate the role of policing their users – especially amidst the political ethos of Internet freedom, calls for “Net neutrality” at the infrastructural level, and the persistent dreams of the open Web. Every platform must deal with this contradiction, and they often do it in the way they introduce and describe guidelines. These guidelines pages inevitably begin with a paragraph or more justifying not just the rules but the platform’s right to impose them, including a triumphant articulation of the platform’s aspirations.
Before this update, Facebook’s rules were justified as follows: “To balance the needs and interests of a global population, Facebook protects expression that meets the community standards outlined on this page.” In the new version, the priority has shifted, from protecting speech to ensuring that users “feel safe:” “Our goal is to give people a place to share and connect freely and openly, in a safe and secure environment.” I’m not suggesting that Facebook has stopped protecting speech in order to protect users. All social media platforms struggle to do both. But which goal is most compelling, which is held up as the primary justification, has changed.
This emphasis on safety (or more accurately, the feeling of safety) is also evident in the way the rules are now organized. What were, in the old version, eleven rule categories are now fifteen, but they are now grouped into four broad categories – the first of which is “keeping you safe.” This is indicative of the effect of the criticisms of recent years: that social networking sites like Facebook and Twitter have failed users, particularly women, in the face of vicious trolling.
As for the rules themselves, it’s hard not to see them as the aftermath of so many particular controversies that have dogged the social networking site over the years. Facebook’s Community Standards increasingly look like a historic battlefield: while it may appear to be a bucolic pasture, the scars of battle remain visible, carved into the land, thinly disguised beneath the landscaping and signage. Some of the most recent skirmishes are now explicitly addressed: A new section on sexual violence and exploitation includes language prohibiting revenge porn. The rule against bullying and harassment now includes a bullet point prohibiting “Images altered to degrade private individuals,” a clear reference to the Photoshopped images of bruised and battered women that were deployed (note: trigger warning) against Anita Sarkeesian and others in the Gamergate controversy. The section on self-injury now includes a specific caveat that body modification doesn’t count.
In this version, Facebook seems extremely eager to note that contentious material is often circulated for publicly valuable purposes, including awareness raising, social commentary, satire, and activism. A version of this appears again and again, as part of the rules against graphic violence, nudity, hate speech, self-injury, dangerous organizations, and criminal activity. In most cases, these socially valuable uses are presented as a caveat to an otherwise blanket prohibition: even hate speech, which is almost entirely prohibited, and in the strongest terms, now has a caveat protecting users who circulate examples of hate speech for the purposes of education and raising awareness. It is clear that Facebook is ever more aware of its role as a public platform, where contentious politics and difficult debate can occur. Now it must offer to patrol the tricky line between the politically contentious and the culturally offensive.
Oddly, in the rule about nudity, and only there, the point about socially acceptable uses is not a caveat, but part of an awkward apology for imposing blanket restrictions anyway: “People sometimes share content containing nudity for reasons like awareness campaigns or artistic projects. We restrict the display of nudity because some audiences within our global community may be sensitive to this type of content – particularly because of their cultural background or age. In order to treat people fairly and respond to reports quickly, it is essential that we have policies in place that our global teams can apply uniformly and easily when reviewing content. As a result, our policies can sometimes be more blunt than we would like and restrict content shared for legitimate purposes.” Sorry, Femen. On the other hand, apparently it’s okay if it’s cartoon nudity: “Restrictions on the display of both nudity and sexual activity also apply to digitally created content unless the content is posted for educational, humorous, or satirical purposes.” A nod to Charlie Hebdo, perhaps? Or just a curious inconsistency.
The newest addition to the document, and the one most debated in the press coverage, is the new way Facebook now articulates its long-standing requirement that users use their real identity. The rule was recently challenged by a number of communities eager to use Facebook under aliases or stage names, as well as by communities (such as Native Americans) who find themselves on the wrong side of Facebook’s policy simply because the traditions of naming in their culture do not fit Facebook’s. After the 2014 scuffle with drag queens about the right to use a stage identity instead of or alongside a legal one, Facebook promised to make its rule more accommodating. In this update, Facebook has adopted the phrase “authentic identity,” its way of allowing adopted performance names while continuing to prohibit duplicate accounts. The update is also a chance for them to re-justify their rule: at more than one point in the document, and in the accompanying letter from Facebook’s content team, this “authentic identity” requirement is presented as assuring responsible and accountable participation: “Requiring people to use their authentic identity on Facebook helps motivate all of us to act responsibly, since our names and reputations are visibly linked to our words and actions.”
There is also some new language in an even older battle: for years, Facebook has been removing images of women breastfeeding, as a violation of its rules against nudity. This has long angered a community of women who strongly believe that sharing such images is not only their right, but important for new mothers and for the culture at large (only in 2007, 2008, 2010, 2011, 2012, 2013, 2014, 2015…). After years of disagreements, protests, and negotiations, in 2014 Facebook published a special rule saying that it would allow images of breastfeeding so long as they did not include an exposed nipple. This was considered a triumph by many involved, though reports continue to emerge of women having photos removed and accounts suspended despite the promise. This assurance reappears in the new version of the community standards just posted: “We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring.” The Huffington Post reads this as (still) prohibiting breastfeeding photos if they include an exposed nipple, but if the structure of this sentence is read strictly, the promise to “always” allow photos of women breastfeeding seems to me to trump the previous phrase about exposed nipples. I may be getting nitpicky here, but it’s only as a result of years of back and forth about the precise wording of this rule, and Facebook’s willingness and ability to honor it in practice.
In my own research, I have tracked the policies of major social media platforms, noting both the changes and continuities, the justifications and the missteps. One could dismiss these guidelines as mere window dressing — as a performed statement of coherent values that do not in fact drive the actual enforcement of policy on the site, which so often turns out to be more slapdash or strategic or hypocritical. I find it more convincing to say that these are statements of both policy and principle that are struggled over at times, are deployed when they are helpful and can be sidestepped when they’re constraining, and that do important discursive work beyond simply guiding enforcement. These guidelines matter, and not only when they are enforced, and not only for lending strength to the particular norms they represent. Platforms adjust their guidelines in relation to each other, and smaller sites look to the larger ones for guidance, sometimes borrowing them wholesale. The rules as articulated by Facebook matter well beyond Facebook. And they perform, and therefore reveal in oblique ways, how platforms see themselves in the role of public arbiters of cultural value. They are also by no means the end of the story, as no guidelines in the abstract could possibly line up neatly with how they are enforced in practice.
Facebook’s newest update is consistent with changes over the past few years on many of the major sites, a common urge to both impose more rules and use more words to describe them clearly. This is a welcome adjustment, as so many of the early policy documents, including Facebook’s, were sparse, abstract, and unprepared for the variety and gravity of questionable content and awful behavior they would soon face. There are some laudable principles made explicit here. On the other hand, adding more words, more detailed examples, and further clarifications does not – cannot – resolve the other challenge: these are still rules that must be applied in specific situations, requiring judgment calls made by overworked, freelance clickworkers. And, while it is a relief to see Facebook and other platforms taking a firmer stand on issues like misogyny, rape threats, trolling, and self-harm, these firmer stands are often accompanied by ever more restriction not just of bad behavior but of questionable content, a place where the mode of ‘protection’ means something quite different, much more patronizing. The basic paradox remains: these are private companies policing public speech, and they are often intervening according to a culturally specific or a financially conservative morality. It is the next challenge for social media to strike a better balance in this regard: more effectively intervening to protect users themselves, while intervening less on behalf of users’ values.
This is cross-posted on the Culture Digitally blog.