The accountability of social media platforms, in the age of Trump

Pundits and commentators are just starting to pick through the rubble of this election and piece together what happened and what it means. In such cases, it is often easier to grab hold of one explanation — Twitter! racism! Brexit! James Comey! — and use it as a clothesline to hang the election on and shake it into some semblance of sense. But as scholars, we do a disservice when we allow for simple or single explanations. “Perfect storm” has become a cliché, but I can see a set of elements that all had to be true, that came together, to produce the election we just witnessed: globalization, economic precarity, and fundamentalist reactionary responses; the rise of the conservative right and its targeted tactics, especially against the Clintons; backlashes to multiculturalism, diversity, and the election of President Obama; the undoing of the workings and cultural authority of journalism; the alt-right and the undercurrents of social media; the residual fear and anxiety in America after 9/11. It is all of these things, and they were all already connected, before candidate Trump emerged.

Yet at the same time, my expertise does not stretch across all of these areas. I have to admit that I have trained myself right down to a fine point: social media, public discourse, technology, control, law. I have that hammer, and can only hit those nails. If I find myself being particularly concerned about social media and harassment, or want to draw links between Trump’s dog whistle politics, Steve Bannon and Breitbart, the tactics of the alt-right, and the failings of Twitter to consider the space of discourse it has made possible, I risk making it seem like I think there’s one explanation, that technology produces social problems. I do not mean this. In the end, I have to have faith that, as I try to step up and say something useful about this one aspect, some other scholar is similarly stepping up and saying something about fundamentalist reactions to globalization, and someone else is stepping up to speak about the divisiveness of the conservative movement.

The book I’m working on now, nearing completion, is about social media platforms and the way they have (and have not) stepped into the role of arbiters of public discourse. The focus is on the platforms, their ambivalent combination of neutrality and intervention, the actual ways in which they go about policing offensive content and behavior, and the implications those tactics and arrangements have for how we think about the private curation of public discourse. But the book is framed in terms of the rise and now, for lack of a better word, adolescence of social media platforms, and how the initial optimism and enthusiasm that fueled the rise of the web, overshadowed the darker aspects already emergent there, and spurred the rise of the first social media platforms seems to have given way to a set of concerns about how social media platforms work and how they are used — sometimes against people, and towards very different ends than were originally imagined. Those platforms did not at first imagine, and have not thoroughly thought through, how they now support (among many other things) a targeted project of racial animosity and a cold gamesmanship about public engagement. In the context of the election, my new goal is to boost that part of the argument, to highlight the opportunities that social media platforms offer to forms of public discourse that are not only harassing, racist, or criminal, but that can also take advantage of the dynamics of social media to create affirming circles of misinformation, to sip the poison of partisanship, to spur leaderless movements ripe for demagoguery — and how the social media platforms that now host this discourse have embraced a woefully insufficient sense of accountability, and must rethink how they have become mechanisms of social and political discourse, good and ill.

This specific project is too late in the game for a radical shift. But as I think beyond it, I feel an imperative to be sure that my choices of research topics are driven more by cultural and political imperative than merely by my own curiosity. Or, ideally, by the perfect meeting point of the two. It seems like the logical outcome of my interest in platforms and content moderation is to shift how we think of platforms: not as mere intermediaries between speakers (if they ever were, they are no longer) but as constitutive of public discourse. If we understand them as constituting discourse — by the choreography they install in their design, by the moderation they conduct as a form of policy, and by the algorithmic selection of which raw material becomes “my feed” — then we expand their sense of responsibility. Moreover, we might ask what it would mean to hold them accountable for making the political arena we want, we need. These questions will only grow in importance and complexity as these information systems depend more and more on algorithmic, machine learning, and other automated techniques; as they more regularly include bots that are difficult to discern from human participants; and as they continue to extend their global reach for new consumers, extending into and entangling with the very shifts of globalization and tribalization we will continue to grapple with.

These comments were part of a longer post at Culture Digitally that I helped organize, in which a dozen scholars of media and information reflected on the election and the future directions of their own work, and our field, in light of the political realities we woke up to Wednesday morning. My specific scholarly community cannot address every issue that’s likely on the horizon, but our work does touch a surprising number of them. The kinds of questions that motivate our scholarship — from fairness and equity, to labor and precarity, to harassment and misogyny, to globalism and fear, to systems and control, to journalism and ignorance — all of these seem so much more pressing today than they did even yesterday.

Beyond bugs and features: A case for indeterminacy

Spandrels of San Marco. [CC License from Tango7174]
In 1979, Harvard professors Stephen Jay Gould and Richard Lewontin identified what they saw as a shortcoming in American and English evolutionary biology. It was, they argued, dominated by an adaptationist program.[1] By this, they meant that it embraced a misguided atomization of an organism’s traits, which then “are explained as structures optimally designed by natural selection for their function.”[2] For example, an exaggerated version of the adaptationist program might look at a contemporary human face, see a nose, and argue that it was adapted and selected for its ability to hold glasses. Such a theory of the nose not only ignores the plural functions the nose serves, but the complex history of its evolution, its shifting usefulness for different kinds of activities, its mutational detours, the different kinds of noses, and the nose’s evolution as part of the larger systems of faces, bodies, and environments. So how should we talk about noses? Or, more importantly, how do we talk about any single feature of a complex system?

SMC at AoIR 2016: Internet Rules!

The 17th annual meeting of the Association of Internet Researchers is being held this week (Oct 5-8) in Berlin, Germany. It is a thrill to see so many past and present SMC members presenting their latest work, especially with Kate Crawford as part of the conference’s plenary panel Thursday evening. Below is a cheat sheet of all the SMC presentations, in case you want to follow along. (If we forgot somebody, please email us and we’ll add you!)

Wednesday, October 5th, 2016

Nancy Baym 9:00 AM – 5:30 PM Studying Labor: A Workshop on Theory and Methods
Jean Burgess 9:00 AM – 5:30 PM Digital Methods in Internet Research: A Sampling Menu
Kevin Driscoll 9:00 AM – 5:30 PM 404 History Not Found: Challenges in Internet History and Memory Studies
Tarleton Gillespie 9:00 AM – 5:30 PM The Internet Rules, But How? A Science and Technology Studies Take on Doing Internet Governance
Mary L. Gray 9:00 AM – 5:30 PM Studying Labor: A Workshop on Theory and Methods

Thursday, October 6th, 2016

Mike Ananny 9:00 AM – 10:30 AM Like, Share, Discuss? How News Factors and Secondary Factors Predict User Engagement with News Stories on Facebook
Nancy Baym 9:00 AM – 10:30 AM Platform Studies: The Rules of Engagement
Jean Burgess 9:00 AM – 10:30 AM Platform Studies: The Rules of Engagement
Katrin Tiidenberg 9:00 AM – 10:30 AM Session Chair: Fakes
Nancy Baym 11:00 AM – 12:30 PM Economies of the Internet
Eszter Hargittai 11:00 AM – 12:30 PM Session Chair: (Non)Participation
Tero Karppi 11:00 AM – 12:30 PM Algorithmic Identities
Alice Marwick 11:00 AM – 12:30 PM Scandal or Sex Crime? Ethical Implications of the Celebrity Nude Photo Leaks
Nancy Baym 2:00 PM – 3:30 PM Technically Unequal: Representational Issues in Technology Scholarship and Journalism
Eszter Hargittai 2:00 PM – 3:30 PM Unconnected: How Privacy Concerns Impact Internet Adoption
Katrin Tiidenberg 2:00 PM – 3:30 PM Representation, Presentation, Embodiment/Emplacement
Siva Vaidhyanathan 2:00 PM – 3:30 PM Technically Unequal: Representational Issues in Technology Scholarship and Journalism
Tarleton Gillespie 4:00 PM – 5:30 PM Roundtable: Censorship Online, and the Challenges of Studying What’s No Longer There
Kishonna Gray 4:00 PM – 5:30 PM Color-Coded: Breaking the Rules of Whiteness Online
Kate Crawford 7:00 PM – 8:30 PM Plenary Panel: Who Rules the Internet? Kate Crawford (Microsoft Research NYC), Fieke Jansen (Tactical Tech), Carolin Gerlitz (University of Siegen)

Friday, October 7th, 2016

Mike Ananny 9:00 AM – 10:30 AM Roundtable: Still Platforms: The Apparent Stability of Digital Intermediaries in the Face of Change and Challenge
Solon Barocas 9:00 AM – 10:30 AM Roundtable: Still Platforms: The Apparent Stability of Digital Intermediaries in the Face of Change and Challenge
Tarleton Gillespie 9:00 AM – 10:30 AM Roundtable: Still Platforms: The Apparent Stability of Digital Intermediaries in the Face of Change and Challenge
Stacy Blasiola 11:00 AM – 12:30 PM The Rules of Engagement: Managing Boundaries, Managing Identities
Jean Burgess 11:00 AM – 12:30 PM What Would Feminist Big Data, Data Studies and Datavis Look Like?
Kate Crawford 11:00 AM – 12:30 PM What Would Feminist Big Data, Data Studies and Datavis Look Like?
Airi Lampinen 11:00 AM – 12:30 PM The Rules of Engagement: Managing Boundaries, Managing Identities
Katrin Tiidenberg 11:00 AM – 12:30 PM Making and Breaking Rules on the Internet
Kate Miltner 2:00 PM – 3:30 PM Playing with the Rules
Kishonna Gray 4:00 PM – 5:30 PM The Cultural Politics of Feminism and Anti-Feminism After Gamergate
Tero Karppi 4:00 PM – 5:30 PM Disconnect. Unfriend. Disengage.
Susanna Paasonen 4:00 PM – 5:30 PM The Cultural Politics of Feminism and Anti-Feminism After Gamergate

Saturday, October 8th, 2016

Jean Burgess 11:00 AM – 12:30 PM The Sharing Economy and Its Discontents
Stefanie Duguay 11:00 AM – 12:30 PM The Sharing Economy and Its Discontents
Mary L. Gray 11:00 AM – 12:30 PM The Sharing Economy and Its Discontents
Dan Greene 11:00 AM – 12:30 PM Internet Industry Research Rules! A Roundtable on Methods
Germaine Halegoua 11:00 AM – 12:30 PM Intersections of Technology & Place
Jessa Lingel 11:00 AM – 12:30 PM Session Chair: Tech/Place
Nick Seaver 11:00 AM – 12:30 PM Internet Industry Research Rules! A Roundtable on Methods
Lana Swartz 11:00 AM – 12:30 PM Internet Industry Research Rules! A Roundtable on Methods
Kevin Driscoll 2:00 PM – 3:30 PM Session Chair: Histories
Annette Markham 2:00 PM – 3:30 PM AoIR Institutional Memory Panel
Dylan Mulvin 2:00 PM – 3:30 PM Embedded Dangers: The History of the Year 2000 Problem and the Politics of Technological Repair

New Article in Social Media + Society

Germaine Halegoua (University of Kansas), Alex Leavitt (Facebook), and Mary L. Gray recently published an article based on research conducted while Germaine was a Ph.D. Intern and Alex was a Research Assistant at MSR.

The article, “Jumping For Fun?: Negotiating Mobility and the Geopolitics of Foursquare,” was published in Social Media + Society and is available here: http://sms.sagepub.com/content/2/3/2056305116665859.full.pdf+html.

Abstract: Rather than assume that there is some universal “right way” to engage social media platforms, we interrogate how the location-based social media practice known as “jumping” played out on the popular service Foursquare. We use this case to investigate how a “global” or universal system is constructed with an imagined user in mind, one who enjoys a particular type of mobility and experience of place. We argue that the practices of “Indonesian” Foursquare jumpers and the discourses surrounding their use of Foursquare illustrate that practices understood as transgressive or resistive might best be read as strategies for engaging with a platform as groups contend with marginalizing social, economic, and/or political conditions.

Citation: Halegoua, Germaine R., Alex Leavitt, and Mary L. Gray. “Jumping for Fun? Negotiating Mobility and the Geopolitics of Foursquare.” Social Media + Society 2, no. 3 (July 1, 2016): 2056305116665859. doi:10.1177/2056305116665859.

Call for applications! MSR Social Media Collective PhD interns, for summer 2017

APPLICATION DEADLINE: JANUARY 1, 2017

Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week internship program. The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdocs Dan Greene and Dylan Mulvin) brings together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Learn more about us here.

MSRNE internships are 12-week paid stays in our lab in Cambridge, Massachusetts. During their stay, SMC interns are expected to devise and execute their own research project, distinct from the focus of their dissertation (see the project requirements below). The expected outcome is a draft of a publishable scholarly paper for an academic journal or conference of the intern’s choosing. Our goal is to help the intern advance their own career; interns are strongly encouraged to work towards a creative outcome that will help them on the academic job market.

The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

Primary mentors for this year will be Nancy Baym and Tarleton Gillespie, with additional guidance offered by other members of SMC. We are looking for applicants working in one or more of the following areas:

  • Personal relationships and digital media
  • Audiences and the shifting landscapes of producer/consumer relations
  • Affective, immaterial, and other frameworks for understanding digital labor
  • How platforms, through their design and policies, shape public discourse
  • The politics of algorithms, metrics, and big data for a computational culture
  • The interactional dynamics, cultural understanding, or public impact of AI chatbots or intelligent agents

Interns are also expected to give short presentations on their project, contribute to the SMC blog, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. There are also natural opportunities for collaboration with SMC researchers and visitors, and with others currently working at MSRNE, including computer scientists, economists, and mathematicians. PhD interns are expected to be on-site for the duration of their internship.

Applicants must have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.

 

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective is comprised of full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

  • How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
  • How are social media platforms, through their algorithmic design and user policies, taking up the role of intermediaries for public discourse? (Tarleton Gillespie)
  • What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
  • How do public institutions like schools and libraries prepare workers for the information economy, and how are they changed in the process? (Dan Greene)
  • How are media standards made, and what do their histories tell us about the kinds of things we can represent? (Dylan Mulvin)

SMC PhD interns may also have the opportunity to connect with our sister Social Media Collective members in New York City. Related projects in New York City include:

  • What are the politics, ethics, and policy implications of artificial intelligence and data science? (Kate Crawford, MSR-NYC)
  • What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

For more information about the Social Media Collective, and a list of past interns, visit the About page of our blog. For a complete list of all permanent researchers and current postdocs based at the New England lab, see: http://research.microsoft.com/en-us/labs/newengland/people/bios.aspx

 

APPLICATION PROCESS

To apply for a PhD internship with the Social Media Collective, fill out the online application form: https://careers.research.microsoft.com/

On the application website, please indicate that your research area of interest is “Anthropology, Communication, Media Studies, and Sociology” and that your location preference is “New England, MA, U.S.” in the pull down menus. Also enter the name of a mentor (Nancy Baym or Tarleton Gillespie) whose work most directly relates to your own in the “Microsoft Research Contact” field. IF YOU DO NOT MARK THESE PREFERENCES WE WILL NOT RECEIVE YOUR APPLICATION. So, please, make sure to follow these detailed instructions.

Your application needs to include:

  1. A short description (no more than 2 pages, single spaced) of 1 or 2 projects that you propose to do while interning at MSRNE, independently and/or in collaboration with current SMC researchers. The project proposals can be related to, but must be distinct from your dissertation research. Be specific and tell us:
    • What is the research question animating your proposed project?
    • What methods would you use to address your question?
    • How does your research question speak to the interests of the SMC?
    • Who do you hope to reach (who are you engaging) with this proposed research?
  2. A brief description of your dissertation project.
  3. An academic article-length manuscript (~7,000 words or more) that you have authored or co-authored (published or unpublished) that demonstrates your writing skills.
  4. A copy of your CV.
  5. The names and contact information for 3 references (one must be your dissertation advisor).
  6. A pointer to your website or other online presence (if available; this is not required).

A request for letters will be sent directly to your list of referees, on your behalf. IMPORTANT: THE APPLICATION SYSTEM WILL NOT REQUEST THOSE REFERENCE LETTERS UNTIL AFTER YOU HAVE SUBMITTED YOUR APPLICATION! Please warn your letter writers in advance so that they will be ready to submit them when they receive the prompt. The email they receive will automatically tell them they have two weeks to respond. Please ensure that they expect this email (tell them to check their spam folders, too!) and are prepared to submit your letter by our application deadline.  You can check the progress on individual reference requests at any time by clicking the status tab within your application page. Note that a complete application must include three submitted letters of reference.

If you have any questions about the application process, please contact Tarleton Gillespie at tarleton@microsoft.com and include “SMC PhD Internship” in the subject line.

 

TIMELINE

Due to the volume of applications, late submissions (including submissions with late letters of reference) will not be considered. We will not be able to provide specific feedback on individual applications. Finalists will be contacted in January to arrange a Skype interview. Applicants chosen for the internship will be informed in February and announced on the socialmediacollective.org blog.

 


 

PREVIOUS INTERN TESTIMONIALS

“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.”
— Laura Noren, Sociology, New York University

“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.”
— Jessica Lingel, Library and Information Science, Rutgers University

“My internship experience at MSRNE was eye-opening, mind-expanding and happy-making. If you are looking to level up as a scholar – reach new depth in your focus area, while broadening your scope in directions you would never dream up on your own; and you’d like to do that with the brightest, most inspiring and supportive group of scholars and humans – then you definitely want to apply.”
— Kat Tiidenberg, Sociology, Tallinn University, Estonia

“The Microsoft Internship is a life-changing experience. The program offers structure and space for emerging scholars to find their own voice while also engaging in interdisciplinary conversations. For social scientists especially the exposure to various forms of thinking, measuring, and problem-solving is unparalleled. I continue to call on the relationships I made at MSRE and always make space to talk to a former or current intern. Those kinds of relationships have a long tail.”
— Tressie McMillan Cottom, Sociology, Emory University

“My summer at MSR New England has been an important part of my development as a researcher. Coming right after the exhausting, enriching ordeal of general/qualifying exams, it was exactly what I needed to step back, plunge my hands into a research project, and set the stage for my dissertation… PhD interns are given substantial intellectual freedom to pursue the questions they care about. As a consequence, the onus is mostly on the intern to develop their research project, justify it to their mentors, and do the work. While my mentors asked me good, supportive, and often helpfully hard, critical questions, my relationship with them was not the relationship of an RA to a PI – instead it was the relationship of a junior colleague to senior ones.”
— J. Nathan Matias, Media Lab, MIT (read more here)

“This internship provided me with the opportunity to challenge myself beyond what I thought was possible within three months. With the SMC’s guidance, support, and encouragement, I was able to reflect deeply about my work while also exploring broader research possibilities by learning about the SMC’s diverse projects and exchanging ideas with visiting scholars. This experience will shape my research career and, indeed, my life for years to come.”
— Stefanie Duguay, Communication, Queensland University of Technology

“There are four main reasons why I consider the summer I spent as an intern with the Social Media Collective to be a formative experience in my career. 1. was the opportunity to work one-on-one with the senior scholars on my own project, and the chance to see “behind the scenes” on how they approach their own work. 2. The environment created by the SMC is one of openness and kindness, where scholars encourage and help each other do their best work. 3. hearing from the interdisciplinary members of the larger MSR community, and presenting work to them, required learning how to engage people in other fields. And finally, 4. the lasting effect: Between senior scholars and fellow interns, you become a part of a community of researchers and create friendships that extend well beyond the period of your internship.”
— Stacy Blasiola, Communication, University of Illinois Chicago

“My internship with Microsoft Research was a crash course in what a thriving academic career looks like. The weekly meetings with the research group provided structure and accountability, the stream of interdisciplinary lectures sparked intellectual stimulation, and the social activities built community. I forged relationships with peers and mentors that I would never have met in my graduate training.”
— Kate Zyskowski, Anthropology, University of Washington

“It has been an extraordinary experience for me to be an intern at Social Media Collective. Coming from a computer science background, communicating and collaborating with so many renowned social science and media scholars teaches me, as a researcher and designer of socio-technical systems, to always think of these systems in their cultural, political and economic context and consider the ethical and policy challenges they raise. Being surrounded by these smart, open and insightful people who are always willing to discuss with me when I met problems in the project, provide unique perspectives to think through the problems and share the excitements when I got promising results is simply fascinating. And being able to conduct a mixed-method research that combines qualitative insights with quantitative methodology makes the internship just the kind of research experience that I have dreamed for.”
— Ming Yin, Computer Science, Harvard University

“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.”
— Germaine Halegoua, Communication Arts, University of Wisconsin-Madison

“Not only was I able to work with so many smart people, but the thoughtfulness and care they took when they engaged with my research can’t be stressed enough. The ability to truly listen to someone is so important. You have these researchers doing multiple, fascinating projects, but they still make time to help out interns in whatever way they can. I always felt I had everyone’s attention when I spoke about my project or other issues I had, and everyone was always willing to discuss any questions I had, or even if I just wanted clarification on a comment someone had made at an earlier point. Another favorite aspect of mine was learning about other interns’ projects and connecting with people outside my discipline.”
— Jolie Matthews, Education, Stanford University

 


 

FREQUENTLY ASKED QUESTIONS
How much is the salary/stipend? How is it disbursed?
The exact amount changes year to year and depends on a student’s degree status and any past internships with MSR, but it’s somewhere above $2,000/month (after taxes). Interns are paid every 2 weeks. Be aware that the first paycheck doesn’t arrive until about week 3 or 4 (it takes a while for the paperwork to process), so you’d need to make sure you have resources to cover your transition to Cambridge, MA.
Is housing included? Is there assistance finding housing?
The internship comes with funds for travel to/from the area, a small relocation budget, and either a housing stipend or assigned housing.
Are other living expenses included, such as healthcare?
Commuting is covered through either a voucher to get a bike, parking at the building, or a commuter pass. Healthcare is *not* provided, though there is a (pricey) policy that students can purchase while here. The assumption is that interns are covered by their home institution’s healthcare policies, as you would be if you are on summer break.
Are there any provisions for dependents traveling with the intern?
There are, but they can change, so feel free to ask about the specifics that pertain to you. Dependents can be covered with housing (i.e. interns with families receive housing assignments that accommodate their children and partners). Interns with families have definitely been able to make the visit work.
Please note: This internship is *intense* – even with the pretty good pay and the sweet view, it’s not worth applying for this unless you’re ready to work as hard as (or harder than) you have in any grad seminar before.

Negotiating Identity in Social Media: Ph.D. course in Aarhus after AoIR

Registration open for: “Negotiating Identity in Social Media: Relational, emotional, and visual labor” with Nancy Baym, Annette Markham and Katrin Tiidenberg.

REGISTRATION: https://auws.au.dk/negotiationofidentityinsocialmedia

Time: Oct 11-14, 2016 (Just after the AoIR conference in Berlin)
Place: Aarhus University and DOKK 1,  Aarhus, Denmark
Online: We’ll post an online participation option soon. Check back!

Instructors:
Nancy Baym (Microsoft Research New England and MIT);
Annette Markham (Aarhus University);
Katrin Tiidenberg (Aarhus University and Tallinn University).

Description: This course introduces participants to contemporary concepts for studying how self, identity, and contexts are negotiated through interactive processes involving visuality, relationality, and emotionality. The metaphor of labor is used to highlight how these practices are constrained and enabled by economic rationalities, affordances of digital technologies, and contemporary norms around building identity through social media.

1. Emotional Labor was developed as a sociological concept to understand certain workplace practices. This theory usefully addresses how, within an economic framework of producing the self as a ‘brand’ via social media, a labor model of controlled emotionality is invoked. This critical stance toward identity performance is a useful lens for studying how people perform and negotiate identity in social media contexts.

2. Relational labor, a term developed by Nancy Baym to illustrate how performers build ongoing connections with disparate audiences, is an extension of emotional labor. This concept helps us consider the neoliberal frames within which our identity practices are caught, when using social media platforms geared toward audience building, and how the issues raised by emotional labor play out when moved from particular interactions to the unending connectivity social media demand.

3. Visual labor is a concept that, like the previous two, can help researchers consider issues and practices around the digitally saturated self as a product of a visual economy.

Who can attend? Course is appropriate for PhD students, postdocs, and early career researchers in media studies, information studies, anthropology, sociology, political science, and other fields addressing social media practices or negotiation of identity. No prerequisite knowledge is necessary.

Readings:

Emotional labor:

Hochschild, A. R. (1983). The managed heart: Commercialization of human feeling. Berkeley: University of California Press.
Tracy, S. J. (2000). Becoming a character for commerce: emotion labor, self-subordination, and discursive construction of identity in a total institution. Management Communication Quarterly, 14(1), 90–128.
Kang, M. (2003). The managed hand: the commercialization of bodies and emotions in Korean immigrant-owned nail salons. Gender and Society, 17(6), 820–839.

Relational labor:

Baym, N. K. (2012). Fans or friends?: seeing social media audiences as musicians do. Participations: Journal of Audience and Reception Studies, 9(2), 286–316.
Baym, N. K. (2014). Connect with your audience! the relational labor of connection. The Communication Review, 18(1), 14-22.

Bounded rationality/bounded emotionality:

Mumby, D. K., & Putnam, L. L. (1992). The politics of emotion: a feminist reading of bounded rationality. The Academy of Management Review, 17(3), 465–486.

Interpersonal relations:

Baxter, L. A., & Montgomery, B. M. (1996). Chapter 1 “Thinking dialectically about communication in personal relationships.” In Relating: dialogues and dialectics. New York: The Guilford Press.

Identity:

Gergen, K. (2000). Chapter “Truth in trouble” and chapter “From self to relationship.“  In The saturated self: dilemmas of identity in contemporary life (pp. 81 –110). New York: Basic Books.
Goffman, E. (1966) Chapter “Interpretations”. In Behavior in public places (pp. 193–242). New York: The Free Press.
Goffman, E. (1981). Footing. In Forms of talk (pp. 124–159). Philadelphia: University of Pennsylvania Press. (we assume that participants have read Erving Goffman’s Presentation of Self in Everyday Life).
Markham, A. (2013). The dramaturgy of digital experience. In C. Edgley (Ed.), The drama of social life: a dramaturgical handbook (pp. 279–293). Farnham: Ashgate.

Visuality:

Tiidenberg, K., & Gómez Cruz, E. (2015). Selfies, image and the re-making of the body. Body & Society, 1–26.
Abidin, C. (2016). “Aren’t these just young, rich women doing vain things online?”: influencer selfies as subversive frivolity. Social Media + Society, 2(2), 1–17.

Preliminary schedule:

Tuesday, October 11, 2016:
09:30-12:00: Introduction to the course and discussion
12:00 – 13:00 Lunch
13:00 – 14:30: Public Lecture by Annette Markham on Emotional Labor

Casual (self-funded) dinner with the seminar participants, location TBA

Wednesday, October 12, 2016:
09:30 – 12:00: Discuss emotional labor (previous day’s lecture plus texts)
12:00 – 13:00 Lunch
13:00 – 14:30 Public Lecture by Nancy Baym on Relational Labor
15:00-16:30: QTC Wednesdays at the DLRC (Digital Living Research Commons). Informal conversation with Nancy Baym

Dinner with Media Studies and Information Studies faculty: Location TBA

Thursday, October 13, 2016:
9:30- 12:00: Discuss relational labor (previous day’s lecture plus texts)
12:00 – 13:00 Lunch
13:00 – 14:30 Public Lecture by Katrin Tiidenberg on Embodiment and Visual Labor
15:00-16:00 Discussion of issues, ethics, and concerns
16:00-17:00 wrap-up and evaluation

Organized dinner with participants, location TBA

Three flawed assumptions the Daily Beast made about dating apps

Image from @Cernovich

Last week, the Daily Beast published an article by one of its editors who sought to report on how dating apps were facilitating sexual encounters in Rio’s Olympic Village. Instead, his story focused mainly on athletes using Grindr, an app for men seeking men, and included enough personal information about individuals to identify and out them. After the article was criticized as dangerous and unethical across media outlets and social media, the Daily Beast replaced it with an apology. However, decisions to publish articles like this are made based on assumptions about who uses dating apps and how people share information on them. These assumptions are visible not only in how journalists act but also in the approaches that researchers and app companies take when it comes to users’ personal data. Ethical breaches like the one made by the Daily Beast will continue unless we address the following three (erroneous) assumptions:

Assumption 1. Data on dating apps is shareable like a tweet or a Facebook post

 Since dating apps are a hybrid between dating websites of the past and today’s social media, there is an assumption that the information users generate on dating apps should be shared. Zizi Papacharissi and Paige Gibson[1] have written about ‘shareability’ as the built-in way that social network sites encourage sharing and discourage withholding information. This is evident within platforms like Facebook and Twitter, through ‘share’ and ‘retweet’ buttons, as well as across the web as social media posts are formatted to be easily embedded in news articles and blog posts.

Dating apps provide many spaces for generating content, such as user profiles, and some app architectures are increasingly including features geared toward shareability. Tinder, for example, provides users with the option of creating a ‘web profile’ with a distinct URL that anyone can view without even logging into the app. While users determine whether or not to share their web profiles, Tinder also recently experimented with a “share” button allowing users to send a link to another person’s profile by text message or email. This creates a platform-supported means of sharing profiles to individuals who may never have encountered them otherwise.

The problem with dating apps adopting social media’s tendency toward sharing is that dating environments construct particular spaces for the exchange of intimate information. Dating websites have always required a login and password to access their services. Dating apps are no different in this sense – regardless of whether users log in through Facebook authentication or create a new account, dating apps require users to be members. This creates a shared understanding of the boundaries of the app and the information shared within it. Everyone is implicated in the same situation: on a dating app, potentially looking for sexual or romantic encounters. A similar boundary exists for me when I go to the gay bar; everyone I encounter is also in the same space, so the information of my whereabouts is equally implicating for them. However, a user hitting ‘share’ on someone’s Tinder profile and sending it to a colleague, family member, or acquaintance removes that information from the boundaries within which it was consensually provided. A journalist joining a dating app to siphon users’ information for a racy article flat out ignores these boundaries.

Assumption 2. Personal information on dating apps is readily available and therefore can be publicized

 When the Daily Beast’s editor logged into Grindr and saw a grid full of Olympic athletes’ profiles, he likely assumed that if this information was available with a few taps of his screen then it could also be publicized without a problem. Many arguments about data ethics get stuck debating whether information shared on social media and apps is public or private. In actuality, users place their information in a particular context with a specific audience in mind. The violation of privacy occurs when another party re-contextualizes this information by placing it in front of a different audience.

Although scholars have pointed out that re-contextualization of personal information is a violation of privacy, this remains a common occurrence even across academia. We were reminded of this last May when 70,000 OkCupid users’ data was released without permission by researchers in Denmark. Annette Markham’s post on the SMC blog pointed out that “the expectation of privacy about one’s profile information comes into play when certain information is registered and becomes meaningful for others.” This builds on Helen Nissenbaum’s[2] notion of “privacy in context” meaning that people assume the information they share online will be seen by others in a specific context. Despite the growing body of research confirming that this is exactly how users view and manage their personal information, I have come across many instances where researchers have re-published screenshots of user profiles from dating apps without permission. These screenshots are featured in presentations, blog posts, and theses with identifying details that violate individuals’ privacy by re-contextualizing their personal information for an audience outside the app. As an academic community, we need to identify this as an unethical practice that is potentially damaging to research subjects.

Dating app companies also perpetuate the assumption that user information can be shared across contexts through their design choices. Recently, Tinder launched a new feature in the US called Tinder Social, which allows users to join with friends and swipe on others to arrange group hangouts. Since users team up with their Facebook friends, activating this feature lets you see everyone else on your Facebook account who is also on Tinder with this feature turned on. While Tinder Social requires users to ‘unlock’ its functionality from their Settings screen, its test version in Australia automatically opted users in. When Australian users updated their app, this collapsed a boundary between the two platforms that previously kept the range of family, friends, and acquaintances accumulated on Facebook far, far away from users’ dating lives. While Tinder seems to have learned from the public outcry about this privacy violation, the company’s choice to overlap Facebook and Tinder audiences disregards how important solid boundaries between social contexts can be for certain users.

 Assumption 3. Sexuality is no big deal these days

 At the crux of the Daily Beast article was the assumption that it was okay to share potentially identifying details about people’s sexuality. As others have pointed out, just because same-sex marriage and other rights have been won by lesbian, bisexual, gay, trans, and queer (LGBTQ) people in some countries, many cultures, religions, and political and social groups remain extremely homophobic. Re-contextualization of intimate and sexual details shared within the boundaries of a dating app not only constitutes a violation of privacy, it could expose people to discrimination, abuse, and violence.

In my research with LGBTQ young people, I’ve learned that a lot of them are very skilled at placing information about their sexuality where they want it to be seen and keeping it absent from spaces where it may cause them harm. For my master’s thesis, I interviewed university students about their choices of whether or not to come out on Facebook. Many of them were out to a certain degree, posting about pro-LGBTQ political views and displaying their relationships in ways that resonated with friendly audiences but eluded potentially homophobic audiences like coworkers or older adults.

In my PhD, I’ve focused on how same-sex attracted women manage their self-representations across social media. Their practices are not clear-cut since different social media spaces mean different things to users. One interviewee talked about posting selfies with her partner to Facebook for friends and family but not to Instagram where she’s trying to build a network of work and church-related acquaintances. Another woman spoke about cross-posting Vines to friendly LGBTQ audiences on Tumblr but keeping them off of Instagram and Facebook where her acquaintances were likely to pick fights over political issues. Many women talked about frequently receiving negative, discriminatory, and even threatening homophobic messages despite these strategies, highlighting just how important it was for them to be able to curate their self-representations. This once again defies the tendency to designate some sites or pieces of information as ‘public’ and others as ‘private.’ We need to follow users’ lead by respecting the context in which they’ve placed personal information based on their informed judgments about audiences.

Journalists, researchers, and app companies frequently make decisions based on assumptions about dating apps. They assume that because the apps structurally resemble other social media, it’s permissible to carry out similar practices tending toward sharing user-generated information. This goes hand-in-hand with the assumption that if user data is readily available, it can be re-contextualized for other purposes. On dating apps, this assumes (at best) that user data about sexuality will be received neutrally across contexts and, at worst, that this data can be used without regard for the harm it may cause. There is ample evidence that none of these assumptions hold true when we look at how people create bounded spaces for exchanging intimate information, how users manage their personal information in particular contexts, and how LGBTQ people deal with enduring homophobia and discrimination. While the Daily Beast should not have re-contextualized dating app users’ identifying information in its article, this instance provides an opportunity to dispel these assumptions and change how we design, research, and report about dating apps in order to treat users’ information more ethically.

 

 [1] Papacharissi, Z., & Gibson, P. L. (2011). Fifteen minutes of privacy: Privacy, sociality and publicity on social network sites. In S. Trepte & L. Reinecke (Eds.), Privacy Online (pp. 75–89). Berlin: Springer.

[2] Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford University Press.

Awakenings of the Filtered

I was delighted to give the Robert M. Pockrass Memorial Lecture at Penn State University this year, titled “Awakenings of the Filtered: Algorithmic Personalization in Social Media and Beyond.” I used the opportunity to give a broad overview of recent work about social media filtering algorithms and personalization. Here it is:

I tried to argue that media of all kinds have been transformed to include automatic selection and ranking as a basic part of their operation, that this transformation is significant, and that it carries significant dangers that are currently not well-understood.

Some highlights: I worry that algorithmic filtering as it is currently implemented suppresses the dissemination of important news, distorts our interactions with friends and family, disproportionately deprives some people of opportunity, and that Internet platforms intentionally obscure the motives and processes by which algorithms effect these consequences.

I say that users and platforms co-produce relevance in social media. I note that the ascendant way to reason about communication and information is actuarial, which I call “actuarial media.”  I discuss “corrupt personalization,” previously a topic on this blog. I propose that we are seeing a new kind of “algorithmic determinism” where cause and effect are abandoned in reasoning about the automated curation of content.

I also mention the anti-News Feed (or anti-filtering) backlash, discuss whether or not Penn State dorms have bathrooms, and talk about how computers recognize cat faces.

Penn State was a great audience, and the excellent question and answer session is not captured here.  Thanks so much to PSU for having me, and for allowing me to post this recording. A particularly big thank you to Prof. Matthew McAllister and the Pockrass committee, and to Jenna Grzeslo for the very kind introduction.

I welcome your thoughts!

 

Algorithms, clickworkers, and the befuddled fury around Facebook Trends

The controversy about the human curators behind Facebook Trends has grown since the allegations made last week by Gizmodo. Besides being a major headache for Facebook, it has helped prod a growing discussion about the power of Facebook to shape the information we see and what we take to be most important. But we continue to fail to find the right words to describe what algorithmic systems are, who generates them, and what they should do for users and for the public. We have to get this clear.

Here’s the case so far: Gizmodo says that Facebook hired human curators to decide which topics, identified by algorithms, would be listed as trending, and how they should be named and summarized; one former curator alleged that his fellow curators often overlooked or suppressed conservative topics. This came too close on the heels of a report a few weeks back that Facebook employees had asked internally if the company had a responsibility to slow Donald Trump’s momentum. Angry critics have noted that Zuckerberg, Search VP Tom Stocky, and other FB execs are liberals. Facebook has vigorously disputed the allegation, saying that they have guidelines in place to ensure consistency and neutrality, asserting that there’s no evidence that it happened, distributing their guidelines for how Trending topics are selected and summarized (after they were leaked), inviting conservative leaders in for a discussion, and pointing out their conservative bona fides. The Senate’s Commerce Committee, chaired by Republican Senator John Thune, issued a letter demanding answers from Facebook about it. Some wonder if the charges may have been overstated. Other Facebook news curators have spoken up, some to downplay the allegations and defend the process that was in place, others to highlight the sexist and toxic work environment they endured.

Commentators have used the controversy to express a range of broader concerns about Facebook’s power and prominence. Some argue it is unprecedented: “When a digital media network has one billion people connected to entertainment companies, news publications, brands, and each other, the right historical analogy isn’t television, or telephones, or radio, or newspapers. The right historical analogy doesn’t exist.” Others have made the case that Facebook is now as powerful as the media corporations, which have been regulated for their influence; that their power over news organizations and how they publish is growing; that they could potentially and knowingly engage in political manipulation; that they are not transparent about their choices; that they have become an information monopoly.

This is an important public reckoning about Facebook, and about social media platforms more generally, and it should continue. We clearly don’t yet have the language to capture the kind of power we think Facebook now holds. But it would be great if, along the way, we could finally mothball some foundational and deeply misleading assumptions about Facebook and social media platforms, assumptions that have clouded our understanding of their role and responsibility. Starting with the big one:

Algorithms are not neutral. Algorithms do not function apart from people.

 

We prefer the idea that algorithms run on their own, free of the messy bias, subjectivity, and political aims of people. It’s a seductive and persistent myth, one Facebook has enjoyed and propagated. But it’s simply false.

I’ve already commented on this, and many of those who study the social implications of information technology have made this point abundantly clear (including Pasquale, Crawford, Ananny, Tufekci, boyd, Seaver, McKelvey, Sandvig, Bucher, and nearly every essay on this list). But it persists: in statements made by Facebook, in the explanations offered by journalists, even in the words of Facebook’s critics.

If you still think algorithms are neutral because they’re not people, here’s a list, not even an exhaustive one, of the human decisions that have to be made to produce something like Facebook’s Trending Topics (which, keep in mind, pales in scope and importance next to Facebook’s larger algorithmic endeavor, the “news feed” listing your friends’ activity). Some are made by the engineers designing the algorithm, others are made by curators who turn the output of the algorithm into something presentable. If your eyes start to glaze over, that’s the point; read any three points and then move on, they’re enough to dispel the myth (there is also a small code sketch after the list that makes the same point). Ready?

(determining what activity might potentially be seen as a trend)
– what data should be counted in this initial calculation of what’s being talked about (all Facebook users, or subset? English language only? private posts too, or just public ones?)
– what time frame should be used in this calculation — both for the amount of activity happening “now” (one minute, one hour, one day?) and to get a baseline measure of what’s typical (a week ago? a different day at the same time, or a different time on the same day? one point of comparison or several?)
– should Facebook emphasize novelty? longevity? recurrence? (e.g., if it has trended before, should it be easier or harder for it to trend again?)
– how much of a drop in activity is sufficient for a trending topic to die out?
– which posts actually represent a single topic (e.g., when do two hashtags refer to the same topic?)
– what other signals should be taken into account? what do they mean? (should Facebook measure posts only, or take into account likes? how heavily should they be weighed?)
– should certain contributors enjoy some privileged position in the count? (corporate partners, advertisers, high-value users? pay-for-play?)

(from all possible trends, choosing which should be displayed)
– should some topics be dropped, like obscenity or hate speech?
– if so, who decides what counts as obscene or hateful enough to leave off?
– what topics should be left off because they’re too generic? (Facebook noted that it didn’t include “junk topics” that do not correlate to a real world event. What counts as junk, case by case?)

(designing how trends are displayed to the users)
– who should do this work? what expertise should they have? who hires them?
– how should a trend be presented? (word? title? summary?)
– what should clicking on a trend headline lead to? (some form of activity on Facebook? some collection of relevant posts? an article off the platform, and if so, which one?)
– should trends be presented in a single list, or broken into categories? if so, can the same topic appear in more than one category?
– what are the boundaries of those categories (i.e. what is or isn’t “politics”?)
– should trends be grouped regionally or not? if so, what are the boundaries of each region?
– should trends lists be personalized, or not? If so, what criteria about the user are used to make that decision?

(what to do if the list is deemed to be broken or problematic in particular ways)
– who looks at this project to assess how it’s doing? how often, and with what power to change it?
– what counts as the list being broken, or off the mark, or failing to meet the needs of users or of Facebook?
– what is the list being judged against, to know when it’s off (as tested against other measures of Facebook activity? as compared to Twitter? to major news sites?)
– should they re-balance a Trends list that appears unbalanced, or leave it? (e.g. what if all the items in the list at this moment are all sports, or all celebrity scandal, or all sound “liberal”?)
– should they inject topics that aren’t trending, but seem timely and important?
– if so, according to what criteria? (news organizations? which ones? how many? US v. international? partisan vs not? online vs off?)
– should topics about Facebook itself be included?
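
To see how many of these choices get baked directly into code, here is a minimal, purely illustrative sketch of a trending-topics calculation. It is not Facebook’s actual system; every constant, window, and filter in it is an invented assumption, which is precisely the point: each one is a human judgment of the kind listed above.

```python
from collections import Counter

# Purely illustrative: none of this is Facebook's code. Every constant below
# is a human decision, not a neutral fact about "what's trending."
WINDOW_NOW_MINUTES = 60            # what counts as activity happening "now"?
BASELINE_MINUTES = 7 * 24 * 60     # what counts as a typical level of activity?
SURGE_RATIO = 3.0                  # how big a spike is big enough to "trend"?
MIN_MENTIONS = 500                 # how much volume is too little to bother with?
BLOCKED = {"junk topic", "obscene topic"}   # who decides what gets dropped?

def count_topics(posts):
    """Count mentions per topic. Which posts are even eligible (public only?
    English only? likes as well as posts?) was already decided upstream."""
    return Counter(topic for topic, _text in posts)

def trending_topics(posts_now, posts_baseline, limit=10):
    """Rank topics whose rate of mention 'now' exceeds their baseline rate."""
    now, base = count_topics(posts_now), count_topics(posts_baseline)
    scored = []
    for topic, mentions in now.items():
        if topic in BLOCKED or mentions < MIN_MENTIONS:
            continue
        rate_now = mentions / WINDOW_NOW_MINUTES
        rate_base = (base.get(topic, 0) + 1) / BASELINE_MINUTES  # +1 avoids dividing by zero
        if rate_now / rate_base >= SURGE_RATIO:
            scored.append((rate_now / rate_base, topic))
    # Even the length of the list and the tie-breaking order are editorial choices.
    return [topic for _score, topic in sorted(scored, reverse=True)[:limit]]
```

Change any one of those numbers and a different set of topics “trends”; whoever sets them is doing editorial work, whether or not anyone calls it that.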

These are all human choices. Sometimes they’re made in the design of the algorithm, sometimes around it. The result we see, a changing list of topics, is not the output of “an algorithm” by itself, but rather of an effort that combined human activity and computational analysis, together, to produce it.

So algorithms are in fact full of people and the decisions they make. When we let ourselves believe that they’re not, we let everyone — Zuckerberg, his software engineers, regulators, and the rest of us — off the hook for actually thinking out how they should work, leaving us all unprepared when they end up in the tall grass of public contention. “Any algorithm that has to make choices has criteria that are specified by its designers. And those criteria are expressions of human values. Engineers may think they are “neutral”, but long experience has shown us they are babes in the woods of politics, economics and ideology.” Calls for more public accountability, like this one from my colleague danah boyd, can only proceed once we completely jettison the idea that algorithms are neutral — and replace it with a different language that can assess the work that people and systems do together.

The problem is not algorithms, it’s that Facebook is trying to clickwork the news.

 

It is certainly in Facebook’s interest to obscure all the people involved, so users can keep believing that a computer program is fairly and faithfully hard at work. Dismantling this myth raises the kind of hard questions Facebook is fielding. But once we jettison this myth, what’s left? It’s easy to despair: with so many human decisions involved, how could we ever get a fair and impartial measure of what matters? And forget the handful of people who designed the algorithm and the handful of people who select and summarize from it: Trends are themselves a measure of the activity of Facebook users. These trending topics aren’t produced by dozens of people but by millions. Their judgment of what’s worth talking about, in each case and in the aggregate, may be so distressingly incomplete, biased, skewed, and vulnerable to manipulation that it’s absurd to pretend it can tell us anything at all.

But political bias doesn’t come from the mere presence of people. It comes from how those people are organized to do what they’re asked to do. Along with our assumption that algorithms are neutral is a matching and equally misleading assumption that people are always and irretrievably biased. But human endeavors are organized affairs, and they can be organized to work against bias. Journalism is full of people too, making all sorts of decisions that are just as opaque, limited, and self-interested. What we hope keeps journalism from slipping into bias and error are its well-established professional norms and thoughtful oversight.

The real problem here is not the liberal leanings of Facebook’s news curators. If conservative news topics were overlooked, it’s only a symptom of the underlying problem. Facebook wanted to take surges of activity that its algorithms could identify and turn them into news-like headlines. But it treated this as an information processing problem, not an editorial one. They’re “clickworking” the news.

Clickwork begins with the recognition that computers are good at some kinds of tasks, and humans at others. The answer, it suggests, is to break the task at hand down into components and parcel them out to each accordingly. For Facebook’s trending topics, the algorithm is good at scanning an immense amount of data and identifying surges of activity, but not at giving those surges a name and a coherent description. That is handled by people — in industry parlance, this is the “human computation” part. These identified surges of activity are delivered to a team of curators, each one tasked with following a set of procedures to identify and summarize them. The work is segmented into simple and repetitive tasks, and governed by a set of procedures such that, even though different people are doing it, their output will look the same. In effect, the humans are given tasks that only humans can do, but they are not invited to do them in a human way: they are “programmed” by the modularized workflow and the detailed procedures so that they do the work like computers would. As Lilly Irani put it, clickwork “reorganizes digital workers to fit them both materially and symbolically within existing cultures of new media work.”
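A rough sketch of that division of labor, hypothetical and simplified far beyond anything Facebook actually runs, might look like this: the machine flags surges, and the human work arrives pre-structured as a fill-in-the-blanks task.

```python
# A hypothetical sketch of the clickwork division of labor: machine detection,
# then small, templated tasks handed to human curators.
from collections import Counter

def detect_surges(posts_today, posts_yesterday, ratio=2.0):
    """Machine step: flag terms whose volume jumped sharply (the part computers do well)."""
    today, yesterday = Counter(posts_today), Counter(posts_yesterday)
    return [term for term, count in today.items()
            if count >= ratio * (yesterday[term] + 1)]

def make_curator_task(term):
    """Human step, pre-structured: a curator fills in the blanks by following fixed procedures."""
    return {
        "detected_term": term,
        "headline": None,      # written by the curator, "up style," no copied headlines
        "description": None,   # written by the curator, no spoilers
        "importance": None,    # e.g. "National Story" if it leads 5 of 10 checked news sites
    }

task_queue = [make_curator_task(term) for term in
              detect_surges(["beyonce"] * 5 + ["taxes"], ["beyonce", "taxes"])]
```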

This is apparent in the guidelines that Facebook gives to their Trends curators. The documents, leaked to The Guardian and then released by Facebook, did not reveal some bombshell about political manipulation, nor did they do much to demonstrate careful guidance on the part of Facebook around the issue of political bias. What’s most striking is that they are mind-numbingly banal: “Write the description up style, capitalizing the first letter of all major words…” “Do not copy another outlet’s headline…” “Avoid all spoilers for descriptions of scripted shows…” “After identifying the correct angle for a topic, click into the dropdown menu underneath the Unique Keyword field and select the Unique Keyword that best fits the topic…” “Mark a topic as ‘National Story’ importance if it is among the 1-3 top stories of the day. We measure this by checking if it is leading at least 5 of the following 10 news websites…” “Sports games: rewrite the topic name to include both teams…” This is not the newsroom, it’s the secretarial pool.

Moreover, these workers were kept separate from the rest of the full-time employees, and worked under quotas for how many trends to identify and summarize, quotas that were increased as the project went on. As one curator noted, “The team prioritizes scale over editorial quality and forces contractors to work under stressful conditions of meeting aggressive numbers coupled with poor scheduling and miscommunication. If a curator is underperforming, they’ll receive an email from a supervisor comparing their numbers to another curator.” All were hourly contractors, kept under non-disclosure agreements and asked not to mention that they worked for Facebook. “’It was degrading as a human being,’ said another. ‘We weren’t treated as individuals. We were treated in this robot way.’” A new piece in The Guardian from one such news curator insists that it was also a toxic work environment, especially for women. These “data janitors” are rendered so invisible in the images of Silicon Valley and how tech works that, when we suddenly hear from one, we’re surprised.

Their work was organized to quickly produce capsule descriptions of bits of information that are styled the same — as if they were produced by an algorithm. (This lines up with other concerns about the use of algorithms and clickworkers to produce cheap journalism at scale, and about the increasing influence of audience metrics on news judgment.) It was not, however, organized to thoughtfully assemble a vital information resource that some users treat as the headlines of the day. It was not organized to help these news curators develop experience together on how to do this work well, or handle contentious topics, or reflect on the possible political biases in their choices. It was not likely to foster a sense of community and shared ambitions with Facebook, the absence of which might lead frustrated and overworked news curators to indulge their own political preferences. And I suspect it was not likely to funnel any insights they had about trending topics back to the designers of the algorithms they depended on.

Trends are not the same as news, but Facebook kinda wants them to be.

 

Part of why charges of bias are so compelling is that we have a longstanding concern about the problem of bias in news. For more than a century we’ve fretted about the individual bias of reporters, the slant of news organizations, and the limits of objectivity [http://jou.sagepub.com/content/2/2/149.abstract]. But is a list of trending topics a form of news? Are the concerns we have about balance and bias in the news relevant for trends?

“Trends” is a great word, the best word to have emerged amidst the social media zeitgeist. In a cultural moment obsessed with quantification, defended as being the product of an algorithm, “trends” is a powerfully and deliberately vague term that does not reveal what it measures. Commentators poke at Facebook for clearer explanations of how they choose trends, but “trends” could mean such a wide array of things, from the most activity to the most rapidly rising to a completely subjective judgment about what’s popular.

But however they are measured and curated, Facebook’s Trends are, at their core, measures of activity on the site. So, at least in principle, they are not news, they are expressions of interest. Facebook users are talking about some things, a lot, for some reason. This has little to do with “news,” which implies an attention to events in the world and some judgment of importance. Of course, many things Facebook users talk about, though not all, are public events. And it seems reasonable to assume that talking about a topic represents some judgment of its importance, however minimal. Facebook takes these identifiable surges of activity as proxies for importance. Facebook users “surface” the news… approximately. The extra step of “injecting” stories drawn from the news that were, for whatever reason, not surging among Facebook users goes a step further, turning their proxy of the news into a simulation of it. Clearly this was an attempt to best Twitter, and it may also have played into their effort to persuade news organizations to partner with them and take advantage of their platform as a means of distribution. But it also encouraged us to hold Trends accountable for news-like concerns, like liberal bias.

We could think about Trends differently, not as approximating the news but as taking the public’s pulse. If Trends were designed to strictly represent “what are Facebook users talking about a lot,” presumably there is some scientific value, or at least cultural interest, in knowing what (that many) people are actually talking about. If that were its understood value, we might still worry about the intervention of human curators and their political preferences, not because their choices would shape users’ political knowledge or attitudes, but because we’d want this scientific glimpse to be unvarnished by misrepresentation.

But that is not how Facebook has invited us to think about its Trending topics, and it couldn’t do so even if it wanted to: its interest in Trending topics is neither as a form of news production nor as a pulse of the public, but as a means to keep users on the site and involved. The proof of this, and the detail that so often gets forgotten in these debates, is that the Trending Topics are personalized. Here’s Facebook’s own explanation: “Trending shows you a list of topics and hashtags that have recently spiked in popularity on Facebook. This list is personalized based on a number of factors, including Pages you’ve liked, your location and what’s trending across Facebook.” Knowing what has “spiked in popularity” is not the same as news; a list “personalized based on… Pages you’ve liked” is no longer a site-wide measure of popular activity; an injected topic is no longer just what an algorithm identified.
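It is worth pausing on what “personalized” means here. A minimal sketch, with invented boosts and topics, of how one site-wide list can be re-ranked differently for every user, so that no two people necessarily see the same “trends”:

```python
# A minimal, hypothetical sketch of per-user re-ranking; the boosts and topics are invented,
# not drawn from anything Facebook has described beyond "Pages you've liked" and "your location".

def personalize(global_trends, liked_pages, location):
    """Re-rank site-wide trend scores with per-user boosts; two users see different lists."""
    def boosted(topic, score):
        if topic in liked_pages:
            score *= 2.0   # assumed boost for affinity with liked Pages
        if location in topic:
            score *= 1.5   # assumed boost for local relevance
        return score
    return [topic for topic, score in sorted(global_trends, key=lambda ts: -boosted(*ts))]

trends = [("NBA Finals", 90.0), ("Eurovision", 80.0), ("Austin City Limits", 40.0)]
print(personalize(trends, liked_pages={"Eurovision"}, location="Austin"))
```

Once the list is filtered and re-ranked per person, it becomes hard to even say what “the” Trending list was at any given moment.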

As I’ve said elsewhere, “trends” are not a barometer of popular activity but a hieroglyph, making provocative but oblique and fleeting claims about “us” but invariably open to interpretation. Today’s frustration with Facebook, focused for the moment on the role their news curators might have played in producing these Trends, is really a discomfort with the power Facebook seems to exert — a kind of power that’s hard to put a finger on, a kind of power that our traditional vocabulary fails to capture. But across the controversies that seem to flare again and again, a connecting thread is Facebook’s insistence on colonizing more and more components of social life (friendship, community, sharing, memory, journalism), and turning the production of shared meaning so vital to sociality into the processing of information so essential to their own aims.

Facebook Trending: It’s made of people!! (but we should have already known that)

Gizmodo has released two important articles (1, 2) about the people who were hired to manage Facebook’s “Trending” list. The first reveals not only how Trending topics are selected and packaged on Facebook, but also the peculiar working conditions this team experienced, the lack of guidance or oversight they were provided, and the directives they received to avoid news that addressed Facebook itself. The second makes a more pointed allegation: that along the way, conservative topics were routinely ignored, meaning the trending algorithm had identified user activity around a particular topic, but the team of curators chose not to publish it as a trend.

This is either a boffo revelation, or an unsurprising look at how the sausage always gets made, depending on your perspective. The promise of “trends” is a powerful one. Even as the public gets more and more familiar with the way social media platforms work with data, and even with more pointed scrutiny of trends in particular, it is still easy to think that “trends” means an algorithm is systematically and impartially uncovering genuine patterns of user activity. So, to discover that a handful of j-school graduates were tasked with surveying all the topics the algorithm identified, choosing just a handful of them, and dressing them up with names and summaries, feels like an unwelcome intrusion of human judgment into what we wish were analytic certainty. Who are these people? What incredible power they have to dictate what is and is not displayed, what is and is not presented as important! Wasn’t this supposed to be just a measure of what users were doing, of what the people found important? Downplaying conservative news is the most damning charge possible, since it has long been a commonplace accusation leveled at journalists. But the revelation is that there are people in the algorithm at all.

But the plain fact of information algorithms like the ones used to identify “trends” is that they do not work alone, they cannot work alone — in so many ways that we must simply discard the fantasy that they do, or ever will. In fact, algorithms do surprisingly little; they just do it really quickly and with a whole lot of data. Here’s some of what they can’t do:

Trending algorithms identify patterns in data, but they can’t make sense of it. The raw data is Facebook posts, likes, and hashtags. Looking at this data, there will certainly be surges of activity that can be identified and quantified: words that show up more than other words, posts that get more likes than other posts. But there is so much more to figure out:
(1) What is a topic? To decide how popular a topic is, Facebook must decide which posts are about that topic. When do two posts or two hashtags represent the same story, such that they should be counted together? An algorithm can only do so much to say whether a post about Beyonce and a post about Bey and a post about Lemonade and a post about QueenB and the hashtag BeyHive are all the same topic. And that’s an easy one, a superstar with a distinctive name, days after a major public event. Imagine trying to determine algorithmically if people are talking about European tax reform, enough to warrant calling it a trend.
(2) Topics are also composed of smaller topics, endlessly down to infinity. Is the Republican nomination process a trending topic, or the Indiana primary, or Trump’s win in Indiana, or Paul Ryan’s response to Trump’s win in Indiana? According to one algorithmic threshold these would be grouped together; by another they would be separate. The problem is not that an algorithm can’t tell. It’s that it can support both interpretations, all interpretations, equally well. So, an algorithm could be programmed to decide, to impose a particular threshold for the granularity of topics. But would that choice make sense to readers, would it map onto their own sense of what’s important, and would it work for the next topic, and the next? (A toy sketch after this list illustrates the problem.)
(3) How should a topic be named and described, in a way that Facebook users would appreciate or even understand? Computational attempts to summarize are notoriously clunky, and often produce the kind of phrasing and grammar that scream “a computer wrote this.”
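The granularity problem in (2) is easy to demonstrate. A toy sketch with made-up posts and a crude similarity measure, not anything Facebook is known to use: the same grouping logic produces one broad topic or several narrow ones, depending entirely on a threshold someone chose.

```python
# A toy sketch (hypothetical data, not Facebook's method) of the granularity problem:
# one threshold yields one broad "topic," another yields several narrower ones.

def similarity(a, b):
    """Jaccard overlap between two posts' word sets."""
    return len(a & b) / len(a | b)

def cluster(posts, threshold):
    """Greedy grouping: a post joins the first cluster it resembles closely enough."""
    clusters = []
    for post in posts:
        for group in clusters:
            if any(similarity(post, member) >= threshold for member in group):
                group.append(post)
                break
        else:
            clusters.append([post])
    return clusters

posts = [
    {"republican", "nomination", "trump"},
    {"indiana", "primary", "trump"},
    {"paul", "ryan", "trump", "indiana"},
]
print(len(cluster(posts, 0.2)))  # low threshold: one broad "Republican nomination" topic
print(len(cluster(posts, 0.5)))  # high threshold: three separate, narrower topics
```

Neither threshold is wrong; they are simply different editorial judgments about what counts as “a topic,” made in code.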
What trending algorithms can identify isn’t always what a platform wants to identify. Facebook, unlike Twitter, chose to display trends that identify topics, rather than single hashtags. This was already a move weighted towards identifying “news” rather than mere chatter. It already strikes an uneasy balance between the kind of information they have — billions of posts and likes surging through their system — and the kind they’d like to display — a list of the most relevant topics. And it already sets up an irreconcilable tension: what should they do when user activity is not a good measure of public importance? It is not surprising, then, that they’d try to focus on articles being circulated and commented on, and from the most reputable sources, as a way to lean on their curation and authority to pre-identify topics. Which opens up, as Gizmodo identifies, the tendency to discount some sources as non-reputable, which can have unintentionally partisan implications.
“Trending” is also being asked to do a lot of things for Facebook: capture the most relevant issues being discussed on Facebook, and conveniently map onto the most relevant topics in the worlds of news and entertainment, and keep users on the site longer, and keep up with Twitter, and keep advertisers happy. In many ways, a trending algorithm can be an enormous liability, if allowed to be: it could generate a list of dreadful or depressing topics; it could become a playground for trolls who want to fill it with nonsense and profanity; it could reveal how little people use Facebook to talk about matters of public importance; it could reveal how depressingly little people care about matters of public importance; and it could help amplify a story critical of Facebook itself. It would take a whole lot of bravado to set that loose on a system like Facebook, and let it show what it shows unmanaged. Clearly, Facebook has a lot more at stake in producing a trending list that, while it should look like an unvarnished report of what users are discussing, is also massaged into something that represents Facebook well.

So: people are in the algorithm because how could they not be? People produce the Facebook activity being measured, people design the algorithms and set their evaluative criteria, people decide what counts as a trend, people name and summarize them, and people look to game the algorithm with their next posts.

The thing is, these human judgments are all part of traditional news gathering as well. Choosing what to report in the news, how to describe it and feature it, and how to honor both the interests of the audience and the sense of importance, has always been a messy, subjective process, full of gaps in which error, bias, self-interest, and myopia can enter. The real concern here is not that there are similar gaps in Facebook’s process as well, or that Facebook hasn’t yet invented an algorithm that can close those gaps. The real worry is that Facebook is being so unbelievably cavalier about it.

Traditional news organizations face analogous problems and must make analogous choices, and can make analogous missteps. And they do. But two countervailing forces work against this, keeping them more honest than not, more on target than not: a palpable and institutionalized commitment to news itself, and competition. I have no desire to glorify the current news landscape, which in many ways produces news that is dishearteningly less than what journalism should be. But there is at least a public, shared, institutionally rehearsed, and historical sense of purpose and mission, or at least there’s one available. Journalism schools teach their students not just how to determine and deliver the news, but why. They offer up professional guidelines and heroic narratives that position the journalist as a provider of political truths and public insight. They provide journalists with frames that help them identify the way news can suffer when it overlaps with public relations, spin, infotainment, and advertising. There are buffers in place to protect journalists from the pressures that can come from upper management, advertisers, or newsmakers themselves, because of a belief that independence is an important foundation for newsgathering. Journalists recognize that their choices have consequences, and they discuss those choices. And there are stakeholders charged with regularly checking these efforts for possible bias and self-interest: public editors and ombudspeople, newswatch organizations and public critics, all trying to keep the process honest. Most of all, there are competitors who would gleefully point out a news organization’s mistakes and failures, which gives editors and managers real incentive to work against the temptations to produce news that is self-serving, politically slanted, or commercially craven.

Facebook seemed to have thought of absolutely none of these. Based on the revelations in the two Gizmodo articles, it’s clear that they hired a shoestring team, lashed them to the algorithm, offered little guidance for what it meant to make curatorial choices, provided no ongoing oversight as the project progressed, imposed self-interested guidelines to protect the company, and kept the entire process inscrutable to the public, cloaked in the promise of an algorithm doing its algorithm thing.

The other worry here is that Facebook is engaged in a labor practice increasingly common in Silicon Valley: hiring information workers through third parties, under precarious conditions and without access to the institutional support or culture their full-time employees enjoy, and imposing time and output demands on them that can only shortchange a task that warrants more time, care, expertise, and support. This is the troubling truth about information workers in Silicon Valley and around the world, who find themselves “automated” by the gig economy — not just clickworkers on Mechanical Turk and drivers on Uber, but even “inside” the biggest and most established companies on the planet. It also points to a dangerous tendency in the kind and scale of information projects that tech companies are willing to take on, without having the infrastructure and personnel to adequately support them. It is not uncommon now for a company to debut a new feature or service, only weeks in development and supported only by its design team, with the assumption that it can quickly hire and train a team of independent, hourly workers. Not only does this put a huge onus on those workers, but it means that, if the service finds users and begins to scale up quickly, little preparation is in place, and the overworked team must quickly make ad hoc decisions about what are often tricky cases with real, public ramifications.

Trending algorithms are undeniably becoming part of the cultural landscape, and revelations like Gizmodo’s are helpful steps toward shedding the easy notions of what they are and how they work, notions the platforms have fostered. Social media platforms must come to fully realize that they are newsmakers and gatekeepers, whether they intend to be or not, whether they want to be or not. And while algorithms can chew on a lot of data, it is still a substantial, significant, and human process to turn that data into claims about importance that get fed back to millions of users. This is not a realization that they will ever reach on their own — which suggests to me that they need the two countervailing forces that journalism has: a structural commitment to the public, imposed if not inherent, and competition to force them to take such obligations seriously.

Addendum: Techcrunch is reporting that Facebook has responded to Gizmodo’s allegations, suggesting that it has “rigorous guidelines in place for the review team to ensure consistency and neutrality.” This makes sense. But consistency and neutrality, fine as concepts, are vague and insufficient in practice. There could have been Trending curators at Facebook who deliberately tanked conservative topics and knew that doing so violated policy. But (and this has long been known in the sociology of news) the greater challenge in producing the news, whether generating it or just curating it, is how to deal with the judgments that happen while being consistent and neutral. Making the news always requires judgments, and judgments always incorporate premises for assessing the relevance, legitimacy, and coherence of a topic. Recognizing bias in our own choices or across an institution is extremely difficult, but knowing whether you have produced a biased representation of reality is nearly impossible, as there’s nothing to compare it to — even setting aside that Facebook is actually trying to do something even harder: produce a representation of the collective representations of reality of their users, and ensure that somehow it also represents reality, as other reality-representers (be they CNN or Twitter users) have represented it. Were social media platforms willing to acknowledge that they constitute public life rather than hosting or reflecting it, they might look to those who produce news, educate journalists, and study news as a sociological phenomenon, for help thinking through these challenges.

Addendum 2 (May 9): The Senate Committee on Commerce, Science, and Transportation has just filed an inquiry with Facebook, raising concerns about their Trending Topics based on the allegations in the Gizmodo report. The letter of inquiry is available here, and has been reported by Gizmodo and elsewhere. In the letter they ask Mark Zuckerberg and Facebook to respond to a series of questions about how Trending Topics works, what kind of guidelines and oversight they provided, and whether specific topics were sidelined or injected. Gizmodo and other sites are highlighting the fact that this Committee is run by a conservative and has a majority of members who are conservative. But the questions posed are thoughtful ones. What they make so clear is that we simply do not have a vocabulary with which to hold these services accountable. For instance, they ask “Have Facebook news curators in fact manipulated the content of the Trending Topics section, either by targeting news stories related to conservative views for exclusion or by injecting non-trending content?” Look at the verbs. “Manipulated” is tricky, as it’s not exactly clear what the unmanipulated Trending Topics even are. “Targeting” sounds like they excluded stories, when what Gizmodo reports is that some stories were not selected as trending, or not recognized as stories. If trending algorithms can only highlight possible topics surging in popularity, but Facebook and its news curators constitute that data into a list of topics, then language that takes trending to be a natural phenomenon, one that Facebook either accurately reveals or manipulates, can’t quite grip how this works and why it is so important. It is worth noting, though, that the inquiry pushes on how (and whether) Facebook is keeping records of what is selected: “Does Facebook maintain a record of curators’ decisions to inject a story into the Trending Topics section or target a story for removal? If such a record is not maintained, can such decisions be reconstructed or determined based on an analysis of the Trending Topics product? a. If so, how many stories have curators excluded that represented conservative viewpoints or topics of interest to conservatives? How many stories did curators inject that were not, in fact, trending? b. Please provide a list of all news stories removed from or injected into the Trending Topics section since January 2014.” This approach, I think, does emphasize to Facebook that these choices are significant, enough so that they should be treated as part of the public record and open to scrutiny by policymakers or the courts. This is a way of demanding that Facebook take its role in this regard more seriously.
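For what it’s worth, the record-keeping the committee asks about is not technically demanding; the hard part is deciding to treat curatorial choices as consequential enough to log. A minimal sketch, with hypothetical fields rather than anything Facebook has described, of what such a decision record could look like:

```python
# A hypothetical sketch of the kind of record the Senate letter asks about:
# logging each curatorial decision so injections and exclusions can be reconstructed later.
import csv
import datetime

def log_decision(path, topic, action, reason, curator_id):
    """Append one curatorial decision (e.g. 'injected', 'excluded', 'renamed') to an audit file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.utcnow().isoformat(),
            topic, action, reason, curator_id,
        ])

# Example (hypothetical): log_decision("trend_audit.csv", "European tax reform",
#     "excluded", "no surge among checked reputable outlets", "curator_17")
```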