Time: Oct 11-14, 2016 (Just after the AoIR conference in Berlin)
Place: Aarhus University and DOKK 1, Aarhus, Denmark
Online: We’ll post an online participation option soon. Check back!
Nancy Baym (Microsoft Research New England and MIT);
Annette Markham (Aarhus University);
Katrin Tiidenberg (Aarhus University and Tallinn University).
Description: This course introduces participants to contemporary concepts for studying how self, identity, and contexts are negotiated through interactive processes involving visuality, relationality, and emotionality. The metaphor of labor is used to highlight how these practices are constrained and enabled by economic rationalities, affordances of digital technologies, and contemporary norms around building identity through social media.
1. Emotional Labor was developed as a sociological concept to understand certain workplace practices. This theory usefully addresses how, within an economic framework of producing the self as a ‘brand’ via social media, a labor model of controlled emotionality is invoked. This critical stance toward identity performance is a useful lens for studying how people perform and negotiate identity in social media contexts.
2. Relational labor, a term developed by Nancy Baym to illustrate how performers build ongoing connections with disparate audiences, extends emotional labor. The concept helps us consider the neoliberal frames within which our identity practices are caught when we use social media platforms geared toward audience building, and how the issues raised by emotional labor play out when moved from particular interactions to the unending connectivity social media demand.
3. Visual labor is a concept that, like the previous two, can help researchers consider issues and practices around the digitally saturated self as a product of a visual economy.
Who can attend? The course is appropriate for PhD students, postdocs, and early career researchers in media studies, information studies, anthropology, sociology, political science, and other fields addressing social media practices or the negotiation of identity. No prerequisite knowledge is necessary.
Hochschild, A. R. (1983). The managed heart: Commercialization of human feeling. Berkeley: University of California Press.
Tracy, S. J. (2000). Becoming a character for commerce: emotion labor, self-subordination, and discursive construction of identity in a total institution. Management Communication Quarterly, 14(1), 90–128.
Kang, M. (2003). The managed hand: the commercialization of bodies and emotions in Korean immigrant-owned nail salons. Gender and Society, 17(6), 820–839.
Baym, N. K. (2012). Fans or friends?: seeing social media audiences as musicians do. Participations: Journal of Audience and Reception Studies, 9(2), 286–316.
Baym, N. K. (2014). Connect with your audience! The relational labor of connection. The Communication Review, 18(1), 14–22.
Bounded rationality/bounded emotionality:
Mumby, D. K., & Putnam, L. L. (1992). The politics of emotion: a feminist reading of bounded rationality. The Academy of Management Review, 17(3), 465–486.
Baxter, L. A., & Montgomery, B. M. (1996). Chapter 1 “Thinking dialectically about communication in personal relationships.” In Relating: dialogues and dialectics. New York: The Guilford Press.
Gergen, K. (2000). Chapter “Truth in trouble” and chapter “From self to relationship.” In The saturated self: dilemmas of identity in contemporary life (pp. 81–110). New York: Basic Books.
Goffman, E. (1966). Chapter “Interpretations.” In Behavior in public places (pp. 193–242). New York: The Free Press.
Goffman, E. (1981). Footing. In Forms of talk (pp. 124–159). Philadelphia: University of Pennsylvania Press. (we assume that participants have read Erving Goffman’s Presentation of Self in Everyday Life).
Markham, A. (2013). The dramaturgy of digital experience. In C. Edgley (Ed.), The drama of social life: a dramaturgical handbook (pp. 279–293). Farnham: Ashgate.
Tiidenberg, K., & Gomez Cruz, E. (2015). Selfies, image and the re-making of the body. Body & Society, 1–26.
Abidin, C. (2016). “Aren’t these just young, rich women doing vain things online?”: influencer selfies as subversive frivolity. Social Media + Society, 2(2), 1–17.
Tuesday, October 11, 2016:
09:30–12:00: Introduction to the course and discussion
12:00–13:00: Lunch
13:00–14:30: Public Lecture by Annette Markham on Emotional Labor
Casual (self-funded) dinner with the seminar participants, location TBA
Wednesday, October 12, 2016:
09:30–12:00: Discuss emotional labor (previous day’s lecture plus texts)
12:00–13:00: Lunch
13:00–14:30: Public Lecture by Nancy Baym on Relational Labor
15:00–16:30: QTC Wednesdays at the DLRC (Digital Living Research Commons). Informal conversation with Nancy Baym
Dinner with Media Studies and Information Studies faculty, location TBA
Thursday, October 13, 2016:
09:30–12:00: Discuss relational labor (previous day’s lecture plus texts)
12:00–13:00: Lunch
13:00–14:30: Public Lecture by Katrin Tiidenberg on Embodiment and Visual Labor
15:00–16:00: Discussion of issues, ethics, and concerns
16:00–17:00: Wrap-up and evaluation
Organized dinner with participants, location TBA
Last week, the Daily Beast published an article by one of its editors who sought to report on how dating apps were facilitating sexual encounters in Rio’s Olympic Village. Instead, his story focused mainly on athletes using Grindr, an app for men seeking men, and included enough personal information about individuals to identify and out them. After the article was criticized as dangerous and unethical across media outlets and social media, the Daily Beast replaced it with an apology. However, decisions to publish articles like this are made based on assumptions about who uses dating apps and how people share information on them. These assumptions are visible not only in how journalists act but also in the approaches that researchers and app companies take when it comes to users’ personal data. Ethical breaches like the one made by the Daily Beast will continue unless we address the following three (erroneous) assumptions:
Assumption 1. Data on dating apps is shareable like a tweet or a Facebook post
Since dating apps are a hybrid between dating websites of the past and today’s social media, there is an assumption that the information users generate on dating apps should be shared. Zizi Papacharissi and Paige Gibson have written about ‘shareability’ as the built-in way that social network sites encourage sharing and discourage withholding information. This is evident within platforms like Facebook and Twitter, through ‘share’ and ‘retweet’ buttons, as well as across the web as social media posts are formatted to be easily embedded in news articles and blog posts.
Dating apps provide many spaces for generating content, such as user profiles, and some app architectures are increasingly including features geared toward shareability. Tinder, for example, provides users with the option of creating a ‘web profile’ with a distinct URL that anyone can view without even logging into the app. While users determine whether or not to share their web profiles, Tinder also recently experimented with a “share” button allowing users to send a link to another person’s profile by text message or email. This creates a platform-supported means of sharing profiles to individuals who may never have encountered them otherwise.
The problem with dating apps adopting social media’s tendency toward sharing is that dating environments construct particular spaces for the exchange of intimate information. Dating websites have always required a login and password to access their services. Dating apps are no different in this sense – regardless of whether users log in through Facebook authentication or create a new account, dating apps require users to be members. This creates a shared understanding of the boundaries of the app and the information shared within it. Everyone is implicated in the same situation: on a dating app, potentially looking for sexual or romantic encounters. A similar boundary exists for me when I go to the gay bar; everyone I encounter is also in the same space, so the information of my whereabouts is equally implicating for them. However, a user hitting ‘share’ on someone’s Tinder profile and sending it to a colleague, family member, or acquaintance removes that information from the boundaries within which it was consensually provided. A journalist joining a dating app to siphon users’ information for a racy article flat-out ignores these boundaries.
Assumption 2. Personal information on dating apps is readily available and therefore can be publicized
When the Daily Beast’s editor logged into Grindr and saw a grid full of Olympic athletes’ profiles, he likely assumed that if this information was available with a few taps of his screen then it could also be publicized without a problem. Many arguments about data ethics get stuck debating whether information shared on social media and apps is public or private. In actuality, users place their information in a particular context with a specific audience in mind. The violation of privacy occurs when another party re-contextualizes this information by placing it in front of a different audience.
Although scholars have pointed out that re-contextualization of personal information is a violation of privacy, this remains a common occurrence even across academia. We were reminded of this last May when 70,000 OkCupid users’ data was released without permission by researchers in Denmark. Annette Markham’s post on the SMC blog pointed out that “the expectation of privacy about one’s profile information comes into play when certain information is registered and becomes meaningful for others.” This builds on Helen Nissenbaum’s notion of “privacy in context” meaning that people assume the information they share online will be seen by others in a specific context. Despite the growing body of research confirming that this is exactly how users view and manage their personal information, I have come across many instances where researchers have re-published screenshots of user profiles from dating apps without permission. These screenshots are featured in presentations, blog posts, and theses with identifying details that violate individuals’ privacy by re-contextualizing their personal information for an audience outside the app. As an academic community, we need to identify this as an unethical practice that is potentially damaging to research subjects.
Dating app companies also perpetuate the assumption that user information can be shared across contexts through their design choices. Recently, Tinder launched a new feature in the US called Tinder Social, which allows users to join with friends and swipe on others to arrange group hangouts. Since users team up with their Facebook friends, activating this feature lets you see everyone else on your Facebook account who is also on Tinder with this feature turned on. While Tinder Social requires users to ‘unlock’ its functionality from their Settings screen, its test version in Australia automatically opted users in. When Australian users updated their app, this collapsed a boundary between the two platforms that previously kept the range of family, friends, and acquaintances accumulated on Facebook far, far away from users’ dating lives. While Tinder seems to have learned from the public outcry about this privacy violation, the company’s choice to overlap Facebook and Tinder audiences disregards how important solid boundaries between social contexts can be for certain users.
Assumption 3. Sexuality is no big deal these days
At the crux of the Daily Beast article was the assumption that it was okay to share potentially identifying details about people’s sexuality. As others have pointed out, just because same-sex marriage and other rights have been won by lesbian, bisexual, gay, trans, and queer (LGBTQ) people in some countries, many cultures, religions, and political and social groups remain extremely homophobic. Re-contextualization of intimate and sexual details shared within the boundaries of a dating app not only constitutes a violation of privacy, it could expose people to discrimination, abuse, and violence.
In my research with LGBTQ young people, I’ve learned that a lot of them are very skilled at placing information about their sexuality where they want it to be seen and keeping it absent from spaces where it may cause them harm. For my master’s thesis, I interviewed university students about their choices of whether or not to come out on Facebook. Many of them were out to a certain degree, posting about pro-LGBTQ political views and displaying their relationships in ways that resonated with friendly audiences but eluded potentially homophobic audiences like coworkers or older adults.
In my PhD, I’ve focused on how same-sex attracted women manage their self-representations across social media. Their practices are not clear-cut since different social media spaces mean different things to users. One interviewee talked about posting selfies with her partner to Facebook for friends and family but not to Instagram where she’s trying to build a network of work and church-related acquaintances. Another woman spoke about cross-posting Vines to friendly LGBTQ audiences on Tumblr but keeping them off of Instagram and Facebook where her acquaintances were likely to pick fights over political issues. Many women talked about frequently receiving negative, discriminatory, and even threatening homophobic messages despite these strategies, highlighting just how important it was for them to be able to curate their self-representations. This once again defies the tendency to designate some sites or pieces of information as ‘public’ and others as ‘private.’ We need to follow users’ lead by respecting the context in which they’ve placed personal information based on their informed judgments about audiences.
Journalists, researchers, and app companies frequently make decisions based on assumptions about dating apps. They assume that since the apps structurally resemble other social media, it’s permissible to carry out similar practices tending toward sharing user-generated information. This goes hand-in-hand with the assumption that if user data is readily available, it can be re-contextualized for other purposes. On dating apps, this assumes at best that user data about sexuality will be received neutrally across contexts; at worst, this data is used without regard for the harm it may cause. There is ample evidence that none of these assumptions hold true when we look at how people create bounded spaces for exchanging intimate information, how users manage their personal information in particular contexts, and how LGBTQ people deal with enduring homophobia and discrimination. While the Daily Beast should not have re-contextualized dating app users’ identifying information in its article, this instance provides an opportunity to dispel these assumptions and change how we design, research, and report on dating apps in order to treat users’ information more ethically.
Papacharissi, Z., & Gibson, P. L. (2011). Fifteen minutes of privacy: Privacy, sociality and publicity on social network sites. In S. Trepte & L. Reinecke (Eds.), Privacy Online (pp. 75–89). Berlin: Springer.
Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford University Press.
Last month I joined other social media researchers and the ACLU to file a lawsuit against the US Government to protect the legal right to conduct online research. This is newly relevant today because a community of devs interested in public policy started a petition in support of our court case. It is very nice of them to make this petition. Please consider signing it and sharing this link.
PETITION: Curiosity is (not) a crime
For more context, see last month’s post: Why I Am Suing the Government.
(or: I write scripts, bots, and scrapers that collect online data)
I never thought that I would sue the government. The papers went in on Wednesday, but the whole situation still seems unreal. I’m a professor at the University of Michigan and a social scientist who studies the Internet, and I ran afoul of what some have called the most hated law on the Internet.
Others call it the law that killed Aaron Swartz. It’s more formally known as the Computer Fraud and Abuse Act (CFAA), the dangerously vague federal anti-hacking law. The CFAA is so broad, you might have broken it. The CFAA has been used to indict a MySpace user for adding false information to her profile, to convict a non-programmer of “hacking,” to convict an IT administrator of deleting files he was authorized to access, and to send a dozen FBI agents to the house of a computer security researcher with their guns drawn.
Most famously, prosecutors used the CFAA to threaten Reddit co-founder and Internet activist Aaron Swartz with 50 years in jail for an act of civil disobedience — his bulk download of copyrighted scholarly articles. Facing trial, Swartz hanged himself at age 26.
The CFAA is alarming. Like many researchers in computing and social science, writing scripts, bots, or scrapers that collect online data is a normal part of my work. I routinely teach my students how to do it in my classes. Now that all sorts of activities have moved online — from maps to news to grocery shopping — studying people now means studying people online, and thus gathering online data. It’s essential.
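To make concrete just how mundane the research practice at issue is, here is a minimal sketch of the kind of scraper described above, using only Python’s standard library. The URL, user-agent string, and helper names are illustrative placeholders, not anything from the lawsuit or my actual studies; a real collection script would add politeness measures like rate limiting and robots.txt checks.

```python
# A minimal sketch of a research scraper: fetch one public page
# and extract the links it contains. All names here are hypothetical.
from html.parser import HTMLParser
from urllib.request import Request, urlopen


class LinkCollector(HTMLParser):
    """Collect the href attribute of every anchor tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def collect_links(url):
    """Download a single public page and return the links found on it."""
    # Identify the script honestly rather than impersonating a browser.
    request = Request(url, headers={"User-Agent": "research-script/0.1"})
    with urlopen(request) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

A few dozen lines like these are representative of what "writing scrapers" means in a social science methods class; under the terms-of-service reading of the CFAA discussed below, even this could be cast as unauthorized access.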
Image: Les raboteurs de parquet by Gustave Caillebotte (cropped)
Yet federal charges were brought against someone who was downloading publicly available Web pages.
People might think of the CFAA as a law about hacking with side effects that are a problem for computer security researchers. But the law affects anyone who does social research, or who needs access to public information.
I work at a public institution. My research is funded by taxes and is meant for the greater good. My results are released publicly. Lately, my research designs have been investigating illegal fraud and discrimination online, evils that I am trying to stop. But the CFAA made my research designs too risky. A chief problem is that any clause in a Web site’s terms of service can become enforceable under the CFAA.
I found that crazy. Have you ever read a terms of service agreement? Verizon’s terms of service prohibited anyone using a Verizon service from saying bad things about Verizon. As it says in the legal complaint, some terms of service prohibit you from writing things down (as in, with a pen) if you saw them on a particular — completely public — Web page.
These terms of service aren’t laws, they’re statements written by Web site owners describing what they’d like to happen if they ran the universe. But the current interpretation of the CFAA says that we must judge what is authorized on the Web by reading a site’s terms of service to see what has been prohibited. If you violate the terms of service, the current CFAA mindset is: you’re hacking.
That means anything a Web site owner writes in the terms of service effectively becomes the law, and these terms can change at any time.
Did you know that terms of service can expressly prohibit the use of a Web site by researchers? Sites effectively prohibit research by simply outlawing any saving or republication of their contents, even if they are public Web pages. Dice.com forbids “research or information gathering,” while LinkedIn says you can’t “copy profiles and information of others through any means” including “manual” means. You also can’t “[c]ollect, use, copy, or transfer any information obtained from LinkedIn,” or “use the information, content or data of others.” (This raises the question: How would the intended audience possibly use LinkedIn and follow these rules? Memorization?)
As a researcher, I was appalled by the implications, once they sank in. The complaint I filed this week has to do with my research on anti-discrimination laws, but it is not too broad to say this: The CFAA, as things stand, potentially blocks all online research. Any researcher who uses information from Web sites could be at risk from the provision in our lawsuit. That’s why others have called this case “key to the future of social science.”
— Phil Howard (@pnhoward) June 30, 2016
If you are a researcher and you think other researchers would be interested in this information, please share this information. We need to get the word out that the present situation is untenable.
The ACLU is providing my legal representation, and in spirit I feel that they have taken this case on behalf of all researchers and journalists. If you care about this issue and you’d like to help, I urge you to contribute.
Want more? Here is an Op-Ed that I co-authored with my co-plaintiff Prof. Karrie Karahalios:
Most of what you do online is illegal. Let’s end the absurdity.
Here is the legal complaint:
Sandvig v. Lynch
Here is a press release about the lawsuit:
ACLU Challenges Law Preventing Studies on “Big Data” Discrimination
Here is some of the news coverage:
Researchers Sue the Government Over Computer Hacking Law
New ACLU lawsuit takes on the internet’s most hated hacking law
Do Housing and Jobs Sites Have Racist Algorithms? Academics Sue to Find Out
When Should Hacking Be Legal?
Please note that I have filed suit as a private citizen and not as an employee of the University.
[Updated on 7/2 with additional links.]
[Updated on 8/3 with the online petition.]