
MSR Faculty Summit 2014 Ethics Panel Recap

August 19, 2014

[Cross-posted to www.maryLgray.org]

When the Facebook Emotions Study first made international news, I felt strongly (still do) that researchers, from those honing algorithms to people like me studying the social impact of media and technologies, need to come together. There are no easy answers or obvious courses of action. But we all have a stake in understanding the ethical implications of studying social media as equal parts data analysis and human subjects research. And we need common ground.

At the end of the day, researchers are also well-positioned to change things for two simple reasons: 1) individual researchers design and execute research and data analysis for both corporations and universities. If we change how we do things, our institutions will follow suit. 2) Today’s social media researchers and corporate data scientists will mentor and train the next generation of data researchers. Our students will continue and advance the exploration of social media data at jobs based in industry and university settings. The ethical principles that they learn from us will define not only the future of this field but the general public’s relationship to it. But it’s not easy to bring together such a wide range of researchers. Social media researchers and data scientists are rarely all in the same place.

As luck would have it, Microsoft Research’s Faculty Summit, held annually on the MSR Redmond campus in the great state of Washington, USA, gathers just such a mixed scholarly audience. It was scheduled for July 14-15, a mere two weeks into the public fallout over the Study. Through the support of Microsoft Research and MSR’s Faculty Summit organizers, we organized an ad-hoc session for July 14, 2014, 11:30a-12:30p PT, entitled “When Data Science & Human Subject Research Collide: Ethics, Implications, Responsibilities.” Jeff Hancock, co-author of the Facebook Emotions Study, generously agreed to participate in the discussion. I scoured the list of Faculty Summit attendees and found three other participants to round out the conversation: Jeffrey Bigham, Amy Bruckman, and Christian Sandvig. These scholars (their bios are below) offer the expertise and range of perspectives we need to think through what to do next.

Below, you will find a transcript of the brief panel presentations and a long, long list of excellent questions generated by the more than 100 attendees. I have anonymized the sources of the questions, but if you contact me and would like your name attached to your comment or question, please let me know and I’ll edit this document.

I asked that the session not be recorded for public circulation because I wanted all those present to feel completely free to speak their minds. I encouraged everyone to “think before they tweet,” which did not bar social media reports from the event (but I was delighted to see how many of us focused on each other rather than our screens). We agreed early on that the best contribution we could, collectively, make was to generate questions rather than presume anyone had the answers. I hope that you find this document helpful as you work through your own thoughts on these issues. My thanks to MSR and the Faculty Summit organizers (particularly Jaya, who was so patient with the ever-changing details), to the panelists for their participation, to the audience for their collegiality and kindness, and a special shout-out to Liz Lawley for sharing her notes with me.

Sincerely,

Mary L. Gray

Session title: When Data Science & Human Subject Research Collide: Ethics, Implications, Responsibilities

Chair: Mary L. Gray, Microsoft Research

Abstract: Join us for a conversation to reflect on the ethics, implications, and responsibilities of social media research in the wake of the Facebook emotion study. What obligations must researchers consider when studying human interaction online? When does data science become human subjects research? What can we learn as a collective from the public’s reaction to Facebook’s recent research, as well as from reflection on our own work? Mary L. Gray (Microsoft Research) and Jeff Hancock (Cornell University, co-author of the Facebook emotion study) will facilitate a panel discussion among researchers based at Microsoft Research and across academia from the fields of data science, computational social science, qualitative social science, and computer science.

Panel expertise:

–      anthropology

–      communication studies

–      data science

–      experimental research design

–      HCI

–      human computation

–      information sciences

–      social psychology

–      usability studies

 

Each panelist had 5 minutes to reflect on:

  1. What can we learn?
  2. Where do we go from here?
  3. What is one BURNING QUESTION we should address together?

House rules:

  • think B4 you tweet
  • not a “gotcha!” session
  • step up/step back (if you tend to talk a lot, let someone else take the mic first)

BIOs:

Christian Sandvig—Speaker 1 (able to speak from an Information Sciences perspective)

Associate Professor of Information, School of Information, Faculty Associate, Center for Political Studies, ISR and Associate Professor of Communication, College of Literature, Science, and the Arts. Sandvig is a faculty member at the School of Information specializing in the design and implications of Internet infrastructure and social computing. He is also a Faculty Associate at the Berkman Center for Internet & Society at Harvard University. Before moving to Michigan, Sandvig taught at the University of Illinois at Urbana-Champaign and Oxford University. Sandvig’s research has appeared in The Economist, The New York Times, The Associated Press, National Public Radio, CBS News, and The Huffington Post. His work has been funded by the National Science Foundation, the MacArthur Foundation, and the Social Science Research Council. He has consulted for Intel, Microsoft, and the San Francisco Public Library. Sandvig received his Ph.D. in Communication Research from Stanford University in 2002. https://www.si.umich.edu/people/christian-sandvig

Jeffrey P. Bigham—Speaker 2 (able to speak from a computer science/accessible technologies perspective)

Associate Professor in the Human-Computer Interaction Institute and Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Jeffrey’s work sits at the intersection of human-computer interaction, human computation, and artificial intelligence, with a focus on developing innovative technology that serves people with disabilities in their everyday lives. Jeffrey received his B.S.E. degree in Computer Science from Princeton University in 2003. He received his M.Sc. degree in 2005 and his Ph.D. in 2009, both in Computer Science and Engineering from the University of Washington. http://www.cs.cmu.edu/~jbigham/

Amy Bruckman—Speaker 3 (able to speak from the builder/designer perspective)

Professor in the School of Interactive Computing in the College of Computing at Georgia Tech, and a member of the Graphics, Visualization, and Usability (GVU) Center. She received her PhD from the Epistemology and Learning Group at the MIT Media Lab in 1997, and a BA in physics from Harvard University in 1987. She does research on online communities and education, and is the founder of the Electronic Learning Communities (ELC) research group. Bruckman studies how people collaborate to create content online, with a focus on how the Internet can support constructionist, project-based learning. Her newer work focuses on the products of online collaboration as ends in themselves. How do we support people in this creative process, and what new kinds of collaborations might be possible? How do interaction patterns shape the final product? How do software features shape interaction patterns? How does Wikipedia really work, and why do people contribute to it? http://www.cc.gatech.edu/fac/Amy.Bruckman/

Jeff Hancock—Speaker 4 (co-author of the Facebook Emotions Study)

Dr. Jeffrey T. Hancock is a Professor in the Communications and Information Science departments at Cornell University and is the Co-Chair of the Information Science department. He is interested in social interactions mediated by information and communication technology, with an emphasis on how people produce and understand language in these contexts. His research has focused on two types of language, verbal irony and deception, and on a number of cognitive and social psychological factors affected by online communication. https://communication.cals.cornell.edu/people/jeffrey-hancock

Opening remarks (Mary L. Gray):

I asked each of our speakers to introduce themselves, tell us a little bit about the perspective they’re coming from. The goal of the panel was to bring together as many different disciplinary perspectives as possible among people who are studying what is perhaps best understood as a shared object: social media. We came together to think about the implications and ramifications of the public response to the Facebook study. I gave a special shoutout and thanks to Jeff Hancock for being willing to attend Faculty Summit at the very last minute. I want to publicly say how impressed I am by his collegiality and his willingness to engage. I think we are so lucky that this is the case that became the opportunity for us to talk about this. I think all of us researching social media can imagine really bad cases that could have come to light and instantly eroded public trust in our efforts to understand social media. So I’m really very happy that this opportunity to talk about how to move forward in our research was prompted by the work of a scholar who I really respect and admire. So with that, I handed it off to our first speaker, Christian Sandvig. Each person spoke for a little bit and then we had a chance for them to pose one burning question.

Panelist statements:

CHRISTIAN SANDVIG:

Thanks, Mary. Mary asked us to say a little bit about where we might relate to this topic. I’m a Professor at the School of Information and the Communication Studies Department at the University of Michigan. I’m interested in information and public policy. I’m interested in this particular controversy because I have a forthcoming book about studying human behavior online. I’ve taught about applied ethics and research methods. I have a graduate class called Unorthodox Research Methods, about new research methods and the controversies they provoke. And I’m a former member of an Institutional Review Board. So that’s my background. I want to use my very brief time to mention a study that often comes up in historical reviews of psychology. It’s the Middlemist “bathroom study” (http://www2.uncp.edu/home/marson/Powerpoints/3610Bathroom1.pdf). It’s sometimes called the micturition study, if you have a preference for scientific terminology. To be clear: I’m not trying to say that the Facebook experiment is like the bathroom experiment. But there are some interesting parallels. So I’ll just give you a quick rundown of those parallels. This is a research study conducted by psychologists in a men’s restroom at a large Midwestern university. Basically the researchers built a small periscope-like device that allowed a professor sitting in a toilet stall to observe patrons at the urinals from a side angle. The reason that the researchers did this is that they had a hypothesis about physiologic excitation and personal space. So they designed an experiment in which a confederate, a student on the research team, would stand either near to or at a distance from an individual who came into the bathroom to use the urinal. They did this without consent and they didn’t have a debriefing process. They timed, with a stopwatch, the urination to help them draw conclusions about physiologic excitation and physical proximity of strangers.

The reason that the Middlemist “bathroom study” is a useful parallel to today’s uproar over the Facebook Emotions Study is that public criticism of the research did not focus on physical harms to human subjects but, rather, on the perceived indignity and disregard for individual privacy that the study suggested. The researchers defended themselves and used reasonably sound logic, arguing that going to the bathroom is an everyday experience. They studied a public bathroom, after all. The worst that could happen is that a subject feels a little weird that someone’s watching them in a public bathroom. And, in fact, they argued that debriefing would have produced the harm in this study. If they’d told men that they’d been watched in a public bathroom, it might have made them uncomfortable. So, in fact, telling subjects about the study produces the only harm that could happen. So, they reasoned, we shouldn’t debrief subjects about the study. The debate about this study is extensive. But one of the conclusions that followed from it is that the researchers in this case focused on the wrong harms. They argued that individuals in this study probably couldn’t be harmed because it’s only mildly embarrassing or creepy to be watched in a public bathroom. But the harm that the researchers should have addressed or considered was the potential harm to the image of the profession or all of science. Some research subjects were actually very upset about the study and felt it violated human decency and their individual dignity. They were not harmed individually, but found this study creepy and invasive. Avoiding telling people that you’re doing this kind of research because telling them would upset them doesn’t help at all. Researchers simply delay the harm that will follow when the public eventually finds out how the study was conducted. Such delays only leave the public more angry that researchers didn’t tell subjects, at some point in the study, because it suggests that the researchers are hiding something. So the question I have for the panel and the audience is: Is it possible for us to anticipate this kind of harm? Is it possible for us as researchers to design research and say this is something that’s going to cause controversy because people are going to think it’s very creepy, versus this is something that no one’s going to have a problem with? That’s actually a difficult question to answer.

Some people have argued, well, you know, Facebook’s already done a variety of other studies that changed users’ information without their knowledge, so why does this one produce the controversy? I would argue that there are research cases and topics where there are foreseeable harms, because we know that people feel differently about certain areas of their lives. People feel differently about whether there’s an intervention or not. People feel differently about the valence of the intervention. For example, people will feel differently about whether an intervention or research experiment is done for science or for a corporation. But, really, the only way that we’re going to be able to predict whether the “creepiness factor” will register as a problem is to involve research participants in the research design at some level. Participants’ involvement could help researchers figure out the level of threat before we execute our research. Fundamentally, researchers aren’t the ones who decide what is threatening or crossing the line for the public. If participants feel our research methods are creepy and they hate it, we don’t want to be in the business of doing that research. We’re not going to be able to argue participants out of their feelings and say “no, it’s all right; people look at you in the bathroom all the time.” We’re not going to be able to do that. We need a different approach and a different understanding of “harm” to conduct social research.

JEFF BIGHAM:

I’m Jeff Bigham from Carnegie Mellon University. I approached this research area a little bit differently. I work on building systems to support people with disabilities, often using human computation. Mary asked us to think about what skin we have in this game. So the skin I have in this game is that social media are the primary way that we recruit the people who power the systems we build for people with disabilities, via friendsourcing, community sourcing, citizen science, and traditional crowdsourcing. Social media are also the resource we have for understanding the people using our systems. As Mary said in her talk earlier this morning, “crowds are people,” and it is really important for us to make these systems work well–make them sustainable and make them scalable.

We’re increasingly moving away from, say, Amazon Mechanical Turk, to services like Facebook, to power our systems for people with disabilities. Ultimately, we need users to trust the platforms on which we are recruiting workers. So if they don’t trust Facebook, for instance, they may not use it or they may move to closed systems that don’t allow us the kind of access or the ability to incorporate human work into our systems. I’ve tried bootstrapping sociotechnical systems on my own, and it’s actually really hard without piggybacking on existing platforms. So it’s really important that we have continued access to the general public using commercial platforms. I think that we can all agree this is about a lot more than one study or one research article. And so my fear is that, as a result of this experience, we will be more likely to miss out on the upsides and rewards that could come from engaging with users of these services in interesting ways. My hope is that we can find a way to preserve the utility of these sites and our ability to do important research and innovate on social media platforms. I also hope that researchers can continue partnering with industry while addressing the very real concerns of users. So my question is what practical steps should researchers take right now, while public opinion and corporate policies are still being sorted out, to help ensure our long-term ability to work with companies who are running these very interesting platforms?

AMY BRUCKMAN:

Thank you, Mary, so much for organizing this. It’s really timely. I launched an online, programmable virtual world for children in 1995. I got interested in Internet research ethics because I asked people what the ethical way to do this was, and nobody knew. So I had to think ethically and invent the ethical things to do. In the 1990s, I was part of three different working groups focused on developing ethical policies for Internet research: one for the Association of Internet Researchers; another for the AAAS; and a third for the APA. The APA group, led by Bob Kraut, resulted in a paper which you may find useful and is available on my website, along with a long list of other papers on research ethics. I think it may be time for us to have another round of working groups. It’s been a long time since the ’90s. There are some new issues emerging, and we could use some updated statements of what the ethical issues are here and how to handle them. Several of my papers on research ethics have dealt with the issue of disguising subjects’ online identities.

I argue that, in many cases, contrary to the traditional approach of always disguising research subjects, if they are doing creative work on the Internet, for which they deserve credit, we are ethically obligated to ask them: “Do you want me to use your real name?” It would be unethical to hide their names without their consent. I want to be a little bit deliberately provocative here: I have done research on Internet users without their consent, and I would do it again. According to U.S. law, you can do work without consent. You can get a full waiver of consent if the research can’t be practicably done without a waiver, if the benefits outweigh the risk and if the risk is low. I have a post on my blog at nextbison.wordpress.com about a study that I did in 2003 where we walked into IRC chat rooms and recorded chat room participants’ reactions. Actually, we were really studying whether we would get kicked out of the chat room. We had four conditions: A control, where we walked in and didn’t say anything; a treatment where we walked in and said “Hi. I’m recording this for a study of language online;” an opt-in treatment; and an opt-out treatment. I know this gets very meta. And a little circular. But we found that people really didn’t want us to be in their IRC chat rooms. Almost no one opted in. And no one opted out. We have a colorful collection of the boot messages we received as we were kicked out of these chat rooms. My favorite is “Yo mama’s so ugly she turned Medusa to stone.” So ironically, despite the fact that our research documents that we made people angry, I still think the study itself was ethical. It’s certainly not something that we did lightly. But the level of disturbance we created was relatively small. I think what we learned from it was beneficial to people and to science in general. The original papers are available on my blog. And if you’re interested in more details, I’d be happy to discuss it with you. But my point in referencing this study is to argue that it is possible to do research that upsets people and we should be careful about overreactions to our work.

I want to say that the reaction to the Facebook study was out of proportion. And I hope that Jeff knows that we, his colleagues, are behind him. The reaction to Facebook, the company, was also excessive. I love a lot of the research that Facebook does. I’m not saying it’s perfect. There’s a lot that all of us have to learn about researching social media. And I will say there’s a lot we can learn from this incident. I’m glad it started this series of conversations. A couple of questions that I have for the future are: Should companies be required to have something more like a real IRB? That’s a tough one. It has a lot of complications. Distinguishing social science research from how companies do their business and make their sites usable is almost impossible. My other burning question, which I hope we can discuss, is: should conferences and journals that do peer review also review the ethics of a study?

A while ago I reviewed an Internet-based study submitted to the CHI conference. I objected to the ethics of the study, and objected violently. I was really offended by this study. I put my objections in my CHI review and I gave the paper a 1. I never give 1s; I’m nice. I got back a response from the program committee that year saying that the researchers had their study approved by their campus Institutional Review Board (IRB) and had proceeded in good faith, so the committee declared the study to be ethical and concluded it was not the reviewer’s place to question its ethics. I’m not sure that’s how we should be handling things. I think we need to think about our ethics review as an incredibly complicated sociotechnical system, with tools and rules and divisions of labor and different activity systems run by different IRBs that come to different solutions. Somehow, there has to be some error correction when we come together to share our work. On the other hand, the practical question of how we do this without causing tremendous practical problems and unfairness in the meta-review is difficult, too. So I don’t think it’s easy. But I don’t think the hand waving, “oh, it was approved, it’s not our business,” is the right answer, either. So I’m looking forward to more conversations from here. Thanks.

JEFF HANCOCK:

Thank you, everybody, for coming in today. Thank you, Mary, for organizing this. And thanks to the fellow panelists for being part of this on pretty short notice. And thank you all for this morning. I’ve seen many colleagues and friends. It’s been great to feel supported, and to have people reaching out to make sure I’m doing okay. It was my first experience with worldwide Internet wrath, and it was very difficult, I will admit. My family paid a price for it. I paid a price, but I feel much better being amongst colleagues. Mostly because this is a really important conversation, and I now feel a privilege and a responsibility to be a part of it. I thought I would take a different approach from the rest of the panelists and describe a little bit of what I learned from the various e-mails I received from around the world in response to this. And I’ll keep it a little bit higher level, away from specific identities. Some of them are pretty intense. And I think that the intensity actually points to something important.

I received a couple hundred e-mails from people from around the world. The e-mails that I want to discuss with you are the ones from people using Facebook, writing in their role as stakeholders. These e-mails are distinct from those that I received from other academics with questions about ethical issues, around informed consent, around how the IRB dealt with this, et cetera.

Facebook users’ emails tended to fall into three main categories. The first one was: How dare you manipulate my news feed! And this was a really fervent response—and very common. I think it points to something that Christian Sandvig and other scholars thinking about algorithms and the social world have been taking up in their work. As Tarleton Gillespie puts it, we don’t have metaphors in place for what the news feed is. We have a metaphor for the postal service: messages are delivered without tampering from one person to the next. We have a metaphor from the newsroom: editors choose things that they think will be of interest. But there’s no stable metaphor that people hold for what the news feed is. I think this is a really important thing. I’m not sure whether this means we need to bring in an education component to help people understand that their news feeds are altered all the time by Facebook. But the huge number of e-mails about people’s frustration that researchers would change the news feed indicates that there’s just no sense that the news feed was anything other than an objective window into their social world.

The second category of e-mail that I received signals that the news feed is really important to people. I got a number of e-mails saying things like: “You know, my good friend’s father just died. And if I didn’t have the news feed I may not have known about it.” This surfaced a theme: the news feed isn’t just about what people are having for breakfast or all the typical mass-media put-downs of Twitter and Facebook. Rather, this thing that emerged about seven years ago [Facebook] is now really important to people’s lives. It’s central and integrated in their lives. And that was really important for me to understand. That was one of the things that caught me off guard, even though maybe in hindsight it shouldn’t have.

The last category of e-mail that I received: A lot of people asked me why I thought this study attracted this kind of attention and controversy, whereas other similar studies did not. I thought a lot about that. One of the things that came out of the e-mails is that, as Christian Sandvig argued earlier, we were looking in the wrong place for what would register as “harm.” People have a very strong sense of autonomy. We know that quite well from social psychology and from sociology. I think our study violated people’s sense of autonomy and ran up against the fact that they do not want their emotions manipulated or their moods controlled. And I think it’s a separate issue whether we think emotions are being manipulated all the time, through advertising, et cetera. What became very clear in the e-mail was that emotions are special. And I think it’s one example of a class of things that will fall into some of the spaces that Christian Sandvig talked about. If we work on one of these special classes or categories of human experience, like emotion, without informed consent, without debriefing, we could do larger harm than just harm to participants.

I can now have some sense of humor around some of the hate mail. And it’s been an amazing learning experience for me. I hope that by turning it over to the floor here and having ongoing conversations, we can really move things forward. My burning question would be: I think that this is a huge turning point or advance for social sciences potentially in the same way that, say, evolutionary theory was important for biology or the microscope was for chemistry. And I would want us to think about how we would continue doing the research on social media platforms ethically. So in the same way that Stanley Milgram’s study caused us to rethink what ethical research practices are, in the same way that Amy Bruckman’s calling on us to return to reflecting on how we do Internet research, now that we can do social psychology essentially at scale, how do we bring ethics along with that?

MARY L. GRAY:

I think what we can do concretely, with the time we have left — we have a little bit of time remaining. But I think the most productive thing we could do, I would argue, is get a lot of questions on the table. Because we are recording this, I can get a transcript and we can collect all the questions. And I would honestly say I don’t really listen to anybody who tells me right now they have the answer, because we’ve only been studying this thing for about ten years. This is entirely new to us. I don’t know that our objective should be answering anything today. I think we should be listening to each other, hearing our concerns and hearing some really important questions. So with that in mind, let’s hear some questions.

Questions and comments generated by the audience:

  1. Where do you think this [conversation about what to do next] should happen? I don’t think it’s just a matter of us having a special issue of a journal where people publish their opinions, and I know that stuff like that is happening. But it feels like we have to have some real dialogue. Who are the people who need to be involved in these conversations, and where do you think some of these conversations can happen?
  2. I think the value of this experiment and the reaction to it is that it has raised awareness of the algorithmic power that these organizations [social media companies] have. What is the responsibility of the Facebooks and the Googles of the world to be aware of this?
  3. Do we all agree that corporations have a role in this conversation?
  4. Information is being presented and it’s being manipulated [through social media interfaces] by definition. If you’re working in a mass medium with a corporation, you’re changing the presentation of information all the time. How do we draw any lines to distinguish what is an ethical or unethical presentation of this kind of information?
  5. How can we take this up to be a national and an international conversation? I think we need to be thinking [beyond] the campus level. The variability among IRBs is hopeless, because one campus IRB approving something doesn’t mean it meets some national or international standard. How can we think about this internationally, since these are international corporations and international data we’re talking about? These aren’t just Cornell, Berkeley, or UCLA data.
  6. For the most part, Facebook is occupied all the time by highly vulnerable populations. Even if there were an open consent process there, how do you know the populations there really would have been in a position to fully give informed consent?
  7. Could there be something that companies with social media sites actually do to let end users know this, or to specify how they want their information to be reused, like the organic sticker on food? Could we create some very simple way for people to say to us, “sure, go ahead, modify my stuff,” or “don’t touch my stuff,” or something like that? Maybe there’s some trigger, especially for anything that’s private.
  8. How, as industrial researchers, do we maintain ethical obligations to our subjects similar to those of academic researchers?
  9. As a community, how do we agree, when we acknowledge there are going to be many, many different partners, some in industry, some in academia, doing lots of kinds of research, who’s responsible for the ethical treatment of human subjects and their data?
  10. I think if you have a Ph.D., perhaps part of that professional training should mean that we can assume that you can behave ethically until it’s proven otherwise.
  11. What is the argument towards industry [for tighter ethical regulation] that’s going to make sense? Number one is losing your customer base. I’m sure Facebook has taken a hit, and every single advertiser has taken a hit, because you’re going to think twice about clicking on the button. How do we speak to corporate organizations and convince them that they should change their actions?
  12. So I’m somewhat still puzzled by what you [Jeff Hancock] think about your findings. Do you really feel like you imposed some sort of negative valence on people that hurt them, or is there a lot of uncertainty here? And how is this different from the day-to-day interactions we have? Why is this special?

Spatial metaphors of the internet: Resources

August 13, 2014

As someone who does work on online communities and spatial informatics, I’m very much aware of the extent to which we tend to use spatial metaphors to talk about web-based technologies. I asked around the Social Media Collective (including our mighty network of esteemed colleagues) for favorite critiques of the intertwining of space and the internet, and thought I’d share the list of sources here.


Why We Like Pinterest for Fieldwork

July 14, 2014

(written up with Nikki Usher, GWU)

Anyone tackling fieldwork these days can choose from a wide selection of digital tools to put in their methodological toolkit.  Among the best of these tools are platforms that let you archive, analyze, and disseminate at the same time.  It used to be that these were fairly distinct stages of research, especially for the most positivist among us.  You came up with research questions, chose a field site, entered the field site, left the field site, analyzed your findings, got them published, and shared your research output with friends and colleagues.

But the post-positivist approach that many of us like involves adapting your research questions—reflexively and responsively—while doing fieldwork.  Entering and leaving your field site is not a cool, clean and complete process.  We analyze findings as we go, and involve our research subjects in the analysis.  We publish, but often in journals or books that can’t reproduce the myriad digital artifacts that are meaningful in network ethnography.  Actor network theory, activity theory, science and technology studies, and several other modes of social and humanistic inquiry approach research as something that involves both people and devices. (Yes, yes, we know, but these Wikipedia entries aren’t bad.) Moreover, the dissemination of work doesn’t have to be something that happens after publication or even at the end of a research plan.

Nikki’s work involves qualitative ethnographic work at field sites where research can last anywhere from five months to a brief week-long visit to a quick drop-in day. She learned the hard way from her research for Making News at The New York Times that failing to find a good way to organize and capture images was a missed opportunity after data collection. Since then, Nikki’s been using Pinterest for fieldwork image gathering quite a bit.  Phil’s work on The Managed Citizen was set back when he lost two weeks of field notes on the chaotic floor of the Republican National Convention in 2000 (security incinerates all the detritus left by convention goers).  He’s been digitizing field observations ever since.

Some people put together personal websites about their research journey.  Some share over Twitter.  And there are plenty of beta tools, open source or otherwise, that people play with.  We’ve both enjoyed using Pinterest for our research projects.  Here are some points on how we use it and why we like it.

How To Use It

  1. When you start, think of this as your research tool and your resource.   If you dedicate yourself to this as your primary archiving system for digital artifacts you are more likely to build it up over time.  If you think of this as a social media publicity gimmick for your research, you’ll eventually lose interest and it is less likely to be useful for anyone else.
  2. Integrate it with your mobile phone because this amps up your capacity for portable, taggable, image data collection.
  3. Link the board posts to Twitter or your other social media feeds.  Pinterest itself isn’t that lively a place for researchers yet.  The people who want to visit your Pinterest page are probably actively following your activities on other platforms so be sure to let content flow across platforms.
  4. Pin lots of things, and lots of different kinds of things.  Include decent captions though be aware that if you are feeding Twitter you need to fit character limits.
  5. Use it to collect images you have found online, images you’ve taken yourself during your fieldwork, and invite the communities you are working with to contribute.
  6. Backup and export things once in a while for safekeeping.  There is no built-in export function, but there are a wide variety of hacks and workarounds for transporting your archive (one possible approach is sketched just below this list).
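For example, here is a minimal, hypothetical sketch of the kind of backup script we have in mind. It assumes your board exposes an RSS feed at a URL like the one below; that URL pattern, and the field names in the feed, are assumptions rather than documented Pinterest behavior, so adjust them to whatever export route actually works for you.

```python
# backup_board.py -- a rough sketch for archiving a Pinterest board locally.
# The feed URL pattern below is an assumption; Pinterest's feeds and markup
# change over time, so adapt this to your own export workaround.
import json
import urllib.request
import xml.etree.ElementTree as ET

BOARD_FEED = "https://www.pinterest.com/USERNAME/BOARDNAME/feed.rss/"  # hypothetical pattern

def fetch_feed(url):
    """Download the raw RSS XML for a board."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def parse_items(xml_bytes):
    """Pull out the title, link, and description of each pin in the feed."""
    root = ET.fromstring(xml_bytes)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": (item.findtext("title") or "").strip(),
            "link": item.findtext("link"),
            "description": item.findtext("description"),  # often contains the image markup
        })
    return items

if __name__ == "__main__":
    pins = parse_items(fetch_feed(BOARD_FEED))
    with open("board_backup.json", "w", encoding="utf-8") as f:
        json.dump(pins, f, ensure_ascii=False, indent=2)
    print(f"Saved {len(pins)} pins to board_backup.json")
```

Run something like this on a schedule, or just before you leave the field, and you have a plain-text snapshot of your pins that survives even if the board or the platform changes.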

What You Get

  1. Pinterest makes it easy to track the progress of the image data you gather.  You may find yourself taking more photos in the field because they can be easily arranged, saved and categorized.
  2. Using it regularly adds another level of data, as photos and documents captured on your phone and then added to Pinterest can be quickly captioned in the field and then re-catalogued, giving you a chance to review the visual and built environment of your field site and interrogate your observations afresh.
  3. Visually-enhanced constant comparative methods: post-data collection, you can go beyond notes to images and captions that are easily scanned for patterns and points of divergence. This may be  going far beyond what Glaser and Strauss had imagined, of course.
  4. Perhaps most important, when you forget what something looks like when you’re writing up your results, you’ve got an instant, easily searchable database of images and clues to refresh your memory.

Why We Like It

  1. It’s great for spontaneous presentations.  Images are such an important part of presenting any research.  Having a quick, publicly accessible archive of content allows you to speak, on the fly, about what you are up to.  You can’t give a tour of your Pinterest page for a job talk.  But having the resource there means you can call on images quickly during a Q&A period, or quickly load something relevant on a phone or browser during a casual conversation about your work.
  2. It gives you a way to interact with subjects.  Having the Pinterest link allows you to show a potential research subject what you are up to and what you are interested in.  During interviews it allows you to engage people on their interpretation of things.  Having visual prompts handy can enrich and enliven any focus group or single subject interview.  These don’t only prompt further conversation, they can prompt subjects to give you even more links, images, videos and other digital artifacts.
  3. It makes your research interests transparent. Having the images, videos and artifacts for anyone to see is a way for us to show what we are doing.  Anyone with interest in the project and the board link is privy to our research goals. Our Pinterest page may be far less complicated than many of our other efforts to explain our work to a general audience.
  4. You can disseminate as you go.  If you get the content flow right, you can tell people about your research as you are doing it.  Letting people know about what you are working on is always a good career strategy.  Giving people images rather than article abstracts and draft chapters gives them something to visualize and improves the ambient contact with your research community.
  5. It makes digital artifacts more permanent. As long as you keep your Pinterest, what you have gathered can become a stable resource for anyone interested in your subjects. As sites and material artifacts change, what you have gathered offers a permanent and easily accessible snapshot of a particular moment of inquiry for posterity.

Pinterest Wish-list

One of us is a Windows Phone user (yes, really), and it would be great if there were a real Pinterest app for the Windows Phone. One-touch integration from the camera roll on the iPhone, much like Twitter, Facebook, and Flickr offer, would be great (though there is an easy hack).

We wish it would be easier to have open, collaborative boards. Right now, the only person who can add to a board is you, at least at first.  You can invite other people to join a “group board” via email, but Pinterest does not have open boards that allow anyone with a board link to add content.

Here’s a look at our Pinboards: Phil Howard’s Tech + Politics board, and Nikki Usher’s boards on U.S. Newspapers.  We welcome your thoughts…and send us images!

 

Must-reads for how to study people’s online behavior (and navigate the ethical challenges that entails!)

July 12, 2014

I realized after posting my thoughts on how to think about social media as a site of human interaction (and all the ethical and methodological implications of doing so) that I forgot to leave links to what are, bar none, the best resources on the planet for policy makers, researchers, and the general public thinking through all this stuff.

Run, don’t walk, to download copies of the following must-reads:

Charles Ess and the AOIR Ethics Committee (2002). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee. Approved by the Association of Internet Researchers, November 27, 2002. Available at: http://aoir.org/reports/ethics.pdf

Annette Markham and Elizabeth Buchanan (2012). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee (version 2.0). Approved by the Association of Internet Researchers, December 2012. Available at: http://aoir.org/reports/ethics2.pdf

Social Media Collective weigh in on the debates about the Facebook emotions study

July 9, 2014

I have the privilege of spending the year as a visiting researcher with the social media researchers at Microsoft Research New England. And for the last two weeks or so, it’s been a particularly stimulating time to be among them. Spurred by the controversial Facebook emotions study and the vigorous debate surrounding it, there’s been a great deal of discussion in the lab and beyond it.

A number of us have also joined the debate publicly, posting here as well as in other places. Rather than re-post each one individually here, I thought I’d collect them into a single post, as they have in many ways emerged from our thinking together.

danah boyd: What does the Facebook experiment teach us?: “What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.”

Kate Crawford: The Test We Can—and Should—Run on Facebook (in The Atlantic): “It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science.”

Tarleton Gillespie: Facebook’s algorithm — why our assumptions are wrong, and our concerns are right (on Culture Digitally): “But a key issue, both in the research and in the reaction to it, is about Facebook and how it algorithmically curates our social connections, sometimes in the name of research and innovation, but also in the regular provision of Facebook’s service.”

Andrés Monroy-Hernández: A system designer’s take on the Facebook study – a response to danah boyd’s blog post: “…it’s important that we do not throw the baby out with the bath water. I do not want to see the research community completely avoiding experimental research in online systems.”

Mary L. Gray: When Science, Customer Service, and Human Subjects Research Collide. Now What? (on Ethnography Matters): “My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: we are now, officially, doing human subjects research.”

There have also been comments on the issue from other scholars at Microsoft Research, including:

Jaron Lanier, MSR Redmond. Should Facebook Manipulate Users? (The New York Times)

Duncan Watts, MSR New York. Stop complaining about the Facebook study. It’s a golden age for research (The Guardian) and Lessons learned from the Facebook study (Chronicle of Higher Ed)

 Matt Salganik, MSR New York. After the Facebook emotional contagion experiment: A proposal for a positive path forward (Freedom to Tinker)

A system designer’s take on the Facebook study – a response to danah boyd’s blog post

July 7, 2014

Last week I sent an email reply to danah boyd in response to her thoughtful post about the Facebook study. She encouraged me to post it publicly, but I was a bit scared by the viciousness and panic of the reactions. At the same time, I worried that the silence of people who do research in social computing (often relying on designing, building, and releasing systems for people to use) would be counterproductive in the long run.

Along with other colleagues in the social computing community who are writing their own take on this topic [1,2], my hope is that, together, our voices are heard along with the voices of those who have dominated the discussion so far, those whose research is mainly rhetorical.

So here is my (slightly edited) response to danah:

danah, I enjoyed your post. While critical, it didn’t have the panic tone that has bothered me so much from other articles. Also, I liked that it looks beyond the Facebook experiment itself.

I liked this part, where you talk about beneficence and maleficence in research: “Getting children to talk about these awful experiences can be quite psychologically tolling. Yet, better understanding what they experienced has huge benefits for society. So we make our trade-offs and we do research that can have consequences.”

I liked it because it’s important that we do not throw the baby out with the bath water. I do not want to see the research community completely avoiding experimental research in online systems.  As a designer and someone who has done this type of work, I do want to engage in ethics discussions that go beyond IRBs and liability. I want to engage in discussions with my peers about the experiments I do. I don’t want to feel scared of proposing studies, or witch-hunted like our colleagues on the Facebook Data Science team.  I want to work with colleagues in figuring out if the risks involved in my work are worth the knowledge we could obtain. I also don’t want to feel paralyzed, having to completely avoid risky but valuable research. The way the Facebook experiment has been framed feels almost like we’re talking about Milgram or Tuskegee. To be honest, this whole experience made me wonder whether I want to publish every finding we have in our work to the academic community, or keep it internal within product teams.

If anything, studies like this one allow us to learn more about the power and limitations of these platforms. For that, I am grateful to the authors. But I am not going to defend the paper, as I have no idea what went through the researchers’ heads when they were doing it. I do feel that it could be defended, and it’s a shame that the main author seems to have been forced to come out and apologize without engaging in a discussion about the work and the thinking process behind it.

The other piece of your post that left me thinking is the one about power, which echoes what Zeynep Tufekci had written about too:

This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.

I agree, every social computing system gives power to its designers. This power is also a function of scale and openness. It makes me wonder how one might take these two variables into consideration when assessing research in this space. For example, why did the Wikipedia A/B testing of their fundraising banners not seem to raise concerns? Similarly, this experiment on Wikipedia without informed consent did not raise any flags either. Could it be partly because of how open the Wikipedia community is to making decisions about their internal processes? I think the publication of the Facebook emotion study is a step towards this openness, which is why I think the reaction to it is unfortunate.

What does the Facebook experiment teach us?

July 1, 2014

I’m intrigued by the reaction that has unfolded around the Facebook “emotion contagion” study. (If you aren’t familiar with this, read this primer.) As others have pointed out, the practice of A/B testing content is quite common. And Facebook has a long history of experimenting on how it can influence people’s attitudes and practices, even in the realm of research. An earlier study showed that Facebook decisions could shape voters’ practices. But why is it that *this* study has sparked a firestorm?

In asking people about this, I’ve been given two dominant reasons:

  1. People’s emotional well-being is sacred.
  2. Research is different than marketing practices.

I don’t find either of these responses satisfying.

The Consequences of Facebook’s Experiment

Facebook’s research team is not truly independent of product. They have a license to do research and publish it, provided that it contributes to the positive development of the company. If Facebook knew that this research would spark the negative PR backlash, they never would’ve allowed it to go forward or be published. I can only imagine the ugliness of the fight inside the company now, but I’m confident that PR is demanding silence from researchers.

I do believe that the research was intended to be helpful to Facebook. So what was the intended positive contribution of this study? I get the sense from Adam Kramer’s comments that the goal was to determine if content sentiment could affect people’s emotional response after being on Facebook. In other words, given that Facebook wants to keep people on Facebook, if people came away from Facebook feeling sadder, presumably they’d not want to come back to Facebook again. Thus, it’s in Facebook’s best interest to leave people feeling happier. And this study suggests that the sentiment of the content influences this. One applied take-away for product, then, is to downplay negative content. Presumably this is better for users and better for Facebook.

We can debate all day long as to whether or not this is what that study actually shows, but let’s work with this for a second. Let’s say that pre-study Facebook showed 1 negative post for every 3 positive and now, because of this study, Facebook shows 1 negative post for every 10 positive ones. If that’s the case, was the one-week treatment worth the outcome for longer-term content exposure? Who gets to make that decision?

Folks keep talking about all of the potential harm that could’ve been caused by the study – the possibility of suicides, the mental health consequences. But what about the potential harm of negative content on Facebook more generally? Even if we believe that there were subtle negative costs to those who received the treatment, the ongoing costs of negative content on Facebook every week other than that one-week experiment must be greater. How then do we account for positive benefits to users if Facebook increased positive treatments en masse as a result of this study? Of course, the problem is that Facebook is a black box. We don’t know what they did with this study. The only thing we know is what is published in PNAS, and that ain’t much.

Of course, if Facebook did make the content that users see more positive, should we simply be happy? What would it mean that you’re more likely to see announcements from your friends when they are celebrating a new child or a fun night on the town, but less likely to see their posts when they’re offering depressive missives or angsting over a relationship in shambles? If Alice is happier when she is oblivious to Bob’s pain because Facebook chooses to keep that from her, are we willing to sacrifice Bob’s need for support and validation? This is a hard ethical choice at the crux of any decision of what content to show when you’re making choices. And the reality is that Facebook is making these choices every day without oversight, transparency, or informed consent.

Algorithmic Manipulation of Attention and Emotions

Facebook actively alters the content you see. Most people focus on the practice of marketing, but most of what Facebook’s algorithms do involve curating content to provide you with what they think you want to see. Facebook algorithmically determines which of your friends’ posts you see. They don’t do this for marketing reasons. They do this because they want you to want to come back to the site day after day. They want you to be happy. They don’t want you to be overwhelmed. Their everyday algorithms are meant to manipulate your emotions. What factors go into this? We don’t know.

Facebook is not alone in algorithmically predicting what content you wish to see. Any recommendation system or curatorial system is prioritizing some content over others. But let’s compare what we glean from this study with standard practice. Most sites, from major news media to social media, have some algorithm that shows you the content that people click on the most. This is what drives media entities to produce listicles, flashy headlines, and car crash news stories. What do you think garners more traffic – a detailed analysis of what’s happening in Syria or 29 pictures of the cutest members of the animal kingdom? Part of what media learned long ago is that fear and salacious gossip sell papers. 4chan taught us that grotesque imagery and cute kittens work too. What this means online is that stories about child abductions, dangerous islands filled with snakes, and celebrity sex tape scandals are often the most clicked on, retweeted, favorited, etc. So an entire industry has emerged to produce crappy click bait content under the banner of “news.”

Guess what? When people are surrounded by fear-mongering news media, they get anxious. They fear the wrong things. Moral panics emerge. And yet, we as a society believe that it’s totally acceptable for news media – and its click bait brethren – to manipulate people’s emotions through the headlines they produce and the content they cover. And we generally accept that algorithmic curators are perfectly well within their right to prioritize that heavily clicked content over others, regardless of the psychological toll on individuals or the society. What makes their practice different? (Other than the fact that the media wouldn’t hold itself accountable for its own manipulative practices…)

Somehow, shrugging our shoulders and saying that we promoted content because it was popular is acceptable because those actors don’t voice that their intention is to manipulate your emotions so that you keep viewing their reporting and advertisements. And it’s also acceptable to manipulate people for advertising because that’s just business. But when researchers admit that they’re trying to learn if they can manipulate people’s emotions, they’re shunned. What this suggests is that the practice is acceptable, but admitting the intention and being transparent about the process is not.

But Research is Different!!

As this debate has unfolded, whenever people point out that these business practices are commonplace, folks respond by highlighting that research or science is different. What unfolds is a high-browed notion about the purity of research and its exclusive claims on ethical standards.

Do I think that we need to have a serious conversation about informed consent? Absolutely. Do I think that we need to have a serious conversation about the ethical decisions companies make with user data? Absolutely. But I do not believe that this conversation should ever apply just to that which is categorized under “research.” Nor do I believe that academe is necessarily providing a golden standard.

Academe has plenty of problems of its own that need to be accounted for. Researchers are incentivized to figure out how to get through IRBs rather than to think critically and collectively about the ethics of their research protocols. IRBs are incentivized to protect the university rather than to truly work out an ethical framework for these issues. Journals relish corporate datasets even when replication is impossible. And for that matter, even in a post-paper era, journals have ridiculous word-count limits that discourage researchers from spelling out all of the gory details of their methods. But there are also broader structural issues. Academe is so stupidly competitive, and peer review is so much of a game, that researchers have little incentive to share their studies-in-progress with peers for genuine feedback and critique. And the status games of academe reward those who get access to private coffers of data while prompting those who don’t to chastise those who do. Meanwhile, companies generally have no incentive to play nice with researchers unless it helps their prestige, hiring, or product.

IRBs are an abysmal mechanism for actually accounting for ethics in research. By and large, they’re structured to make certain that the university will not be held liable. Ethics aren’t a checklist. Nor are they universal. Navigating ethics involves working through the benefits and costs of a research act and making a conscientious decision about how to move forward. Reasonable people differ on what they think is ethical, and disciplines have different standards for how to navigate ethics. But we’ve trained an entire generation of scholars that ethics equals “that which gets past the IRB,” which is a travesty. We need researchers to systematically think about how their practices alter the world in ways that benefit and harm people. We need ethics not to be tacked on, but to be an integral part of how *everyone* thinks about what they study, build, and do.

There’s a lot of research that has serious consequences for the people who are part of the study. I think about the work some of my colleagues do with child victims of sexual abuse. Getting children to talk about these awful experiences can take a real psychological toll. Yet better understanding what they experienced has huge benefits for society. So we make our trade-offs and we do research that can have consequences. What warms my heart is how hard my colleagues work to help those children, providing counseling immediately following the interview (and, in some cases, follow-up counseling). They think long and hard about each question they ask, and about how they go about asking it. And yet most IRBs wouldn’t let them do this work, because no university wants to touch anything that involves kids and sexual abuse. Doing research involves trade-offs, and finding an ethical path forward requires effort and risk.

It’s far too easy to say “informed consent” and then not take responsibility for the costs of the research process, just as it’s far too easy to point to an IRB as proof of ethical thought. For any study that involves manipulation, which is common in economics, psychology, and other social science disciplines, people are only so informed about what they’re getting themselves into. You may think that you know what you’re consenting to, but do you? And then there are studies, like discrimination audit studies, in which we purposefully don’t inform people that they’re part of a study at all. So what are the right trade-offs? When is it OK to eschew consent altogether? What does it mean to truly be informed? When is being informed not enough? These aren’t easy questions, and there aren’t easy answers.

I’m not necessarily saying that Facebook made the right trade-offs with this study, but I think that the scholarly insistence that research is only acceptable with IRB approval plus informed consent is disingenuous. Of course, a huge part of what’s at stake has to do with the fact that agreeing to a contract is not legally the same thing as giving consent. Most people haven’t meaningfully consented to all of Facebook’s terms of service. They’ve agreed to a contract because they feel as though they have no other choice. And that really upsets people.

A Different Theory

The more I read people’s reactions to this study, the more I’ve started to think that the outrage has nothing to do with the study at all. There is growing negative sentiment toward Facebook and other companies that collect and use data about people. In short, there’s anger at the practice of big data. This paper provided ammunition for that anger, because it’s so hard to talk about harm in the abstract.

For better or worse, people imagine that Facebook is run by a benevolent dictator, that the site exists to help people better connect with one another. In some senses, this is true. But Facebook is also a company, and a public company at that. It has to find ways to become more profitable with each passing quarter. This means that it designs its algorithms not just to market to you directly but to convince you to keep coming back over and over again. People have an abstract notion of how that works, but they don’t really know, and may not even want to know. They just want the hot dog to taste good. Whether it’s couched as research or operations, people don’t want to think that they’re being manipulated. So when they find out what the Soylent Green is made of, they’re outraged. This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.

I get the anger. I personally loathe Facebook and I have for a long time, even as I appreciate and study its importance in people’s lives. But on a personal level, I hate the fact that Facebook thinks it’s better than me at deciding which of my friends’ posts I should see. I hate that I have no meaningful mechanism of control on the site. And I am painfully aware of how my sporadic use of the site has confused their algorithms so much that what I see in my newsfeed is complete garbage. And I resent the fact that because I barely use the site, the only way that I could actually get a message out to friends is to pay to have it posted. My minimal use has made me an algorithmic pariah and if I weren’t technologically savvy enough to know better, I would feel as though I’ve been shunned by my friends rather than simply deemed unworthy by an algorithm. I also refuse to play the game to make myself look good before the altar of the algorithm. And every time I’m forced to deal with Facebook, I can’t help but resent its manipulations.

There’s also a lot that I dislike about the company and its practices. At the same time, I’m glad that they’ve started working with researchers and publishing their findings. We need more transparency into the algorithmic work done by these kinds of systems, and Facebook’s willingness to publish has been one of the few ways we’ve gleaned insight into what’s going on. Of course, I also suspect that the angry reaction to this study will prompt them to clamp down on allowing researchers to say anything publicly at all. My gut says that they will naively respond as though the practice of research is what makes them vulnerable, rather than their practices as a company as a whole. Beyond what this means for researchers, I’m concerned about what increased silence will mean for a public that has no clue what’s being done with its data, and that will assume that the absence of new reports of misdeeds means Facebook has stopped manipulating data.

Information companies aren’t the same as pharmaceuticals. They don’t need to do clinical trials before they put a product on the market. They can psychologically manipulate their users all they want without being remotely public about exactly what they’re doing. And as the public, we can only guess what the black box is doing.

There’s a lot that needs to be reformed here. We need to figure out how to have a meaningful conversation about corporate ethics, regardless of whether or not a practice is couched as research. But it’s not as simple as saying that the absence of a corporate IRB, or of the gold standard of “informed consent,” makes a practice unethical. Almost all of the manipulation these companies do occurs without either, and it goes unchecked because it is never published or made public.

Ethical oversight isn’t easy, and I don’t have a quick-and-dirty solution for how it should be implemented. But I do have a few ideas. For starters, I’d like to see any company that manipulates user data create an ethics board. Not an IRB that approves research studies, but an ethics board that has visibility into all proprietary algorithms that could affect users. For public companies, this could be done through the ethics committee of the Board of Directors, but rather than consisting solely of board members, it should include scholars and users. I also think there needs to be a mechanism for whistleblowing about ethics from within companies, because I’ve found that many employees of companies like Facebook are quite concerned about certain algorithmic decisions but feel there is no path to responsibly report concerns without going fully public. This wouldn’t solve all of the problems, nor am I convinced that most companies would do any of it voluntarily, but it is certainly something to consider. More than anything, I want to see users have the ability to meaningfully influence what’s being done with their data, and I’d love to see a way for their voices to be represented in these processes.

I’m glad that this study has prompted an intense debate among scholars and the public, but I fear that it’s turned into a simplistic attack on Facebook over this particular study rather than a nuanced debate over how we create meaningful ethical oversight in research and practice. The lines between research and practice are always blurred and information companies like Facebook make this increasingly salient. No one benefits by drawing lines in the sand. We need to address the problem more holistically. And, in the meantime, we need to hold companies accountable for how they manipulate people across the board, regardless of whether or not it’s couched as research. If we focus too much on this study, we’ll lose track of the broader issues at stake.
