A system designer’s take on the Facebook study – a response to danah boyd’s blog post

Last week I sent an email reply to danah boyd in response to her thoughtful post about the Facebook study. She encouraged me to post it publicly, but I was a bit scared by the viciousness and panic of the reactions. At the same time, I worried that the silence of people who do research in social computing (often relying on designing, building, and releasing systems for people to use) would be counterproductive in the long run.

Other colleagues in the social computing community (who often rely on designing, building, and releasing systems for people to use) are writing their own takes on this topic [1,2]. My hope is that, together, our voices will be heard alongside the voices of those who have dominated the discussion so far, whose research is mainly rhetorical.

So here is my (slightly edited) response to danah:

danah, I enjoyed your post. While critical, it didn’t have the panicked tone that has bothered me so much in other articles. I also liked that it looks beyond the Facebook experiment itself.

I liked this part, where you talk about beneficence and maleficence in research: “Getting children to talk about these awful experiences can be quite psychologically tolling. Yet, better understanding what they experienced has huge benefits for society. So we make our trade-offs and we do research that can have consequences.”

I liked it because it’s important that we do not throw the baby out with the bathwater. I do not want to see the research community completely avoid experimental research in online systems. As a designer and someone who has done this type of work, I do want to engage in ethics discussions that go beyond IRBs and liability. I want to engage in discussions with my peers about the experiments I do. I don’t want to feel scared of proposing studies, or witch-hunted like our colleagues on the Facebook Data Science team. I want to work with colleagues to figure out whether the risks involved in my work are worth the knowledge we could obtain. I also don’t want to feel paralyzed, having to completely avoid risky but valuable research. The way the Facebook experiment has been framed, it feels almost as if we’re talking about Milgram or Tuskegee. To be honest, this whole experience made me wonder whether I want to publish every finding from our work to the academic community, or keep it internal to product teams.

If anything, studies like this one allow us to learn more about the power and limitations of these platforms. For that, I am grateful to the authors. But I am not going to defend the paper, as I have no idea what went through the researchers’ heads when they were doing it. I do feel that it could be defended, and it’s a shame that the main author seems to have been forced to come out and apologize without engaging in a discussion about the work and the thinking process behind it.

The other piece of your post that left me thinking is the one about power, which echoes what Zeynep Tufekci had written about too:

This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.

I agree: every social computing system gives power to its designers. This power is also a function of scale and openness. It makes me wonder how one might take these two variables into consideration when assessing research in this space. For example, why did Wikipedia’s A/B testing of its fundraising banners not seem to raise concerns? Similarly, this experiment on Wikipedia without informed consent did not raise any flags either. Could it be partly because of how open the Wikipedia community is about making decisions on its internal processes? I think the publication of the Facebook emotion study is a step towards this openness, which is why I think the reaction to it is unfortunate.

5 thoughts on “A system designer’s take on the Facebook study – a response to danah boyd’s blog post”

  1. M R

    Thank you for your thoughts. It is important to have these conversations, because this outrage took Facebook by surprise and because the difference between testing banner ads and tampering with news feeds is somehow elusive when really, it shouldn’t be. We are very used to the idea that marketers will test messaging, media, offers, and timing to obtain the best results. Wikipedia just changed the ad; they didn’t change the content of the entries. Users got exactly the content they expected, and they knew that everyone else who looked at the same page viewed identical content. The experiment was orthogonal to the purpose of Wikipedia; it did not completely hijack it.

  2. Pingback: On the Facebook Emotion study | Vlad's Blog: East Coast Academia

  3. M

    Obviously you still don’t understand why researchers are upset. Here, in two simple sentences directly from the linked Wikipedia paper, is the reason why that paper did not raise flags:

    “This study’s research protocol was approved by the Committees on Research Involving Human Subjects (IRB) at the State University of New York at Stony Brook (CORIHS #2011-1394). Because the experiment presented only minimal risks to subjects, the IRB committee determined that obtaining prior informed consent from participants was not required. ”

    That’s it. That’s all that’s needed. It’s a question of ethics. The Wikipedia paper authors did their ethical duty and brought the issue forward to the committee. The (hopefully impartial) committee agreed that, in that case, informed consent was not required.

    If you “don’t want to feel paralyzed and having to completely avoid risky but valuable research”, and “work with colleagues in figuring out if the risks involved in my work are worth the knowledge we could obtain”, you go to your local Helsinki committee. That’s what they’re FOR!

    Now contrast this with the Facebook case, where the authors took it upon themselves to decide everything. Worse, from what I understand, they only went to their IRB after the fact, suggesting that the dataset was pre-existing. Yet the authors were involved in the experiment’s design (i.e., before the data was collected). This seems to me like tricking the IRB committee.

    From what I could piece together from the accounts published so far by the author and the editor, here’s what the authors did:
    (a) design the experiment without the IRB (that’s completely OK, IRBs should be involved after this stage)
    (b) have Facebook run the experiment for them and collect data, without IRB approval or informed consent (Ethical violation #1)
    (c) tell the IRB committee that they want to use a “pre-existing” dataset from (b). (Ethical violation #2 – this seems to me like defrauding the committee).
    (d) profit!!

    If I were in that IRB committee, I’d be PISSED right now. Replace the word “Facebook” above with “GlaxoSmithKline” and you should be horrified.

    As you can see from the Wikipedia example, the idea that following research ethics is somehow antithetical to social and online research is a complete red herring: a fabrication breathlessly espoused mostly by people with no knowledge of how science is done, or with vested interests.
    Yes, ethical guidelines make science harder. But you don’t get to skip the rules just because they make it hard for you.

  4. Pingback: Social Media Collective weigh in on the debates about the Facebook emotions study | Social Media Collective

  5. Pingback: This Week in Review: Facebook and online control, and educating stronger data journalists
