Congratulations to the incoming SMC interns for summer 2018!

Another stellar crop of applicants poured in for the SMC internships this year, and another three emerged as the best of the best. Thanks to everyone who applied; it was painful not to accept more of you! For summer 2018, we’re thrilled to have these three remarkable students joining us in the Microsoft Research lab in New England, to conduct their own original research and to be part of the SMC community. (Remember that we offer these internships every summer: if you’re an advanced graduate student in the areas of communication, the anthropology or sociology of new media, information science, and related fields, watch this page for the necessary information.)

 

Robyn Caplan is a doctoral candidate at Rutgers University’s School of Communication and Information under the supervision of Professor Philip Napoli. For the last three years, she has also been a Researcher at the Data & Society Research Institute, working on projects related to platform accountability, media manipulation, and data and civil rights. Her most recent research explores how platforms and news media associations navigate content moderation decisions about trustworthy and credible content, and how current concerns about the rise of disinformation across borders are affecting platform governance and national media and information policy. Previously she was a Fellow at the GovLab at NYU, where she worked on issues related to open data policy and use. She holds an MA from New York University in Media, Culture, and Communication, and a Bachelor of Science from the University of Toronto.

 

Michaelanne Dye is a Ph.D. candidate in Human-Centered Computing in the School of Interactive Computing at Georgia Tech. She also holds an M.A. in Cultural Anthropology. Michaelanne uses ethnographic methods to explore human-computer interaction and development (HCID) issues within social computing systems, paying attention to the complex factors that afford and constrain meaningful engagements with the internet in resource-constrained communities. Through fieldwork in Havana, Cuba, Michaelanne’s dissertation work examines how new internet infrastructures interact with cultural values and local constraints. Her research also examines community-led information networks that have evolved in the absence of access to the world wide web, in order to explore ways to design more meaningful and sustainable engagements for users in both “developing” and “developed” contexts. Michaelanne’s work has been published in the conference proceedings of Human Factors in Computing Systems (CHI) and Computer-Supported Cooperative Work and Social Computing (CSCW).

 

Penny Trieu is a PhD candidate in the School of Information at the University of Michigan. She is a member of the Social Media Research Lab, where she is primarily advised by Nicole Ellison. Her research concerns how people can use communication technologies, particularly social media, to better support their interpersonal relationships. She also looks at identity processes, notably self-presentation and impression management, on social media. Her research has appeared in venues such as Information, Communication & Society and Social Media + Society, and at the International Communication Association conference. At the Social Media Collective, she will work on the dynamics of interpersonal feedback and self-presentation around ephemeral sharing via Instagram and Snapchat Stories.

Announcing our SMC PhD interns for 2016!!

Well, it was another exciting season of reviewing a rich batch of applications for our 2016 PhD Internship Program. We love reading about all the great work out there but really, really, really hate that we have just a few seats for our intern program. Please spread the word about this program and throw your hat into the ring next year! We’ll put the call out for interns again in mid-October, 2016.

For this year, we are pleased to announce that the following emerging scholars will join us as our 2016 Microsoft Research SMC PhD intern cohort:

At Microsoft Research, New England

Update: Ming Yin

Ming Yin is a computer science Ph.D. student at Harvard University, supervised by Professor Yiling Chen. Her research interests lie in the emerging area of human computation and crowdsourcing, and her goal is to better understand crowdsourcing as both a new form of production and an exciting opportunity for online experimentation. Her work has been published in top venues like AAAI, IJCAI, and WWW, and she received a Best Paper Honorable Mention at the ACM Conference on Human Factors in Computing Systems (CHI’16). Before graduate school, Ming obtained a bachelor’s degree from Tsinghua University in Beijing, China.

 

Stefanie Duguay

Stefanie Duguay is a Ph.D. Candidate in the Digital Media Research Centre at the Queensland University of Technology (QUT) and holds an M.Sc. in Social Science of the Internet from the Oxford Internet Institute. She has also worked professionally as a Strategic Advisor in Digital Services for the Canadian federal government. Her research focuses on the everyday identity performances and interactions of people with diverse sexual and gender identities on social media. Her doctoral thesis examines the way that same-sex attracted women’s identities are constructed, shaped, and received across platforms, such as Instagram, Vine, and Tinder, with attention to the influence of both user and platform dynamics. Stefanie is the recipient of a QUT Postgraduate Research Award and her work has been published in New Media & Society, the International Journal of Communication, Disability & Society, and the Canadian Review of Sociology. She will be working with Mary L. Gray, Nancy Baym, and Tarleton Gillespie to examine the off-label uses and user-led economies of mobile apps.

 

Caroline Jack

Caroline Jack is a Ph.D. Candidate in Communication at Cornell University and an Exchange Scholar in Comparative Media Studies/Writing at MIT. She also holds an M.B.A. and an M.A. from Saint Louis University. Caroline’s scholarly work focuses on the public communication of economics and capitalism in the American past and present; social imaginaries of the American economy; and understandings of the economic self in networked culture. Her research on the public communication of science and economics in the United States during the Cold War era has been published in Enterprise & Society and The Appendix. Caroline will be working with Mary L. Gray, investigating social imaginaries of self, market, place and property that emerge in and around peer economy platforms.

 

Shannon McGregor

Shannon McGregor (M.A., University of Florida) is a third-year doctoral student (soon to be doctoral candidate!) in the School of Journalism at the University of Texas at Austin. Her research interests center on political communication, social media, gender, and public opinion. She has presented her work at the International Communication Association (ICA), the American Political Science Association (APSA), the Association for Education in Journalism and Mass Communication (AEJMC), and the Midwest Association for Public Opinion Research (MAPOR). Her work has been published in the Journal of Communication, International Journal of Communication, Journal of Broadcasting and Electronic Media, and Journal of Media Ethics. She is on Twitter at @shannimcg.

 

At Microsoft Research, New York City

Aaron Plasek

Aaron Plasek works at the intersection of the history of science, new media, and computation, and is writing a history of machine learning that examines the ways in which algorithms have been deployed in (ethical) arguments. He is currently a doctoral student in History at Columbia University and an MA candidate in the Draper Interdisciplinary Master’s Program at NYU, and holds an MFA from the School of the Art Institute of Chicago and undergraduate degrees from Drake University in physics, astronomy, and writing.

 

How Do Users Take Collective Action Against Online Platforms? CHI Honorable Mention

What factors lead users in an online platform to join together in mass collective action to influence those who run the platform? Today, I’m excited to share that my CHI paper on the reddit blackout has received a Best Paper Honorable Mention! (Read the pre-print version of my paper here)

When users of online platforms complain, we’re often told to leave if we don’t like how a platform is run. Beyond exit or loyalty, digital citizens sometimes take a third option, organizing to pressure companies for change. But how does that come about?

I’m seeking reddit moderators to collaborate on the next stage of my research: running experiments together with subreddits to test theories of moderation. If you’re interested, you can read more here. Also, I’m presenting this work as part of larger talks at the Berkman Center on Feb 23 and the Oxford Internet Institute on March 16. I would love to see you there!

Having a formalized voice with online platforms is rare, though it has happened with San Francisco drag queens, the newly announced Twitter Trust and Safety Council, or the EVE player council, where users are consulted about issues a platform faces. These efforts typically keep users in positions of minimal power on the ladder of citizen participation, but they do give some users some kind of voice.

Another option is collective action: leveraging the combined power of users to pressure a platform to change how it works. To my knowledge, this has only happened four times on major U.S. platforms: when AOL community leaders settled a $15 million class action lawsuit for unpaid wages, when DailyKos writers went on strike in 2008, in the recent Uber class action lawsuit, and in the reddit blackout of July 2015, when moderators of 2,278 subreddits shut down their communities to pressure the company for better coordination and better moderation tools. They succeeded.

What factors lead communities to participate in such a large scale collective action? That’s the question that my paper set out to answer, combining statistics with the “thick data” of qualitative research.

The story of how I answered this question is also a story about finding ways to do large-scale research that includes the voices and critiques of the people whose lives we study as researchers. In the turmoil of the blackout, amidst volatile and harmful controversies around hate speech, harassment, censorship, and the blackout itself, I made a special effort to do research that included redditors themselves.

Theories of Social Movement Mobilization

Social movement researchers have been asking how movements come together for many decades, and there are two common schools, responding to early work to quantify collective action (see Olson, Coleman):

Political Opportunity Theories argue that social movements need the right people and the right moment. According to these theories, a movement happens when grievances are high, when social structure among potential participants is right, and when the right opportunity for change arises. For more on political opportunity theory, see my Atlantic article on the Facebook Equality Meme this past summer.

Resource Mobilization Theories argue that successful movements are explained less by grievances and opportunities and more by the resources available to movement actors. In their view, collective action is something that groups create out of their resources rather than something that arises out of grievances. They’re also interested in social structure, often between groups that are trying to mobilize people (read more).

A third voice in these discussions is that of the people who participate in movements themselves, voices that I wanted to have a primary role in shaping my research.

How Do You Study a Strike As It Unfolds?

I was lucky enough to be working with moderators and collecting data before the blackout happened. That gave me a special vantage for combining interviews and content analysis with statistical analysis of the reddit blackout.

Together with redditors, I developed an approach of “participatory hypothesis testing,” where I posed ideas for statistics on public reddit threads and worked together with redditors to come up with models that they agreed were a fair and accurate analysis of their experience. Grounding that statistical work involved a whole lot of qualitative research as well.

If you like that kind of thing, here are the details:

In the CHI paper, I analyzed 90 published interviews with moderators from before the blackout, over 250 articles outside reddit about the blackout, discussions in over 50 subreddits that declined to join the blackout, public statements by over 200 subreddits that joined the blackout, and over 150 discussions in blacked-out subreddits after their communities were restored. I also read over 100 discussions in communities that chose not to join. Finally, I conducted 90-minute interviews with 13 moderators of subreddits of all sizes, including subs that joined and subs that declined to join the blackout.

To test hypotheses developed with redditors, I collected data from 52,735 non-corporate subreddits that received at least one comment in June 2015, alongside a list of blacked-out subreddits. I also collected data on moderators and comment participation for the period surrounding the blackout.
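To make the shape of that dataset concrete, here is a minimal sketch in Python (using pandas) of the kind of per-subreddit table such an analysis rests on: one row per subreddit, an outcome column for blackout participation, and columns for the factors discussed below. The column names and values are illustrative stand-ins, not the paper’s actual variables.

import pandas as pd

# Illustrative only: a per-subreddit table of the kind used to model blackout
# participation. Column names and values are hypothetical stand-ins.
subs = pd.DataFrame({
    "subreddit":       ["example_a", "example_b", "example_c"],
    "blackout":        [1, 0, 1],           # outcome: did the sub go private?
    "june_comments":   [120000, 46, 1000],  # activity, a rough proxy for workload
    "moderator_count": [12, 2, 4],          # a resource available to the sub
    "default_sub":     [1, 0, 0],           # front-page "default" subreddit
    "nsfw_sub":        [0, 0, 1],
    "in_out_ratio":    [0.2, 1.4, 0.3],     # isolation: in-sub vs. out-of-sub commenting
})

print(subs)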

So What’s The Answer? What Factors Predict Participation in Action Against Platforms?

In the paper, I outline major explanations offered by moderators and translate them into a statistical model that corresponds to major social movement theories. I found evidence confirming many of redditors’ explanations across all subreddits, including aspects of classic social movement theories. These findings are as much about why people choose *not* to participate as they are about what factors are involved in joining:

    • Moderator Grievances were important predictors of participation. Subreddits with greater amounts of work, and whose work was riskier, were more likely to join the blackout.
    • Subreddit Resources were also important factors. Subreddits with more moderators were more likely to join the blackout. Although “default” subreddits played an important role in organizing and negotiating in the blackout, they were no more or less likely to participate, holding all else constant.
    • Relations Among Moderators were also important predictors, and I observed several cases where “networks” of closely-allied subreddits declined to participate.
    • Subreddit Isolation was also an important factor, with more isolated subreddits less likely to join, and moderators who participate in “metareddits” more likely to join.
    • Moderators’ Relations Within Their Groups were also important; subreddits whose moderators participated more in their groups were less likely to join the blackout.

My findings draw on details from my interviews and observations, going well beyond a single statistical model; I encourage you to read the pre-print version of my paper.

What’s Next For My reddit Research?

The reddit blackout took me by surprise as much as anyone, so now I’m back to asking the questions that brought me to moderators in the first place.

THANK YOU REDDIT! & Acknowledgments


First of all, THANK YOU REDDIT! This research would not have been possible without generous contributions from hundreds of reddit users. You have been generous all throughout, and I deeply appreciate the time you invested in my work.

Many other people have made this work possible; I did this research during a wonderful summer internship at the Microsoft Research Social Media Collective, mentored by Tarleton Gillespie and Mary Gray. Mako Hill introduced me to social movement theory as part of my general exams. Molly Sauter, Aaron Shaw, Alex Leavitt, and Katherine Lo offered helpful early feedback on this paper. My advisor Ethan Zuckerman remains a profoundly important mentor and guide through the world of research and social action.

Finally, I am deeply grateful for family members who let me ruin our Fourth of July weekend to follow the reddit blackout closely and set up data collection for this paper. I was literally sitting at an isolated picnic table ignoring everyone and archiving data as the weekend unfolded. I’m glad we were able to take the next weekend off! ❤

Reminder! MSR Social Media Collective PhD Intern Call, Cycle 2016!

APPLICATION DEADLINE: JANUARY 29, 2016

Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week 2016 Intern Program. The Social Media Collective scholars at MSRNE bring together empirical and critical perspectives to address complex socio-technical issues. Our research agenda draws on a social scientific/humanistic lens to understand the social meanings and possible futures of media and communication technologies. The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.

MSRNE internships are 12-week paid internships in Cambridge, Massachusetts. PhD interns are expected to be on-site for the duration of their internship. Primary mentors for this year will be Nancy Baym, Tarleton Gillespie, and Mary L. Gray, with additional guidance offered by our lab postdocs and visiting scholars.

PhD interns at MSRNE are expected to devise and execute a research project (see project requirements below), based on their application project proposals, during their internships. The expected outcome of an internship at MSRNE is a draft of a publishable scholarly paper for an academic journal or conference of the intern’s choosing. Our goal is to help the intern advance their own career; interns are strongly encouraged to work towards a creative outcome that will help them on the academic job market. Interns are also expected to collaborate on projects or papers with full-time researchers and visitors, contribute to the SMC blog, give short presentations, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. While this is not an applied program, MSRNE encourages interdisciplinary collaboration with computer scientists, economists, and mathematicians.

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective consists of full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:

  • How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
  • How are social media platforms, through algorithmic design and user policies, adopting the role of intermediaries for public discourse? (Tarleton Gillespie)
  • What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
  • How are predictive analytics used by law enforcement and what are the implications of new data-driven surveillance practices? (Sarah Brayne)
  • What are the social and political consequences of popular computing folklore? (Kevin Driscoll)
  • How are the technologies of money changing and what are the social implications of those changes? (Lana Swartz)

SMC PhD interns may have the opportunity to connect with our sister Social Media Collective members in New York City. Related projects in New York City include:

  • What are the politics, ethics, and policy implications of big data science? (Kate Crawford, MSR-NYC)
  • What are the social and cultural issues arising from data-centric technological development? (danah boyd, Data & Society Research Institute)

We are looking for applicants to focus their proposals on one of the following seven areas (though you may propose a project that speaks to more than one of these):

  1. Personal relationships and digital media
  2. Audiences and the shifting landscapes of socially mediated entertainment
  3. Affective, immaterial, and other frameworks for understanding digital labor
  4. The social and political consequences of popular computing folklore
  5. The politics of big data, algorithms, and computational culture
  6. How emerging technologies shape countercultures, identities, and communities of difference
  7. Histories of computing and the internet that focus on the experiences of people from marginalized social, economic, racial, or geographic groups

Applicants should have advanced to candidacy in their PhD program by the time they start their internship (unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Interns will benefit most from this opportunity if there are natural opportunities for collaboration with other researchers or visitors currently working at MSRNE. Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.

For a complete list of all permanent researchers and current postdocs based at the New England lab, see: http://research.microsoft.com/en-us/labs/newengland/people/bios.aspx

Previous MSRNE interns in the Collective have included Amelia Abreu (UWashington, information), Stacy Blasiola (University of Illinois, Chicago, communication), Jed Brubaker (UC-Irvine, informatics), Aleena Chia (Indiana U. communication and culture), Jade Davis (University of North Carolina, communication), Brittany Fiore-Silfvast (University of Washington, communication), Scott Golder (Cornell, sociology), Germaine Halegoua (U. Wisconsin, communications), Tero Karppi (University of Turku, media studies), Airi Lampinen (HIIT, information), Jessa Lingel (Rutgers, library and information science), Joshua McVeigh-Schultz (University of Southern California, interactive media), Alice Marwick (NYU, media culture communication), J. Nathan Matias (MIT Media Lab), Jolie Matthews (Stanford, learning sciences), Tressie McMillan Cottom (Emory, sociology), Andrés Monroy-Hernandez (MIT, Media Lab), Laura Noren (NYU, sociology), Nick Seaver (UC Irvine, anthropology), Jaroslav Svelch (Charles University, media studies), Katrin Tiidenberg (Tallinn University, Institute of International and Social Studies), Shawn Walker (UWashington, information), Omar Wasow (Harvard, African-American studies), Sarita Yardi (GeorgiaTech, HCI), and Kathryn Zyskowski (University of Washington, anthropology).

For more information about the Social Media Collective, visit our blog: https://socialmediacollective.org/

APPLICATION PROCESS

To apply for a PhD internship with the social media collective:

  1. Fill out the online application form: https://research.microsoft.com/apps/tools/jobs/intern.aspx

On the application website, indicate that your research area of interest is “Anthropology, Communication, Media Studies, and Sociology” and that your location preference is “New England, MA, U.S.” in the pull down menus. Also enter the name of a mentor (Nancy Baym, Tarleton Gillespie, or Mary Gray) whose work most directly relates to your own in the “Microsoft Research Contact” field. IF YOU DO NOT MARK THESE PREFERENCES WE WILL NOT RECEIVE YOUR APPLICATION. So, please, make sure to follow these detailed instructions.

Your application will need to include:

  1. A brief description of your dissertation project.
  2. An academic article-length manuscript (~7,000 words or more) that you have authored or co-authored (published or unpublished) that demonstrates your writing skills.
  3. A copy of your CV.
  4. The names and contact information for 3 references (one contact name must be your dissertation advisor).
  5. A pointer to your website or other online presence (if available; not required).
  6. A short description (no more than 2 pages, single spaced) of 1 or 2 projects that you propose to do while interning at MSRNE, independently and/or in collaboration with current SMC researchers. The project proposals can be related to but must be distinct from your dissertation research. Be specific and tell us: 1) What is the research question animating your proposed project? 2) What methods would you use to address your question? 3) How does your research question speak to the interests of the SMC? and 4) Who do you hope to reach (who are you engaging) with this proposed research? This is important – we really want to know what it is you want to work on with us and we need to know that it is not, simply, a continuation of your dissertation project.

On Letters of Reference:

After you submit your application, a request for letters will be sent to your list of referees, on your behalf. NOTE: THE APPLICATION SYSTEM WILL NOT REQUEST REFERENCE LETTERS UNTIL AFTER YOU HAVE SUBMITTED YOUR APPLICATION! Please warn your letter writers in advance so that they will be ready to submit them when they receive the prompt. The email they receive will automatically tell them they have two weeks to respond but that an individual call for applicants may have an earlier deadline. Please ensure that they expect this email (tell them to check their spam folders, too!) and are prepared to submit your letter by our application deadline of Friday 29 January, 2016. Please make sure to check back with your referees if you have any questions about the status of your requested letters of recommendation. You can check the progress on individual reference requests at any time by clicking the status tab within your application page. Note that a complete application must include three submitted letters of reference.

TIMELINE

Due to the volume of applications, late submissions (including submissions with late letters of reference) will not be considered. We will not be able to provide specific feedback on individual applications. Finalists will be contacted the last week of February to arrange a Skype interview before the internship slots available to us are assigned (note: the number of available slots changes year to year). Please keep an eye on the socialmediacollective.org blog; we will announce the 2016 PhD interns there by the end of March.

If you have any questions about the application process, please contact Mary Gray at mLg@microsoft.com and include “SMC PhD Internship” in the subject line.

PREVIOUS INTERN TESTIMONIALS

“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.”
— Laura Noren, Sociology, New York University

“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.”
— Jessica Lingel, Library and Information Science, Rutgers University

“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.”
— Germaine Halegoua, Communication Arts, University of Wisconsin-Madison

“Not only was I able to work with so many smart people, but the thoughtfulness and care they took when they engaged with my research can’t be stressed enough. The ability to truly listen to someone is so important. You have these researchers doing multiple, fascinating projects, but they still make time to help out interns in whatever way they can. I always felt I had everyone’s attention when I spoke about my project or other issues I had, and everyone was always willing to discuss any questions I had, or even if I just wanted clarification on a comment someone had made at an earlier point. Another favorite aspect of mine was learning about other interns’ projects and connecting with people outside my discipline.”
— Jolie Matthews, Education, Stanford University

Big Data, Context Cultures

The latest issue of Media, Culture & Society features an open-access discussion section responding to SMC all-stars danah boyd and Kate Crawford’s “Critical Questions for Big Data.” Though the article is only a few years old, it’s been very influential and a lot has happened since it came out, so editors Aswin Punathambekar and Anastasia Kavada commissioned a few responses from scholars to delve deeper into danah and Kate’s original provocations.

The section features pieces by Anita Chan on big data and inclusion, André Brock on “deeper data,” Jack Qiu on access and ethics, Zizi Papacharissi on digital orality, and one by me, Nick Seaver, on varying understandings of “context” among critics and practitioners of big data. All of those, plus an introduction from the editors, are open-access, so download away!

My piece, titled “The nice thing about context is that everyone has it,” draws on my research into the development of algorithmic music recommenders, which I’m building on during my time with the Social Media Collective this fall. Here’s the abstract:

In their ‘Critical Questions for Big Data’, danah boyd and Kate Crawford warn: ‘Taken out of context, Big Data loses its meaning’. In this short commentary, I contextualize this claim about context. The idea that context is crucial to meaning is shared across a wide range of disciplines, including the field of ‘context-aware’ recommender systems. These personalization systems attempt to take a user’s context into account in order to make better, more useful, more meaningful recommendations. How are we to square boyd and Crawford’s warning with the growth of big data applications that are centrally concerned with something they call ‘context’? I suggest that the importance of context is uncontroversial; the controversy lies in determining what context is. Drawing on the work of cultural and linguistic anthropologists, I argue that context is constructed by the methods used to apprehend it. For the developers of ‘context-aware’ recommender systems, context is typically operationalized as a set of sensor readings associated with a user’s activity. For critics like boyd and Crawford, context is that unquantified remainder that haunts mathematical models, making numbers that appear to be identical actually different from each other. These understandings of context seem to be incompatible, and their variability points to the importance of identifying and studying ‘context cultures’–ways of producing context that vary in goals and techniques, but which agree that context is key to data’s significance. To do otherwise would be to take these contextualizations out of context.

Presentation: Between Platforms and Community: Moderators on Reddit

This is a presentation by intern Nathan Matias on the project he worked on during his summer at the SMC. He has continued to work on this research; in case you have not read it, here is a more recent post on his work:

Followup: 10 Factors Predicting Participation in the Reddit Blackout. Building Statistical Models of Online Behavior through Qualitative Research

Below is the presentation he did for MSR earlier this month:

(Part 1)

(Part 2)

(Part 3)

(Part 4)

Co-creation and Algorithmic Self-Determination: A study of player feedback on game analytics in EVE Online

We are happy to share SMC intern Aleena Chia’s presentation of her summer project, titled “Co-creation and Algorithmic Self-Determination: A study of player feedback on game analytics in EVE Online”.

Aleena’s project summary and the videos of her presentation are below:

Digital games are always already information systems designed to respond to players’ inputs with meaningful feedback (Salen and Zimmerman 2004). These feedback loops constitute a form of algorithmic surveillance that have been repurposed by online game companies to gather information about player behavior for consumer research (O’Donnell 2014). Research on player behavior gathered from game clients constitutes a branch of consumer research known as game analytics (Seif et al 2013).[1] In conjunction with established channels of customer feedback such as player forums, surveys, polls, and focus groups, game analytics informs companies’ adjustments and augmentations to their games (Kline et al 2005). EVE Online is a Massively Multiplayer Online Game (MMOG) that uses these research methods in a distinct configuration. The game’s developers assemble a democratically elected council of players tasked with the filtration of player interests from forums to inform their (1) agenda setting and (2) contextualization of game analytics in the planning and implementation of adjustments and augmentations.

This study investigates the council’s agenda setting and contextualization functions as a form of co-creation that draws players into processes of game development, as interlocutors in consumer research. This contrasts with forms of co-creation that emphasize consumers’ contributions to the production and circulation of media content and experiences (Banks 2013). By qualitatively analyzing meeting minutes between EVE Online’s player council and developers over seven years, this study suggests that co-creative consumer research draws from imaginaries of player governance caught between the twin desires of corporate efficiency and democratic efficacy. These desires are darned together through a quantitative public sphere (Peters 2001) that is enabled and eclipsed by game analytics. In other words, algorithmic techniques facilitate collective self-knowledge that players seek for co-creative deliberation; these same techniques also short circuit deliberation through claims of neutrality, immediacy, and efficiency.

The significance of this study lies in its analysis of a consumer public’s (Arvidsson 2013) ambivalent struggle for algorithmic self-determination – the determination by users through deliberative means of how their aggregated acts should be translated by algorithms into collective will. This is not primarily a struggle of consumers against corporations; nor of political principles against capitalist imperatives; nor of aggregated numbers against individual voices. It is a struggle within communicative democracy for efficiency and efficacy (Anderson 2011). It is also a struggle for communicative democracy within corporate enclosures. These struggles grind on productive contradictions that fuel the co-creative enterprise. However, while the founding vision of co-creation gestured towards a win-win state, this analysis concludes that algorithmic self-determination prioritizes efficacy over efficiency, process over product. These commitments are best served by media companies oriented towards user retention rather than recruitment, business sustainability rather than growth, and that are flexible enough to slow down their co-creative processes.

[1] Seif et al (2013) maintain that player behavior data is an important component of game analytics, which includes the statistical analysis, predictive modeling, optimization, and forecasting of all forms of data for decision making in game development. Other data include revenue, technical performance, and organizational process metrics.

(Video 1)

(Video 2)

(Video 3)

(Video 4)

Followup: 10 Factors Predicting Participation in the Reddit Blackout. Building Statistical Models of Online Behavior through Qualitative Research

Three weeks ago, I shared dataviz and statistical models predicting participation in the Reddit Blackout in July 2015. Since then, many moderators have offered feedback and new ideas for the data analysis, alongside their own stories. Earlier today, I shared this update with redditors.

UPDATE, Sept 16, 9pm ET: Redditors brilliantly spotted an important gap in my dataset and worked with me to resolve it. After taking the post down for two days, I am posting the corrected results. Thanks to their quick work, the graphics and findings in this post are more robust.


This July, moderators of 2,278 subreddits joined a “blackout,” demanding better communication and improved moderator tools. As part of my wider research on the work and position of moderators in online communities, I have also been asking the question: who joined the July blackout, and what made some moderators and subs more likely to participate?

Reddit Moderator Network July 2015, including NSFW Subs, with Networks labeled

Academic research on the work of moderators would expect that the most important predictor of blackout participation would be the workload, which creates common needs across subs. Aaron Shaw and Benjamin Mako Hill argue, based on evidence from Wikia, that as the work of moderating becomes more complex within a community, moderators grow in their own sense of common identity and common needs as distinct from their community (read Shaw and Hill’s Wikia paper here). Postigo argues something similar in terms of moderators’ relationship to a platform: when moderators feel like they’re doing huge amounts of work for a company that’s not treating them well, they can develop common interests and push back (read my summary of Postigo’s AOL paper here).

Testing Redditors’ Explanations of The Blackout

After posting an initial data analysis to reddit three weeks ago, dozens of moderators generously contacted me with comments and offers to let me interview them. In this post, I test hypotheses straight from redditors’ explanations of what led different subreddits to join the blackout. By putting all of these hypotheses into one model, we can see how important they were across reddit, beyond any single sub. (see my previous post) (learn more about my research ethics and my promises to redditors)

TLDR:

  • Subs that shared mods with other blackout subs were more likely to join the blackout, but controlling for that:
  • Default subs were more likely to join the blackout
  • NSFW subs were more likely to join the blackout
  • Subs with more moderators were slightly more likely to join the blackout
  • More active subs were more likely to join the blackout
  • More isolated subs were less likely to join the blackout
  • Subs whose mods participate in metareddits were more likely to join the blackout
  • Subs whose mods get and give help in moderator-specific subs were no more or less likely to join the blackout

In my research I have read over a thousand reddit threads, interviewed over a dozen moderators, archived discussions in hundreds of subreddits, and collected data from the reddit API— starting before the blackout. Special thanks to everyone who has spoken with me and shared data.

Improving the Blackout Dataset With Comment Data

Based on conversations with redditors, I collected more data:

  • Instead of the top 20,000 subreddits by subscribers, I now focus on the top subreddits by number of comments in June 2015, thanks to a comment dataset collected by /u/Stuck_In_the_Matrix
  • I updated the /u/GoldenSights amageddon dataset to include 400 additional subs, after feedback from redditors on /r/TheoryOfReddit
  • I include “NSFW” subreddits intended for people over 18
  • I account for more bots thanks to redditor feedback
  • I account for changes in subreddit leadership (with some gaps for subreddits that have experienced substantial leadership changes since July).

In this dataset, half of the 10 most active subs joined the blackout, 24% of the 100 most active, 14.2% of the 1,000 most active, and 4.7% of the 20,000 most active subreddits.
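As a rough sketch of how the comment-based activity ranking might be rebuilt from the public comment archive mentioned above, the snippet below streams one month of comment objects, skips known bot accounts, and counts comments per subreddit. The file name and bot list are placeholders, and the real archive is distributed as large compressed files with one JSON comment per line, so treat this as an illustration rather than the exact pipeline used here.

import gzip
import json
from collections import Counter

COMMENTS_FILE = "RC_2015-06.json.gz"          # placeholder path to a monthly comment dump
KNOWN_BOTS = {"AutoModerator", "ExampleBot"}  # illustrative, not the actual bot list

counts = Counter()
with gzip.open(COMMENTS_FILE, "rt", encoding="utf-8") as f:
    for line in f:
        comment = json.loads(line)
        if comment.get("author") in KNOWN_BOTS:
            continue  # drop comments posted by known bot accounts
        counts[comment["subreddit"]] += 1

# Rank subreddits by June 2015 comment volume.
top_subreddits = counts.most_common(20000)
print(top_subreddits[:10])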

To illustrate the data, here are two charts of the top 52,745 most active subreddits as they would have stood at the end of June. The font size and node size are related to the log-transformed number of comments from June. Ties between subreddits represent shared moderators. The charts are laid out using the ForceAtlas2 layout in Gephi, which has separated out some of the more prominent subreddit networks, including the ImaginaryNetwork, the “SFW Porn” Network, and several NSFW networks (I’ve circled notable networks in the network graph at the top of this post).

Reddit Blackout July 2015: Top 20,000 Subreddits by comments
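For readers curious how charts like these come together, here is a hedged sketch of building the shared-moderator network in Python with networkx and exporting a GEXF file that Gephi can open and lay out with ForceAtlas2. The moderator lists and comment counts below are hypothetical placeholders for the data described above.

import math
from itertools import combinations

import networkx as nx

# Hypothetical inputs: moderators and June 2015 comment counts per subreddit.
mods_by_sub = {
    "example_a": {"mod1", "mod2"},
    "example_b": {"mod2", "mod3"},
    "example_c": {"mod4"},
}
comments_by_sub = {"example_a": 120000, "example_b": 46, "example_c": 1000}

G = nx.Graph()
for sub, n_comments in comments_by_sub.items():
    # Store the log-transformed comment count as a node attribute; in Gephi it can
    # be mapped to font and node size, as in the charts above.
    G.add_node(sub, log_comments=math.log1p(n_comments))

# Tie two subreddits together when they share at least one moderator.
for a, b in combinations(mods_by_sub, 2):
    shared = mods_by_sub[a] & mods_by_sub[b]
    if shared:
        G.add_edge(a, b, weight=len(shared))

# Gephi reads GEXF files and can apply the ForceAtlas2 layout interactively.
nx.write_gexf(G, "moderator_network.gexf")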

Redditors’ Explanations Of Blackout Participation

With 2,278 subreddits joining the blackout, redditors have many theories for what experiences and factors led subs to join the blackout. In the following section, I share these theories and then test one big logistic regression model that accounts for all of the theories together. In these tests, I consider 52,745 subreddits that had at least one comment in June 2015. A total of 1,342 of these subreddits joined the blackout.

The idea of blacking out had come up before. According to one moderator, blacking out was first discussed by moderators three years ago as a way to protest Gawker’s choice to publish details unmasking a reddit moderator. Although some subs banned Gawker URLs from being posted to their communities, the blackout didn’t take off. While some individual subreddits have blacked out in the intervening years, this was the first time that many subs joined together.

I tested these hypotheses with the set of (Firth) logistic regression models shown below. The final model (on the right) offers the best fit of all the models, with a McFadden R2 of 0.123, which is pretty good.

PREDICTING PARTICIPATION IN THE REDDIT BLACKOUT JULY 2015
Preliminary logistic regression results, J. Nathan Matias, Microsoft Research
Published on September 14, 2015
More info about this research: bit.ly/1V7c9i4
Contact: /u/natematias

N = top 52,745 subreddits in terms of June 2015 comments, including NSFW, for subreddits still available on July 2
Comment dataset: https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/
List of subreddits "going private": https://www.reddit.com/r/GoldTesting/wiki/amageddon 
Moderator network queried in June 2015, with gap filling in July 2015 and September 2015

==================================================================================================================
                                                                  Dependent variable:                             
                                      ----------------------------------------------------------------------------
                                                                        blackout                                  
                                         (1)        (2)        (3)        (4)        (5)        (6)        (7)    
------------------------------------------------------------------------------------------------------------------
default sub                             3.161***   1.065***   1.070***   0.814**    0.720**    0.693**    0.705**  
                                       (0.294)    (0.305)    (0.317)    (0.336)    (0.337)    (0.337)    (0.339)  
                                                                                                                  
NSFW sub                                0.179*     0.235**    0.268***   0.291***   0.288***   0.314***   0.313*** 
                                       (0.098)    (0.099)    (0.099)    (0.101)    (0.101)    (0.102)    (0.102)  
                                                                                                                  
log(comments in june 2015)                         0.263***   0.268***   0.246***   0.258***   0.256***   0.257*** 
                                                  (0.009)    (0.010)    (0.011)    (0.011)    (0.011)    (0.011)  
                                                                                                                  
moderator count                                               0.066***   0.055***   0.053***   0.051***   0.051*** 
                                                             (0.007)    (0.008)    (0.008)    (0.008)    (0.008)  
                                                                                                                  
log(comments):moderator count                                -0.006***  -0.005***  -0.005***  -0.004***  -0.004*** 
                                                             (0.001)    (0.001)    (0.001)    (0.001)    (0.001)  
                                                                                                                  
log(mod roles in other subs)                                            -0.293***  -0.328***  -0.334***  -0.332*** 
                                                                        (0.033)    (0.033)    (0.033)    (0.033)  
                                                                                                                  
log(mod roles in blackout subs)                                          2.163***   2.134***   2.134***   2.133*** 
                                                                        (0.096)    (0.096)    (0.096)    (0.096)  
                                                                                                                                                                                                                              
log(mod roles in other subs):log(mod roles in blackout subs)            -0.255***  -0.248***  -0.254***  -0.254*** 
                                                                        (0.017)    (0.017)    (0.017)    (0.017)  

log(sub isolation, by comments)                                                    -2.608***  -2.568***  -2.569*** 
                                                                                   (0.347)    (0.345)    (0.345)  
                                                                                                                  
log(metareddit participation per mod in june 2015)                                             0.100***   0.103*** 
                                                                                              (0.036)    (0.036)  
                                                                                                                  
log(mod-specific sub participation per mod in june 2015)                                                 -0.024  
                                                                                                         (0.063)  
                                                                                                                  
Constant                               -3.608***  -4.517***  -4.677***  -4.655***  -4.467***  -4.469***  -4.469*** 
                                       (0.028)    (0.050)    (0.054)    (0.058)    (0.060)    (0.060)    (0.060)  
                                                                                                                  
------------------------------------------------------------------------------------------------------------------
Observations                            52,745     52,745     52,745     52,745     52,745     52,745     52,745  
Log Likelihood                        -6,520.505 -6,171.874 -6,130.725 -5,861.099 -5,806.916 -5,803.188 -5,803.098
Akaike Inf. Crit.                     13,047.010 12,351.750 12,273.450 11,740.200 11,633.830 11,628.380 11,630.200
==================================================================================================================
Note:                                                                                  *p<0.1; **p<0.05; ***p<0.01
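The models above are Firth logistic regressions (Firth’s bias-reduced estimator, available in R through the logistf package). As a rough approximation of the final model’s structure, here is a sketch of an ordinary logistic regression in Python with statsmodels, including one interaction term and a McFadden pseudo R2 computed the usual way. The file and column names are hypothetical stand-ins, not the actual analysis files.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-subreddit table; file and column names are illustrative.
df = pd.read_csv("subreddit_dataset.csv")
df["log_comments"] = np.log1p(df["june_comments"])

# Ordinary logit as a stand-in for the Firth logistic regressions reported above.
model = smf.logit(
    "blackout ~ default_sub + nsfw_sub + log_comments * moderator_count",
    data=df,
).fit()
print(model.summary())

# McFadden pseudo R2: 1 - (model log-likelihood / null-model log-likelihood).
mcfadden_r2 = 1 - model.llf / model.llnull
print("McFadden R2:", round(mcfadden_r2, 3))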

 

The network of moderators who moderate blackout subs is the strongest predictor in this model. At a basic level, it makes sense that moderators who participated in the blackout in one subreddit might participate in another. Making sense of this kind of network relationship is a hard problem in network science, and this model doesn’t include time as a dimension, so we don’t consider which subs went dark before which others. If I had data on the time that subreddits went dark, it might be possible to better research this interesting question, like Bogdan State and Lada Adamic did with their paper on the Facebook equality meme.

Hypothesis 1: Default subs were more likely to join the blackout

In interviews, some moderators pointed out that “most of the conversation about the blackout first took place in the default mod irc channel.” Moderators of top subs described the blackout as mostly an issue concerning default or top subreddits.

This hypothesis is supported in the final model. For example, while a non-default subreddit with 4 million monthly comments had a 32.9% chance of joining the blackout (holding all else at their means), a default subreddit of the same size had a 48.6% chance of joining the blackout, on average in the population of subs.
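For readers who want to see how predicted probabilities like these are computed, here is a sketch of varying one predictor while holding the other covariates at their sample means, continuing from the hypothetical logit sketch shown after the regression table (the model and df objects and the column names are those placeholders, not the actual analysis).

import pandas as pd

def predicted_at_means(model, df, vary, values):
    """Predicted probabilities as one predictor changes, holding the others at their means."""
    covariates = ["default_sub", "nsfw_sub", "log_comments", "moderator_count"]
    base = df[[c for c in covariates if c != vary]].mean().to_dict()
    scenarios = pd.DataFrame([dict(base, **{vary: v}) for v in values])
    return model.predict(scenarios)

# e.g. compare a non-default and a default subreddit, all else held at its mean:
# predicted_at_means(model, df, "default_sub", [0, 1])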

Hypothesis 2: Subs with more comment activity were more likely to join the blackout

Moderators of large, non-default subreddits also had plenty of reasons to join the blackout, either because they also shared the need for better moderating tools, or because they had more common contact and sympathy with other moderators as a group.

Even among subreddits that declined to join the blackout, many moderators described feeling obligated to make a decision one way or another. This surprised moderators of large subreddits, who saw it as an issue for larger groups. Size was a key issue in the hundreds of smaller groups that discussed the possibility, with many wondering if they had much in common with larger subs, or whether blacking out their smaller sub would make any kind of dent in reddit’s advertising revenue.

In the final model, larger subs were more likely to join the blackout, a logarithmic relationship that is mediated by the number of moderators. When we set everything else to its mean, we can observe how this looks for subs of different sizes. In the 50th percentile, subreddits with 6 comments per month had a 1.6% chance of joining the blackout — a number that adds up with so many small subs. In the 75th percentile, subs with 46 comments a month had a 2.5% chance of joining the blackout. Subs with 1,000 comments a month had a 5.4% chance of joining, while subs with 100,000 comments a month had a 15.8% chance of joining the blackout, on average in the population of subs, holding all else constant.

Hypothesis 3: NSFW subs were more likely to join the blackout

In interviews, some moderators said that they declined to join the blackout because they saw it as something associated with support for hate speech subreddits taken down by the company in June, or with other parts of reddit they preferred not to be associated with. Default moderators denied this flatly, describing the lengths they went to in order to dissociate from hate speech communities and from sentiment against then-CEO Ellen Pao. Nevertheless, many journalists drew this connection, and moderators were worried that they might become associated with those subs despite their efforts.

Another possibility is that NSFW subs have to do more work to maintain subs that offer high quality NSFW conversations without crossing lines set by reddit and the law. Perhaps NSFW subs just have more work, so they were more likely to see the need for better tools and support from reddit.

In the final model, NSFW subs were more likely to join the blackout than non-NSFW subs. For example, while a non-default, non-NSFW subreddit with 22,800 comments had an 11.4% chance of joining the blackout (holding all else at their means), an NSFW subreddit of the same size had a 15.3% chance of joining the blackout, on average in the population of subs. Among less popular subs, a non-NSFW sub with 1,000 comments per month had a 5.4% chance of joining the blackout, while an NSFW sub of the same size had a 7.5% chance of joining, holding all else constant, on average in the population of subs.

Hypothesis 4: More isolated subs were less likely to join the blackout

In the interviews I conducted, as well as the 90 or so interviews I read on /r/subredditoftheday, moderators often contrasted their communities with “the rest of reddit.” When I asked one moderator of a support-oriented subreddit about the blackout, they mentioned that “a lot of the users didn’t really identify with the rest of reddit.” Subscribers voted against the blackout, this moderator said, describing it as “a movement we didn’t identify with.”

To test hypotheses about more isolated subs, I parsed all comments in every public subreddit in June 2015, generating an “in/out” ratio. This ratio consists of the total comments within the sub divided by the total comments made elsewhere by the sub’s commenters. A subreddit whose users stayed in one sub would have a ratio above 1, while a subreddit whose users commented widely would have a ratio below 1. I tested other measures, such as the average of per-user in/out ratios, but the overall in/out ratio seemed the best measure.
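As a sketch of how this ratio might be computed from a month of comment data (one row per comment, with hypothetical author and subreddit columns), the function below divides the comments made inside a sub by the comments its commenters made everywhere else:

import pandas as pd

# Hypothetical comment table: one row per comment, with author and subreddit columns.
comments = pd.read_csv("june_2015_comments.csv")

def in_out_ratio(sub, comments):
    """Comments inside the sub divided by comments its commenters made in other subs."""
    commenters = set(comments.loc[comments["subreddit"] == sub, "author"])
    by_commenters = comments[comments["author"].isin(commenters)]
    inside = int((by_commenters["subreddit"] == sub).sum())
    outside = int((by_commenters["subreddit"] != sub).sum())
    return inside / outside if outside else float("inf")

print(in_out_ratio("example_a", comments))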

In the final model, more isolated subs were less likely to join the blackout, on a logarithmic scale. Most subreddits’ commenters participate actively elsewhere on reddit, at a mean in/out ratio of 0.24. That means that on average, a subreddit’s participants make 4 times more comments outside a sub than within it. At this level, holding everything else at their means, a subreddit with 1,000 comments a month had a 4.0% chance of joining the blackout. A similarly-sized subreddit whose users made half of their comments within the sub (an in/out ratio of 1.0) had just a 1% chance of joining the blackout. Very isolated subs whose users commented twice as much in-sub as elsewhere had a 0.3% chance of joining the blackout, on average in the population of subs, holding all else constant.

Hypothesis 5: Subs with more moderators were more likely to join the blackout

This one was my hypothesis, based on a variety of interview details. Subs with more moderators tend to have more complex arrangements for moderating and tend to encounter limitations in mod tools. Subs with more mods also have more people around, so their chances of spotting the blackout in time to participate were also probably higher. On the other hand, subs with more activity tend to have more moderators, so it’s important to control for the relationship between mod count and sub activity.

I was only partly right. In the final model, subs with more moderators were indeed slightly more likely to join the blackout, but the relationship is very small and is mediated by the number of comments. For a sub with 1,000 comments per month, with everything else at its average, a subreddit with 3 moderators (the average) had a 5.4% chance of joining the blackout. A subreddit with 8 moderators had a 6% chance of joining the blackout, on average in the population of subs.

Hypothesis 6: Subs with admins as mods were more (or less) likely to join the blackout

I heard several theories about admins. During the blackout, some redditors claimed that admins were preventing subs from going private. In interviews, moderators tended to voice the opposite opinion. They argued that subs with admin contact were joining the blackout in order to send a message to the company, urging it to pay more attention to employees who advocated for moderator interests. Moderators at smaller subs said, “we felt 100% independent from admin assistance so it really wasn’t our fight.”

None of my hypothesis tests showed a statistically significant relationship, in either direction, between current or past admin roles as moderators and participation in the blackout. For that reason, I omit this predictor from my final model.

Hypothesis 7: Subs with moderators who moderated other subs were more likely to join the blackout

I had been wondering whether subs whose moderators held mod roles elsewhere on reddit would be more likely to join the blackout, perhaps because those moderators had greater “solidarity” with other subreddits, or because they were more likely to find out about the blackout.

In the final model, the reverse is supported. Subs that shared moderators with other subs were actually less likely to join the blackout, a relationship that is mediated by the number of moderators who also modded blackout subs. Holding blackout sub participation constant, a sub with 1,000 comments per month and 3 moderator roles shared with other subs had a 5.7% chance of joining the blackout, while a more connected sub with 6 shared moderator roles (in the 4th quantile) had a 4.2% chance of joining the blackout, on average in the population of subs, holding all else constant.

Hypothesis 8: Subreddits with mods who also moderate other blackout subs were more likely to join the blackout.

This hypothesis is also a carry-over from my previous analysis, where I found a statistically significant relationship. Note that making sense of this kind of network relationship is a hard problem in network science, and that we can’t use this to test “influence.”

In the final model, subreddits whose mods held roles in other blackout subs were more likely to join the blackout, a relationship on a log scale that is mediated by the number of moderator roles shared with other subs more generally. After removing moderator bots, 19% of subs in the sample share at least one moderator with a blackout sub. A sub with 1,000 comments per month and no moderators overlapping with blackout subs had a 3.2% chance of joining the blackout, while a sub with one overlapping moderator had an 11.1% chance, and a sub with 2 overlapping moderators had a 21.1% chance. A sub with 6 moderators overlapping with blackout subs had a 57.2% chance of joining the blackout.
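
To make the two co-moderation measures concrete (moderator roles shared with other subs for Hypothesis 7, and overlap with blackout subs for Hypothesis 8), here is a small sketch with made-up data; the moderator lists, blackout set, and bot list are illustrative assumptions.

    # Sketch of both co-moderation measures, using made-up illustrative data.
    from collections import defaultdict

    mods_by_sub = {
        "exampleSubA": {"modA", "modB", "AutoModerator"},
        "exampleSubB": {"modB", "modC"},
    }
    blackout_subs = {"exampleSubB"}   # subs that went private (assumed)
    bots = {"AutoModerator"}          # moderator bots to exclude

    # Invert the mapping: which subs does each (human) moderator moderate?
    subs_by_mod = defaultdict(set)
    for sub, mods in mods_by_sub.items():
        for mod in mods - bots:
            subs_by_mod[mod].add(sub)

    def shared_mod_roles(sub):
        """Moderator roles this sub shares with any other sub (Hypothesis 7)."""
        return sum(len(subs_by_mod[m] - {sub}) for m in mods_by_sub[sub] - bots)

    def blackout_mod_overlap(sub):
        """Moderators of this sub who also mod a blackout sub (Hypothesis 8)."""
        return sum(
            1
            for m in mods_by_sub[sub] - bots
            if subs_by_mod[m] & (blackout_subs - {sub})
        )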

I tend to see the network of co-moderation as a control variable. We can expect that moderators who joined the blackout would be likely to support it across the many subs they moderate. By accounting for this in the model, we get a clearer picture of the other factors that were important.

Hypothesis 9: Subs with moderators who participate in metareddits were more likely to join the blackout

In interviews, several moderators described learning about the blackout from “metareddits,” which cover major events on the site and which mostly stayed up during the blackout. Just as we might expect more isolated subs to stay out of the blackout, we might expect moderators who get involved in reddit-wide meta-discussion to join it. I took my list of metareddits from this TheoryOfReddit wiki post.

In the final model, subs with moderators who participate in metareddits were more likely to join the blackout, on a logarithmic scale. Most moderators on the site do not participate in metareddits. A sub of 1,000 comments per month with no metareddit participation by its moderators had a 5.3% chance of joining the blackout, while a similar sub whose moderators made 5 comments on any metareddit per month had a 6.3% chance of joining the blackout.
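
Here is a sketch of how such a measure could be assembled; the metareddit list below is illustrative rather than the actual list from the TheoryOfReddit wiki, and the comment table and moderator mapping are placeholders.

    # Sketch: monthly comments by a sub's moderators in metareddits. The
    # metareddit list, comment table, and moderator mapping are assumptions.
    import pandas as pd

    metareddits = {"TheoryOfReddit", "SubredditDrama"}       # illustrative
    comments = pd.read_parquet("comments_2015_06.parquet")   # assumed input
    mods_by_sub = {"exampleSub": {"modA", "modB"}}            # assumed mapping

    def metareddit_comments(sub):
        """Comments the sub's moderators made in any metareddit that month."""
        mods = mods_by_sub[sub]
        mask = comments["subreddit"].isin(metareddits) & comments["author"].isin(mods)
        return int(mask.sum())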

Hypothesis 10: Subs with mods participating in moderator-focused subs were more likely to join the blackout

Although key moderator subs like /r/defaultmods and /r/modtalk are private and inaccessible to me, I could still test a “solidarity” theory. Perhaps moderators who participate in mod-specific subs, who have helped and been helped by other mods, would be more likely to join the blackout?

Although this predictor is significant in a single-covariate model, when you account for all of the other factors, mod participation in moderator-focused subs is not a significant predictor of participation in the blackout.

This surprises me. I wonder: since moderator-specific subs tend to have low volume, one month of comments may just not be enough to get a good sense of which moderators participate in those subs. Also, this dataset doesn’t include IRC discussions (nor will it ever), where moderators seem mostly to hang out with and help each other. But from the evidence I have, it looks like help from moderator-focused subs played no part in swaying moderators to join the blackout.

So, how DID solidarity develop in the blackout?

The question is still open, but from these statistical models, it seems clear that factors beyond moderator workload played a big role, even when controlling for mods of multiple subs that joined the blackout.

In further analysis in the next week, I’m hoping to include:

  • Activity by mods in each sub (comments, deletions)
  • Comment karma, as another measure of activity (still making sense of the numbers to see if they are useful here)
  • The complexity of the subreddit, as measured by things in the sidebar (possibly)

Building Statistical Models of Online Behavior through Qualitative Research

The process of collaborating with redditors on my statistical models has been wonderful. As I continue this work, I’m starting to think more and more about the idea of participatory hypothesis testing, in parallel with work we do at MIT around Freire-inflected practices of “popular data.” If you’ve seen other examples of this kind of thing, do send them my way!

Introducing the 2015 MSR SMC PhD Interns!

Well, after a truly exciting spell of reviewing an AMAZING set of applications for our 2015 PhD Internship Program, we had the absolutely excruciating task of selecting just a few from the pool (note: this is our Collective’s least favorite part of the process).

Without further ado, we are pleased to announce our 2015 Microsoft Research SMC PhD interns:

At MSR New England:

Aleena Chia

Aleena Chia is a Ph.D. Candidate in Communication and Culture at Indiana University. Her ethnographic research investigates the affective politics and moral economics of participatory culture, in the context of digital and live-action game worlds. She is a recipient of the Wenner-Gren Dissertation Fieldwork grant and has published work in American Behavioral Scientist. Aleena will be working with Mary L. Gray, researching connections between consumer protests, modularity of consumer labor, and portability of compensatory assets in digital and live-action gaming communities.

Stacy Blasiola

Stacy Blasiola is a Ph.D. Candidate in the Department of Communication at the University of Illinois at Chicago and also holds an M.A. in Media Studies from the University of Wisconsin at Milwaukee. Stacy uses a mixed methods approach to study the social impacts of algorithms. Using big data methods, she examines how news events appear in newsfeeds, and using qualitative methods she investigates how the people who use digital technologies understand, negotiate, and challenge the algorithms that present digital information. As a recipient of a National Science Foundation IGERT Fellowship in Electronic Security and Privacy, her work includes approaching algorithms and the databases that enable them from a privacy perspective. Stacy will be working with Nancy Baym and Tarleton Gillespie on a project that analyzes the discursive work of Facebook with regard to its social newsfeed algorithm.

 

J. Nathan Matias

Nathan Matias is a Ph.D. Student at the MIT Media Lab Center for Civic Media, a fellow at the Berkman Center for Internet and Society, and a DERP Institute fellow. Nathan researches technology for civic cooperation, activism, and expression through qualitative action research with communities, data analysis, software design, and field experiments. Most recently, Nathan has been conducting large-scale studies and interventions on the effects of gender bias, online harassment, gratitude, and peer thanks in social media, corporations, and creative communities like Wikipedia. Nathan was an MSR Fuse Labs intern in 2013 with Andrés Monroy-Hernández, where he designed NewsPad, a collaborative technology for neighborhood blogging. Winner of the ACM’s Nelson Prize, Nathan has published data journalism, technology criticism, and literary writing for the Atlantic, the Guardian, and PBS. Before MIT, he worked at technology startups Texperts and SwiftKey, whose products have reached over a hundred million people worldwide. At MSR, Nathan will be working with Tarleton Gillespie and Mary L. Gray, studying the professionalization of digital labor among community managers and safety teams in civic, microwork, and peer economy platforms. He will also be writing about ways that marginalized communities use data and code to respond to and reshape their experience of harassment and hate speech online.

 

At MSR New York City:

Ifeoma Ajunwa

Ifeoma Ajunwa is a Paul F. Lazarsfeld Fellow in the Sociology Department at Columbia University. She received her MPhil in Sociology from Columbia in 2012. She was the recipient of the AAUW Selected Professions Fellowship in law school, after which she practiced business law, international law, and intellectual property law. She has also conducted research for organizations such as the NAACP, the United Nations Human Rights Council, the ACLU of NY (the NYCLU), and UNESCO. Her independent research before graduate school includes a pilot study at Stanford Law School, where she interrogated the link between stereotype threat and the intersecting dynamics of gender, race, and economic class in relation to Bar exam preparation and passage. Ifeoma’s writing has been published in the NY Times and the Huffington Post, and she has been interviewed for Uptown Radio in NYC. She will be working with Kate Crawford at MSR-NYC on data discrimination.