Night modes and the new hue of our screens

Information & Culture just published (paywall; or free pre-print) an article I wrote about “night modes,” in which I try to untangle the history of light, screens, sleep loss, and circadian research. If we navigate our lives enmeshed with technologies and their attendant harms, I wanted to know how we make sense of our orientation to the things meant to prevent those harms. To think, in other words, of the constellation of people and things that are meant to ward off, stave off, or otherwise mitigate the endemic effects of using technology.

If you’re not familiar with “night modes”: in recent years, hardware manufacturers and software companies have introduced new device modes that shift the color temperature of screens during evening hours. To put it another way: your phone turns orange at night now. Perhaps you already use f.lux, or Apple’s “Night Shift,” or “Twilight” for Android.

All of these software interventions respond to the belief that untimely light exposure close to bedtime results in less sleep, or less restful sleep. Research into human circadian rhythms has had a powerful influence on how we think and talk about healthy technology use. And recent discoveries in the human response to light, as you’ll learn in the article, are based on a tiny subset of blind persons who lack rods and cones. As such, it’s part of a longer history of using research on persons with disabilities to shape and optimize communication technologies – a historical pattern that the media and disability studies scholar Mara Mills has documented throughout her career.



How machine learning can amplify or remove gender stereotypes

TL;DR: It’s easier to remove gender biases from machine learning algorithms than from people.

In a recent paper, Saligrama, Bolukbasi, Chang, Zou, and I stumbled across some good and bad news about Word Embeddings. Word Embeddings are a wildly popular tool of the trade among AI researchers. They can be used to solve analogy puzzles. For instance, for man:king :: woman:x, AI researchers celebrate when the computer outputs x = queen (normal people are surprised that such a seemingly trivial puzzle could challenge a computer). Inspired by our social scientist colleagues (esp. Nancy Baym, Tarleton Gillespie and Mary Gray), we dug a little deeper and wrote a short program that found the “best” he:x :: she:y analogies, where best is determined according to the embedding of common words and phrases in the most popular publicly available Word Embedding (trained using word2vec on 100 billion words from Google News articles).
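To make the arithmetic concrete, here is a minimal sketch of how such an analogy query runs in practice, using the gensim library and the pretrained Google News embedding mentioned above (the file name below is the standard distribution name, but treat the path as an assumption about your setup):

```python
# Minimal sketch: answering man:king :: woman:x with word vectors.
# Assumes the pretrained Google News embedding file is available locally.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# The analogy is answered by finding the word whose vector is nearest
# to (king - man + woman), excluding the query words themselves.
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically [('queen', ...)]
```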

The program output a mixture of x-y pairs ranging from definitional, like brother-sister (i.e. he is to brother as she is to sister), to stereotypical, like blue-pink or guitarist-vocalist, to blatantly sexist, like surgeon-nurse, computer programmer-homemaker, and brilliant-lovely. There were also some humorous ones like he is to kidney stone as she is to pregnancy, sausages-buns, and WTF-OMG. For more analogies and an explanation of the geometry behind them, read more below or see our paper, Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.

Bad news: the straightforward application of Word Embeddings can inadvertently *amplify* biases. These Word Embeddings are being used in a growing number of applications. Among the countless papers that discuss using Word Embeddings for searching the web, processing resumes, building chatbots, and so on, hundreds of articles mention the king-queen analogy, yet none of them notice the blatant sexism also present.

Say someone searches for computer programmer. A nice paper has shown how to improve search results using the knowledge in Word Embeddings that the term computer programmer is related to terms like javascript. Using this, search results containing these related terms can bubble up, and the average results of such a system were shown to be statistically more relevant to the query. However, it also happens that the name John has a stronger association with programmer than the name Mary. This means that, between two otherwise identical results differing only in the names John/Mary, John’s would be ranked first. This would *amplify* the statistical bias that most programmers are male by moving the few female programmers even lower in the search results.
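As a hypothetical illustration (reusing the `vectors` object from the sketch above; the exact scores depend on the embedding), the name-association gap is just a pair of similarity lookups:

```python
# Compare how strongly each name associates with "programmer".
# A ranking system that scores results by embedding similarity
# would inherit whatever gap appears here.
for name in ["John", "Mary"]:
    print(name, vectors.similarity("programmer", name))
```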

Now you might think that we could solve this problem by simply removing names from the embedding, but subtle indirect biases would remain: the term computer programmer is also closer to baseball than to gymnastics, so, as you can imagine, removing names wouldn’t entirely solve the problem.

Good news: biases can easily be reduced or removed from word embeddings. At the touch of a button, we can remove all gender associations between professions, names, and sports in a word embedding. In fact, the word embedding itself captures these concepts, so you only have to give a few examples of the kinds of associations you want to keep and the kinds you want to remove, and the machine learning algorithms do the rest. Think about how much easier this is for a computer than for a human. Both men and women have been shown to hold implicit gender associations. And the Word Embeddings also surface shocking gender associations implicit in the text on which they were trained.
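Here is a simplified sketch of the core “neutralize” step. The paper derives the gender direction from several definitional pairs via PCA and adds an “equalize” step for pairs like (he, she); this version approximates the direction with a single she/he pair for clarity, again reusing `vectors` from the earlier sketch:

```python
import numpy as np

# Approximate the gender direction with a single definitional pair.
g = vectors["she"] - vectors["he"]
g = g / np.linalg.norm(g)

def neutralize(word):
    """Project out the component of a word's vector along the gender direction."""
    v = vectors[word]
    v_debiased = v - np.dot(v, g) * g
    return v_debiased / np.linalg.norm(v_debiased)

# After neutralizing, a profession word carries no component along g:
print(np.dot(neutralize("programmer"), g))  # ~0.0
```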

People can try to ignore these associations when doing things like evaluating candidates for hiring, but it is a constant uphill battle. A computer, on the other hand, can be programmed once to remove associations between different sets of words, and it will carry on with its work with ease. Of course, we machine learning researchers still need to be careful — depending on the application, biases can creep in other ways. I should also mention that we are providing tools that others can use to define, remove, negate, or even amplify biases as they choose for their applications.

As machine learning and AI become ever more ubiquitous, there have been growing public discussions about the social benefits and possible dangers of AI. Our research gives insight into a concrete example where a popular, unsupervised machine learning algorithm, when trained over a large corpus of text, reflects and crystallizes the stereotypes in the data and in our society. Widespread adoption of such algorithms can greatly amplify such stereotypes, with damaging consequences. Our work highlights the importance of quantifying and understanding such biases in machine learning, and also how machine learning algorithms may be used to reduce bias.

Future work: This work focused on gender biases, specifically male-female biases, but we are now working on techniques for identifying and removing other sorts of biases, such as racial biases, from Word Embeddings.

#trendingistrending: when algorithms become culture

I wanted to share a new essay, “#Trendingistrending: When Algorithms Become Culture,” that I’ve just completed for a forthcoming Routledge anthology called Algorithmic Cultures: Essays on Meaning, Performance and New Technologies, edited by Robert Seyfert and Jonathan Roberge. My aim is to focus on the various “trending algorithms” that populate social media platforms, consider what they do as a set, and then connect them to a broader history of metrics used in popular media to both assess audience tastes and portray them back to that audience, as a cultural claim in its own right and as a form of advertising.

The essay is meant to extend the idea of “calculated publics” I first discussed here and the concerns that animated this paper. But more broadly, I hope it pushes us to think about algorithms not as external forces on the flow of popular culture, but increasingly as elements of popular culture themselves: something we discuss as culturally relevant, something we turn to face so as to participate in culture in particular ways. It also says a bit more about how we tend to think and talk about “algorithms” in this scholarly discussion, something I take up further here.

I hope it’s interesting, and I really welcome your feedback. I already see places where I’ve not done the issue justice: I should connect the argument more to discussions of financial metrics, like credit ratings, as another moment when institutions have reason to turn such measures back as meaningful claims. I found Jeremy Morris’s excellent essay on what he calls “infomediaries” (journal; academia.edu) late in my process, so while I do gesture to it, it could have informed my thinking even more. There are a dozen other things I wanted to say, and the essay is already a little overstuffed.

I do have some opportunity to make specific changes before it goes to press, so I’d love to hear any suggestions, if you’re inclined to read it.

See you at IR 16!

The Social Media Collective is showing up in force at Internet Research 16 in Phoenix, Arizona, starting next week. Along with many friends of the SMC, there will be some of our permanent researchers (Nancy Baym, Tarleton Gillespie), postdocs current and past (Kevin Driscoll, Lana Swartz, Mike Ananny), past & present interns (Stacy Blasiola, Brittany Fiore-Gartland, Germaine Halegoua, Tero Karppi, J. Nathan Matias, Kat Tiidenberg, Shawn Walker, Nick Seaver), past and future Visiting Researchers (Jean Burgess, Annette Markham, Susanna Paasonen, Hector Postigo, TL Taylor), and our past Research Assistants (Kate Miltner and Alex Leavitt). Hope to see you there!

Below is a list of papers and panels they will be presenting:

——————————————————————————————————————
WEDNESDAY, 21 OCT

——————————————————————————————————————

Workshop:

Digital Methods in Internet Research

Axel Bruns, Jean Burgess, Tim Highfield, Tama Leaver, Ben Light, Patrik Wikstrom

——————————————————————————————————————

THURSDAY, 22 OCT

——————————————————————————————————————

Beyond Big Bird: The Role of Humor in the Aggregate Interpretation of Live-Tweeted Events

11:00 am – 12:20 pm

Alex Leavitt, Kristen Guth, Kevin Driscoll, François Bar

ROUNDTABLE: Teaching Ethics in Big Data and Social Media: Bridging Theory and Practice in the Classroom

11:00 am – 12:20 pm

Shawn Thomas Walker, Anna Lauren Hoffmann, Jim Thatcher

You [Don’t] Gotta Pay the Troll Toll: A Transaction Costs Model of Online Harassment

1:30 pm – 2:50 pm

Stacy Blasiola

PANEL: Facebook’s Futures

1:30 pm – 2:50 pm

Tero Jukka Karppi, Andrew Richard Schrock, Andrew Herman, Fenwick McKelvey

ROUNDTABLE: Unpacking the Black Box of Qualitative Analysis: Exploring How the Imaginaries of Digital Inquiry are Constructed through Everyday Research Practice

3:10 pm – 4:30 pm

Annette N Markham, Nancy K. Baym, T.L. Taylor, Lynn Schofield Clark, Jill Walker Rettberg

ROUNDTABLE: It’s Really About Ethics in Games Research: Reflections on #GamerGate

3:10 pm – 4:30 pm

Shira Chess, Adrienne Shaw, Adrienne Massanari, Christopher Paul, Kate Miltner, Casey O’Donnell

The Role of Breakdown in Imagining Big Data: Impediment to Insight to Innovation

3:10 pm – 4:30 pm

Anissa Tanweer, Brittany Fiore-Gartland, Cecilia Aragon

***The Nancy Baym Book Award will be presented to Robert Gehl for Reverse Engineering Social Media at the banquet on Thursday night

——————————————————————————————————————

FRIDAY, 23 OCT

——————————————————————————————————————

Singing Data Over the Phone: A Social History of the Modem

9:00 am – 10:20 am

Kevin Driscoll

PANEL: Karma Policing: Re-imagining what we can (and can’t) post on the Internet

9:00 am – 10:20 am

Michael Burnam-Fink, Katrin Tiidenberg, John Carter McKnight, Cindy Tekobbe

ROUNDTABLE: Real and Imagined Boundaries: Building Connections Between Social Justice Activists and Internet Researchers

10:40 am – 12:00 pm

Catherine Knight Steele, Andre Brock, Annette Markham


ROUNDTABLE: Private Platforms under Public Pressure

10:40 am – 12:00 pm

Tarleton Gillespie, Mike Ananny, Christian Sandvig & J. Nathan Matias

ROUNDTABLE: Histories of Hating

10:40 am – 12:00 pm

Tamara Shepherd, Sam Srauy, Kevin Driscoll, Lana Swartz, Hector Postigo

The Challenges of Weibo for Data-Driven Digital Media Research

10:40 am – 12:00 pm

Jing Zeng, Jean Burgess, Axel Bruns

PANEL: Economies of the Internet II: Affect

1:00 pm – 2:20 pm

Sharif Mowlabocus, Nancy Baym, Susanna Paasonen, Dylan Wittkower, Kylie Jarrett

PANEL: Internet Research Ethics: New Contexts, New Challenges – New (Re)solutions?

Charles Melvin Ess, Annette Markham, Mark D. Johns, Yukari Seko, Katrin Tiidenberg, Camilla Granholm, Ylva Hård af Segerstad, Dick Kasperowski

Parks and Recommendation: Spatial Imaginaries in Algorithmic Systems

2:40 pm – 4:00 pm

Nick Seaver

Re-placeing the City: Digital Navigation Technologies and the Experience of Urban Place

2:40 pm – 4:00 pm

Germaine R. Halegoua


ROUNDTABLE: Compromised Data? Research on Social media platforms

4:20 pm – 5:40 pm

Greg Elmer, Ganaele Langlois, Joanna Redden, Axel Bruns, Jean Burgess, Robert Gehl

FISHBOWL: Exploring “Internet Culture”: Discourses, Boundaries, and Implications

4:20 pm – 5:40 pm

Kate Miltner, Ryan M. Milner, Whitney Phillips, Megan Sapnar Ankerson

——————————————————————————————————————

SATURDAY, 24 OCT

——————————————————————————————————————

FISHBOWL: The Quantified Imaginary

9:00 am – 10:20 am

Lee Humphreys, Jean Burgess, Joseph Turow

Imaginary Inactivity and the Share Button

9:00 am – 10:20 am

Airi-Alina Allaste, Katrin Tiidenberg

ROUNDTABLE: ‘Black Box’ Data and ‘Flying Furball’ Networks: Challenges and Opportunities in Doing and Communicating Social Media Analytics

1:30 pm – 2:50 pm

Axel Bruns, Anders Olof Larsson, Katrin Weller

ROUNDTABLE: Ethics and Social Justice Meeting: Discussing AOIR Committees and Mission

3:10 pm – 4:30 pm

Annette N Markham, Jenny Stromer Galley, Catherine Knight Steele

Big Data, Context Cultures

The latest issue of Media, Culture & Society features an open-access discussion section responding to SMC all-stars danah boyd and Kate Crawford‘s “Critical Questions for Big Data.” Though the article is only a few years old, it’s been very influential, and a lot has happened since it came out, so editors Aswin Punathambekar and Anastasia Kavada commissioned a few responses from scholars to delve deeper into danah and Kate’s original provocations.

The section features pieces by Anita Chan on big data and inclusion, André Brock on “deeper data,” Jack Qiu on access and ethics, Zizi Papacharissi on digital orality, and one by me, Nick Seaver, on varying understandings of “context” among critics and practitioners of big data. All of those, plus an introduction from the editors, are open-access, so download away!

My piece, titled “The nice thing about context is that everyone has it,” draws on my research into the development of algorithmic music recommenders, which I’m building on during my time with the Social Media Collective this fall. Here’s the abstract:

In their ‘Critical Questions for Big Data’, danah boyd and Kate Crawford warn: ‘Taken out of context, Big Data loses its meaning’. In this short commentary, I contextualize this claim about context. The idea that context is crucial to meaning is shared across a wide range of disciplines, including the field of ‘context-aware’ recommender systems. These personalization systems attempt to take a user’s context into account in order to make better, more useful, more meaningful recommendations. How are we to square boyd and Crawford’s warning with the growth of big data applications that are centrally concerned with something they call ‘context’? I suggest that the importance of context is uncontroversial; the controversy lies in determining what context is. Drawing on the work of cultural and linguistic anthropologists, I argue that context is constructed by the methods used to apprehend it. For the developers of ‘context-aware’ recommender systems, context is typically operationalized as a set of sensor readings associated with a user’s activity. For critics like boyd and Crawford, context is that unquantified remainder that haunts mathematical models, making numbers that appear to be identical actually different from each other. These understandings of context seem to be incompatible, and their variability points to the importance of identifying and studying ‘context cultures’–ways of producing context that vary in goals and techniques, but which agree that context is key to data’s significance. To do otherwise would be to take these contextualizations out of context.

Co-creation and Algorithmic Self-Determination: A study of player feedback on game analytics in EVE Online

We are happy to share SMC intern Aleena Chia’s presentation of her summer project, titled “Co-creation and Algorithmic Self-Determination: A study of player feedback on game analytics in EVE Online”.

Aleena’s project summary and the videos of her presentation are below:

Digital games are always already information systems designed to respond to players’ inputs with meaningful feedback (Salen and Zimmerman 2004). These feedback loops constitute a form of algorithmic surveillance that has been repurposed by online game companies to gather information about player behavior for consumer research (O’Donnell 2014). Research on player behavior gathered from game clients constitutes a branch of consumer research known as game analytics (Seif et al 2013).[1] In conjunction with established channels of customer feedback such as player forums, surveys, polls, and focus groups, game analytics informs companies’ adjustments and augmentations to their games (Kline et al 2005). EVE Online is a Massively Multiplayer Online Game (MMOG) that uses these research methods in a distinct configuration. The game’s developers assemble a democratically elected council of players tasked with the filtration of player interests from forums to inform their (1) agenda setting and (2) contextualization of game analytics in the planning and implementation of adjustments and augmentations.

This study investigates the council’s agenda setting and contextualization functions as a form of co-creation that draws players into processes of game development, as interlocutors in consumer research. This contrasts with forms of co-creation that emphasize consumers’ contributions to the production and circulation of media content and experiences (Banks 2013). By qualitatively analyzing meeting minutes between EVE Online’s player council and developers over seven years, this study suggests that co-creative consumer research draws from imaginaries of player governance caught between the twin desires of corporate efficiency and democratic efficacy. These desires are darned together through a quantitative public sphere (Peters 2001) that is enabled and eclipsed by game analytics. In other words, algorithmic techniques facilitate collective self-knowledge that players seek for co-creative deliberation; these same techniques also short-circuit deliberation through claims of neutrality, immediacy, and efficiency.

The significance of this study lies in its analysis of a consumer public’s (Arvidsson 2013) ambivalent struggle for algorithmic self-determination – the determination by users through deliberative means of how their aggregated acts should be translated by algorithms into collective will. This is not primarily a struggle of consumers against corporations; nor of political principles against capitalist imperatives; nor of aggregated numbers against individual voices. It is a struggle within communicative democracy for efficiency and efficacy (Anderson 2011). It is also a struggle for communicative democracy within corporate enclosures. These struggles grind on productive contradictions that fuel the co-creative enterprise. However, while the founding vision of co-creation gestured towards a win-win state, this analysis concludes that algorithmic self-determination prioritizes efficacy over efficiency, process over product. These commitments are best served by media companies oriented towards user retention rather than recruitment, business sustainability rather than growth, and that are flexible enough to slow down their co-creative processes.

[1] Seif et al (2013) maintain that player behavior data is an important component of game analytics, which includes the statistical analysis, predictive modeling, optimization, and forecasting of all forms of data for decision making in game development. Other data include revenue, technical performance, and organizational process metrics.

(Video 1)

(Video 2)

(Video 3)

(Video 4)

Should You Boycott Traditional Journals?

(Or, Should I Stay or Should I Go?)

Is it time to boycott “traditional” scholarly publishing? Perhaps you are an academic researcher, just like me. Perhaps, just like me, you think that there are a lot of exciting developments in scholarly publishing thanks to the Internet. And you want to support them. And you also want people to read your research. But you also still need to be sure that your publication venues are held in high regard.

Or maybe you just receive research funding that is subject to new open access requirements.

Ask me about OPEN ACCESS

Academia is a funny place. We are supposedly self-governing. So if we don’t like how our scholarly communications are organized, we should be able to fix this ourselves. If we are dissatisfied with the journal system, we’re going to have to do something about it. The question of whether or not it is now time to eschew closed-access journals comes up a fair amount among my peers.

It comes up often enough that a group of us at Michigan decided to write an article on the topic. Here’s the article. It just came out yesterday (open access, of course):

Carl Lagoze, Paul Edwards, Christian Sandvig, & Jean-Christophe Plantin. (2015). Should I Stay or Should I Go? Alternative Infrastructures in Scholarly Publishing. International Journal of Communication 9: 1072-1081.

The article is intended for those who want some help figuring out the answer to the question the article title poses: Should I stay or should I go? It’s meant to help you decipher the unstable landscape of scholarly publishing these days. (Note that we restrict our topic to journal publishing.)

Researching it was a lot of fun, and I learned quite a bit about how scholarly communication works.

  • It contains a mention of the first journal. Yes, the first one that we would recognize as a journal in today’s terms. It’s Philosophical Transactions, published by the Royal Society of London. It’s on Volume 373.
  • It should teach you about some of the recent goings-on in this area. Do you know what a green repository is? What about an overlay journal? Or the “serials crisis”?
  • It addresses a question I’ve had for a while: What the heck are those arXiv people up to? If it’s so great, why hasn’t it spread to all disciplines?
  • There’s some fun discussion of influential experiments in scholarly publishing. Remember the daring foundation of the Electronic Journal of Communication? Vectors? Were you around way-back-in-the-day when the pioneering, Web-based JCMC looked like this hot mess below? Little did we know that we were actually looking at the future. (*)


(JCMC circa 1995)

(*): Unless we were looking at the Gopher version, in which case we were not looking at the future.

Ultimately, we adapt a framework from Hirschman (exit, voice, and loyalty) that we found to be an aid to our thinking about what is going on today in scholarly communication. Feel free to play the following song on a loop as you read it.

(This post has been cross-posted on multicast.)

New Report Released: Few Legal Remedies for Victims of Online Harassment

For the last year, I’ve been working with Fordham’s Center on Law and Information Policy to research what legal remedies are available to victims of online harassment. We investigated cyberharassment law, cyberstalking law, defamation law, hate speech, and cyberbullying statutes. We found that although online harassment and hateful speech are a significant problem, there are few legal remedies for victims.

Report Highlights

  • Section 230 of the Communications Decency Act provides internet service providers (including social media sites, blog hosting companies, etc.) with broad immunity from liability for user-generated content.
  • Given limited resources, law enforcement personnel prioritize other cases over prosecuting internet-related issues.
  • Similarly, there are often state jurisdictional issues which make successful prosecution difficult, as victim and perpetrator are often in different states, if not different countries.
  • Internet speech is protected under the First Amendment. Thus, state laws regarding online speech are written to comply with First Amendment protections, requiring fighting words, true threats, or obscene speech (which are not protected). This generally means that most offensive or obnoxious online comments are protected speech.
  • For an online statement to be defamatory, it must be provably false rather than a matter of opinion. This means that the specifics of the language used in the case are extremely important.
  • While there are state laws for harassment and defamation, few cases have resulted in successful prosecution. The most successful legal tactic from a practical standpoint has been using a defamation or harassment lawsuit to reveal the identities of anonymous perpetrators through a subpoena to ISPs, then settling. During the course of our research, we were unable to find many published opinions in which perpetrators have faced criminal penalties, which suggests that the cases are not prosecuted, that they are not appealed when they are prosecuted, or that the victim settles out of court with the perpetrator and stops pressing charges.
  • In offline contexts, hate speech laws seem to be applied by courts only as penalty enhancements; we could locate no online-specific hate speech laws.
  • Given this landscape, the problem of online harassment and hateful speech is unlikely to be solved solely by victims using existing laws; law should be utilized in combination with other practical solutions.

The objective of the project is to provide a resource that may be used by the general public, and in particular, researchers, legal practitioners, Internet community moderators, and victims of harassment and hateful speech online. If you’re working on online harassment, cyberbullying, revenge porn, or a host of related issues, we hope this will be of service to you.

Also, read it to find out the difference between calling someone a “bitch” and a “skank” online, what a “true threat” is, and why students are probably most at risk of being prosecuted for online speech acts.

Download the report from SSRN


Whoo.ly: Facilitating Information Seeking For Hyperlocal Communities Using Social Media

You hear sirens blaring in your neighborhood and, naturally, you are curious about the cause of the commotion. Your first reaction might be to turn on the local TV news or go online and check the local newspaper. Unfortunately, unless the issue is of significant importance, your initial search of these media will probably be fruitless. But, if you turn to social media, you are likely to find other neighbors reporting relevant information, giving firsthand accounts, or, at the very least, wondering what is going on as well.

Social media allows people to quickly spread information and, in urban environments, its presence is ubiquitous. However, social media is also noisy, chaotic, and hard to understand for those unfamiliar with, for example, the intricacies of hashtags and social media lingo. It should be no surprise that, despite the popularity of social media, people are still using TV and newspapers as their main sources for local information, while social media is just beginning to emerge as a useful information source. We created Whoo.ly to address this issue.


Addressing Human Trafficking: Guidelines for Technological Interventions

Two years ago, when I started working on issues related to human trafficking and technology, I was frustrated by how few people recognized the potential of technology to help address the commercial sexual exploitation of children. With the help of a few colleagues at Microsoft Research, I crafted a framework document to think through the intersection of technology and trafficking. After talking with Mark Latonero at USC (who has been writing brilliant reports on technology and human trafficking), I teamed up with folks at MSR Connections and Microsoft’s Digital Crimes Unit to help fund research in this space. Over the last year, I’ve been delighted to watch a rich scholarly community emerge that takes seriously the importance of data for understanding and intervening in human trafficking issues that involve technology.

Meanwhile, to my delight, technologists have started to recognize that they can develop innovative systems to help address human trafficking. NGOs have started working with computer scientists, companies have started working with law enforcement, and the White House has started bringing together technologists, domain experts, and policy makers to imagine how technology can be used to combat human trafficking. The potential of these initiatives tickles me pink.

Watching this unfold, one thing that I struggle with is that there’s often a disconnect between what researchers are learning and what the public thinks is happening vis-a-vis the commercial sexual exploitation of children (CSEC). On too many occasions, I’ve watched well-intentioned technologists approach the space with a naiveté that comes from knowing about human trafficking only through media portrayals. While the portraits that receive widespread attention are important for motivating people to act, understanding the nuances and pitfalls of the space is critical for building interventions that will actually make a difference.

To bridge the gap between technologists and researchers, I worked with a group of phenomenal researchers to produce a simple 4-page fact sheet intended to provide a very basic primer on issues in human trafficking and CSEC that technologists need to know before they build interventions:

How to Responsibly Create Technological Interventions to Address the Domestic Sex Trafficking of Minors

Some of the issues we address include:

  1. Youth often do not self-identify as victims.
  2. “Survival sex” is one aspect of CSEC.
  3. Previous sexual abuse, homelessness, family violence, and foster care may influence youth’s risk of exploitation.
  4. Arresting victims undermines efforts to combat CSEC.
  5. Technologies should help disrupt criminal networks.
  6. Post-identification support should be in place before identification interventions are implemented.
  7. Evaluation, assessment, and accountability are critical for any intervention.
  8. Efforts need to be evidence-based.
  9. The cleanliness of data matters.
  10. Civil liberties are important considerations.

This high-level overview is intended to shed light on some of the most salient misconceptions and provide key insights for those who want to make a difference. By no means does it cover everything that experts know, but it offers some touchstones that may be useful. It is limited to the issues that are most important for technologists, but those who work with technologists may also find it valuable.

As researchers dedicated to addressing human trafficking and the commercial sexual exploitation of children, we want to make sure that the passion that innovative technologists are bringing to the table is directed in the most helpful ways possible. We hope that what we know can be of use to those who are also looking to end exploitation.

(Flickr image by Martin Gommel)