kudos for upcoming SMC visitor Paul Dourish

Now that we have said goodbye to our most recent long-term visitor, Henry Jenkins, we can look ahead to our next, Paul Dourish. It’s a fine time to do so, as he has just been awarded two honors that testify to his important contributions to the study of computation and society. His 1992 paper “Awareness and Coordination in Shared Workspaces,” co-authored with Victoria Bellotti, has been awarded the Lasting Impact award, to be presented at the upcoming CSCW conference. And he was just chosen as a 2015 ACM Fellow, “for contributions in social computing and human-computer interaction.” Paul will be visiting the SMC group and the Microsoft Research New England lab in spring 2016.

Henry Jenkins, on “Comics and Stuff”

We have had the distinct privilege of having Henry Jenkins visit our research group for the past few months. Given the immense impact of his work on the study of digital culture and digital industries, fan communities and the creative repurposing of media texts, and political participation and new forms of online activism, it was an enormous treat to have him with us. The semester was an opportunity for him to make progress on his latest book project, provisionally titled Comics and Stuff. As we say goodbye to him today and send him back to sunny Los Angeles, we thought we would offer a recap of the colloquium he gave on the topic. This is in the style of liveblogging (though it’s hardly live, given that we sat on it for two weeks) so any lack of clarity is likely ours rather than Henry’s. Nevertheless, it offers a glimpse of the thinking that is animating this latest project. (Many thanks to Nathan Matias, who took copious notes and drafted this post; I merely proofed and posted.)

 

Comics and Stuff: An Introduction

Henry starts out by explaining that the use of the term “stuff” in his title is not a casual one; it actually matters for his early explorations of comics culture. He opens by showing a comic strip from Mr. X that shows an apartment with so much stuff that the background detail overwhelms the rest of the scene.

== Comics ==

Henry begins by noting that comics have become an increasingly specialized medium, having at times been a mass one. He quotes Art Spiegelman in an interview in Critical Inquiry, referencing Marshall McLuhan, who said “when something matures, it either becomes art or it dies… I thought of it very literally as a Faustian deal that had to be made with the culture, and it was a fraught one and a dangerous one… I figured it was necessary.” [[ A participant asks: are there any media that failed to become art and died? Jenkins argues that there are no dead media, just dead delivery platforms. Vaudeville and Burlesque might be examples of something that died and is coming back as an art form. ]]

Henry goes on to detail the historical trajectory of U.S. comics, beginning with the immense newspaper comics of “The Yellow Kid” and “Little Nemo in Slumberland.” At the time, comic artists owned their page — they could do whatever they wanted. The Yellow Kid was full-page, while Little Nemo used panels. Next, Henry talks about comic strips evolving into comics pages, then into printed monthly magazines, often published by the same people who were publishing pulp magazines.

The first “killer app” of comics is Superman: the thing that makes comics commercially viable. In the 1930s, 97% of girls and 98% of boys in the U.S. were reading comic books. As these young fans grew up, they expected that comics would grow with them and take on their concerns, with things like crime or horror comics. The comics panics of the 1950s came from the mismatch that followed, between the assumption that comics were for kids and the way they were increasingly addressing and being marketed to adults. The industry responded by self-regulating, but prices also went up (from 12 cents to 2 dollars, due to rising paper, ink, and shipping costs), pricing many kids out. Comics increasingly retreated to specialty shops, where they are now almost exclusively sold, cut off from ordinary markets. The market shifted entirely to adults, but it was constrained by the codes that required comics to be for kids.

People also started to buy comics as an investment – precisely because comics had been made to be read and discarded. Every mom who threw away her kids’ comics made everyone else’s comics more valuable. But once comics began to be created in order to be collected, the bottom fell out of the market. And that’s the context in which Spiegelman is writing, asking how comics will survive.

In parallel, we saw the rise of a network of alternative comics self-published by artists, part of the rock and drug culture of the 1960s and 1970s, largely unregulated and taking part in a period of experimentation with what comics could do. Emerging from this context, Art Spiegelman published Raw, a yearly comics anthology that introduced waves of new artists across different cultures. It was in Raw that Spiegelman first published Maus, the piece that many people think of as the pinnacle of Spiegelman’s vision, and as the moment graphic novels exploded into mainstream public awareness. Before that came Will Eisner’s “A Contract with God,” which, while it wasn’t the first graphic novel, is seen as the first of its kind.

Spiegelman and Eisner offered one vision of how comics could go forward into the future. Another argument pointed in a different direction: that comics should be totally reconfigured by being published online, without requiring paper or ink. There has since been a remarkable flowering of online comics, but most of them still look to print as a way to reach commercial markets. Yet another future is represented by Marvel: the idea that comics are run as the research and development wing of the entertainment industry, as places to test ideas that then get translated to other screens – the comics run at a loss, but the films make up the profit.

Henry argues that there are now parallel comics markets. Today, the top-selling comics are almost all from DC or Marvel. The top-selling graphic novels, on the other hand, include a wider range of second-tier publishers. The New York Times bestseller list presents a completely different picture, featuring Fun Home, Persepolis, Maus, and some media tie-ins (like Mad Max). Seven of the top 10 titles on the New York Times list are by women. Beyond the comics shop and bookstore, the graphic novel market represents something different and more diverse, Henry tells us. Almost all of this wider work has been conceived as graphic novels, developed end-to-end rather than as serials.

What happened? Graphic novelists courted librarians, who are now key advocates of graphic novels and are building new readerships. Much of the young adult content is by women, which is diversifying comics.

Henry notes that there are two other cultural configurations of comics beyond this U.S. story: Japanese and European (French/Belgian) comics. Many of these models don’t apply to Japan or Europe. In France and Belgium, for example, comics’ cultural status has never been in crisis; they have been seen as art all along. It is possible, however, to talk about American, Canadian, British, New Zealand, and Australian comics as sharing this trajectory.

Henry argues that we can see a shift from comics to graphic novels; from floppy to bound; from a disposable medium to one intended to be collected and preserved; from specialty shops to bookstores and libraries; from the idea of specialty fans to a wider public audience; from “trash” to art; from a focus on superheroes to a new attention to everyday life; and from mostly masculine perspectives to a greater diversity.

As that happens, contemporary comics are not just looking forward, but also looking backwards. Jenkins cites Bill Watterson, who argued that “much of the best cartoon work was done early on in the medium’s history… it increasingly seems to me that cartoon evolution is working backward… Not only can comics be more than we’re getting today, but the comics already have been more than we’re getting today” (1989). Comic artists like Watterson advocate for an earlier tradition in order to position themselves as part of it. In some cases, notable comics artists support reprints and curatorial interventions that bring forward work from the past, positioning comics artists as people who do that kind of preservation work.

Comic artists can feel ambivalent about this. What made Maus striking was not only that it was telling a story of serious import, but that it was using the form of funny animal comics – bidding for the status of pop culture, while also bidding for respectability. As they reach for a tradition, comic artists find themselves both drawing from it and distancing themselves from it.

== Stuff ==

Henry notes that comics are stuff, in that they are objects we consume, keep, or discard, and they represent stuff, in that they include in their images the material elements of the lives of their characters. Because of this dual relationship to stuff, comics can tell us a great deal about the lived materiality of contemporary sociality. The term “stuff,” says Jenkins, is part of a larger trend to consider the things in life, including Appadurai’s work on “the social life of things” and “the world of goods” discussed by Mary Douglas and Baron Isherwood, as well as Daniel Miller’s “Stuff” and “The Comfort of Things.” Literary and art criticism have also considered objects, responding to these anthropologies of stuff: Orhan Pamuk’s “The Innocence of Objects,” work by Peter Schwenger, Bill Brown’s “A Sense of Things,” and Freedgood’s “The Ideas in Things.”

Bill Brown frames his work by asking “why and how we use objects to make meaning, to make or re-make ourselves, to organize our anxieties and affections, to sublimate our fears and shape our fantasies.” Most of that work focuses on 19th-century literature. In our current time, stuff isn’t something that will be passed along; it is designed to be discarded. Yet at the same time, we use our material things to make meaning around and of ourselves: adults are holding on to toys that were meant to be disposable, posing with images of their collections, sharing them online. This fits into a larger pattern of sentimentalizing stuff, using it to remind ourselves of places we’ve lived and people we’ve known.

This valuing of stuff becomes a driver of sites like eBay. Henry shows an ad from eBay: what if nothing was ever forgotten or ever lost?

We can think of this stuff as something we’re trying to make sense of as a culture. We have shows like Antiques Roadshow and Hoarders, or even books about “the life-changing magic of tidying up.” The result is that we end up with homes that are full of clutter, conflicting symbols of identity and history, bids for meaning jumbled together. We get the cultural call both to keep and to discard. But this doesn’t look like post-modernity; it’s not mere surface. People are deeply invested in stuff, and thinking about it as clutter or simply surface doesn’t get at it.

To summarize, “stuff” is a lifestyle choice. There’s been a shift from inherited objects (the world of the 19th-century writers) to personal selection — the stuff we own as a reflection of us rather than of our family, tribal, or national history. There’s a shift from “possessions” to “belongings” — it belongs to us and signals what we belong to. There’s a shift from the disposable to the collectable, and from trivia to expertise. Collectors are not just people who own stuff; they’re people who desire stuff and know stuff, creating forms of knowledge that former generations might have thought of as trivial.

== Comics and Stuff ==

Comics artists are part of this. Jenkins notes the Canadian film “Seth’s Dominion,” as well as photographers hurriedly taking photos of art deco buildings about to be torn down, telling stories about those buildings, and then creating cardboard versions of those buildings that end up in art galleries.

The digital becomes a gathering point for this kind of thing. For example, Hip Hop Family Tree inspires media collectors to find and reassemble the pieces mentioned in it. Steampunk is another example of a collector culture, one that builds futuristic objects in the aesthetic of older ones. Retrofuturism is a similar example: people obsessed with 1930s images of tomorrow, with works like Terminal City, Tomorrowland, and Sky Captain and the World of Tomorrow. These all represent an attempt to reclaim the material objects and material imagination of the 1939 World’s Fair.

Henry talks about the emergence of the still life in the early modern world. There was a shift from paintings epic in scope, works funded by church and state, to a focus on everyday stuff. Under Dutch capitalism, wealth was growing, yet there was no tradition of civic giving or charity; wealthy merchants wanted something to spend their money on, so they paid for beautiful paintings of their objects. It was a way for people to display their own stuff and show their appreciation of stuff of all kinds, including images of people collecting and gathering it. Art historians of the still life argue that there was a shift from the crown or church to private collectors, from the epic to everyday life. In the case of early modern painting, the shift to everyday life was seen as a lowering of status, unlike in comics, where the shift was from lower to higher status.

Now, so it is with graphic novels. Jenkins shows us panels from Asterios Polyp that are designed to tell us things about a person’s life from the state of his living quarters: an apartment with modernist furniture piled high with stuff. Across the book, six panels show the history of a relationship. In the beginning, it’s a purely modern place. Then there’s a moment of crisis: will his girlfriend’s belongings fit? Does she have a place in his life? We see the answer enacted through the placing of her stuff. They integrate her furnishings into the space, then we watch as her aesthetic begins to take over. After the relationship sours, remnants of her things linger. Readers learn to make sense of the scene by flipping back and forth between these moments, and the stuff itself becomes part of the narrative.

Jenkins next compares comics about collectors with early modern paintings depicting the collections of paintings their owners would like to own and bring together – very similar to what collector comics creators are doing. Remember, says Jenkins, that everything drawn by the artist has a personal cost – they don’t get paid any more or less based on how much detail they include.

In Alice in Sunderland, Talbot introduces “cabinets of curiosity,” harking back to the early modern practice of collecting oddities in cabinets and displays that are cornucopias of fragmented curiosity, dispersed attention, and personality. Ghost World is very much about why people own things. In two poignant moments, a teenager weeps over stuff that is important to her. These moments with stuff show us aspects of character that become turning points.

Compared to comics by men, those by female comic artists focus less on collecting stuff and more on the burdens of stuff and the process of getting rid of it. In Fun Home, Bechdel wrestles with her father’s aesthetics, which hide who her father really is; the book then turns to her mother’s willingness to get rid of all the things after her father’s death. Special Exits is similarly about the death of parents, the emotional demands of getting rid of clutter, and what’s left behind: a blind mother-in-law asks whether the ink smudges her husband’s routine left on the door frame are still there. Even though she’s blind, she can still sense whether the stain, or its removal, remains – something that expresses the relations in the family. Roz Chast’s “Can’t We Talk About Something More Pleasant?” includes a section on grime and what collects on stuff.

To wrap up: as Henry reads these books, he’ll be paying attention to “mise-en-scene as a site of virtuoso performance”; to the social skills of reading stuff as key to understanding characters; to collecting stories and stories about culling through stuff (especially as men and women write about them differently); and to stuff as key to thinking about the memory, nostalgia, and history that run through these books. Often there’s a kind of “critical nostalgia” running through them that is revealing of the genre and of the cultural moment.

“Critical algorithm studies” reading list

Nick Seaver and I have put together a list we wanted to share. It is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our aim was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

This area is growing in size and popularity so quickly that new contributions are often popping up without reference to work from disciplinary neighbors. One aim of this list is to help nascent scholars of algorithms to identify broader conversations across disciplines, and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified. We also thought it would be useful, especially for those teaching these materials, to try to loosely categorize it. The organization of the list is meant merely as a first-pass, provisional sense-making effort. Within categories the entries are offered in chronological order, to help make sense of these rapid developments.

In light of this, we encourage you to see it as an unfinished document. There are 132 citations on the list, but we suspect there are many more to include. We very much welcome comments, including recommendations of other work to include, suggestions on how to reclassify a particular entry, or ideas for reorganizing the categories themselves. Please use the comment space at the bottom of the page itself; we will try to update the list in light of your suggestions.

https://socialmediacollective.org/reading-lists/critical-algorithm-studies/

or

http://bit.ly/crit-algo

new essay from SMC visitor Tom Streeter, on the persistent fascination with Steve Jobs

SMC is excited to welcome Tom Streeter, who will soon be making occasional visits to our New England lab, beginning later this month. To mark his arrival, we wanted to highlight the essay he has just published in the International Journal of Communication, “Steve Jobs, Romantic Individualism, and the Desire for Good Capitalism.” (Borrowing from the summary provided by IJOC here:)

The essay explains how that story and its repetition tell us more about the culture than the man. Building on previous work about the rise of “romantic individualism” as an organizing mechanism for high-tech capitalism, this essay focuses on the latest outpouring of discourse about Jobs since his death in 2011, analyzing both its continuities with past cultural forms and what it is about the present moment that has intensified the discourse—especially the post-2008 crisis of confidence in financial capitalism. Among other things, the tale offers the appealing, if ultimately unrealistic, hope of a capitalism with integrity, of a one-percenter who deserves it.

Facebook’s improved “Community Standards” still can’t resolve the central paradox

On March 16, Facebook updated its “Community Standards,” in ways both cosmetic and substantive. The version it replaced, though it had enjoyed minor updates, had been largely the same since at least 2011. The change comes on the heels of several other sites making similar adjustments to their own policies, including Twitter, YouTube, Blogger, and Reddit – and after months, even years, of growing frustration and criticism on the part of social media users about platforms and their policies. This frustration and criticism cuts in two directions: sometimes at restrictions that seem overly conservative, picky, vague, or unclear; but also at policies that fall far short of protecting users, particularly from harassment, threats, and hate speech.

“Guidelines” documents like this one are an important part of the governance of social media platforms; though the “terms of service” are a legal contract meant to spell out the rights and obligations of both the users and the company — often to impose rules on users and indemnify the company against any liability for their actions — it is the “guidelines” that are more likely to be read by users who have a question about the proper use of the site, or find themselves facing content or other users that trouble them. More than that, they serve a broader rhetorical purpose: they announce the platform’s principles and gesture toward the site’s underlying approach to governance.

Facebook described the update as a mere clarification: “While our policies and standards themselves are not changing, we have heard from people that it would be helpful to provide more clarity and examples, so we are doing so with today’s update.” Most of the coverage among the technology press embraced this idea (like here, here, here, here, here, and here). But while Facebook’s policies may not have changed dramatically, so much is revealed in even the most minor adjustments.

First, it’s revealing to look not just at what the rules say and how they’re explained, but how the entire thing is framed. While these documents are now ubiquitous across social media platforms, it is still a curiosity that these platforms so readily embrace and celebrate the role of policing their users – especially amidst the political ethos of Internet freedom, calls for “Net neutrality” at the infrastructural level, and the persistent dreams of the open Web. Every platform must deal with this contradiction, and they often do it in the way they introduce and describe guidelines. These guidelines pages inevitably begin with a paragraph or more justifying not just the rules but the platform’s right to impose them, including a triumphant articulation of the platform’s aspirations.

Before this update, Facebook’s rules were justified as follows: “To balance the needs and interests of a global population, Facebook protects expression that meets the community standards outlined on this page.” In the new version, the priority has shifted, from protecting speech to ensuring that users “feel safe”: “Our goal is to give people a place to share and connect freely and openly, in a safe and secure environment.” I’m not suggesting that Facebook has stopped protecting speech in order to protect users. All social media platforms struggle to do both. But which goal is most compelling, which is held up as the primary justification, has changed.

This emphasis on safety (or more accurately, the feeling of safety) is also evident in the way the rules are now organized. The old version’s eleven rule categories are now fifteen, grouped into four broad categories – the first of which is “keeping you safe.” This is indicative of the effect of the criticisms of recent years: that social networking sites like Facebook and Twitter have failed users, particularly women, in the face of vicious trolling.

As for the rules themselves, it’s hard not to see them as the aftermath of so many particular controversies that have dogged the social networking site over the years. Facebook’s Community Standards increasingly look like a historic battlefield: while it may appear to be a bucolic pasture, the scars of battle remain visible, carved into the land, thinly disguised beneath the landscaping and signage. Some of the most recent skirmishes are now explicitly addressed: a new section on sexual violence and exploitation includes language prohibiting revenge porn. The rule against bullying and harassment now includes a bullet point prohibiting “Images altered to degrade private individuals,” a clear reference to the Photoshopped images of bruised and battered women that were deployed (note: trigger warning) against Anita Sarkeesian and others in the Gamergate controversy. The section on self-injury now includes a specific caveat that body modification doesn’t count.

In this version, Facebook seems extremely eager to note that contentious material is often circulated for publicly valuable purposes, including awareness raising, social commentary, satire, and activism. A version of this point appears again and again, as part of the rules against graphic violence, nudity, hate speech, self-injury, dangerous organizations, and criminal activity. In most cases, these socially valuable uses are presented as a caveat to an otherwise blanket prohibition: even hate speech, which is prohibited almost entirely and in the strongest terms, now has a caveat protecting users who circulate examples of hate speech for the purposes of education and raising awareness. It is clear that Facebook is ever more aware of its role as a public platform, where contentious politics and difficult debate can occur. Now it must patrol the tricky line between the politically contentious and the culturally offensive.

Oddly, in the rule about nudity, and only there, the point about socially acceptable uses is not a caveat, but part of an awkward apology for imposing blanket restrictions anyway: “People sometimes share content containing nudity for reasons like awareness campaigns or artistic projects. We restrict the display of nudity because some audiences within our global community may be sensitive to this type of content – particularly because of their cultural background or age. In order to treat people fairly and respond to reports quickly, it is essential that we have policies in place that our global teams can apply uniformly and easily when reviewing content. As a result, our policies can sometimes be more blunt than we would like and restrict content shared for legitimate purposes.” Sorry, Femen. On the other hand, apparently it’s okay if it’s cartoon nudity: “Restrictions on the display of both nudity and sexual activity also apply to digitally created content unless the content is posted for educational, humorous, or satirical purposes.” A nod to Charlie Hebdo, perhaps? Or just a curious inconsistency.

The newest addition to the document, and the one most debated in the press coverage, is the new way Facebook articulates its long-standing requirement that users use their real identity. The rule was recently challenged by a number of communities eager to use Facebook under aliases or stage names, as well as by communities (such as Native Americans) who find themselves on the wrong side of Facebook’s policy simply because the traditions of naming in their culture do not fit Facebook’s. After the 2014 scuffle with drag queens about the right to use a stage identity instead of or alongside a legal one, Facebook promised to make its rule more accommodating. In this update, Facebook has adopted the phrase “authentic identity,” its way of allowing adopted performance names while continuing to prohibit duplicate accounts. The update is also a chance for Facebook to re-justify the rule: at more than one point in the document, and in the accompanying letter from Facebook’s content team, this “authentic identity” requirement is presented as assuring responsible and accountable participation: “Requiring people to use their authentic identity on Facebook helps motivate all of us to act responsibly, since our names and reputations are visibly linked to our words and actions.”

There is also some new language in an even older battle: for years, Facebook has been removing images of women breastfeeding as a violation of its rules against nudity. This has long angered a community of women who strongly believe that sharing such images is not only their right, but important for new mothers and for the culture at large (only in 2007, 2008, 2010, 2011, 2012, 2013, 2014, 2015…). After years of disagreements, protests, and negotiations, in 2014 Facebook published a special rule saying that it would allow images of breastfeeding so long as they did not include an exposed nipple. This was considered a triumph by many involved, though reports continue to emerge of women having photos removed and accounts suspended despite the promise. This assurance reappears in the new version of the community standards just posted: “We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring.” The Huffington Post reads this as (still) prohibiting breastfeeding photos if they include an exposed nipple, but if the structure of this sentence is read strictly, the promise to “always” allow photos of women breastfeeding seems to me to trump the previous phrase about exposed nipples. I may be getting nitpicky here, but only as a result of years of back and forth about the precise wording of this rule, and Facebook’s willingness and ability to honor it in practice.

In my own research, I have tracked the policies of major social media platforms, noting both the changes and continuities, the justifications and the missteps. One could dismiss these guidelines as mere window dressing — as a performed statement of coherent values that do not in fact drive the actual enforcement of policy on the site, which so often turns out to be more slapdash or strategic or hypocritical. I find it more convincing to say that these are statements of both policy and principle that are struggled over at times, are deployed when they are helpful and can be sidestepped when they’re constraining, and that do important discursive work beyond simply guiding enforcement. These guidelines matter, and not only when they are enforced, and not only for lending strength to the particular norms they represent. Platforms adjust their guidelines in relation to each other, and smaller sites look to the larger ones for guidance, sometimes borrowing them wholesale. The rules as articulated by Facebook matter well beyond Facebook. And they perform, and therefore reveal in oblique ways, how platforms see themselves in the role of public arbiters of cultural value. They are also by no means the end of the story, as no guidelines in the abstract could possibly line up neatly with how they are enforced in practice.

Facebook’s newest update is consistent with changes over the past few years on many of the major sites: a common urge to both impose more rules and use more words to describe them clearly. This is a welcome adjustment, as so many of the early policy documents, including Facebook’s, were sparse, abstract, and unprepared for the variety and gravity of questionable content and awful behavior they would soon face. There are some laudable principles made explicit here. On the other hand, adding more words, more detailed examples, and further clarifications does not – cannot – resolve the other challenge: these are still rules that must be applied in specific situations, requiring judgment calls made by overworked, freelance clickworkers. And while it is a relief to see Facebook and other platforms taking a firmer stand on issues like misogyny, rape threats, trolling, and self-harm, these stands are often accompanied by ever more restriction not just of bad behavior but of questionable content, where the mode of ‘protection’ means something quite different, much more patronizing. The basic paradox remains: these are private companies policing public speech, often intervening according to a culturally specific or financially conservative morality. It is the next challenge for social media to strike a better balance in this regard: more effectively intervening to protect users themselves, while intervening less on behalf of users’ values.

This is cross-posted on the Culture Digitally blog.

Social Media Collective weigh in on the debates about the Facebook emotions study

I have the privilege of spending the year as a visiting researcher with the social media researchers at Microsoft Research New England. And for the last two weeks or so, it’s been a particularly stimulating time to be among them. Spurred by the controversial Facebook emotions study and the vigorous debate surrounding it, there’s been a great deal of discussion in the lab and beyond it.

A number of us have also joined the debate publicly, posting here as well as in other places. Rather than re-post each one individually here, I thought I’d collect them into a single post, as they have in many ways emerged from our thinking together.

danah boyd: What does the Facebook experiment teach us?: “What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.”

Kate Crawford: The Test We Can—and Should—Run on Facebook (in The Atlantic): “It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science.”

Tarleton Gillespie: Facebook’s algorithm — why our assumptions are wrong, and our concerns are right (on Culture Digitally): “But a key issue, both in the research and in the reaction to it, is about Facebook and how it algorithmically curates our social connections, sometimes in the name of research and innovation, but also in the regular provision of Facebook’s service.”

Andrés Monroy-Hernández: A system designer’s take on the Facebook study – a response to danah boyd’s blog post: “…it’s important that we do not throw the baby out with the bath water. I do not want to see the research community completely avoiding experimental research in online systems.”

Mary L. Gray: When Science, Customer Service, and Human Subjects Research Collide. Now What? (on Ethnography Matters): “My brothers and sisters in data science, computational social science, and all of us studying and building the Internet of things inside or outside corporate firewalls, to improve a product, explore a scientific question, or both: we are now, officially, doing human subjects research.”

There have also been comments on the issue from other scholars at Microsoft Research, including:

Jaron Lanier, MSR Redmond. Should Facebook Manipulate Users? (The New York Times)

Duncan Watts, MSR New York. Stop complaining about the Facebook study. It’s a golden age for research (The Guardian) and Lessons learned from the Facebook study (Chronicle of Higher Ed)

Matt Salganik, MSR New York. After the Facebook emotional contagion experiment: A proposal for a positive path forward (Freedom to Tinker)

A “pay it back tax” on data brokers: a modest (and also politically untenable and impossibly naïve) policy proposal

I’ve just returned from the “Social, Cultural, and Ethical Dimensions of Big Data” event, held by the Data & Society Initiative (led by danah boyd), and spurred by the efforts of the White House Office of Technology and Policy to develop a comprehensive report on issues of privacy, discrimination, and rights around big data. And my head is buzzing. (Oh boy. Here he goes.) There must be something about me and workshops aimed at policy issues. Even though this event was designed to be wide-ranging and academic, I always get this sense of urgency or pressure that we should be working towards concrete policy recommendations. It’s something I rarely do in my scholarly work (to its detriment, I’d say, wouldn’t you?) But I don’t tend to come up with reasonable, incremental, or politically viable policy recommendations anyway. I get frustrated that the range of possible interventions feels so narrow: so many players that must be left untouched, so many underlying presumptions left unchallenged. I don’t want to suggest some progressive but narrow intervention, and in the process confirm and reify the way things are – though believe me, I admire the people who can do this. I long for there to be a robust vocabulary for saying what we want as a society and what we’re willing to change, reject, regulate, or transform to get it. (But at some point, if it’s too pie in the sky, it ceases being a policy recommendation, doesn’t it?) And this is especially true when it comes to daring to restrain commercial actors who are doing something that can be seen as publicly detrimental, but somehow have this presumed right to engage in this activity because they have the right to profit. I want to be able to say, in some instances, “sorry, no, this simply isn’t a thing you get to profit on.”

All that said, I’m going to propose a policy recommendation. (It’s going to be a politically unreasonable one, you watch.)

I find myself concerned about this hazy category of stakeholders that, at our event, were generally called “data brokers.” There are probably different kinds of data brokers that we might think about: companies that buy up and combine data about consumers; companies that scrape public data from wherever it is available and create troves of consumer profiles. I’m particularly troubled by the kind of companies that Kate Crawford discussed in her excellent editorial for Scientific American a few weeks ago — like Turnstyle, a company that has set up dummy wifi transponders in major cities to pick up all those little pings your smartphone gives off when it’s looking for networks. Turnstyle coordinates those pings into a profile of how you navigated the city (i.e. you and your phone walked down Broadway, spent twenty minutes in the bakery, then drove to the south side), then aggregates those navigation profiles into data about consumers and their movements through the city and sells them to marketers. (OK, that is particularly infuriating.) What defines this category for me is that data brokers do not gather data as part of a direct service they provide to those individuals. Instead they gather at a point once removed from the data subjects: purchasing the data gathered by others, scraping our public utterances or traces, or tracking the evidence of our activity we give off. I don’t know that I can be much more specific than that, or that I’ve captured all the flavors, in part because I’ve only begun to think about them (oh good, then this is certain to be a well-informed suggestion!) and because they are a shadowy part of the data industry, relatively removed from consumers, with little need to advertise or maintain a particularly public profile.

I think these stakeholders are in a special category, in terms of policy, for a number of reasons. First, they are important to questions of privacy and discrimination in data, as they help to move data beyond the settings in which we authorized its collection and use. Second, they are outside of traditional regulations that are framed around specific industries and their data use (like HIPAA provisions that regulate hospitals and medical record keepers, but not data brokers who might nevertheless traffic in health data). Third, they’re a newly emergent part of the data ecosystem, so they have not been thought about in the development of older legislation. But most importantly, they are a business that offers no social value to the individual or society whose data is being gathered. (Uh oh.) In all of the more traditional instances in which data is collected about individuals, there is some social benefit or service presumed to be offered in exchange. The government conducts a census, but we authorized that, because it is essential to the provision of government services: proportional representation of elected officials, fair imposition of taxation, etc. Verizon collects data on us, but it does so as a fundamental element of the provision of telephone service. Facebook collects all of our traces, and while that data is immensely valuable in its own right and to advertisers, it is also an important component in providing their social media platform. I am by no means saying that there are no possible harms in such data arrangements (I should hope not) but at the very least, the collection of data comes with the provision of service, and there is a relationship (citizen, customer) that provides a legally structured and sanctioned space for challenging the use and misuse of that data — class action lawsuit, regulatory oversight, protest, or just switching to another phone company. (Have you tried switching phone companies lately?) Some services that collect data have even voluntarily sought to do additional, socially progressive things with that data: Google looking for signs of flu outbreaks, Facebook partnering with researchers looking to encourage voting behavior, even OK Cupid giving us curious insights about the aggregate dating habits of their customers. (You just love infographics, don’t you.) But the third-party data brokers who buy data from an e-commerce site I frequent, or scrape my publicly available hospital discharge record, or grab up the pings my phone emits as I walk through town, are building commercial value on my data, but offer no value to me, my community, or society in exchange.

So what I propose is a “pay it back tax” on data brokers. (Huh?! Does such a thing exist, anywhere?) If a company collects, aggregates, or scrapes data on people, and does so not as part of a service back to those people (but is that distinction even a tenable one? who would decide and patrol which companies are subject to this requirement?), then they must grant access to their data and dedicate 10% of their revenue to non-profit, socially progressive uses of that data. This could mean partnering with a non-profit, providing it funds and access to data to conduct research. Or, they could make the data and dollars available as a research fund that non-profits and researchers could apply for. Or, as a nuclear option, they could avoid the financial requirement by providing an open API to their data. (I thought your concern about these brokers is that they aggravate the privacy problems of big data, but you’re making them spread that collected data further?) I think there could be valuable partnerships: Turnstyle’s data might be particularly useful for community organizations concerned about neighborhood flow or access for the disabled; health data could be used by researchers or activists concerned with discrimination in health insurance. There would need to be parameters for how that data was used and protected by the non-profits who received it, and perhaps an open access requirement for any published research or reports.

This may seem extreme. (I should say so. Does this mean any commercial entity in any industry that doesn’t provide a service to customers should get a similar tax?) Or, from another vantage point, it could be seen as quite reasonable: companies that collect data on their own have to spend an overwhelming amount of their revenue providing whatever service justifies that data collection; governments that collect data on us are in our service, and make no profit. This is merely 10%, plus the sharing of a valuable resource. (No, it still seems extreme.) And, if I were aiming more squarely at the concerns about privacy, I’d be tempted to say that data aggregation and scraping could simply be outlawed. (Somebody stop him!) In my mind, it at the very least levels back the idea that collecting data on individuals and using it as a primary resource upon which to make profit must, on balance, provide some service in return, be it customer service, social service, or public benefit.

This is cross-posted at Culture Digitally.

New anthology on media technologies, bringing together STS and Communication perspectives

I’m thrilled to announce that our anthology, Media Technologies: Essays on Communication, Materiality, and Society, which I edited with Pablo Boczkowski and Kirsten Foot, is now officially available from MIT Press. Contributors include Geoffrey Bowker, Finn Brunton, Gabriella Coleman, Gregory Downey, Steven Jackson, Christopher Kelty, Leah Lievrouw, Sonia Livingstone, Ignacio Siles, Jonathan Sterne, Lucy Suchman, and Fred Turner. We’ve secured permission to share the introduction with you. A blurb:

In recent years, scholarship around media technologies has finally shed the presumption that technologies are separate from and powerfully determining of social life, seeing them instead as produced by and embedded in distinct social, cultural, and political practices – and as socially significant because of that. This has been helped along by a productive intersection between work in science and technology studies (STS) interested in information technologies as complex sociomaterial phenomena, and work in communication and media studies attuned to the symbolic and public dimensions of these tools.

In this volume, scholars from both fields come together to provide some conceptual paths forward for future scholarship. The collection is made up of two sets of essays and commentaries: the first addresses the relationship between materiality and mediation, considering such topics as the lived realities of network infrastructure. The second highlights media technologies as fragile and malleable, held together through the minute, unobserved work of many, including efforts to keep these technologies alive.

Please feel free to circulate this introduction to others, and write back to us with your thoughts, criticisms, and ideas. We hope this volume helps anchor the exciting conversations we see happening in the field, and serves as a launchpad for future scholarship.

ToC and Chapter 1 – Introduction (Media Technologies)

Tumblr, NSFW porn blogging, and the challenge of checkpoints

After Yahoo’s high-profile purchase of Tumblr, when Yahoo CEO Marissa Mayer said that she would “promise not to screw it up,” this is probably not what she had in mind. Devoted users of Tumblr have been watching closely, worried that the cool, web 2.0 image blogging tool would be tamed by the nearly two-decade-old search giant. One population of Tumblr users, in particular, worried a great deal: those who used Tumblr to collect and share their favorite porn. This is a distinctly large part of the Tumblr crowd: according to one analysis, somewhere near or above 10% of Tumblr is “adult fare.”

Now that group is angry. And Tumblr’s new policies, the ones that made them so angry, are a bit of a mess. Two paragraphs from now, I’m going to say that the real story is not the Tumblr/Yahoo incident, or how it was handled, or even why it’s happening. But first, the quick run-down – and it’s confusing if you’re not a regular Tumblr user. Tumblr had a self-rating system: blogs with “occasional” nudity should self-rate as “NSFW.” Blogs with “substantial” nudity should rate themselves as “adult.” About two months ago, some Tumblr users noticed that blogs rated “adult” were no longer being listed with the major search engines. Then in June, Tumblr began taking both “NSFW” and “adult” blogs out of their internal search results — meaning, if you search in Tumblr for posts tagged with a particular word, sexual or otherwise, the dirty stuff won’t come up. Unless the searcher already follows your blog; then the “NSFW” posts will appear, but not the “adult” ones. Ack, here, this is how Tumblr tried to explain it:

What this meant is that the existing followers of a blog could largely still see its “NSFW” posts, but it would be very difficult for anyone new to find it. David Karp, founder and CEO of Tumblr, dodged questions about it on the Colbert Report, saying only that Tumblr doesn’t want to be responsible for drawing the lines between artistic nudity, casual nudity, and hardcore porn.

Then a new outrage emerged when some users discovered that, in the mobile version of Tumblr, some tag searches turn up no results, dirty or otherwise — and not just for obvious porn terms, like “porn,” but also for broader terms, like “gay.” Tumblr issued a quasi-explanation on their blog, which some commentators and users found frustratingly vague and unapologetic.

Ok. The real story is not the Tumblr/Yahoo incident, or how it was handled, or even why it’s happening. Certainly, Tumblr could have been more transparent about the details of their original policy, or the move in May or earlier to de-list adult Tumblr blogs in major search engines, or the decision to block certain tag results. Certainly, there’ve been some delicate conversations going on at Yahoo/Tumblr headquarters, for some time now, on how to “let Tumblr be Tumblr” (Mayer’s words) and also deal with all this NSFW blogging “even though it may not be as brand safe as what’s on our site” (also Mayer). Tumblr puts ads in its Dashboard, where only logged-in users see them, so arguably the ads are never “with” the porn — but maybe Yahoo is looking to change that, so that the “two companies will also work together to create advertising opportunities that are seamless and enhance the user experience.”

What’s ironic is that, I suspect, Tumblr and Yahoo are actually trying to find ways to remain permissive when it comes to NSFW content. They are certainly (so far) more permissive than some of their competitors, including Instagram, Blogger, Vine, and Pinterest, all of whom have moved in the last year to remove adult content, make it systematically less visible to their users, or prevent users from pairing advertising with it. The problem here is their tactics.

Media companies, be they broadcast or social, have fundamentally two ways to handle content that some but not all of their users find inappropriate.

First, they can remove some of it, either by editorial fiat or at the behest of the community. This means writing up policies that draw those tricky lines in the sand (no nudity? what kind of nudity? what was meant by the nudity?), and then either taking on the mantle (and sometimes the flak) of making those judgments themselves, or having to decide which users to listen to on which occasions for which reasons.

Second, and this is what Tumblr is trying, is what I’ll call the “checkpoint” approach. It’s by no means exclusive to new media: putting the X-rated movies in the back room at the video store, putting the magazines on the shelf behind the counter, wrapped in brown paper, scheduling the softcore stuff on Cinemax after bedtime, or scrambling the adult cable channel, all depend on the same logic. Somehow the provider needs to keep some content from some people and deliver it to others. (All the while, of course, they need to maintain their reputation as defender of free expression, and not appear to be “full of porn,” and keep their advertisers happy. Tricky.)

To run such a checkpoint requires (1) knowing something about the content, (2) knowing something about the people, and (3) having a defensible line between them.

First, the content. That difficult decision, about what is artistic nudity, what’s casual nudity, and what’s pornographic? It doesn’t go away, but the provider can shift the burden of making that decision to someone else — not just to get it off their shoulders, but sometimes to hand it to someone more capable of making it. Adult movie producers or magazine publishers can self-rate their content as pornographic. An MPAA-sponsored board can rate films. There are problems, of course: either the “who are these people?” problem, as in the mysterious MPAA ratings board, or the “these people are self-interested” problem, as when TV production houses rate their own programs. Still, this self-interest can often be congruent with the interests of the provider: X-rated movie producers know that their options may be the back room or not at all, and gain little in pretending that they’re something they’re not.

Next, the people. It may seem like a simple thing, just keeping the dirty stuff on the top shelf and carding people who want to buy it. Any bodega shopkeep can manage to do it. But it is simple only because it depends on a massive knowledge architecture, the driver’s license, that the shopkeep didn’t have to generate himself. This is a government-sponsored, institutional mechanism that, in part, happens to be engaged in age verification. It requires a massive infrastructure for record keeping, offices throughout the country, staff, bureaucracy, printing services, government authorization, and legal consequences for cases of fraud. All that so that someone can show a card and prove they’re of a certain age. (That kind of certified, high-quality data is otherwise hard to come by, as we’ll see in a moment.)

Finally, a defensible line. The bodega has two: the upper shelf and the cash register. The kids can’t reach, and even the tall ones can’t slip away uncarded, unless they’re also interested in theft. Cable services use encryption: the signal is scrambled unless the cable company authorizes it to be unscrambled. This line is in fact not simple to defend: the descrambler used to be in the box itself, which was in the home and, with the right tools and expertise, openable by those who might want to solder the right tab and get that channel unscrambled. This meant there had to be laws against tampering, another external apparatus necessary to make this tactic stick.

Tumblr? Well. All of this changes a bit when we bring it into the world of digital, networked, and social media. The challenges are much the same, and if we notice that the necessary components of the checkpoint are data, we can see how this begins to take on the shape that it does.

The content? Tumblr asked its users to self-rate, marking their blogs as “NSFW” or “adult.” Smart, given that bloggers sharing porn may share some of Tumblr’s interest in putting it behind the checkpoint: many would rather flag their site as pornographic and get to stay on Tumblr than be forbidden to put it up at all. Even flagged, Tumblr provides them what they need: a platform on which to collect content, and a way to gain and keep interested viewers. The categories are a little ambiguous — where is the line between “occasional” and “substantial” nudity to be drawn? Why are the criteria only about amount, rather than degree (hard core vs. soft core), category (posed nudity vs. sexual act), or intent (artistic vs. unseemly)? But then again, these categories are always ambiguous, and must always privilege some criteria over others.

The people? Here it gets trickier. Tumblr is not imposing an age barrier; it’s imposing a checkpoint based on desire, dividing those who want adult content from those who don’t. This is not the kind of data that’s kept on a card in your wallet, backed by the government, subject to laws of perjury. Instead, Tumblr has two ways to try to know what a user wants: their search settings, and what they search for. If users have managed to correctly classify themselves into “Safe Mode,” indicating in the settings that they do not want to see anything flagged as adult, and people posting content have correctly marked their content as adult or not, this should be an easy algorithmic equation: the “safe” searcher is never shown “NSFW” content. The only problems would be user error: searchers who do not set their search settings correctly, and posters who do not flag their adult content correctly. Reasonable problems, and the kind of leakage that any system of regulation inevitably faces. Flagging at the blog level (as opposed to flagging each post as adult or not) is a bit of a dull instrument: all posts from my “NSFW” blog are withheld from safe searchers, even the ones that have no questionable content — despite the fact that, by Tumblr’s own definition, a “NSFW” blog has only “occasional” nudity. Still, getting people to rate every post is a major barrier, few will do so diligently, and it doesn’t fit into simple “web button” interfaces.

Defending the dividing line? Since the content is digital, and the information about content and users is data, it should not be surprising that the line here is algorithmic. Unlike the top shelf or the back room, the adult content on Tumblr lives amidst the rest of the archive. And there’s no cash register, which means that there’s no unavoidable point at which use can be checked. There is the login, which explains why non-logged-in users are treated as only wanting “safe” content. But, theoretically, an “algorithmic checkpoint” should work based on search settings and blog ratings. As a search happens, compare the searcher’s setting with the content’s rating, and don’t deliver the dirty to the safe.
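
To make that logic concrete, here is a minimal sketch of how such an algorithmic checkpoint might work, assuming blog-level self-ratings and a per-searcher Safe Mode setting. The names, flag values, and data structures are my own hypothetical illustration, not Tumblr’s actual code:

```python
# A sketch of the blog-level "algorithmic checkpoint" described above,
# assuming self-rated blogs ("safe", "nsfw", "adult") and a per-searcher
# Safe Mode setting. Hypothetical illustration only -- not Tumblr's code.
from dataclasses import dataclass

@dataclass
class Post:
    blog: str
    rating: str  # the blog's self-rating: "safe", "nsfw", or "adult"

def checkpoint(posts, safe_mode, follows):
    """Filter search results: compare the searcher's setting with the
    content's rating, and don't deliver the dirty to the safe."""
    visible = []
    for post in posts:
        if post.rating == "safe":
            visible.append(post)
        elif post.rating == "nsfw" and not safe_mode and post.blog in follows:
            # "NSFW" posts still reach existing followers...
            visible.append(post)
        # ..."adult" posts are withheld from search results entirely.
    return visible

posts = [Post("cats", "safe"), Post("pinups", "nsfw"), Post("xxx", "adult")]
# A Safe Mode searcher sees only the unflagged blog:
print([p.blog for p in checkpoint(posts, safe_mode=True, follows=set())])
# A follower with Safe Mode off also sees the "nsfw" blog, but never "adult":
print([p.blog for p in checkpoint(posts, safe_mode=False, follows={"pinups"})])
```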

But here’s where Tumblr took two additional steps, the ones that I think raise the biggest problem for the checkpoint approach in the digital context.

Tumblr wanted to extend the checkpoint past the customer who walks into the store and brings adult content to the cash register, out to the person walking by the shop window. And those passersby aren’t always logged in; they come to Tumblr in any number of ways. Because here’s the rub with the checkpoint approach: it does, inevitably, remind the population of possible users that you do allow the dirty stuff. The new customer who walks into the video store and sees that there is a back room, even if they never go in, may reject your establishment for even offering it. Can the checkpoint be extended, to decide whether to even reveal to someone that there’s porn available inside? If not in the physical world, maybe in the digital?

When Tumblr delisted its adult blogs from the major search engines, they wanted to keep Google users from seeing that Tumblr has porn. This, of course, runs counter to the fundamental promise of Tumblr, as a publishing platform, that Tumblr users (NSFW and otherwise) count on. And users fumed: “Removal from search in every way possible is the closest thing Tumblr could do to deleting the blogs altogether, without actually removing 10% of its user base.” Here is where we may see the fundamental tension in the Yahoo/Tumblr partnership: they may want to allow porn, but do they want to be known for allowing porn?

Tumblr also apparently wanted to extend the checkpoint in the mobile environment — or perhaps was required to, by Apple. Many services, especially those spurred or required by Apple to do so, aim to prevent the “accidental porn” situation: if I’m searching for something innocuous, can they prevent a blast of unexpected porn in response to my query? To some degree, the “NSFW” rating and the “safe” setting should handle this, but of course content that a blogger failed (or refused) to flag still slips through. So Tumblr (and other sites) institute a second checkpoint: if the search term might bring back adult content, block all the results for that term. In Tumblr, this is based on tags: bloggers add tags that describe what they’ve posted, and search queries seek matches in those tags.

When you try to choreograph users based on search terms and tags, you’ve doubled your problem. This is not clean, assured data like a self-rating of adult content or the age on a driver’s license. You’re ascertaining what the producer meant when they tagged a post with a certain term, and what the searcher means when they use the same term as a query. If I search for the word “gay,” I may be looking for a gay couple celebrating the recent DOMA decision on the steps of the Supreme Court — or “celebrating” bent over the arm of the couch. Very hard for Tumblr to know which I wanted, until I click or complain.
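
A similarly hypothetical sketch shows why this second checkpoint is so much blunter: it operates on the query term itself, before any comparison of settings and ratings can happen. The blocklist below is my own invention for illustration, not Tumblr’s actual list:

```python
# A sketch of the second, term-level checkpoint: if the query term might
# return adult content, suppress all results for that term. The blocklist
# here is a hypothetical illustration, not Tumblr's actual list.
BLOCKED_TERMS = {"porn", "sex", "gay"}

def tag_search(query, posts):
    """posts: list of dicts like {"blog": ..., "tags": {...}}"""
    if query.lower() in BLOCKED_TERMS:
        return []  # the blunt instrument: no results at all, innocent or not
    return [p for p in posts if query.lower() in p["tags"]]

# A photo of two friends celebrating the DOMA decision vanishes along with
# the actual porn, because the checkpoint can only see the shared term.
posts = [
    {"blog": "friends", "tags": {"gay", "doma", "celebration"}},
    {"blog": "xxx", "tags": {"gay", "nsfw"}},
]
print(tag_search("gay", posts))   # -> []
print(tag_search("doma", posts))  # -> only the innocent post slips through
```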

Sometimes these terms line up quite well, either by accident or on purpose: for instance, when users of Instagram indicated pornographic images by tagging them “pornstagram,” a made-up word that would likely mean nothing else. (This search term no longer returns any results, although — whoa! — it does on Tumblr!) But in just as many cases, when you use the word gay to indicate a photo of your two best friends in a loving embrace, and I use the word gay in my search query to find X-rated pornography, it becomes extremely difficult for the search algorithm to understand what to do about all of those meanings converging on a single word.

Blocking all results to the query “gay,” or “sex,” or even “porn” may seem, from one vantage point (Yahoo’s?), to solve the NSFW problem. Tumblr is not alone in this regard: Vine and Instagram return no results for the search term “sex,” though that does not mean that no one’s using it as a tag – and though Instagram returns millions of results for “gay,” Vine, like Tumblr, returns none. Pinterest goes further, using the search for “porn” as a teaching moment: it pops up a reminder that nudity is not permitted on the site, then returns results which, because of the policy, are not pornographic. By blocking search terms/tags, no porn accidentally makes it to the mobile platform or to the eyes of its gentle user. But this approach fails miserably at getting adult content to those who want it, and more importantly, in Tumblr’s case, it relegates a broadly used and politically vital term like “gay” to the smut pile.

Tumblr’s semi-apology has begun to make amends. The two categories, “NSFW” and “adult,” are now just “NSFW,” and the blogs marked as such are now available in Tumblr’s internal search and in the major search engines. Tumblr has promised to work on a more intelligent filtering system. But any checkpoint that depends on data that’s expressive rather than systemic — what we say, as opposed to what we say we are — is going to step clumsily both on the sharing of adult content and on the ability to talk about subjects that have some sexual connotations, and could architect the spirit and promise out of Tumblr’s publishing platform.

This was originally posted at Culture Digitally.