
Should You Boycott Traditional Journals?

March 31, 2015

(Or, Should I Stay or Should I Go?)

Is it time to boycott “traditional” scholarly publishing? Perhaps you are an academic researcher, just like me. Perhaps, just like me, you think that there are a lot of exciting developments in scholarly publishing thanks to the Internet, and you want to support them. You also want people to read your research. But you still need to be sure that your publication venues are held in high regard.

Or maybe you just receive research funding that is subject to new open access requirements.

Ask me about OPEN ACCESS

Academia is a funny place. We are supposedly self-governing, so if we don’t like how our scholarly communications are organized, we should be able to fix this ourselves. If we are dissatisfied with the journal system, we’re going to have to do something about it. The question of whether or not it is now time to eschew closed-access journals comes up a fair amount among my peers.

It comes up often enough that a group of us at Michigan decided to write an article on the topic. Here’s the article. It just came out yesterday (open access, of course):

Carl Lagoze, Paul Edwards, Christian Sandvig, & Jean-Christophe Plantin. (2015). Should I Stay or Should I Go? Alternative Infrastructures in Scholarly Publishing. International Journal of Communication 9: 1072-1081.

The article is intended for those who want some help figuring out the answer to the question its title poses: Should I stay or should I go? It’s meant to help you decipher the unstable landscape of scholarly publishing these days. (Note that we restrict our topic to journal publishing.)

Researching it was a lot of fun, and I learned quite a bit about how scholarly communication works.

  • It contains a mention of the first journal. Yes, the first one that we would recognize as a journal in today’s terms. It’s Philosophical Transactions, published by the Royal Society of London. It’s now on Volume 373.
  • It should teach you about some of the recent goings-on in this area. Do you know what a green repository is? What about an overlay journal? Or the “serials crisis”?
  • It addresses a question I’ve had for a while: What the heck are those arXiv people up to? If it’s so great, why hasn’t it spread to all disciplines?
  • There’s some fun discussion of influential experiments in scholarly publishing. Remember the daring foundation of the Electronic Journal of Communication? Vectors? Were you around way-back-in-the-day when the pioneering, Web-based JCMC looked like this hot mess below? Little did we know that we were actually looking at the future. (*)


(JCMC circa 1995)

(*): Unless we were looking at the Gopher version, then in that case we were not looking at the future.

Ultimately, we adapt a framework from Hirschman that we found helpful for thinking about what is going on today in scholarly communication. Feel free to play the following song on a loop as you read it.

(This post has been cross-posted on multicast.)

Introducing the 2015 MSR SMC PhD Interns!

March 24, 2015

Well, after a truly exciting spell of reviewing an AMAZING set of applications for our 2015 PhD Internship Program, we had the absolutely excruciating task of selecting just a few from the pool (note: this is our Collective’s least favorite part of the process).

Without further ado, we are pleased to announce our 2015 Microsoft Research SMC PhD interns:

At MSR New England:

Aleena Chia


Aleena Chia is a Ph.D. Candidate in Communication and Culture at Indiana University. Her ethnographic research investigates the affective politics and moral economics of participatory culture, in the context of digital and live-action game worlds. She is a recipient of the Wenner-Gren Dissertation Fieldwork Grant and has published work in American Behavioral Scientist. Aleena will be working with Mary L. Gray, researching connections between consumer protests, modularity of consumer labor, and portability of compensatory assets in digital and live-action gaming communities.

Stacy Blasiola


Stacy Blasiola is a Ph.D. Candidate in the Department of Communication at the University of Illinois at Chicago and also holds an M.A. in Media Studies from the University of Wisconsin at Milwaukee. Stacy uses a mixed-methods approach to study the social impacts of algorithms. Using big data methods, she examines how news events appear in newsfeeds; using qualitative methods, she investigates how the people who use digital technologies understand, negotiate, and challenge the algorithms that present digital information. As a recipient of a National Science Foundation IGERT Fellowship in Electronic Security and Privacy, her work includes approaching algorithms, and the databases that enable them, from a privacy perspective. Stacy will be working with Nancy Baym and Tarleton Gillespie on a project that analyzes the discursive work of Facebook regarding its social newsfeed algorithm.

J. Nathan Matias

Nathan Matias is a Ph.D. Student at the MIT Media Lab Center for Civic Media, a fellow at the Berkman Center for Internet and Society, and a DERP Institute fellow. Nathan researches technology for civic cooperation, activism, and expression through qualitative action research with communities, data analysis, software design, and field experiments. Most recently, Nathan has been conducting large-scale studies and interventions on the effects of gender bias, online harassment, gratitude, and peer thanks in social media, corporations, and creative communities like Wikipedia. Nathan was an MSR FUSE Labs intern in 2013 with Andrés Monroy-Hernández, where he designed NewsPad, a collaborative technology for neighborhood blogging. Winner of the ACM’s Nelson Prize, Nathan has published data journalism, technology criticism, and literary writing for the Atlantic, the Guardian, and PBS. Before MIT, he worked at technology startups Texperts and SwiftKey, whose products have reached over a hundred million people worldwide. At MSR, Nathan will be working with Tarleton Gillespie and Mary L. Gray, studying the professionalization of digital labor among community managers and safety teams in civic, microwork, and peer economy platforms. He will also be writing about ways that marginalized communities use data and code to respond to and reshape their experience of harassment and hate speech online.

At MSR New York City:

Ifeoma Ajunwa


Ifeoma Ajunwa is a Paul F. Lazarsfeld Fellow in the Sociology Department at Columbia University. She received her MPhil in Sociology from Columbia in 2012. She was the recipient of the AAUW Selected Professions Fellowship in law school, after which she practiced business law, international law, and intellectual property law. She has also conducted research for such organizations as the NAACP, the United Nations Human Rights Council, the ACLU of NY (the NYCLU), and UNESCO. Her independent research before graduate school includes a pilot study at Stanford Law School in which she interrogated the link between stereotype threat and the intersecting dynamics of gender, race, and economic class in relation to bar exam preparation and passage. Ifeoma’s writing has been published in the New York Times and the Huffington Post, and she has been interviewed for Uptown Radio in NYC. She will be working with Kate Crawford at MSR-NYC on data discrimination.

Facebook’s improved “Community Standards” still can’t resolve the central paradox

March 18, 2015

On March 16, Facebook updated its “Community Standards,” in ways both cosmetic and substantive. The version it replaced, though it had enjoyed minor updates, had been largely the same since at least 2011. The change comes on the heels of several other sites making similar adjustments to their own policies, including Twitter, YouTube, Blogger, and Reddit – and after months, even years, of growing frustration and criticism on the part of social media users about platforms and their policies. This frustration and criticism is of two minds: sometimes, criticism about overly conservative, picky, vague, or unclear restrictions; but also, criticism that these policies fall far short of protecting users, particularly from harassment, threats, and hate speech.

“Guidelines” documents like this one are an important part of the governance of social media platforms; though the “terms of service” are a legal contract meant to spell out the rights and obligations of both the users and the company — often to impose rules on users and indemnify the company against any liability for their actions — it is the “guidelines” that are more likely to be read by users who have a question about the proper use of the site, or find themselves facing content or other users that trouble them. More than that, they serve a broader rhetorical purpose: they announce the platform’s principles and gesture toward the site’s underlying approach to governance.

Facebook described the update as a mere clarification: “While our policies and standards themselves are not changing, we have heard from people that it would be helpful to provide more clarity and examples, so we are doing so with today’s update.” Most of the coverage among the technology press embraced this idea (like here, here, here, here, here, and here). But while Facebook’s policies may not have changed dramatically, so much is revealed in even the most minor adjustments.

First, it’s revealing to look not just at what the rules say and how they’re explained, but how the entire thing is framed. While these documents are now ubiquitous across social media platforms, it is still a curiosity that these platforms so readily embrace and celebrate the role of policing their users – especially amidst the political ethos of Internet freedom, calls for “Net neutrality” at the infrastructural level, and the persistent dreams of the open Web. Every platform must deal with this contradiction, and they often do it in the way they introduce and describe guidelines. These guidelines pages inevitably begin with a paragraph or more justifying not just the rules but the platform’s right to impose them, including a triumphant articulation of the platform’s aspirations.

Before this update, Facebook’s rules were justified as follows: “To balance the needs and interests of a global population, Facebook protects expression that meets the community standards outlined on this page.” In the new version, the priority has shifted, from protecting speech to ensuring that users “feel safe”: “Our goal is to give people a place to share and connect freely and openly, in a safe and secure environment.” I’m not suggesting that Facebook has stopped protecting speech in order to protect users. All social media platforms struggle to do both. But which goal is most compelling, which is held up as the primary justification, has changed.

This emphasis on safety (or, more accurately, the feeling of safety) is also evident in the way the rules are now organized. What were eleven rule categories in the old version are now fifteen, grouped into four broad categories – the first of which is “keeping you safe.” This is indicative of the effect of the criticisms of recent years: that social networking sites like Facebook and Twitter have failed users, particularly women, in the face of vicious trolling.

As for the rules themselves, it’s hard not to see them as the aftermath of so many particular controversies that have dogged the social networking site over the years. Facebook’s Community Standards increasingly look like a historic battlefield: while it may appear to be a bucolic pasture, the scars of battle remain visible, carved into the land, thinly disguised beneath the landscaping and signage. Some of the most recent skirmishes are now explicitly addressed: a new section on sexual violence and exploitation includes language prohibiting revenge porn. The rule against bullying and harassment now includes a bullet point prohibiting “Images altered to degrade private individuals,” a clear reference to the Photoshopped images of bruised and battered women that were deployed (note: trigger warning) against Anita Sarkeesian and others in the Gamergate controversy. The section on self-injury now includes a specific caveat that body modification doesn’t count.

In this version, Facebook seems extremely eager to note that contentious material is often circulated for publicly valuable purposes, including awareness raising, social commentary, satire, and activism. A version of this appears again and again, as part of the rules against graphic violence, nudity, hate speech, self-injury, dangerous organizations, and criminal activity. In most cases, these socially valuable uses are presented as a caveat to an otherwise blanket prohibition: even hate speech, which is prohibited almost entirely and in the strongest terms, now has a caveat protecting users who circulate examples of hate speech for the purposes of education and raising awareness. It is clear that Facebook is ever more aware of its role as a public platform, where contentious politics and difficult debate can occur. Now it must offer to patrol the tricky line between the politically contentious and the culturally offensive.

Oddly, in the rule about nudity, and only there, the point about socially acceptable uses is not a caveat, but part of an awkward apology for imposing blanket restrictions anyway: “People sometimes share content containing nudity for reasons like awareness campaigns or artistic projects. We restrict the display of nudity because some audiences within our global community may be sensitive to this type of content – particularly because of their cultural background or age. In order to treat people fairly and respond to reports quickly, it is essential that we have policies in place that our global teams can apply uniformly and easily when reviewing content. As a result, our policies can sometimes be more blunt than we would like and restrict content shared for legitimate purposes.” Sorry, Femen. On the other hand, apparently it’s okay if it’s cartoon nudity: “Restrictions on the display of both nudity and sexual activity also apply to digitally created content unless the content is posted for educational, humorous, or satirical purposes.” A nod to Charlie Hebdo, perhaps? Or just a curious inconsistency.

The newest addition to the document, and the one most debated in the press coverage, is the new way Facebook articulates its long-standing requirement that users use their real identity. The rule was recently challenged by a number of communities eager to use Facebook under aliases or stage names, as well as by communities (such as Native Americans) who find themselves on the wrong side of Facebook’s policy simply because the traditions of naming in their culture do not fit Facebook’s. After the 2014 scuffle with drag queens about the right to use a stage identity instead of or alongside a legal one, Facebook promised to make its rule more accommodating. In this update Facebook has adopted the phrase “authentic identity,” its way of allowing adopted performance names while continuing to prohibit duplicate accounts. The update is also a chance for Facebook to re-justify the rule: at more than one point in the document, and in the accompanying letter from Facebook’s content team, this “authentic identity” requirement is presented as assuring responsible and accountable participation: “Requiring people to use their authentic identity on Facebook helps motivate all of us to act responsibly, since our names and reputations are visibly linked to our words and actions.”

There is also some new language in an even older battle: for years, Facebook has been removing images of women breastfeeding, as a violation of its rules against nudity. This has long angered a community of women who strongly believe that sharing such images is not only their right, but important for new mothers and for the culture at large (only in 2007, 2008, 2010, 2011, 2012, 2013, 2014, 2015…). After years of disagreements, protests, and negotiations, in 2014 Facebook published a special rule saying that it would allow images of breastfeeding so long as they did not include an exposed nipple. This was considered a triumph by many involved, though reports continue to emerge of women having photos removed and accounts suspended despite the promise. This assurance reappears in the new version of the community standards just posted: “We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring.” The Huffington Post reads this as (still) prohibiting breastfeeding photos if they include an exposed nipple, but if the structure of this sentence is read strictly, the promise to “always” allow photos of women breastfeeding seems to me to trump the previous phrase about exposed nipples. I may be getting nitpicky here, but only as a result of years of back and forth about the precise wording of this rule, and about Facebook’s willingness and ability to honor it in practice.

In my own research, I have tracked the policies of major social media platforms, noting both the changes and continuities, the justifications and the missteps. One could dismiss these guidelines as mere window dressing — as a performed statement of coherent values that do not in fact drive the actual enforcement of policy on the site, which so often turns out to be more slapdash or strategic or hypocritical. I find it more convincing to say that these are statements of both policy and principle that are struggled over at times, are deployed when they are helpful and can be sidestepped when they’re constraining, and that do important discursive work beyond simply guiding enforcement. These guidelines matter, and not only when they are enforced, and not only for lending strength to the particular norms they represent. Platforms adjust their guidelines in relation to each other, and smaller sites look to the larger ones for guidance, sometimes borrowing them wholesale. The rules as articulated by Facebook matter well beyond Facebook. And they perform, and therefore reveal in oblique ways, how platforms see themselves in the role of public arbiters of cultural value. They are also by no means the end of the story, as no guidelines in the abstract could possibly line up neatly with how they are enforced in practice.

Facebook’s newest update is consistent with changes over the past few years on many of the major sites: a common urge both to impose more rules and to use more words to describe them clearly. This is a welcome adjustment, as so many of the early policy documents, including Facebook’s, were sparse, abstract, and unprepared for the variety and gravity of questionable content and awful behavior they would soon face. There are some laudable principles made explicit here. On the other hand, adding more words, more detailed examples, and further clarifications does not – cannot – resolve the other challenge: these are still rules that must be applied in specific situations, requiring judgment calls made by overworked, freelance clickworkers. And while it is a relief to see Facebook and other platforms taking a firmer stand on issues like misogyny, rape threats, trolling, and self-harm, these stands are often accompanied by ever more restriction not just of bad behavior but of questionable content, where the mode of ‘protection’ means something quite different, much more patronizing. The basic paradox remains: these are private companies policing public speech, often intervening according to a culturally specific or financially conservative morality. It is the next challenge for social media to strike a better balance in this regard: intervening more effectively to protect users themselves, while intervening less on behalf of users’ values.

This is cross-posted on the Culture Digitally blog.

Update on the 2015 SMC PhD Internship season

March 1, 2015

Hello!
We wanted to post a quick update on the status of the 2015 SMC PhD Internship Program. The application season closed January 31 and we ended up with more than 240 stellar candidates for the program. Thank you for your patience with our application process, and please forgive the delays in sending an update.

The SMC was humbled and tickled pink by the quality of the applications that we received for the PhD internship this year. It’s always hard to let go of such a range of incredible work in our midst, and that made it very difficult to reach even a short list of applicants to interview, let alone select three final candidates. We have reached out to finalists and are in the thick of finalizing offers. If you are reading this message and have not heard from us by now, I’m afraid that means that we could not place you with us this year. And, due to the large number of applications, we cannot offer reviews of individual applications.

We will announce the 2015 PhD interns in June here on the Social Media Collective blog. The 2016 PhD internship and Postdoc application rounds will open again in Fall 2015 with an announcement on the SMC blog.

Please know that this was an extremely competitive pool. You all are doing a LOT of amazing work out there! We very much appreciate the applications, welcome the opportunity to learn about your work, and encourage you to try again next year if you fit the criteria. Your applications leave us very excited about the direction of social media scholarship.

We look forward to crossing paths with you at conferences, in journal pages, and online.

Best wishes,

Mary L. Gray (on behalf of the SMC)

The Google Algorithm as a Robotic Nose

January 16, 2015

Algorithms, in the view of author Christopher Steiner, are poised to take over everything. Algorithms embedded in software are now everywhere: Netflix recommendations, credit scores, driving directions, stock trading, Google search, Facebook’s news feed, the TSA’s process to decide who gets searched, the Home Depot prices you are quoted online, and so on. Just a few weeks ago, Ashkan Soltani, the new Chief Technologist of the FTC, said that “algorithmic transparency” is his central priority for the US government agency tasked with administering fairness and justice in trade. Commentators are worried that the rise of hidden algorithmic automation is leading to a problematic new “black box society.”

But given that we want to achieve these “transparent” algorithms, how would we do that? Manfred Broy, writing in the context of software engineering, has said that one of the frustrations of working with software is that it is “almost intangible.” Even if we suddenly obtained the source code for anything we wanted (which is unlikely), it is usually not clear what the code is doing. How can we begin to have a meaningful conversation about the consequences of “an algorithm” without some broad, shared understanding of what it is and what it is doing?


The answer, even among experts, is that we use metaphor, cartoons, diagrams, and abstraction. As a small beginning to tackling this problem of representing the algorithm, this week I have a new journal article out in the open access journal Media-N, titled “Seeing the Sort.” In it, I try for a critical consideration of how we represent algorithms visually. From flowcharts to cartoons, I go through examples of “algorithm public relations,” meaning both how algorithms are revealed to the public and also what spin the visualizers are trying for.

The most fun of writing the piece was choosing the examples, which include The Algo-Rythmics (an effort to represent algorithms in dance), an algorithm represented as a 19th century grist mill, and this Google cartoon that represents its algorithm as a robotic nose that smells Web pages:

The Google algorithm as a robotic nose that smells Web pages.

Read the article:

Sandvig, Christian. (2015). Seeing the Sort: The Aesthetic and Industrial Defense of “The Algorithm.” Media-N. vol. 10, no. 1. http://median.newmediacaucus.org/art-infrastructures-information/seeing-the-sort-the-aesthetic-and-industrial-defense-of-the-algorithm/

(this was also cross-posted to multicast.)

 

[C-SPAN] The Communicators feat. Mary Gray

December 18, 2014

Our own Mary Gray was featured on C-SPAN’s “The Communicators,” discussing the ethical implications of personal data collection online and its use in commercial and academic contexts.

You can watch this special here.


5 Years of New Media in 38 Words

December 8, 2014

In the fall of 2009, I sent the manuscript for my book Personal Connections in the Digital Age off to press. I’ve just finished the index for a second edition, which will be published by Polity in mid-2015 (in time for fall classes!). The list of terms I added provides a fun little peek into the last 5 years of digital media and social life. What would you have added? What words would you expect to see in a third edition?

#
4chan
Algorithms
Black Twitter
Broadband
Catfish
Cortana
Culture jamming
Cyberbullying
Emojis
FourSquare (also: Swarm)
Grindr
Hashtag
Her
Imgur
Indie Go-Go
Instagram
Kickstarter
Knowyourmeme.com
Locative media
Lolspeak
Micro-celebrity
Pinterest
Polymedia
Reddit
Self-branding
Selfie (selfies)
Sina Weibo
Siri
Slacktivism
Snapchat
Spotify
Tumblr
WhatsApp
Wikipedia
WordPress.com
Xbox Live
XKCD
