Research:Online harassment resource guide
This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.
Online harassment has been a persistent and intractable problem for digital communications since the earliest days of the Internet. Despite substantial efforts from many quarters over recent decades, it remains a central problem, with very real harms and risks for many people, particularly women and minorities.
Valuable scholarship on harassment and responses to it has emerged in recent years, but it is often scattered across disciplines and methodologies, and it frequently stands at a scholarly distance from the work of practitioners and advocates. This literature review gathers research on understanding and responding to online harassment, connecting and summarizing these disparate efforts with a focus on scholarly literature and reports by organizations.
Taken together, this review offers a broad overview and starting point for scholars, advocates, and practitioners working to understand and respond to online harassment.
About this resource guide
In this guide, we summarize and suggest first readings for anyone getting started in understanding the issues surrounding online harassment. Our short list is based on a much larger list of articles in a shared Zotero group. We encourage you to visit the group to go beyond these introductory readings and browse the full list (link TBA).
The literature considered in this review was solicited from more than twenty scholars and practitioners conducting research related to online harassment. Each researcher was invited to submit bibliographies of relevant work, as well as suggestions of other scholars to invite. This initial sample was extended through citation analysis of the bibliographies they shared. Submitted scholarship was clustered into themes, which were evaluated in workshops that also identified missing literatures and lists of further scholars.
Citing this resource guide
If you make use of this resource in the course of your academic research, please cite it as follows:
- Matias, J. N., Benesch, S., Earley, P., Gillespie, T., Keegan, B., Levy, N., & Maher, E. (2015). Online Harassment Resource Guide. Wikimedia Meta-Wiki: Research. Retrieved on Nov 20, 2015 from https://meta.wikimedia.org/wiki/Research:Online_harassment_resource_guide
How to contribute
This resource guide is a collaborative effort; if you have suggestions, please share them! We meet semi-regularly for working groups and check-ins, and we would love to welcome you. Here are several ways to contribute:
- Tell us what you need to know if you're a practitioner or researcher who has questions or problems not covered in this guide.
- Suggest sections by contacting us. Our bibliography includes many sections not yet in this document, so there may be an opportunity to collaborate.
- Share your bibliography or suggest papers by contacting us so we can add you to the Zotero group.
Starting points for understanding online harassment
Understanding online harassment
What is online harassment? Who engages in it, and how common is it in society? The following papers make good efforts to answer these questions and summarize the state of academic knowledge.
What is online harassment?
In a U.S. study for the Pew Research Center, Duggan surveyed U.S. residents on their experiences of online harassment, including being called an offensive name, being embarrassed, being threatened, being harassed over a sustained period, being sexually harassed, and being stalked. Cyberbullying has been a focus of much of the research about online harassment, an issue covered in the Berkman review "Bullying in a Networked Era," which defines bullying, identifies the people involved, and describes the norms around bullying and help-seeking among young people. The law sometimes draws categories differently than people experience them: Marwick and Miller offer a clear, accessible outline of obscenity, defamation, "fighting words," "true threats," unmasking, hate speech, and hate crimes in U.S. law, with a substantial effort to define hate speech.
- Duggan, M. (2014, October). Online Harassment. Pew Research Center.
- Levy, N., Cortesi, S., Gasser, U., Crowley, E., Beaton, M., Casey, J., & Nolan, C. (2012). Bullying in a Networked Era: A Literature Review (SSRN Scholarly Paper No. ID 2146877). Rochester, NY: Social Science Research Network.
- Marwick, A. E., & Miller, R. W. (2014). Online Harassment, Defamation, and Hateful Speech: A Primer of the Legal Landscape (SSRN Scholarly Paper No. ID 2447904). Rochester, NY: Social Science Research Network.
Who are harassers?
Most research on harassers focuses on very specific populations or contexts, so we encourage you to consult our Zotero group (TBA) for more detail. Harassers and receivers of harassment aren't always mutually exclusive groups, argue Schrock and boyd in their literature review of research on solicitation, harassment, and cyberbullying. In a review of psychology literature, Foody, Samara, and Carlbring summarize research on the psychology of young and adult bullies, and the psychological risks of engaging in bullying behavior online.
- Schrock, Andrew and danah boyd (2011). "Problematic Youth Interaction Online: Solicitation, Harassment, and Cyberbullying." In Computer-Mediated Communication in Personal Relationships (Eds. Kevin B. Wright & Lynn M. Webb). New York: Peter Lang
- Foody, M., Samara, M., & Carlbring, P. (2015). A review of cyberbullying and suggestions for online psychological therapy. Internet Interventions, 2(3), 235–242.
Trolls and trolling culture
The idea of a "troll" is used in different ways by different people, often to serve particular ends. Coleman considers the history of troll culture in transgressive politics like phreaking and hacking. Phillips argues that troll culture's engagement in spectacle interacts with and relies on online media's incessant demand for scandal. Ryan Milner's work on trolling on 4chan and Reddit shows how the "logic of lulz" is used to justify racist, sexist discourse as well as counter-speech that critiques that sexism. Is there something trolls have in common? Buckels and colleagues paid people $0.50 on Amazon Mechanical Turk to take extensive personality tests on the enjoyment of trolling, though the results are limited by reliance on self-reported trolling and by the sampling constraints of recruiting trolls via Mechanical Turk. Finally, in Bergstrom's study of responses to a "troll" on Reddit, we see accusations of trolling used to justify violating someone's privacy and shutting down debate about important community issues.
- Coleman, E. G. (2011). Phreaks, Hackers, and Trolls. In M. Mandiberg (Ed.), The Social Media Reader. New York: New York University Press.
- Phillips, W. (2015). This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. MIT Press.
- Milner, R. M. (2013). Hacking the Social: Internet Memes, Identity Antagonism, and the Logic of Lulz. Fibreculture Journal, (22), 62–92.
- Buckels, E. E., Trapnell, P. D., & Paulhus, D. L. (2014). Trolls just want to have fun. Personality and Individual Differences, 67, 97–102.
- Bergstrom, K. (2011). “Don’t feed the troll”: Shutting down debate about community expectations on Reddit.com. First Monday, 16(8).
Flagging and reporting systems
Platforms often offer systems for flagging or reporting online harassment. These readings describe this approach, its effects, and its limitations.
What is a flag for?
Crawford and Gillespie "unpack the working of the flag, consider alternatives that give greater emphasis to public deliberation, and consider the implications for online public discourse of this now commonplace yet rarely studied sociotechnical mechanism."
- Crawford, Kate, and Tarleton L. Gillespie (2014) What Is a Flag for? Social Media Reporting Tools and the Vocabulary of Complaint. New Media & Society.
What is the flagging process like for those involved?
The report by Matias, Johnson et al. describes the kinds of reports submitted to Twitter through a nonprofit service, what the process of responding to reports is like, the problems (social, legal, technical) that prevent flags from being handled, and the risks that responders face. These systems are not always so formal: Geiger and Ribes describe the role of distributed cooperation and DIY bots in responding to vandalism on Wikipedia.
- Matias, J. Nathan, Amy Johnson, Whitney Erin Boesel, Brian Keegan, Jaclyn Friedman, Charlie DeTar (2015) Reporting, Reviewing, and Responding to Harassment on Twitter. arXiv Preprint arXiv:1505.03359.
- Geiger, R. Stuart, and David Ribes (2010) The Work of Sustaining Order in Wikipedia: The Banning of a Vandal. In Proceedings of CSCW 2010 Pp. 117–126. ACM.
What is the role of Terms of Service in flagging systems?
Flagging systems often rely on terms of use or other platform ("intermediary") policies, which means that flagging is as much a legal approach as a technical one. Wauters and Citron both address the legal and company-policy questions from the perspective of user safety and protection.
- Wauters, E., E. Lievens, and P. Valcke (2014) Towards a Better Protection of Social Media Users: A Legal Perspective on the Terms of Use of Social Networking Sites. International Journal of Law and Information Technology 22(3): 254–294.
- Citron, D. K., & Norton, H. L. (2011). Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review, 91, 1435.
How do interest groups mobilize to achieve political goals through flagging and reporting?
Thakor and boyd examine how anti-trafficking advocates pursue their goals through platform policies. Chris Peterson's thesis looks at how groups have gamed flagging and voting systems to promote their ideas and bury opposing views.
- Thakor, Mitali, and Danah Boyd (2013) Networked Trafficking: Reflections on Technology and the Anti-Trafficking Movement. Dialectical Anthropology 37(2): 277–290.
- Peterson, C. E. (2013). User-generated censorship: manipulating the maps of social media. Master's thesis, Massachusetts Institute of Technology.
Volunteer moderators
One approach to dealing with online harassment is to recruit volunteer moderators or responders to take a special role on a platform or in a community. This is the approach taken by Google Groups, Meetup.com, Reddit, Facebook Groups, and many online forums.
What actions could moderators be supported to take?
Grimmelmann's paper offers a helpful taxonomy of moderation strategies, focusing on the "verbs of moderation" and the kinds of powers you might give moderators. Grimmelmann also cites many papers and articles relevant to these possible actions. Warnick offers an alternative to Grimmelmann's systematic approach, describing the "ethos" that is created through community and moderation by a few. In ongoing work, Matias is researching the work of Reddit's moderators.
- Grimmelmann, James (2015) The Virtues of Moderation. SSRN Scholarly Paper, ID 2588493. Rochester, NY: Social Science Research Network.
- Warnick, Quinn (2010) "The four paradoxes of Metafilter" in What We Talk about When We Talk about Talking: Ethos at Work in an Online Community. Graduate Theses and Dissertations.
- Matias, J. Nathan (2015) Recognizing the Work of Reddit's Moderators (work in progress). Microsoft Research Social Media Collective.
Is asking volunteers to moderate online conversation asking them to do free work?
Postigo's paper offers an overview of AOL's community leaders and the Department of Labor investigation into the work of moderators in the early 2000s.
- Postigo, H. (2009) America Online Volunteers: Lessons from an Early Co-Production Community. International Journal of Cultural Studies 12(5): 451–469.
Is self-governance democratic or oligarchic?
Shaw and Hill's quantitative research across 683 different wikis shows that "peer production entails oligarchic organizational forms," in line with a broader tendency for large democracies to become oligarchic. This issue is taken up in Nathaniel Tkacz's book, where he outlines the kinds of contention that occur in "open organizations," a book that is as much about the idea of Wikipedia as about the way Wikipedia actually works.
- Shaw, Aaron, and Benjamin M. Hill (2014) Laboratories of Oligarchy? How the Iron Law Extends to Peer Production. Journal of Communication 64(2): 215–238.
- Tkacz, Nathaniel (2014) Wikipedia and the Politics of Openness. Chicago; London: University of Chicago Press.
Why do people do volunteer moderation?
In behavioural economics experiments, Hergueux finds that Wikipedia's administrators are motivated more by social image than by reciprocity or altruism. Butler, Sproull, Kiesler, and Kraut offer survey results showing a diversity of formal and informal community work in online groups, and that people's participation can be related to how well they know other community members.
- Hergueux, Jérôme, Yann Algan, Yochai Benkler, and Mayo Fuster Morell (2013) Cooperation in a Peer-Production Economy Experimental Evidence from Wikipedia. In 12th Journées Louis-André Gérard-Varet.
- Butler, Brian, Lee Sproull, Sara Kiesler, and Robert Kraut (2002) Community Effort in Online Groups: Who Does the Work and Why. Leadership at a Distance: Research in Technologically Supported Work: 171–194.
Automated detection and prediction of social behaviour online
Machine learning systems play many roles on social platforms, from spam filtering and vandalism detection to automated measures of trust and reliability. Building effective models is the first hard problem. It's no less hard to figure out how to respond when a machine learning system makes a judgment about a person or their speech.
Detecting high-quality contributions
The technical and ethical contours of automated detection systems are illustrated in systems that try to find high-quality conversation. In "How Useful Are Your Comments?" Siersdorfer et al. describe what kinds of comments are upvoted by YouTube users, showing how those vary across topics, and illustrating the problem of "comment rating variance," where upvoters disagree strongly. In "The Editor's Eye," Diakopoulos analyzes New York Times comments to identify their "article relevance" and "conversational relevance." Finally, Castillo et al. analyze Tweet content and sharing patterns to try to detect "information credibility" during fast-moving news events.
- Siersdorfer, S., Chelaru, S., Nejdl, W., & San Pedro, J. (2010). How useful are your comments?: analyzing and predicting youtube comments and comment ratings. In Proceedings of the 19th international conference on World wide web (pp. 891–900). ACM.
- Diakopoulos, N. A. (2015). The Editor’s Eye: Curation and Comment Relevance on the New York Times. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 1153–1157). ACM.
- Castillo, C., Mendoza, M., & Poblete, B. (2013). Predicting Information Credibility in Time-Sensitive Social Media. Internet Research, 23(5), 560–588.
Detecting deviant behavior
Is it possible to detect harassment? One of the hardest problems is how to formally define the deviant behavior. Myle Ott has a pair of papers on detecting deception in hotel reviews, building Review Skeptic, and then using it to estimate the prevalence of deception in online reviews, part of a wider literature on opinion spam detection. Sood employs workers on Mechanical Turk to train machine learning systems to distinguish insults from profanity. Dinakar redefines the problem as "detection of sensitive topics," differentiates bullying within those topics, and trains per-topic harassment classifiers. The paper by Tran outlines a language-agnostic approach for detecting Wikipedia vandalism, trained on the moderation behavior of anti-vandalism bots and large numbers of volunteers. Cheng et al. identify characteristics of antisocial behavior in three diverse online discussion communities that predict, from a very early stage, which users will eventually be banned.
Detection- and punishment-based approaches may also fuel harassment. The final two papers pair work by Ahmad et al. on automatically detecting gold farmers in World of Warcraft with work by Nakamura that unpacks the racism associated with sentiment toward, and vigilante attacks on, gold farmers. A minimal sketch of the supervised classification approach described in several of these papers follows the reference list below.
- Ott, M., Choi, Y., Cardie, C., & Hancock, J. T. (2011). Finding Deceptive Opinion Spam by Any Stretch of the Imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1 (pp. 309–319). Stroudsburg, PA, USA: Association for Computational Linguistics.
- Sood, S. O., Churchill, E. F., & Antin, J. (2012). Automatic identification of personal insults on social news sites. Journal of the American Society for Information Science and Technology, 63(2), 270–285.
- Dinakar, K., Reichart, R., & Lieberman, H. (2011). Modeling the detection of Textual Cyberbullying. ICWSM 2011.
- Tran, K.-N., & Christen, P. (2015). Cross-Language Learning from Bots and Users to Detect Vandalism on Wikipedia. Knowledge and Data Engineering, IEEE Transactions on, 27(3), 673–685.
- Ahmad, M. A., Keegan, B., Srivastava, J., Williams, D., & Contractor, N. (2009). Mining for gold farmers: Automatic detection of deviant players in mmogs. In Computational Science and Engineering, 2009. CSE’09. International Conference on (Vol. 4, pp. 340–345).
- Nakamura, L. (2009). Don’t hate the player, hate the game: The racialization of labor in World of Warcraft. Critical Studies in Media Communication, 26(2), 128–144.
- Cheng, J., Danescu-Niculescu-Mizil, C., & Leskovec, J. (2015). Antisocial Behavior in Online Discussion Communities. In Proc. ICWSM 2015.
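To make the pattern behind these detection systems concrete, here is a minimal sketch of the supervised-classification approach several of the papers above describe: collect human-labeled examples, extract text features, train a model, and score new content. It is written in Python with scikit-learn; the example comments, labels, and choice of a bag-of-words logistic regression are placeholders for exposition, not a reproduction of any cited system.

 # Minimal supervised text-classification sketch; data and labels are invented placeholders.
 from sklearn.feature_extraction.text import TfidfVectorizer
 from sklearn.linear_model import LogisticRegression
 from sklearn.pipeline import make_pipeline
 # Hypothetical crowd-labeled comments: 1 = insult directed at a person, 0 = not
 comments = [
     "you are an idiot and everyone knows it",
     "this whole thread is garbage",
     "thanks for the detailed explanation",
     "nobody wants you here, just leave",
 ]
 labels = [1, 0, 0, 1]
 # Bag-of-words features plus logistic regression: a simple, common baseline
 model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
 model.fit(comments, labels)
 # Score new text; a reporting system might queue anything above a threshold for human review
 score = model.predict_proba(["you clearly have no idea what you are talking about"])[0][1]
 print(round(score, 2))

As the papers above emphasize, the hard part is not the model but how the deviant behavior is defined and labeled, and what happens to people once the system makes a judgment.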
Speech and the law
Many of the actions associated with online harassment are illegal or, as some argue, ought to involve government intervention. How do we draw the lines between what communities should handle on their own, what platforms should do, and where governments should be involved? Answering this question requires us to consider a complex set of rights and risks in the U.S. (where many large platforms are based) and internationally. The issue of online harassment has prompted many to revisit principles of free speech and the relationships between platforms and governments.
If you have limited time, a good starting point is Danielle Citron's recent book on this topic:
- Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.
Principles of free speech and hate speech
In the United States, civil harassment orders are one common response to "words or behavior deemed harassing." Caplan offers an overview of this approach and how to balance it with speech rights. Tsesis offers an overview of the relationship between free speech rights under the U.S. constitution and laws about defamation, threats, and support for terrorists. Writing for the Cato Institute, Kuznicki critiques and summarizes the view that "we must balance free expression against the psychic hurt that some expressions will provoke." Segall focuses on "individually-directed threatening speech" in the United States, describing the state of law and arguing for greater clarity in U.S. legal interpretation.
- Caplan, A. H. (2012). Free Speech and Civil Harassment Orders (SSRN Scholarly Paper No. ID 2141966). Rochester, NY: Social Science Research Network.
- Tsesis, A. (2013). Inflammatory Hate Speech: Offense versus Incitement. Minnesota Law Review, 97.
- Kuznicki, J. (2009). Attack of the Utility Monsters: The New Threats to Free Speech (SSRN Scholarly Paper No. ID 1543941). Rochester, NY: Social Science Research Network.
- Segall, E. (2011). The Internet as a Game Changer: Reevaluating the True Threats Doctrine (SSRN Scholarly Paper No. ID 2004561). Rochester, NY: Social Science Research Network.
Relationships between platforms and the legal system
Balkin argues that legal efforts to address online harassment require governments to collaborate closely with platforms, often introducing systems of surveillance and censorship that go unchecked by constitutional rights in the United States. Tushnet worries that some approaches to creating legal liability for platforms over hate speech might harm speech while failing to serve the wider goals of diversity. Citron argues that platforms have far more flexibility than governments in responding to hate speech online, since platform behavior is only loosely regulated, offering powerful alternatives to the legal system. Gillespie points to how the word "platform" is deployed by companies as they try to influence information policy to "seek protection for facilitating user expression, yet also seek limited liability for what those users say."
- Balkin, J. M. (2014). Old School/New School Speech Regulation (SSRN Scholarly Paper No. ID 2377526). Rochester, NY: Social Science Research Network.
- Tushnet, R. (2008). Power without responsibility: Intermediaries and the First Amendment. George Washington Law Review, 76, 101.
- Citron, D. K., & Norton, H. L. (2011). Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review, 91, 1435.
- Gillespie, T. (2010). The politics of “platforms.” New Media & Society, 12(3), 347–364.
International approaches to speech rights
In a collected volume, Hare and Weinstein offer an overview of legal issues of speech rights and hate speech in Australia, Canada, France, Germany, Hungary, Israel, the United Kingdom, and the United States. Herz and Molnár's book offers another international perspective, with an emphasis on international law, defamation of religion, and human rights.
- Hare, I., & Weinstein, J. (Eds.). (2009). Extreme Speech and Democracy. Oxford University Press.
- Herz, M. E., & Molnár, P. (2012). The content and context of hate speech: rethinking regulation and responses. Cambridge; New York: Cambridge University Press.
Voting and distributed moderation
Distributed moderation and voting systems that invite users to upvote, downvote, or remove content have become common features on many social sites. In some systems, content is given greater or lesser prominence depending on the votes it receives. In others, like Wikipedia, any user can remove unacceptable material, often with the help of specialized quality control systems. Do these systems work? Early scholarship asked whether distributed moderation would produce high-quality outcomes and whether enough people would participate to make it work. More recent research has examined and questioned the effects of these systems on users and communities.
Distributed moderation is rarely suggested as a response to extreme forms of harassment, such as threats of violence, attacks, or the release of private information, where there may be a need to respond swiftly beyond just demoting the prominence of information.
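As a concrete illustration of how votes can translate into prominence, the sketch below computes a score that combines log-scaled net votes with a recency penalty, so newer items need fewer votes to surface. The formula, the gravity constant, and the function name are assumptions chosen for illustration; they are not the ranking algorithm of any particular platform.

 # Illustrative "votes plus recency" ranking; the gravity constant is an assumed parameter.
 import math
 def rank_score(upvotes, downvotes, age_hours, gravity=12.0):
     """Log-scaled net votes minus an age penalty: older items need more votes to stay visible."""
     net = upvotes - downvotes
     magnitude = math.log10(max(abs(net), 1))
     sign = (net > 0) - (net < 0)  # +1, 0, or -1
     return sign * magnitude - age_hours / gravity
 # A day-old, heavily upvoted post can rank below a fresh post with only a few votes
 print(rank_score(130, 10, age_hours=24))  # roughly 0.08
 print(rank_score(6, 1, age_hours=1))      # roughly 0.62

Because the vote count is log-scaled, early votes count for more than later ones, which is one reason the underprovision of votes discussed in the next subsection matters so much.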
Can we trust distributed moderation?
Lampe's paper (part of a series) offers the classic analysis of distributed moderation via voting systems, unpacking how voters on Slashdot did manage to agree on ratings and rate comments well. Gilbert's study of reddit looks at a serious risk to these systems: what happens when there aren't enough votes? Geiger and Halfaker look at one response to the underprovision of ratings: the use of bots on Wikipedia. Finally, Chris Peterson offers important case studies on the coordinated use of voting systems to censor ideas that some people want to make disappear.
- Lampe, C., & Resnick, P. (2004). Slash (dot) and burn: distributed moderation in a large online conversation space. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 543–550). ACM.
- Gilbert, E. (2013, February). Widespread underprovision on reddit. In Proceedings of the 2013 conference on Computer supported cooperative work (pp. 803-808). ACM.
- Geiger, R. S., & Halfaker, A. (2013). When the levee breaks: without bots, what happens to Wikipedia’s quality control processes? In Proceedings of the 9th International Symposium on Open Collaboration (p. 6). ACM.
- Peterson, C. E. (2013). User-generated censorship: manipulating the maps of social media. Master's thesis, Massachusetts Institute of Technology.
What effect does moderation activity have on users whose contributions are rejected?
Even if voting systems are effective at identifying the best and worst contributions, what are their effects on people? In a study of Wikipedia, Halfaker shows how sometimes-overzealous deletions of Wikipedia content have pushed people away from becoming engaged editors, contributing to a decline in participation on the site. In a study of political comments, Cheng shows how downvotes can drag a community down, as downvoted users respond by producing even more of the contributions that the community dislikes.
- Halfaker, A., Geiger, R. S., Morgan, J., & Riedl, J. (2013). The Rise and Decline of an Open Collaboration Community. How Wikipedia's reaction to sudden popularity is causing its decline. American Behavioral Scientist.
- Cheng, J., Danescu-Niculescu-Mizil, C., & Leskovec, J. (2014). How community feedback shapes user behavior. ICWSM 2014.
How can we design principled and effective distributed moderation?
One option for improving the quality of community ratings is to offer a wider range of options than just up or down, an idea that Lampe explores in "It's all news to me." But greater nuance may not address the problems created by distributed moderation. In the "Snuggle" paper, Halfaker evaluates a system that offers peer mentorship and support to people who make sub-par contributions rather than taking negative action against them and their contributions.
- Lampe, C., & Garrett, R. K. (2007). It’s all news to me: The effect of instruments on ratings provision. In System Sciences, 2007. HICSS 2007. 40th Annual Hawaii International Conference on (p. 180b–180b).
- Halfaker, A., Geiger, R. S., & Terveen, L. G. (2014). Snuggle: Designing for efficient socialization and ideological critique. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 311–320). ACM.
Bystander interventions
Research on online harassment sometimes looks at interventions by non-expert bystanders: people who observe a situation or who are close to the people involved. This bystander activity is different from moderation in that it's not carried out by people in a formal moderation role. It also differs from affordances like distributed moderation, where observers are indirectly involved. In some cases, harassment reporting systems are designed to invite participation by those bystanders. In this section, we present some of the history of academic and policy debates on the role of bystanders in responding to violence, alongside research specific to online harassment.
What is a bystander and what is bystanding?
Debates about bystanders often reach back to completely false and discredited accounts of the rape and murder of Kitty Genovese in New York City in 1964. Newspaper articles erroneously reported that 38 bystanders had watched and done nothing, misinformation that is commonly repeated in social psychology textbooks and other popular psychology sources. Manning, Levine, and Collins argue that the Genovese story has been used to limit research on helping behavior in emergencies. Dillon offers a five-stage model for bystanding in cases of cyberbullying: (1) noticing that something is happening, (2) interpreting the event as an emergency, (3) taking responsibility for providing help, (4) deciding how to provide help, and (5) taking action to provide help. She offers early-stage experimental evidence for the effect of designs in these areas on the probability of bystander intervention. In a nationally representative U.S. sample of young people, Jones and colleagues found high levels of both positive and negative bystander intervention in cases of online harassment. Finally, Bastiaensens and colleagues present results of a cyberbullying experiment in which bystanders were more likely to intend to intervene as bullying became more severe. They also found that bystanders were more likely to intend to join the bullying if they shared friendships and social identity with other bystanders who supported the bully.
- Manning, R., Levine, M., & Collins, A. (2007). The Kitty Genovese murder and the social psychology of helping: the parable of the 38 witnesses. The American Psychologist, 62(6), 555–562.
- Dillon, K. P., & Bushman, B. J. (2015). Unresponsive or un-noticed?: Cyberbystander intervention in an experimental cyberbullying context. Computers in Human Behavior, 45, 144–150.
- Jones, L. M., Mitchell, K. J., & Turner, H. A. (2015). Victim reports of bystander reactions to in-person and online peer harassment: a national survey of adolescents. Journal of Youth and Adolescence, 1–13.
- Bastiaensens, S., Vandebosch, H., Poels, K., Van Cleemput, K., DeSmet, A., & De Bourdeaudhuij, I. (2014). Cyberbullying on social network sites. An experimental study into bystanders’ behavioural intentions to help the victim or reinforce the bully. Computers in Human Behavior, 31, 259–271.
What kind of help can bystanders offer?
On Twitter, one kind of bystanding response to online threats is to use social media to organize speech that interrupts and critiques rape culture, a practice documented and put in historical context by Rentschler. Researchers from Yale and Berkeley have taken on consulting work for Facebook to design systems to support bystander intervention, but they have not published any results from their research as of Nov 2015. In interviews, they describe this work as introducing "research-based strategies" rather than conducting research. Evaluation of these systems reportedly focused on completion rates for reporting forms rather than outcomes for the people involved. Unfortunately, we have not been able to find much research on the effects of different kinds of bystander interventions in online harassment.
- Rentschler, C. A. (2014). Rape Culture and the Feminist Politics of Social Media. Girlhood Studies, 7(1), 65–82.
- Marsh, J. (2012, July 25). Can Science Make Facebook More Compassionate? Greater Good Science Center Blog.
Secondary, vicarious trauma for people who help
The work of responding to harassment can introduce serious risks into the lives of people who help others, even when they don't become targets of harassment themselves. Although there is limited research on secondary trauma and content moderation, parallel findings from journalism and counseling offer sobering accounts. In a study of journalists who work with user-generated content, Feinstein, Audet, and Waknine found that journalists who review violent images daily experience higher levels of PTSD, depression, and psychological distress than those who review user-generated content less frequently. They link these outcomes to the frequency of exposure rather than its duration, and recommend that journalists be exposed to violent images less frequently.
The effects on responders run deep. VanDeusen and Way find that people who provide treatment to survivors or offenders of sexual abuse experience disruptions in their capacity for intimacy and trust, and that this disruption was greatest for people who were newer to the work. Furthermore, people with a personal history of maltreatment experienced greater disruptions than others. On the other hand, a review article by Elwood, Mott, Lohr, and Galovski questions whether secondary trauma effects reach clinical levels of concern, arguing for further, better-coordinated research on this issue. They offer a clear overview of research and make important distinctions between burnout and secondary trauma. Finally, Bober and Regehr find that clinicians often fail to engage in coping strategies, and that those who do use them do not show fewer negative effects than those who do not. Like the journalism study, they argue for distributing the workload rather than encouraging self-care.
- Feinstein, A., Audet, B., & Waknine, E. (2014). Witnessing images of extreme violence: a psychological study of journalists in the newsroom. JRSM open, 5(8), 2054270414533323.
- VanDeusen, K. M., & Way, I. (2006). Vicarious trauma: An exploratory study of the impact of providing sexual abuse treatment on clinicians’ trust and intimacy. Journal of Child Sexual Abuse, 15(1), 69–85.
- Elwood, L. S., Mott, J., Lohr, J. M., & Galovski, T. E. (2011). Secondary trauma symptoms in clinicians: A critical review of the construct, specificity, and implications for trauma-focused treatment. Clinical psychology review, 31(1), 25-36.
- Bober, T., & Regehr, C. (2006). Strategies for Reducing Secondary or Vicarious Trauma: Do They Work? Brief Treatment and Crisis Intervention, 6(1), 1–9.
Racism and sexism online
If you are interested in the experiences of marginalized people online, a good first step is to listen directly to those groups, who are often vocal about their experiences. Academic conversations do offer ways to think about those experiences and voices; we have assembled some resources here. This section is still in progress and needs substantial improvement. Please contact the authors if you would like to contribute.
How to think about sexism and racism online
In her compelling, helpful review of research on race and racism online, Daniels (author of Cyber Racism) outlines a way to think about these issues, including the infrastructure and history of the Internet, debates about digital divides, online platforms, information wars, social movements, law, hate speech, surveillance, and internet cultures. She concludes by arguing that it is important to understand them in terms of the "deep roots of racial inequality in existing social structures." When people try to expand the participation of marginalised groups through appeals to civility, they often make the same assumptions that excluded those groups in the first place, a history that Fraser traces. Within responses to violent and objectionable speech, the problems of gender, race, and class intersect, making some people even more vulnerable, as Crenshaw shows across several case studies. Gray describes how those intersecting identities shape discrimination and harassment of women of color in online gaming.
- Daniels, J. (2012). Race and racism in Internet studies: A review and critique. New Media & Society, 0(0), 1–25.
- Fraser, N. (1990). Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social Text, 56–80.
- Crenshaw, K. (1993). Beyond Racism and Misogyny: Black Feminism and 2 Live Crew. In Words That Wound: Critical Race Theory, Assaultive Speech, And The First Amendment. Boulder, Colo: Westview Press.
- Gray, K. L. (2012). Intersecting oppressions and online communities: examining the experiences of women of color in xbox live. Information, Communication & Society, 15(3), 411-428.
Discrimination online
Given that racism and sexism are widespread social problems, it should not be surprising that we see evidence of them online. Yet certain design features make sexism and racism more likely. Doleac and Stein conducted a field experiment showing that buyers paid less, and trusted sellers less, when the hand holding an iPod in an online ad was black rather than white. Jason Radford showed how introducing marital status as a field on the charitable giving site DonorsChoose affected gender discrimination. Data mining and algorithmic systems can learn discrimination from their users, an issue that Solon Barocas reviews. Matias and colleagues reflect on the ethical and political challenges of creating systems that try to correct problems of discrimination online.
- Doleac, J. L., & Stein, L. C. (2013). The visible hand: Race and online market outcomes. The Economic Journal, 123(572), F469-F492.
- Radford, J. (2014). Architectures of Virtual Decision-Making: The Emergence of Gender Discrimination on a Crowdfunding Website. arXiv preprint arXiv:1406.7550.
- Barocas, S. (2014). Data Mining and the Discourse on Discrimination. In Proceedings of the Data Ethics Workshop, Conference on Knowledge Discovery and Data Mining (KDD).
- Dwork, C. and Mulligan, D. K. (2013). It's Not Privacy, and It's Not Fair. Stanford Law Review.
- Matias, J. N., Agapie, E., D’Ignazio, C., & Graeff, E. (2014). Challenges for Personal Behavior Change Research on Information Diversity. Presented at the CHI 2014 Workshop on Personalizing Behavior Change Technologies.
Online misinformation
The prevalence of misinformation (broadly construed to include rumors, hoaxes, urban legends, gossip, myths, and conspiracies) has garnered attention from researchers at the intersection of psychology, communication, and political science since World War II. At least three interrelated findings have been reliably replicated across settings and methods: (1) people's heuristics for evaluating the credibility of information are complex, (2) belief in misinformation persists despite factual debiasing attempts, and (3) misinformation serves social rather than epistemic functions.
Information credibility
Metzger (2007) provides an excellent summary of "the skills that Internet users need to assess the credibility of online information". The paper reviews checklist approaches, which offer a useful list of content and behavioral features that could be used to develop automated methods, as well as a dual-processing cognitive model of web credibility assessment that moves from exposure to evaluation to judgment.
Several research papers have examined credibility assessments specifically on Twitter. Schmierbach & Oeldorf-Hirsch (2012) use experiments to show that information on Twitter is judged as less credible and less important than similar stories appearing in newspapers. Morris et al. (2012) use surveys to evaluate users' perceptions of tweet credibility and find a disparity between the features users attend to and those engineered into search engines. The authors then perform two experiments, finding that users are poor judges of truthfulness based on content alone and instead rely on heuristics such as the username of the tweet author. Castillo et al. (2013) use a supervised machine learning approach to automatically classify credible news events and find differences in how Twitter messages propagate based on their newsworthiness and credibility.
Earlier work by Fisher (1998) and Bordia & Rosnow (1998) provides a strong basis in the psychological theory of misinformation, set in the quaint context of online systems before the turn of the century. Fisher discusses the language, technology, institutional access, and other resources that grassroots social movements use. Bordia & Rosnow (1998) examine an early rumor chain and connect its propagation to existing theories about "rumormongering as a collective, problem-solving interaction that is sustained by a combination of anxiety, uncertainty, and credulity" that are similar across face-to-face and computer-mediated settings.
- Morris, M. R., Counts, S., Roseway, A., Hoff, A., & Schwarz, J. (2012). Tweeting is believing?: understanding microblog credibility perceptions. In Proc. CSCW (pp. 441–450).
- Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078–2091.
- Bordia, P., & Rosnow, R. L. (1998). Rumor Rest Stops on the Information Highway Transmission Patterns in a Computer-Mediated Rumor Chain. Human Communication Research, 25(2), 163–179.
- Fisher, D. R. (1998). Rumoring Theory and the Internet: A Framework for Analyzing the Grass Roots. Social Science Computer Review, 16(2), 158–168.
- Castillo, C., Mendoza, M., & Poblete, B. (2013). Predicting information credibility in time-sensitive social media. Internet Research, 23(5), 560–588.
- Schmierbach, M., & Oeldorf-Hirsch, A. (2012). A Little Bird Told Me, So I Didn’t Believe It: Twitter, Credibility, and Issue Perceptions. Communication Quarterly, 60(3), 317–337.
Debiasing
Debiasing refers to efforts to correct or retract misinformation. Lewandowsky et al. (2012) provide the definitive review of misinformation's origins, strategies for debiasing, and the persistence of beliefs, as well as backfire or boomerang effects in which the strength of belief increases after correction. The paper's biggest contribution is an exhaustive review of strategies for reducing the impact of misinformation (pre-exposure warnings, repeated retractions, providing alternative narratives) and alternative strategies for correction (emphasizing facts, brevity, affirming worldview, affirming identity).
Berinsky (2012) explores the ideological and demographic correlates of American voters' beliefs in various (American) political conspiracies and rumors and performs experiments showing that many strategies for correcting mistruths lead to confusion. Garrett (2011) uses a survey method and finds that Internet use promotes exposure to both rumors and rebuttals, and that rumors emailed to friends and family are more likely to be believed and shared. Garrett and Weeks (2013) find evidence that real-time correction may cause users to be resistant to factual information. Nyhan and Reifler (2010) conduct four experiments that replicate findings about corrections' failure to reduce misperceptions, as well as "backfire effects" in which corrections increase belief in misinformation.
- Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest, 13(3), 106–131.
- Berinsky, A. (2012). Rumors, truths, and reality: A study of political misinformation. Unpublished Manuscript, Massachusetts Institute of Technology, Cambridge, MA.
- Garrett, R. K., & Weeks, B. E. (2013). The promise and peril of real-time corrections to political misperceptions. In Proc. CSCW 2013 (pp. 1047–1058).
- Nyhan, B., & Reifler, J. (2010). When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior, 32(2), 303–330.
- Garrett, R. K. (2011). Troubling Consequences of Online Political Rumoring. Human Communication Research, 37(2), 255–274.
Social functions
The prevalence and persistence of gossip, rumor, conspiracy, and misinformation in spite of debiasing efforts can be attributed to the important social functions that this information fulfills. Rosnow is one of the most influential post-war empirical researchers of rumor, and his 1988 article reviews rumor as a "process of explanation" emerging from the personal anxiety, general uncertainty, credulity, and topical importance that influence rumor generation and propagation. Donovan provides a comprehensive review of the concept of rumor throughout the 20th century as a psychological, organizational, and literary/folkloric phenomenon, and offers useful definitions to differentiate rumors, hoaxes, urban legends, gossip, and myths. Donovan's most persuasive argument concerns the role of rumor as dialogic acts in which "both believers and skeptics build rumor" (p. 69). Foster reviews the social-psychological bases of gossip, arguing that it serves substantive social, evolutionary, and personal functions rather than being an epistemic defect.
DiFonzo et al. run experiments on medium-sized networks to evaluate how structural properties like clustering drive the emergence of consensus, continuation, and confidence about rumors and hearsay, and how these spread in social networks as social influence processes. Earlier work by Bordia and DiFonzo (2004) examined how rumors were transmitted in different online discussion groups, finding differences between "dread" and "wish" rumor types, 14 content categories, and different "communicative postures" such as explanation, information reporting/seeking, and directing/motivating.
- DiFonzo, N., Bourgeois, M. J., Suls, J., Homan, C., Stupak, N., Brooks, B. P., … Bordia, P. (2013). Rumor clustering, consensus, and polarization: Dynamic social impact and self-organization of hearsay. Journal of Experimental Social Psychology, 49(3), 378–399.
- Bordia, P., & DiFonzo, N. (2004). Problem Solving in Social Interactions on the Internet: Rumor as Social Cognition. Social Psychology Quarterly, 67(1), 33–49.
- Donovan, P. (2007). How Idle is Idle Talk? One Hundred Years of Rumor Research. Diogenes, 54(1), 59–82.
- Foster, E. K. (2004). Research on gossip: Taxonomy, methods, and future directions. Review of General Psychology, 8(2), 78–99.
- Rosnow, R. L. (1988). Rumor as communication: A contextualist approach. Journal of Communication, 38(1), 12–28.
Upcoming sections we plan to add
We are thinking about adding the following sections from our bibliography. Contact us if you think you can help!
- Debates about anonymity
- The experience of online harassment
- Defining harassment as "online"
- Sexism, racism, and hate speech online
- Civility debates
- Information cascades
- International dimensions of online harassment
- Platform policies
- Responses to online harassment
- DIY responses
- Peer governance
- Mediation and dispute resolution
- Open questions for research and action
- Contention among publics, counter-publics, and anti-publics
Acknowledgments
Many researchers generously contributed their personal lists of literature to this effort. We are grateful to everyone who helped us bring this together by contributing literature and resources:
- Whitney Erin Boesel
- Willow Brugh
- Danielle Citron
- Katherine Cross
- Maral Dadvar
- David Eichert
- Sarah Jeong
- Ethan Katsh
- Lisa Nakamura
- Joseph Reagle
- Carrie Rentschler
- Rachel Simons
- Bruce Schneier
- Cindy Southworth