The trolls are teaming up, and tech platforms aren't doing enough to stop them

Posted on December 11, 2019

What do trans women gamers, Jewish journalists, academics of color, and feminist writers have in common? All of them could find themselves targets of coordinated harassment campaigns simply because they have a presence online.

Take the story of Trista (all names have been changed to protect privacy), a trans woman gamer. When she began streaming her games on Twitch, bands of harassers arrived en masse to jam up her channel with what she called "low-effort, hateful memes." Another woman gamer was called an "eBeggar," the gaming world's misogynistic equivalent of a "gold digger." On 4chan, where harassers organized their attacks, posters planned "raids" against gamers they labeled "SJWs" (or social justice warriors), like Trista, plotting to post "many swasticas" [sic] and hurl ableist insults to threaten and belittle them.

Or take the story of Keith, a white Jewish man, comedian, and media professional. After criticizing neo-Nazis and the alt-right in his comedy, he found himself the target of anti-Semitic attacks from users of the website 4chan. Harassers found his image online and vandalized it in racist and anti-Semitic ways, depicting him with darkened skin covered in sores, an enlarged nose, and altered hair. They drew on old tropes (white supremacist ideas that Jews cannot be considered white, are identifiable by their facial features, and are unclean) in an attempt to insult him.

These are just two examples from an original study we conducted with the Anti-Defamation League in 2019, titled "The Trolls Are Organized and Everyone's a Target." We set out to understand how these campaigns happen and what it's like to be on the receiving end of coordinated harassment. While several recent studies, including work by Amnesty International, the Anti-Defamation League, and the Pew Research Center, have quantified the problem of online harassment, as anthropologists of technology we wanted to hear directly from individuals who had been affected by it. Our study involved 15 in-depth ethnographic interviews and an extensive review of previous research, all focused on understanding the experience of harassment and how it was shaped by the identity of the targeted individual. In line with previous studies, we found that cyberharassment severely affects women and people of color, particularly trans women and women of color.

Building on previous research, we found that the nature of online harassment has changed with the advent and spread of networked technologies such as social media. Harassers have new ways to interfere with the lives of their targets and to follow them wherever they go online. Study participants told us again and again about harassers' high degree of collaboration and persistence. Each person we spoke to had been the target of repeated, sustained harassment, often across multiple platforms.


In the report, we call these types of campaigns "networked harassment." Targets receive barrages of hateful messages on Twitter, Facebook, and Medium, via chat and messaging tools, in their live streams on Discord or Twitch, and through email. At the same time, if they own a business, like one Jewish woman we spoke to, they might receive false and defamatory online reviews on Google, Yelp, and even Glassdoor. Occasionally, online messaging escalates to in-person stalking or confrontations.

Because most of our respondents work in professional and knowledge-work fields such as academia, media, gaming, nonprofits, business, or law, many relied on digital tools and spaces to build professional reputations and find work. Online harassment thus directly threatened their livelihoods and employability.

But while these people were inundated with horrifying, demeaning messages, they did not necessarily take the abuse passively. About half of the people we spoke to had documented how harassers targeted them. They and their friends took screenshots of tweets as they appeared on Twitter to keep a record of what was said by which accounts. Many, though not all, of the accounts responsible were anonymous or obviously created only to harass unsuspecting victims. In some cases, a public figure, often with many times as many online followers as the targeted individual, would mock the target on social media, triggering hundreds or thousands of followers to pile on insults. Targeted individuals sometimes went so far as to systematically record the waves of harassment in spreadsheets. Yet even though targets actively documented the abuse and requested responses from the companies hosting it, such efforts often had little tangible effect.

The spreadsheets and screenshots revealed that different accounts often used nearly identical language, suggesting that a single individual or group had coordinated behind the scenes. Some people who were targeted found direct evidence, in the form of screenshots and archived links, that attacks had been planned on sites like the anonymous 4chan message board, in conversations about whom to target and how to do it. These documents were often submitted to platforms as evidence that the offending accounts should be shut down, but platform companies rarely complied or responded in a timely fashion.


Many targeted individuals responded to their harassment by withdrawing from social media, like Trista, the gamer described earlier, and Naomi, a professor and writer. Naomi was targeted for nine months after far-right websites like Breitbart covered her work, triggering waves of rape and death threats on Twitter and Instagram. Although everyone we spoke with used the available reporting tools, none felt doing so led to adequate resolution. Platforms such as Twitch or Twitter can't stop the harassment campaigns on their own. Even when these companies block or ban users from their service, they can't prevent harassers from finding their targets on other platforms.

Some people, such as Charles, a Latino academic, or Barbara, a Jewish businesswoman, felt unsafe enough to reach out to law enforcement. Yet local law enforcement was largely ill-equipped to help because the harassment took place online. Most perpetrators were savvy enough to avoid the explicit or specific threats to physical safety that would have been more likely to prompt law enforcement involvement (for instance, saying "I hope you die" rather than "I want to kill you"). On rare occasions, however, harassers would move from online to offline. For example, one white woman academic received threatening letters mailed to her new home only days after moving in. Another was stalked in person at her workplace, where building security acted as a protective layer between her and her would-be attackers.

Our findings build on over a decade of research on trolling, cyberbullying, cyberstalking, and other forms of online harassment and abuse. Researchers such as Lisa Nakamura pioneered studies of how old prejudices reappear in digital worlds. Recent work by Ruha Benjamin, Safiya Noble, Simone Browne, and many others shows how prejudice is not just replicated on new technology platforms but built into them. Our research, like previous studies, shows that perpetrators "punch down." In other words, harassers target people with less social power and visibility than they have: young people, women, people of color, trans women, and disabled people. Trolling and harassment campaigns have long featured extremist and white supremacist themes.

So what is to be done? In our report, we recommended three main areas for overhauling responses to cyberharassment, along with numerous concrete ways to act on them.


First, platform companies must improve moderation tools and user control over profiles, pages, and accounts. They should consider formalizing practices targeted individuals already use, such as distributing moderation among trusted friends. Moderation and filtering tools should be strengthened, with referral-site filtering to prevent coordinated attacks arriving from a single site, like 4chan, and more stringent blocking to stop abusive individuals from viewing the activity of their targets.
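To make the referral-site idea concrete, here is a minimal sketch, assuming a hypothetical deny-list of raid-origin domains and a check of the HTTP Referer header; the names and domains are illustrative, not any platform's actual implementation.

    from typing import Optional
    from urllib.parse import urlparse

    # Illustrative deny-list of domains known to host raid-organizing threads.
    RAID_ORIGIN_DOMAINS = {"4chan.org"}

    def arrived_from_raid_site(referer: Optional[str]) -> bool:
        """Return True if the HTTP Referer points at a deny-listed site,
        so click-through traffic from a coordinated-attack thread can be
        rate-limited or routed to stricter moderation."""
        if not referer:
            return False
        host = urlparse(referer).hostname or ""
        return any(host == d or host.endswith("." + d) for d in RAID_ORIGIN_DOMAINS)

    print(arrived_from_raid_site("https://boards.4chan.org/pol/thread/12345"))  # True
    print(arrived_from_raid_site("https://example.com/a-normal-article"))       # False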

Second, platforms can improve the abuse reporting process by adding transparent means to track abuse claims. They should also improve staffing and response time for existing reporting systems. Our research participants reported waiting weeks or months for responses from some platforms, which is simply unacceptable.
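One way to picture transparent claim tracking: give every report an ID and a timestamped status history that the reporter can query at any point, much like a package-tracking number. The sketch below is a hypothetical data model; the status names and fields are our own assumptions, not any platform's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum
    from typing import List, Tuple

    class ReportStatus(Enum):
        RECEIVED = "received"
        UNDER_REVIEW = "under review"
        ACTION_TAKEN = "action taken"
        DISMISSED = "dismissed"

    @dataclass
    class AbuseReport:
        report_id: str
        reporter_id: str
        reported_content_url: str
        status: ReportStatus = ReportStatus.RECEIVED
        # Timestamped audit trail the reporter can view at any time.
        history: List[Tuple[datetime, ReportStatus]] = field(default_factory=list)

        def update_status(self, new_status: ReportStatus) -> None:
            # Record every transition so delays become visible and auditable.
            self.history.append((datetime.now(timezone.utc), new_status))
            self.status = new_status

    report = AbuseReport("R-1042", "user-77", "https://example.com/post/1")
    report.update_status(ReportStatus.UNDER_REVIEW)
    print(report.status.value)  # "under review"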

Third, platforms need to cooperate in preventing and responding to multiplatform harassment. This requires including targets of harassment, especially people from marginalized and frequently targeted groups, in design processes and engineering oversight. For example, common standards and API-based tools could make it possible to block abusers across multiple platforms or to share information across company safety teams. Platforms can also prioritize user safety by hiring diverse designers and training technical staff to have a more rigorous understanding of how power, identity, and hate operate in society, in online spaces and beyond.
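To give one concrete, and entirely hypothetical, shape to such cooperation: safety teams could exchange one-way hashes of contact details tied to abusive accounts, so each platform can check for a match without ever sharing raw personal data. A real system would also need salting, governance, and legal review; this sketch shows only the core idea.

    import hashlib

    def contact_fingerprint(email: str) -> str:
        """One-way hash of a normalized contact address; platforms compare
        fingerprints rather than exchanging the address itself. (Unsalted
        here purely for brevity.)"""
        return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

    # Hypothetical shared signal set populated by another platform's safety team.
    shared_abuse_signals = {contact_fingerprint("harasser@example.com")}

    def matches_shared_signal(email: str) -> bool:
        return contact_fingerprint(email) in shared_abuse_signals

    print(matches_shared_signal("Harasser@Example.com"))      # True: normalization matches
    print(matches_shared_signal("someone-else@example.com"))  # False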

Online harassment is ultimately about trying to control what kind of people are visible and have a voice in public arenas. The personal, social, and material harms our participants experienced have real consequences for who can participate in public life. Current laws and regulations allow digital platforms to avoid responsibility for content produced by users, but digital media companies must truly listen to their users, especially those from marginalized and frequently targeted communities, and follow in good faith any future regulations that limit hate speech and increase platform responsibility for abuse. And if online spaces are truly going to support democracy, justice, and equality, change must happen soon.

Read more from the original source:
The trolls are teaming upand tech platforms aren't doing enough to stop them - Fast Company
