Facebook uses techniques it learned from waging war on Russian bots trying to influence the 2016 election to crack down on online bullying campaigns by real users
- Facebook said today it will take a harder line against groups of people on the platform promoting real-life violence and conspiracy theories
- A new automated tool, executives said, will be used to detect organized and malicious efforts that violate site rules against hate groups
- This year, the social media giant has been criticized for its role in the ‘Stop the Steal’ movement, the spread of COVID disinformation and its negative effects on young people
- “We recognize that this challenge is complex,” Facebook Threat Disruption Director David Agranovich said at a press briefing.
- “We have to be careful … to distinguish between people who … come together to organize for social change, and … networks that can cause social damage.”
Facebook is taking a more aggressive approach to shutting down coordinated groups of real user accounts engaged in certain harmful activities on its platform, using the same strategy its security teams take against campaigns that use fake accounts, the company told Reuters.
The new approach borrows tactics typically employed by Facebook’s security teams for mass shutdowns of networks engaged in influence operations that use fake accounts to manipulate public debate, such as Russian troll farms.
This could have major implications for how the social media giant handles political and other coordinated movements that break its rules, as Facebook’s handling of abuse on its platforms comes under close scrutiny from global lawmakers and civil society groups.
Facebook said it now plans to take the same network-level approach with groups of coordinated real accounts that systematically break its rules, whether through mass reporting, where many users falsely report a target’s content or account to get it shut down, or brigading, a type of online harassment in which users coordinate to target an individual with a flood of posts or comments.
The announcement comes amid relentless pressure on the social media giant – in March, Zuckerberg testified before Congress over his platform’s role in spreading disinformation, and yesterday lawmakers announced they intend to investigate charges that the app harms young teens.
In a related change, Facebook said Thursday it would take the same type of approach to campaigns of real users that cause “coordinated social harm” on and off its platforms, as it announced the removal of the German anti-COVID-restrictions Querdenken movement.
These expansions, which a spokesperson said are in their early stages, mean Facebook’s security teams could identify the core movements driving such behavior and take more sweeping action than the company’s usual removal of individual posts or accounts.
In April, BuzzFeed News published a leaked internal Facebook report on the company’s role in the January 6 riot at the United States Capitol and its challenges in curbing the fast-growing “Stop the Steal” movement, in which one of the findings was that Facebook had “little policy around coordinated authentic harm.” (https://bit.ly/2XmbHZN)
Facebook’s security experts, who are separate from the company’s content moderators and handle threats from adversaries trying to evade its rules, began cracking down on influence operations using fake accounts in 2017, after the 2016 US election in which US intelligence officials concluded that Russia had used social media platforms as part of a cyber-influence campaign – a claim Moscow has denied.
“We recognize that this challenge is complex,” Facebook Threat Disruption Director David Agranovich said during a press briefing today.
Facebook has labeled the banned activity of groups of fake accounts “coordinated inauthentic behavior” (CIB), and its security teams announce mass takedowns in monthly reports. The teams also handle certain threats that may not use fake accounts, such as fraud, cyber-espionage networks, and overt influence operations like some state media campaigns.
Sources said the company’s teams have long debated how it should intervene at the network level for large movements of real user accounts that consistently break its rules.
In July, Reuters reported on the Vietnamese military’s online information warfare unit, which engaged in actions such as mass reporting of accounts to Facebook, but whose members often used their real names. Facebook deleted some accounts as a result of those mass reporting attempts.
Facebook is under increasing pressure from global regulators, lawmakers and its own employees to tackle large-scale abuse on its services. Others have criticized the company over allegations of censorship, anti-conservative bias and inconsistent enforcement.
Expanding Facebook’s network-disruption models to affect genuine accounts raises further questions about how the changes might impact public debate, online movements and campaign tactics across the political spectrum.
“Most of the time, problematic behaviors will look very close to social movements,” said Evelyn Douek, a Harvard law professor who studies platform governance. “It’s going to depend on this definition of harm … but obviously people’s definitions of harm can be quite subjective and nebulous.”
High-profile cases of coordinated activity around last year’s US election, from teenagers and K-pop fans claiming to have used TikTok to sabotage a rally for former President Donald Trump in Tulsa, Oklahoma, to political campaigns paying online meme creators, have also sparked debate over how platforms should define and approach coordinated campaigns.