Meta Platforms Inc. (NASDAQ:META) has taken down hundreds of Facebook accounts tied to covert influence campaigns originating in China, Israel, Iran, Russia, and other nations, some of which used AI tools to generate disinformation, according to the company’s quarterly threat report.
Meta, the parent company of Facebook, Instagram, and WhatsApp, has observed threat actors leveraging AI to create fake images, videos, and text to influence users on its platforms. However, the use of generative AI has not hampered Meta’s ability to disrupt these networks, as stated in the report released on Wednesday.
Among the disinformation campaigns, Meta identified a deceptive network from China distributing AI-generated poster images for a fictitious pro-Sikh movement, and an Israel-based network posting AI-generated comments praising Israel’s military on the pages of media organizations and public figures. Many of these networks were removed before they could attract audiences within authentic communities.
“Currently, we’re not seeing generative AI being used in highly sophisticated ways,” said David Agranovich, Meta’s policy director of threat disruption, during a press briefing on Tuesday. He noted that tactics such as creating AI-generated profile photos or producing large volumes of spammy content have not been effective so far.
“But we know these networks are inherently adversarial,” Agranovich added. “They will continue to evolve their tactics as their technology advances.”
Social media platforms like Facebook, ByteDance Ltd.’s TikTok, and Elon Musk’s X have faced challenges with the influx of fake and misleading AI-generated content. This year, doctored audio of US President Joe Biden and fake images of the Israel-Hamas conflict have circulated widely on social media, garnering millions of views.
Nick Clegg, Meta’s president of global affairs, has emphasized the importance of detecting and labeling AI-generated content, especially as the company prepares for the 2024 election cycle. Global elections will occur in over 30 countries this year, including major markets for Meta’s apps like the US, India, and Brazil.
Meta has recently updated its policies to label misleading AI-generated content rather than remove it. The company also requires advertisers to disclose when AI is used to create Facebook or Instagram ads related to social issues, elections, or politics, though it does not fact-check political ads.