US officials announce the takedown of an AI-powered Russian bot farm
US officials and their allies have identified and dismantled an artificial intelligence-powered Russian bot farm comprising almost 1,000 accounts, which spread disinformation and pro-Russian sentiment on X. The Justice Department revealed that the software behind the scheme was created by a digital media department within RT, a Russian state-controlled media outlet. Its development was apparently led by RT’s deputy editor-in-chief back in 2022 and was approved and funded by an officer at Russia’s Federal Security Service, the main successor to the KGB.
In a cybersecurity advisory, the FBI, intelligence officers from the Netherlands and cybersecurity authorities from Canada specifically mentioned a tool called “Meliorator,” which can create “authentic appearing social media personas en masse,” generate text messages and images, and mirror disinformation from other bot personas. Authorities have seized two domains the operation used to create the email addresses needed to sign up for accounts on X, formerly known as Twitter, which served as home to the bots.
The Justice Department, however, is still working to identify all 968 accounts the Russian actors used to disseminate false information. X has shared information with authorities on the accounts identified so far and has already suspended them. As The Washington Post noted, the bots slipped through X’s safeguards because the software could copy one-time passcodes from their associated email accounts to log in. The operation’s use of US-based domain names violates the International Emergency Economic Powers Act, the Justice Department said, while paying for them violates federal money laundering laws in the US.
Many of the profiles created by the tool impersonated Americans, using American-sounding names and setting their locations on X to various places in the US. The examples presented by the Justice Department used headshots against gray backgrounds as profile photos, a strong indicator that the images were AI-generated. One account under the name Ricardo Abbott, which claimed to be from Minneapolis, posted a video of Russian president Vladimir Putin justifying Russia’s actions in Ukraine. Another, under the name Sue Williamson, posted a video of Putin saying that the war in Ukraine isn’t a territorial conflict but a matter of “principles on which the New World Order will be based.” These posts were then liked and reposted by other bots in the network.
It’s worth noting that while this particular bot farm was confined to X, its operators planned to expand to other platforms, according to the authorities’ analysis of the Meliorator software. Foreign actors have used social media to spread political disinformation for years, but now they’ve added AI to their arsenal. Back in May, OpenAI reported that it had dismantled five covert influence operations originating from Russia, China, Israel and Iran that were using its models to influence political outcomes.
“Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government,” FBI Director Christopher Wray said in a statement. “The FBI is committed to working with our partners and deploying joint, sequenced operations to strategically disrupt our most dangerous adversaries and their use of cutting-edge technology for nefarious purposes.”
As for RT, the media organization told Bloomberg: “Farming is a beloved pastime for millions of Russians.”