Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences

In Russia, two operations created and spread content criticizing the US, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles that attacked the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts which created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

  • frog 🐸@beehaw.org · 6 months ago

    Had OpenAI not released ChatGPT, making it available to everyone (including Russia), there is no indication that Russia would have developed its own ChatGPT. Literally nobody has suggested that Russia was within a hair’s breadth of inventing AI and so OpenAI had better do it first. But plenty of people have made the entirely valid point that OpenAI rushed to release this thing before it was ready and before the consequences had been considered.

    So effectively, what OpenAI have done is start handing out guns to everyone, and is now saying “look, all these bad people have guns! The only solution is everyone who doesn’t already have a gun should get one right now, preferably from us!”