OpenAI Terminates Operation Utilizing ChatGPT for Election Influence Activities

by admin

In a recent announcement, OpenAI revealed the termination of several ChatGPT accounts involved in a clandestine influence campaign by Iran, focusing on content creation surrounding the US presidential elections. This action was detailed in a company blog post. The implicated operation reportedly utilized AI to craft articles and social media commentary, albeit without garnering significant attention.

This incident marks another instance where OpenAI has had to take action against accounts employed by nations or entities for nefarious uses of ChatGPT. In a previous crackdown in May, the company countered five separate endeavors aimed at swaying public sentiment through deceptive tactics.

Such tactics recall attempts by state-linked actors to manipulate public perception through platforms like Facebook and Twitter during earlier electoral periods. Now, it appears these or similar groups are embracing generative AI to propagate false narratives across social networks. OpenAI, like various social media companies, seems to be constantly on the alert, targeting and disabling accounts linked to such misinformation campaigns as they arise.

A decisive factor in the investigation into these accounts was a report by Microsoft Threat Intelligence, which last week identified the activity as part of a larger scheme, dubbed Storm-2035, that has sought to influence US elections since 2020.

According to Microsoft, Storm-2035 comprises an Iranian collective using faux news sites to disseminate divisive messages among American voters on key issues, ranging from presidential candidates to LGBTQ rights and the Israel-Hamas dispute. Their goal is not to champion specific views but rather to fuel division and discord.

OpenAI tracked down five websites posing as both left and right-wing news platforms, under convincing web addresses like “evenpolitics.com.” The group leveraged ChatGPT to compose numerous detailed pieces, including baseless claims such as a platform owned by Elon Musk censoring Trump’s online expressions—a narrative contradicted by Musk’s welcoming stance towards the former president on social media.

An illustration of a counterfeit news platform utilizing ChatGPT-generated compositions.
Image Credits: OpenAI

On the social media front, OpenAI detected and shut down a dozen accounts on X and one on Instagram associated with the scheme. These accounts were reportedly using ChatGPT to rewrite and repost political content, including misleading tweets. One example is a deceptive post linking Vice President Kamala Harris to a statement about climate change driving up immigration costs, paired with a hashtag critical of her.

OpenAI observed that the output from Storm-2035’s deceptive activities did not achieve widespread sharing and noted that the majority of its social media content attracted minimal engagement. It’s a common outcome for such quick-to-launch, AI-powered misinformation pushes. As political tensions escalate with the election cycle, similar announcements from OpenAI are expected to become more frequent.

Compiled by Techarena.au.
