
OpenAI Thwarts AI Misuse in Covert Operations

OpenAI has announced the formation of a Safety and Security Committee, led by board members including CEO Sam Altman, after disrupting five covert influence campaigns that misused its AI models.

By Inc.Arabia Staff

OpenAI, led by Sam Altman, revealed that it had intercepted five covert influence campaigns using its AI models for deceptive purposes across the internet. The San Francisco-based artificial intelligence firm disclosed that threat actors from Russia, China, Iran, and Israel had employed its AI models over the past three months to generate short comments, articles in multiple languages, and fictitious names and bios for social media accounts.[1]

These campaigns, targeting topics such as Russia's invasion of Ukraine, the conflict in Gaza, Indian elections, and political affairs in Europe and the US, aimed to manipulate public opinion and influence political outcomes.

Despite these efforts, the deceptive operations failed to gain increased audience engagement or reach through OpenAI's services.

OpenAI clarified that while AI-generated material was part of these operations, they also relied on manually written texts and memes copied from elsewhere on the internet.

In response to these developments, OpenAI announced the formation of a Safety and Security Committee, led by board members including CEO Sam Altman, to address emerging challenges as the company trains its next AI model.

In May, OpenAI unveiled its latest AI model, GPT-4o, roughly a year after the debut of GPT-4.
