
Sam Altman Says He's 'Worried' About AI and Misinformation Ahead of the Presidential Election. Security Researchers Are Too.

At the Brookings Institution on Tuesday, the OpenAI CEO talked about efforts to clamp down on election misinformation, but was noticeably light on specifics.


By Sam Blum, Senior Writer, @SAMMBLUM

Two years after the commercial generative AI explosion began, countries across the globe are gearing up for elections that will test their defenses against a potential onslaught of AI-generated propaganda meant to undermine democracy. 

In an echo of 2016, when Russian troll farms meddled in the U.S. election, the possibility of disinformation deployed by a range of potential bad actors is again on the minds of U.S. officials. But this year, the addition of generative AI that uses written prompts to produce text, images, audio and video exacerbates the challenges of stamping out efforts to sow distrust in elections, or spread falsehoods about candidates and their policies. 

Sam Altman, CEO of OpenAI, acknowledged at a Brookings Institution talk on Tuesday that the widespread use of AI makes election security trickier. "What I'm worried about is not more of the same, which I think we've built up technological defenses for and also societal antibodies," Altman said of the kind of fake news that ran rampant on social media before the 2016 election.

What troubles him, he said, is "the new stuff that may only be possible with AI": "The sophisticated one-on-one persuasion that you just couldn't do before."

It was a reference to deepfakes: images, audio, or video depicting real people doing or saying things they've never actually done or said. In January, an AI-generated robocall mimicking the voice of President Joe Biden implored upwards of 25,000 New Hampshire voters to stay home and save their votes for November's general election. Deepfakes also emerged from the Met Gala on Tuesday, showcasing images of celebrities such as Katy Perry, who wasn't in attendance. And there is ample opportunity for bad actors to hatch similar plots on a global scale: more than 40 elections are taking place this year, from the U.S. and U.K. to India, Iran, Taiwan, and Ukraine.

Altman's company developed and released the chatbot ChatGPT in 2022, revolutionizing the commercial use of generative AI tools and sparking an explosion of startups and investment in the field.

Asked how his company detects misuse of its technology, Altman was noticeably vague on specific tactics, seemingly referring to OpenAI's election policy, made public in January.

OpenAI uses "as many strategies as anyone else," he said. "Anyone who does this will say you get the best from looking at as many different signals as you possibly can. And then a really good investigation team." 

AI-generated misinformation is a moving target, he noted. "We're still so early [and] the science is still advancing rapidly...sometimes in ways we don't predict with what we know right now. I feel reasonably good about our ability to kind of stay abreast." 

Many security researchers have been warning about a similar threat this year: Bad actors no longer need to be mobilized by a state regime, because tools that produce realistic imagery and text en masse can be accessed by almost anyone. 

For example, fake images of Donald Trump surrounded by supposed Black supporters circulated on Facebook earlier this year, shared by a number of the former president's supporters, including a conservative radio show host in Florida.

AI-generated misinformation may be harder to pinpoint when it comes from an otherwise benign source, like "ordinary people creating fan content," Renee DiResta, a researcher at the Stanford Internet Observatory, recently told Bloomberg. The ease of crafting text with generative AI tools can also help misinformation spread far and wide, researchers have consistently warned this year. "The bulk of persuasion campaigns could be based on text. That's how you can really scale an operation without getting caught," Josh Lawson, director of AI and democracy at the Aspen Institute, explained in the same Bloomberg piece.

Those types of misinformation campaigns could balloon on encrypted messaging apps like Telegram and WhatsApp before making their way to more traditional social media venues. Meta, which owns Instagram and WhatsApp, has pledged to label AI-generated content that spreads on its platforms, though WhatsApp has a separate policy that puts much of the onus for reporting misinformation on users. Elon Musk nixed the election integrity team at X last year.

On Tuesday, OpenAI announced several efforts to make content generated by its tools more transparent. The company didn't respond to a request for comment.

Mark Warner, a Democratic senator from Virginia and chair of the Senate Select Committee on Intelligence, noted the difference between the kind of manipulation that pervaded 2016 and what governments are up against today.

"The difference is now, because Americans are more willing to believe a lot more conspiracy theories, you don't need to have a Russian bot operation to create the content. The content is being created here by Americans," he told the Wall Street Journal on Wednesday. 

Overall, Altman said he's pleased with how tech companies are handling the issue so far. "The level of seriousness that the AI companies and the platforms are treating this with, I am very happy to see."

Photo Credit: Getty Images.
