Imitation Game Over? How This Deepfake Detection Company Tells AI Voices From Human Ones

Pindrop recently launched a new tool aimed at weeding out spoofed audio.

By Brian Contreras, Staff reporter @_B_Contreras_

How do you distinguish the voice of a human from the voice of an AI trying to sound human?

It comes down to simple biology. That's according to Vijay Balasubramaniyan, co-founder and CEO of the audio authentication company Pindrop, which is launching a tool aimed at detecting deepfake AI audio clips. The human mouth is the result of a particular evolutionary legacy that artificial intelligence systems still often struggle to accurately mimic, Balasubramaniyan notes. "Over 10,000 years, you developed an overbite because you started eating soft food, and so you started developing these very natural ways to channel noise," he says. AI isn't quite there yet.

Called Pindrop Pulse Inspect, Balasubramaniyan's new tool is being marketed toward "fact-checkers, misinformation experts, security departments, trust and safety teams, and social media platforms," per a press release that announced Pulse Inspect in preview. The platform builds on Atlanta-based Pindrop's prior work helping financial institutions confirm their customers' identities over the phone. The company, founded in 2011, claims the tool is more than 99 percent accurate when identifying deepfakes from "previously seen deepfake models" and 90 percent accurate when it comes to "new or previously unseen tools."

Audio deepfakes "diverge away from our God-given human voice in very specific ways," says Balasubramaniyan. "In any audio channel, there are 8,000 samples of your voice every single second, so there are 8,000 times a machine can get it wrong."
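Pindrop hasn't published the internals of Pulse Inspect, but the arithmetic behind that quote is easy to make concrete. The sketch below is a hypothetical illustration only: it frames one second of 8 kHz telephony audio into short windows and computes a single toy feature (spectral flatness) per frame. The frame length, the feature, and the scoring are assumptions for demonstration, not Pindrop's actual method.

```python
import numpy as np

SAMPLE_RATE = 8000   # telephony audio: 8,000 samples every second
FRAME_MS = 20        # short analysis windows, a common choice in speech processing

def frame_signal(signal: np.ndarray, sample_rate: int = SAMPLE_RATE,
                 frame_ms: int = FRAME_MS) -> np.ndarray:
    """Split a 1-D waveform into non-overlapping frames."""
    frame_len = sample_rate * frame_ms // 1000          # 160 samples per frame
    n_frames = len(signal) // frame_len
    return signal[: n_frames * frame_len].reshape(n_frames, frame_len)

def spectral_flatness(frames: np.ndarray) -> np.ndarray:
    """Per-frame spectral flatness: one toy feature a detector might score.

    Flatness near 1 means a noise-like spectrum; natural speech tends lower.
    A real detector would feed many such features into a trained model.
    """
    mag = np.abs(np.fft.rfft(frames, axis=1)) + 1e-10
    geometric_mean = np.exp(np.mean(np.log(mag), axis=1))
    arithmetic_mean = np.mean(mag, axis=1)
    return geometric_mean / arithmetic_mean

if __name__ == "__main__":
    # One second of stand-in audio: 8,000 samples, i.e. 8,000 chances
    # for a synthetic voice to get something subtly wrong.
    rng = np.random.default_rng(0)
    one_second = rng.normal(size=SAMPLE_RATE)
    frames = frame_signal(one_second)
    scores = spectral_flatness(frames)
    print(f"{len(one_second)} samples -> {len(frames)} frames, "
          f"mean flatness {scores.mean():.3f}")
```

Even this crude framing turns one second of audio into 50 scored windows; the point of Balasubramaniyan's math is that a synthesizer has to get every one of those 8,000 underlying samples right.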

Pindrop boasts partnerships with 11 of America's largest insurers and eight of the top 10 banks and credit unions. But about seven years ago, Balasubramaniyan says, Pindrop realized that as important as it was for its clients to know whom they were speaking with, it was often harder for them to know that they were speaking with a real human to begin with.

The problems deepfakes pose are wide-ranging, from fake celebrity endorsements in ads to emotionally manipulative phone scams to--in the case of a recent robocall that tried to suppress voter turnout in New Hampshire with a simulated Joe Biden impression--election interference.

"Because of the remote-first world we live in, [deepfakes] break trust in commerce," Balasubramaniyan says. "It's a pretty gnarly problem." Pindrop was actually involved in identifying the software that had been used to create the Biden-bot--something that paved the way for the firm to launch Pindrop Pulse Inspect.

"We did the same thing with a recent Elon Musk cryptocurrency scam," Balasubramaniyan says. "We did the same thing with a Lebron James deepfake that happened on X. We've been traditionally called in for a lot of these situations in a very ad hoc fashion, so rather than continuing to be ad hoc, we said we're just going to roll this out so people can just use it on their own."

Pindrop has garnered investment from venture firms Andreessen Horowitz and GV, as well as debt financing from Hercules Capital, Bloomberg reports. None of the three responded to a request for comment from Inc.

Picking out AI fakes from the real deal has been a challenge for as long as the technology has existed, but it's growing more urgent as artificial intelligence software gets cheaper, higher quality, and easier to use. A tool the writing software company Grammarly just announced aims to help teachers and students track the provenance of AI-generated text amid concerns about machine-generated academic writing, while a recent study out of Northwestern University and Germany's University of Tübingen suggested that certain telltale words--such as "delves" and "showcasing"--serve as AI chatbots' fingerprints.
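Word-frequency fingerprinting of the sort that study describes can be sketched in a few lines. The snippet below is a naive illustration, not the researchers' methodology: the word list and the per-thousand-words rate are assumptions made for the example.

```python
import re

# Words reportedly overrepresented in AI-generated text. This list and the
# rate metric are illustrative guesses, not the study's actual approach.
TELLTALE_WORDS = {"delve", "delves", "showcase", "showcasing"}

def telltale_rate(text: str) -> float:
    """Occurrences of flagged words per 1,000 words of input text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TELLTALE_WORDS)
    return 1000 * hits / len(words)

sample = "This paper delves into the results, showcasing a novel method."
print(f"{telltale_rate(sample):.1f} flagged words per 1,000")
```

A real classifier would weigh thousands of such signals statistically rather than counting a handful of words, but the underlying idea is the same: generated text leaves measurable habits behind.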

Photo Credit: Getty Images.
