Deepfake detective



With the upcoming presidential election and the increasing use of AI tools like ChatGPT, UB faculty member Siwei Lyu expects his role as a deepfake detective will continue to grow. Photo: Douglas Levere

Published November 17, 2023

The images are provocative. Joe Biden and Kamala Harris embrace with glee in the White House to celebrate Donald Trump’s indictment in an alleged hush money scheme. A conservative activist has shared them with 2 million followers on X.

But are the photos real?

The question came to UB media forensics expert Siwei Lyu from editors at Agence France-Presse, who were seeking an urgent answer.

Lyu quickly ran the images through a series of detection algorithms and provided the news agency with proof — Harris’ hand appears to have six fingers — that the images were generated by artificial intelligence, otherwise known as a deepfake.

“Every time a reporter asks me about the authenticity of a piece of media and I see that my analysis contributes to the debunking of misinformation and disinformation, that is very rewarding,” says Lyu, co-director of the UB Center for Information Integrity, which fights online misinformation and disinformation.

Providing such expertise to news media worldwide — recent examples include USA Today, Poynter, PolitiFact and Reuters — is a weekly occurrence for Lyu, a SUNY Empire Innovation Professor of computer science and engineering, who is also developing tools to help older adults and children spot online deceptions.

His expertise will soon be available to the public, too. In the coming weeks, he and his students plan to launch the DeepFake-o-meter, an open platform where anyone online can submit a potential AI-generated video for analysis. 

With next year’s presidential election and the emergence of content-generating AI tools like ChatGPT, Lyu expects his role as a deepfake detective will continue to grow.

Only computer and internet connection needed

The video looks like a typical Buffalo Bills postgame news conference. Superstar quarterback Josh Allen stands at a microphone wearing his familiar white No. 17 jersey.

Then he starts speaking. 

“I know in a different and more real world, we were beaten,” Allen says. “But in this universe created by artificial intelligence, we won the game.”

The video is a deepfake created by Lyu to educate the public about synthetic media.

“Professor Lyu always tells us, ‘The better deepfake you can make, the better you will be at detecting them,’” says Chengzhe Sun, a computer science and engineering PhD student in Lyu’s UB Media Forensics Lab. 

Allen’s words were written by ChatGPT, while his voice was created by Prime Voice AI. The lip syncing was done with Wav2Lip. The video was complemented by a news article about the Bills’ imagined victory, complete with more ChatGPT text and a Stable Diffusion image. The story’s alleged author even has an X account, with a profile picture from thispersondoesnotexist.com.

Every one of these tools is free and can be accessed through a web browser.

“So you do not need to know programming and machine learning — all you need is a computer with an internet connection,” Lyu says.
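For readers curious about the mechanics, the most technical step, the lip syncing, takes only a few lines of code. Here is a minimal sketch, assuming a local checkout of the open-source Wav2Lip repository with its pretrained checkpoint downloaded; the file names are placeholders, not the actual files behind the Allen video.

```python
# Minimal sketch: driving Wav2Lip's inference script from Python.
# Assumes this runs inside a local checkout of the open-source Wav2Lip
# repo with its pretrained checkpoint; file names are placeholders.
import subprocess

FACE_VIDEO = "press_conference.mp4"  # real footage of the speaker
FAKE_AUDIO = "cloned_voice.wav"      # speech synthesized by a voice-cloning tool

# Wav2Lip re-renders the mouth region of each frame so the lips match
# the supplied audio track; output lands in results/result_voice.mp4.
subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
        "--face", FACE_VIDEO,
        "--audio", FAKE_AUDIO,
    ],
    check=True,
)
```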

The ease of creating deepfakes underscores their danger, Lyu says. If anyone can make them, they can easily be made to manipulate something more consequential than a football game. The technology has frequently been used to impersonate politicians, as well as to place people’s likenesses in pornographic videos without their consent.

Lyu started the Media Forensics Lab when he arrived at UB in 2020.

“I’ve been a puzzle solver since a very early age. When somebody makes a fake piece of media, it’s like the ultimate puzzle,” he says. “My scientific curiosity drives me to dig deeper into these issues and see if I can come up with some solutions.”


From left: Students working with Siwei Lyu (center in suit) in the UB Media Forensics Lab include Jialing Cai, Shuwei Hou, Chengzhe Sun, Zhaofeng Si, Shan Jia, Riky Zhou and Yu Cai. Photo: Douglas Levere

Fighting AI with AI

To combat AI-generated media, Lyu turns to an unlikely ally: AI.

He and his students train machine-learning algorithms to spot AI-generated media by feeding them tens of thousands of real and fake images, videos and audio samples. If an algorithm correctly guesses the authenticity, it’s rewarded; if it’s wrong, it’s penalized.

“The internal pattern of the real images will get boosted inside the algorithm’s brain,” Lyu says. “Slowly, the algorithm figures out what are the right things to look at in an image, and what are the right ways to look at it.”
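What Lyu describes is, at bottom, supervised binary classification. A minimal PyTorch sketch of that reward-and-penalty loop follows; the dataset layout and off-the-shelf backbone are illustrative assumptions, not the Media Forensics Lab’s actual setup.

```python
# Minimal sketch of training a real-vs-fake image classifier.
# The data/ layout (data/fake, data/real) and ResNet-18 backbone are
# illustrative assumptions, not the lab's actual models or data.
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder derives the 0/1 labels from the subdirectory names.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: real, fake

# Cross-entropy loss is the "penalty": small for correct guesses, large
# for wrong ones, so training boosts the patterns that separate the classes.
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```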

Their algorithms might look for infrequent blinking or whether a hand has five fingers. Their latest can even spot irregularities beneath the surface of an image that the human eye can’t see.
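One widely published way to surface those sub-visual irregularities is to inspect an image’s frequency spectrum, where many generative models leave periodic artifacts. The sketch below illustrates the general technique, not the lab’s own detector; the file name is a placeholder.

```python
# Rough sketch: looking for generative-model artifacts in the frequency
# domain. Illustrates the general technique, not the lab's own detector.
import numpy as np
from PIL import Image

def log_spectrum(path: str) -> np.ndarray:
    """Return the log-magnitude 2D Fourier spectrum of a grayscale image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    shifted = np.fft.fftshift(np.fft.fft2(img))  # low frequencies to center
    return np.log1p(np.abs(shifted))

# Grid-like peaks or unusual energy far from the center (high frequencies)
# can hint that an image was synthesized rather than photographed.
spec = log_spectrum("suspect_image.jpg")
print("mean high-frequency energy:", spec[:20, :].mean())
```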

The algorithms can make their determination in less than a second, which is beneficial for both the Lyu lab’s workload and journalists’ deadlines. Lyu will typically provide journalists with a brief report explaining how the algorithm reached its conclusion.

Still, the algorithms are not perfect. For example, they flagged the spliced steps in a recent viral deepfake of Sen. Rand Paul wearing a red bathrobe at the U.S. Capitol, but not Paul’s oddly curved thumb. 

That’s why Lyu stresses that humans need to be involved in the detection process. He and his students visually inspect every piece of media and include their own observations in their reports to journalists.

“Algorithms are probabilistic — they usually answer with confidence but there’s a range of data in which they cannot make definite answers,” Lyu says. “It should be a collaborative relationship between algorithms and humans.”

Lyu and his students are also careful not to overstate their findings. They never declare a piece of media as being real or fake — they simply state whether they’ve found evidence of generative AI.

“Professor Lyu has taught us how to be neutral in our analysis,” says Shan Jia, a postdoctoral associate in the lab. “I think that’s a very important attitude to have as a media forensics researcher. You don’t want to mislead anyone.” 
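That caution can be written directly into how a detector’s score is reported. Here is a hypothetical sketch, with illustrative thresholds, of translating a probabilistic output into the kind of hedged finding the lab gives journalists:

```python
# Hypothetical sketch: wording a detector's probabilistic score as an
# evidence statement rather than a flat "real" or "fake" verdict.
# The 0.9 and 0.1 thresholds are illustrative assumptions.

def summarize(p_fake: float) -> str:
    """Map a model's fake-probability to a cautious, evidence-based report."""
    if p_fake >= 0.9:
        return "Strong evidence of generative AI detected."
    if p_fake <= 0.1:
        return "No evidence of generative AI detected."
    return "Inconclusive: human review of the media is recommended."

print(summarize(0.97))  # Strong evidence of generative AI detected.
print(summarize(0.55))  # Inconclusive: human review of the media is recommended.
```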

Future of deepfakes

Lyu is a proponent of greater investment in deepfake mitigation and detection.

“The money spent on detecting deepfakes is probably less than the rounding error of the money spent on generative AI technologies like ChatGPT and Stable Diffusion,” Lyu says. “There’s no single solution to this problem. It’s not going to happen overnight, but I think we can get there.”

When asked for the doomsday deepfake scenario that keeps him up at night, Lyu gives a surprising answer. 

It’s his Josh Allen deepfake: “not just a single piece of disinformation, but a whole web presence — YouTube video, Twitter account, web page, Wikipedia page, everything together,” Lyu says. “That makes it much harder to verify or debunk in a short period of time and can flood our information system with false information.”

The biggest danger posed by deepfakes may not be convincing us that fake information is real, Lyu says, but making us doubt that anything is real.

“Deepfakes muddle the water. If they plant a seed of doubt in viewers’ minds that the things you see are not real, that will have a compound effect on our future information ecosystem.”
