Online fraud in which scammers fabricate identities to dupe financial institutions or their customers out of money is on the rise, and experts say the crimes are only expected to grow more frequent as artificial intelligence becomes more prevalent.
A new survey of 500 fraud and risk professionals, first reviewed by ABC News, shows widespread concern in the financial industry about the growing scale of these fake online customers and whether security and identity detection technology at banks and loan servicers can keep up.
According to industry experts, financial institutions responsible for servicing loans, issuing credit cards or running credit checks have long been forced to contend with criminals who steal other people’s personal information to create fake personae for their financial gain.
This is called “synthetic” fraud and it has taken on a new dimension with the spread of generative AI technology, said Ari Jacoby, whose AI security firm Deduce commissioned the survey, which was conducted by Wakefield Research.
Industry experts have said that fraud happens on a range of scales, from intricate financial manipulation to "phishing" expeditions, in which malicious messages are disguised to harvest someone's personal information, such as an email link that tricks the recipient into submitting a phone number, address, Social Security number and other data.
Criminals using AI — which can help perform rapid, automated tasks, among other functions — can scrape the internet at record speed and, once armed with information from a combination of stolen, fake and legitimate digital data sources, can masquerade as other people, Jacoby said.
Generative AI tools can make scams faster and more sophisticated: they make it easier to send phishing messages at scale, to create a trail of digital activity that makes a manufactured identity look like a real person, and to mimic someone else's activity in order to impersonate them, trick still more people and gather more sensitive information.
“So it’s more forcefully coming at these institutions, more bad accounts can be created and more success can be had by those bad actors creating these fraudulent accounts,” Jacoby said. “And ultimately they’re in it to steal money.”
The Wakefield survey is the latest in a series of alarms security experts inside and outside of the financial industry have raised. Those warnings have come from major credit card companies, consumer advocates and more.
An analyst for Thomson Reuters, which advises financial institutions on security matters, in April called synthetic fraud “one of the fastest-growing financial crimes.”
That advisory noted that synthetic fraud is more complicated to address than traditional identity theft, in which a criminal steals a real person’s name and other personal data in order to commit financial fraud. In synthetic fraud cases, by contrast, a criminal combines real data, such as Social Security numbers, with manufactured identities in order to elude credit monitoring and other security services.
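To make that distinction concrete, the pattern screening systems look for can be sketched in a few lines of Python. Everything below is illustrative: the field names, thresholds and scoring rules are hypothetical examples of the approach, not any institution's actual detection logic. The telltale combination is a legitimate Social Security number attached to an identity with almost no history behind it and contact details recycled across many applications.

```python
from dataclasses import dataclass

@dataclass
class Application:
    ssn_valid: bool           # SSN passes format/issuance checks (real, possibly stolen)
    credit_file_years: float  # depth of credit history tied to this identity
    shared_address_count: int # other applications reusing the same address
    shared_phone_count: int   # other applications reusing the same phone number

def synthetic_risk_score(app: Application) -> int:
    """Rule-based score: a valid SSN paired with a thin credit file and
    heavily reused contact details is the classic synthetic pattern."""
    score = 0
    if app.ssn_valid and app.credit_file_years < 1.0:
        score += 2  # real SSN, but almost no history behind the identity
    if app.shared_address_count >= 3:
        score += 1  # address recycled across many applications
    if app.shared_phone_count >= 3:
        score += 1  # phone recycled across many applications
    return score

# A record mixing a legitimate SSN with a manufactured, history-less identity
suspect = Application(ssn_valid=True, credit_file_years=0.2,
                      shared_address_count=5, shared_phone_count=4)
print(synthetic_risk_score(suspect))  # -> 4 (under these hypothetical rules)
```

A traditional identity-theft check, which asks only whether the SSN itself is real, would pass this record; it is the mismatch between the real data and the fabricated identity around it that gives the fraud away.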
Thomson Reuters advised then that institutions should step up their verification requirements and ensure regular contact with customers, among other safeguards.
Mastercard in July detailed some of the steps it was taking to curb identity-related fraud, including closely tracing how and when money moves through accounts. The payments giant said it was harnessing its own AI tools in the effort.
“As banking and payments security becomes increasingly advanced, fraudsters have shifted their focus to impersonation tactics,” the company said then. “Their goal is to convince people and businesses to send them money, thinking the transfer is to a legitimate person or entity.”
Ajay Bhalla, Mastercard’s cyber intelligence chief, said in a statement earlier this year that the problem was a stubborn one to address: “Banks have found these scams incredibly challenging to detect. Their customers pass all the required checks and send the money themselves; criminals haven’t needed to break any security measures.”
The Michigan attorney general’s office earlier this year likewise advised consumers that AI “allows scammers to easily create and personalize scams to make them more convincing,” including the use of “personal information pulled from social media profiles and other online sources to tailor the scam to you.”
California’s Department of Financial Protection & Innovation has also warned that generative AI can be used to impersonate people in order to commit fraud.
Traditional fraud prevention systems can have difficulty detecting synthetic fraud, according to the credit monitoring firm Equifax.
People who are less likely to routinely check their credit history, who have readily available information online or who are more likely to be unaware of the dangers, primarily young people and the elderly, are among the most heavily targeted victims, Equifax and Thomson Reuters have advised.
That danger underlines the importance of consumer and corporate vigilance. Consultant and author Nick Shevelyov, who has worked as a chief security and privacy officer in Silicon Valley, said there’s new demand for cybersecurity services that can adapt quickly.
“The very technology that empowers us may also imperil us,” Shevelyov said. “Everything is accelerating. The technologies used to defend against this are getting better, but the proliferation of false identities is also increasing.”
Firms like Jacoby’s are working to fight what he called “super-charged” AI fraud, which he said can act more quickly and more systematically than a person. “If you just had a smart human — that very smart human, if they were bent on committing financial crimes, if they wanted to be a fraudster, could create X number of fraudulent accounts. But that individual has to eat and sleep and do all the things that humans have to do,” Jacoby said.
Responding appropriately to AI-abetted fraud requires massive amounts of legitimate data to detect patterns that allow security professionals to flag illicit activity, he said.
“We’re looking for irregularities,” he said.
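Jacoby's description, mining large volumes of legitimate activity to establish a baseline and then flagging deviations from it, is the basic shape of statistical anomaly detection. A minimal sketch in Python, with illustrative numbers and a hypothetical threshold rather than any firm's real method:

```python
import statistics

def flag_irregularities(baseline, observed, z_threshold=3.0):
    """Flag observed values that deviate sharply from the baseline of
    legitimate activity (e.g., account signups per hour from one network)."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) / stdev > z_threshold]

# Legitimate traffic hovers around ~50 signups per hour; a bot-driven
# burst of fraudulent account creation stands out immediately.
normal_hours = [48, 51, 50, 49, 52, 47, 50, 53, 49, 51]
todays_hours = [50, 49, 400, 51]
print(flag_irregularities(normal_hours, todays_hours))  # -> [400]
```

Production systems use far richer features than a single count, but the principle is the same: the more legitimate data available, the tighter the picture of "normal," and the harder it is for automated fraud to hide inside it.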
Law enforcement officials have for years hoped that artificial intelligence will serve as a criminal justice tool as much as a threat. For example, a 2019 Justice Department report documented the potential for AI to be used to combat financial fraud, citing efforts by PayPal at the time to train fraud detection algorithms with large amounts of data.
The report’s author, Christopher Rigano, said then that “artificial intelligence has the potential to be a permanent part of our criminal justice ecosystem … allowing criminal justice professionals to better maintain public safety.”
Last summer, business owner Cory Camp learned firsthand the perils of being tricked, he told ABC News. He said he received a text that he thought was an innocent request from his cellphone provider. But after clicking on the digital link and allowing access to his personal information, his cell service was immediately disabled.
“My first initial reaction was freaking out,” Camp said. “Like, what did I just do?”
He said he had to print out a map and track down the store where someone else, pretending to be him, had bought a phone in his name, apparently after he inadvertently granted the scammer access to his account.
After verifying his identity, Camp was able to reactivate his cell phone service, he said.
He said he never discovered who had deceived him and he decided not to file a police report.
“I definitely feel violated,” Camp said. “It definitely feels like a breach of safety in that sense, even though I didn’t actually run into this person.”
Camp isn’t sure whether AI was used to help the fraudster, but experts told ABC News that this kind of scam could be conducted more easily with AI.
Camp said the experience has given him a new awareness about digital security — and he said that the prospect of that type of fraud getting amplified by AI is “terrifying.”