The Commonwealth Bank of Australia has open-sourced the artificial intelligence and machine learning model that it uses to detect abusive language in transaction descriptions.

CBA publishes anti-abuse AI and ML model

The bank said on Wednesday the AI model, implemented on the platform, “helps to identify digital payment transactions that include harassing, threatening or offensive messages – referred to as technology-facilitated abuse”.

The model is now available for free to any bank worldwide via GitHub.

CBA uses the model to detect around 1500 high-risk cases each year, according to the bank.

The AI model builds on an automatic block filter introduced in 2020, after CBA identified more than 8000 customers who had experienced harassment through transaction descriptions.

The bank has since implemented blocks that capture threatening or offensive words in digital payment transactions, stopping 1 million transactions so far.
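In broad terms, a block filter of this kind checks each payment description against a list of prohibited terms before the transaction is processed. The sketch below is purely illustrative: the term list, matching rules, and function names are assumptions for demonstration, not CBA's actual implementation.

```python
# Hypothetical sketch of a transaction-description block filter.
# The blocked-term list here is a placeholder, not CBA's real list.
import re

BLOCKED_TERMS = {"threat", "abuse"}  # illustrative placeholder terms


def is_blocked(description: str) -> bool:
    """Return True if the payment description contains a blocked term."""
    # Tokenise into lowercase words so matching is case-insensitive
    # and punctuation-insensitive.
    words = re.findall(r"[a-z']+", description.lower())
    return any(word in BLOCKED_TERMS for word in words)


print(is_blocked("rent for June"))     # False
print(is_blocked("this is a threat"))  # True
```

A real system would be considerably more sophisticated, handling misspellings, obfuscation, and context, which is one reason an ML model was layered on top of simple word blocks.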

In August, the bank said that with the victim’s consent it would trial the referral of abuse in transaction descriptions to NSW Police.

It said using AI demonstrated “how innovative technology can create a safer banking experience for all customers, especially those in vulnerable circumstances.”

The bank said the work was made possible through a partnership.

CBA group customer advocate Angela MacMillan said “financial abuse occurs when money is used to gain control over a partner and is one of the most powerful ways to keep someone trapped in an abusive relationship.”

“Sadly, we see that perpetrators use all kinds of ways to circumvent existing measures such as using the messaging field to send offensive or threatening messages when making a digital transaction,” she said.

“We developed this technology because we noticed that some customers were using transaction descriptions as a way to harass or threaten others.

“By using this model, we can scan unusual transactional activity and identify patterns and instances deemed to be high risk so that the bank can investigate these and take action.”

MacMillan said the bank’s decision to share its source code means “it will help financial institutions have better visibility of technology-facilitated abuse”.

“This can help to inform action the bank may choose to take to help protect customers,” MacMillan said.

In recent years other major banks have also implemented their own technology to tackle financial abuse.

Westpac introduced a tool to enable customers to report any abuse on payments via a report button within the bank’s online and mobile banking platforms.  

Meanwhile, NAB said it had been building out further capabilities to stop its digital payment services being used as a channel to send abusive messages.

In 2022, ANZ built an algorithm that identifies misuse of payment fields in transactions to harass or abuse victims.
