Artificial intelligence and financial fraud
More often than not, fraud involves impersonation, and artificial intelligence (AI) is a powerful technology that is ultimately designed to impersonate all of us. This means that AI can equip fraudsters with the tools to make their attacks more convincing, more efficient, and faster.
AI has the potential to be used for financial fraud, just like any other technology. AI can be used to impersonate people, automate phishing attacks, and manipulate data. It is important to have strong security measures in place to ensure that AI is not used for fraudulent purposes and to protect against any potential risks. Additionally, it is crucial to have ethical guidelines in place that govern the use of AI to prevent any misuse or abuse of the technology.
The above paragraph was generated by ChatGPT. As this makes clear, AI is not only able to provide detailed knowledge of a given topic but also to present that information with a breezy, human-seeming confidence. That sense of trustworthiness is perhaps the most important element of any trick or scam.
How AI can be used in fraud
At the consumer end, AI can be used to generate scripts that can be read out over the telephone to scam people into making bank transfers. As the paragraph above demonstrates, AI does a convincing job of playing a human correspondent.
On a more institutional level, and of more concern to banks, lenders, and other financial firms, generative AI can create videos and photographs of people who do not exist. In other words, AI can provide “evidence” to pass identity checks that could be used to open accounts, effect transfers, and even demonstrate the existence of (fake) liquidity or assets against which borrowing can be secured.
So what should be done?
Systems and controls
The potential for AI to be used for financial fraud can scarcely be overstated, particularly when something as powerful as ChatGPT is available for anyone to use entirely anonymously.
At the very least, firms that may be at risk should:
- Take steps to scrutinise the authenticity of all identifying documentation provided for anti-money laundering (AML) and know your customer (KYC) purposes. In particular, information should be sought from third parties where possible (eg public registries or verification firms) rather than taken directly from the customer. Any doubts should be checked with an in-house or external cybersecurity team.
- Ensure that, when dealing with pre-existing clients and customers, continuing steps are taken in line with best practice to confirm that the person is not being “spoofed”. This will include, for example, multi-factor authentication and perhaps even face-to-face meetings.
- Train staff who may be targeted to recognise the patterns that can signify financial fraud. The method may have changed (ie via the use of AI) but the fraudsters’ goals remain the same. Transactions that are unexplained or out of character, or borrowing that has no obvious purpose, should all be treated with suspicion, no matter how convincing the supporting documentation may appear to be (a simple illustrative check of this kind is sketched below).
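For firms with an in-house technical or data team, the kind of “out of character” check described above can be automated in a rudimentary way. The sketch below is purely illustrative and is not drawn from any particular product or vendor: the field names, the 30-day look-back window and the threshold are all hypothetical choices, and the check is intended only to flag a transaction for human review, never to approve or block it.

```python
# Illustrative sketch only: flags transactions that look "out of character"
# relative to a customer's recent history. Field names, the 30-day window
# and the 3x threshold are hypothetical, not an industry standard.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean
from typing import List


@dataclass
class Transaction:
    customer_id: str
    amount: float          # absolute value of the transfer
    timestamp: datetime
    counterparty: str


def is_out_of_character(
    history: List[Transaction],
    candidate: Transaction,
    window_days: int = 30,
    multiple: float = 3.0,
) -> bool:
    """Return True if the candidate transaction warrants human review.

    A transaction is flagged when it exceeds `multiple` times the customer's
    average transaction size over the last `window_days`, or when it is sent
    to a counterparty the customer has not paid in that period.
    """
    cutoff = candidate.timestamp - timedelta(days=window_days)
    recent = [t for t in history if t.timestamp >= cutoff]
    if not recent:
        # No recent history at all is itself worth a second look.
        return True

    new_counterparty = candidate.counterparty not in {t.counterparty for t in recent}
    unusually_large = candidate.amount > multiple * mean(t.amount for t in recent)
    return new_counterparty or unusually_large
```

A flag raised by a check like this would simply prompt the further human scrutiny described in the points above; it is no substitute for it, and however sophisticated the tooling, the judgment about whether a transaction is genuine remains a human one.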
Regulation of AI is, of course, coming down the tracks. The government published a white paper in March 2023 setting out its proposed “pro-innovation” approach to AI regulation, but AI is already everywhere. The white paper itself admits that “the pace of change itself can be unsettling.” In the meantime, self-help is the best (and only) defence.
Many thanks to Aysha Satchu, a current paralegal in the team, for their help in preparing this article.
This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.
© Farrer & Co LLP, May 2023