The ever-changing landscape of Artificial Intelligence

Artificial Intelligence (“AI”) is fast becoming the hot topic across the globe, both for its ability to reduce manual processes and because of concern around “deepfakes”: synthetic audio or video created by Generative AI (“GenAI”) that mimics real humans.

The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.

This blog examines the current AI legislative landscape and outlines some considerations for financial services firms to ensure that they deploy and manage AI systems safely.

Why is this important?

  • A key risk area for firms is criminals using GenAI to create “deepfakes” that circumvent biometric security measures, generally used for identification and verification purposes. FraudGPT (which mimics the ChatGPT platform) is available on the dark web and deploys machine-learning algorithms to generate malicious content for cybercriminals, such as persuasive phishing emails, fraudulent websites and malware. This product, and others like it, will undoubtedly accelerate existing levels of AI-facilitated fraud.
  • However, it’s not all bad news. GenAI exceeds “traditional” AI’s capability to identify irregularities in transactions based on known fraudulent typologies, because it can also examine customer behaviour, device information and external fraud trends. Where firms harness this technology correctly, it should reduce the risk of biometric data misuse. Visa launched a GenAI solution in May 2024, the Visa Account Attack Intelligence (“VAAI”) scoring system, which applies a risk score to transactions in real time to help firms prevent fraudulent Card-Not-Present transactions (a simplified sketch of this kind of scoring follows this list).
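To make the idea concrete, the snippet below is a minimal, hypothetical sketch of how a real-time score might blend transaction, behavioural and device signals. Every name, weight and threshold here is an illustrative assumption; production systems such as VAAI use proprietary models trained on far richer data.

```python
# Illustrative toy example only - not Visa's VAAI model.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float                  # transaction value
    avg_amount: float              # customer's historical average spend
    new_device: bool               # first time this device is seen for the customer
    foreign_ip: bool               # IP geolocation differs from the customer's home country
    matches_known_typology: bool   # resembles a known fraud pattern

def risk_score(tx: Transaction) -> float:
    """Return a 0-1 score; higher means more likely fraudulent."""
    score = 0.0
    if tx.matches_known_typology:        # what "traditional" rules already catch
        score += 0.4
    if tx.amount > 3 * tx.avg_amount:    # behavioural deviation
        score += 0.25
    if tx.new_device:                    # device intelligence
        score += 0.2
    if tx.foreign_ip:                    # contextual/external signal
        score += 0.15
    return min(score, 1.0)

# Example: an unusually large payment from a new device and a foreign IP
tx = Transaction(amount=950.0, avg_amount=120.0, new_device=True,
                 foreign_ip=True, matches_known_typology=False)
print(risk_score(tx))  # 0.6 - above an assumed review threshold of 0.5
```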

What is happening in the UK?

  • In November 2023, the UK Government (“UKG”) launched the AI Safety Institute, designed to enable the safe, reliable development and deployment of advanced AI systems. At present, UKG’s top priority is understanding the capability and risks of these systems, ahead of implementing a regulatory framework.
  • Various public authorities have set out their approach to the UK’s AI landscape. In April 2024, the Financial Conduct Authority, Bank of England and Prudential Regulation Authority responded to the UK Government’s July 2022 AI Regulation Policy Paper, welcoming the proposed principles-based approach; none is currently advocating further regulation.

What’s happening elsewhere?

  • The European Union (“EU”) approved the final text of the AI Act on 21 May 2024, which includes:
      ◦ a four-tiered risk classification for AI systems, from “unacceptable” to “minimal” risk;
      ◦ an outright ban on AI systems deemed “unacceptable”;
      ◦ stringent obligations, before going to market, on high-risk activities (including creditworthiness assessments, health/life insurance and border control processes); and
      ◦ fines of up to €35 million, or 7% of a firm’s annual global revenue, whichever is higher (a worked example of this cap follows below).
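As a worked illustration of how that penalty cap scales with firm size (the turnover figures below are hypothetical):

```python
# The AI Act's headline penalty for prohibited practices: the higher of
# EUR 35m or 7% of annual global turnover (turnover figures are hypothetical).
def max_fine_eur(annual_global_turnover: float) -> float:
    return max(35_000_000, 0.07 * annual_global_turnover)

print(max_fine_eur(200_000_000))    # EUR 35m floor applies (7% would be only EUR 14m)
print(max_fine_eur(1_000_000_000))  # EUR 70m - here 7% exceeds the floor
```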

Jurisdictions leading the way in AI activity include Malta. Kai Kleingunther of ARQ Group notes that the Malta AI Taskforce was established in October 2018 and has since published two documents: the ‘Malta Ethical AI Framework’ and ‘A Strategy and Vision for Artificial Intelligence in Malta 2030’.

Both documents set out guiding principles for governing AI but aren’t legally binding. The key objective of the former is to ensure that AI developments are ethically aligned, transparent and socially responsible, whilst the latter aims to position Malta as a strategic global leader in AI, thereby gaining a competitive advantage.

What are the risks of deploying AI solutions and how can they be managed?

  • In May 2024, the European Securities and Markets Authority (“ESMA”) issued a warning to investment firms using AI, stating that management bodies remain responsible for a firm’s decisions (whether made by humans or AI tools) and that customers must be protected.
  • ESMA listed algorithmic bias, data quality issues, and the privacy and security risks of storing and processing data within AI systems as inherent risks.
  • ESMA also emphasised the need for effective risk management frameworks focused on AI implementation and application, including robust governance structures, regular AI model testing and training, and monitoring of AI systems to identify and mitigate potential risks and biases (a simple illustration of one such bias check follows this list).
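By way of illustration, the sketch below shows one monitoring check that a governance framework of the kind ESMA describes might include: comparing a model’s approval rates across customer groups. The group labels, data and four-fifths threshold are all assumptions for the example, not an ESMA requirement.

```python
# Hypothetical bias-monitoring check - not an ESMA-prescribed method.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_alert(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    highest group's rate (the informal 'four-fifths' rule)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Toy data: group B's approval rate (1/3) is well below group A's (2/3)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(disparity_alert(decisions))  # {'B': 0.333...} -> escalate for review
```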

Global AI providers and financial services firms will need to be prepared for this rapid pace of change. The continued adoption of AI will impact firms on an enterprise-wide basis. Should any support be required, visit the K2 Integrity website.