The EU AI Act: What is it and what do payments firms need to know?

The EU AI Act is a regulation that applies directly in every Member State. Rather than taking a sector-specific approach, it imposes requirements on the supply and use of AI systems generally.

What does the Act apply to?

The Act applies to providers and deployers of AI systems. The definition of an AI system is broad, and the Commission is due to release guidance shortly that should assist in understanding its impact. The Act defines an AI system as:

"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

From a technological perspective, it is not entirely clear what this means in practice, and there is scope for argument about whether or not a particular system is an “AI system”. That said, certain systems are likely to be presumed to be AI based on the technology they use: a system using machine learning, for example, is likely to be presumed to be “AI”. Chatbots, digital assistants and generative AI products are also likely to be in scope.

Payments firms may use AI across a variety of use cases, including:

  • Fraud detection: Machine learning algorithms can analyse large amounts of transaction data to identify patterns associated with fraud (a brief sketch follows this list).
  • KYC: AI can be used to extract key data when processing customer documents and to check that the business description a merchant provides aligns with the content of their website.
  • Transaction analysis: AI can be used to provide products that help customers understand spending behaviours, manage budgets and forecast revenues.
  • Personalised customer experience: AI can be used to suggest the payment methods most likely to convert for a specific customer, which can increase sales for merchants.
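
To make the fraud detection use case concrete, the sketch below shows one common pattern: unsupervised anomaly detection over transaction features. It is illustrative only; the synthetic features, the model choice (scikit-learn's IsolationForest) and the review threshold are all assumptions, not a description of any real firm's system.

    # Illustrative sketch of anomaly-based fraud screening on transaction data.
    # Feature names, model choice and thresholds are assumptions, not a
    # production fraud system.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Synthetic transactions: [amount_eur, hour_of_day, merchant_risk_score]
    normal = rng.normal(loc=[45.0, 14.0, 0.2],
                        scale=[20.0, 4.0, 0.1], size=(1000, 3))
    unusual = rng.normal(loc=[900.0, 3.0, 0.8],
                         scale=[150.0, 1.0, 0.1], size=(10, 3))
    transactions = np.vstack([normal, unusual])

    # Fit an unsupervised model to traffic that is mostly legitimate.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(transactions)

    # Outliers (predicted as -1) are flagged for manual review rather than
    # blocked automatically, keeping a human in the loop.
    flags = model.predict(transactions)
    print(f"{(flags == -1).sum()} of {len(transactions)} flagged for review")

Routing flagged transactions to human review rather than auto-blocking them also sits comfortably with the human oversight expectations for high-risk systems discussed below.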

Who does the Act apply to?

EU payments firms are most likely to act as either a provider or a deployer of an AI system. A provider develops an AI system and places it on the market. A deployer uses an AI system under its authority; this includes, for example, an employer that makes an AI system available to its workforce.

Where does the Act apply?

The territorial scope of the Act is exceptionally broad and could capture many international organisations with only a tangential connection to the EU. For example, the Act applies in a wide range of circumstances, including to:

  • Providers of AI systems that are put into service or placed on the market in the EU.
  • Deployers of AI systems established in the EU.
  • Providers or deployers of AI systems where the output from the system is used in the EU.

What do I need to comply with?

This depends on which risk tier the AI system you provide or use falls within. High-risk systems, such as those used for credit scoring, attract a number of obligations for providers, including requirements to:

  • Establish a risk management system with proper risk assessments.
  • Ensure the system is trained on high-quality data.
  • Put appropriate technical documentation and record keeping arrangements in place and provide instructions for use with the AI system.
  • Ensure appropriate human oversight with possibilities to intervene.
  • Ensure the system achieves an appropriate level of accuracy, robustness and cybersecurity.
  • Put a quality management system in place for post-market monitoring and draw up an EU declaration of conformity.
  • Register the system.

A deployer must comply with a more limited set of obligations when using high-risk AI systems. For example, they must:

  • Ensure they use the AI system in accordance with the instructions for use.
  • Apply suitable human oversight.
  • Monitor the operation of the system and keep logs of its operation.
  • Inform workers and their representatives before using the technology in the workplace.

If an AI system is classed as limited risk, it will be subject to a narrower set of obligations, including transparency requirements. For example, when an AI system is intended to interact with individuals, firms must ensure that the system informs the individual that they are interacting with an AI, unless that is obvious from the circumstances.
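
As a loose illustration of that transparency duty (the function names and disclosure wording below are hypothetical, not prescribed by the Act), a customer-facing chatbot could surface the notice before handling any input:

    # Hypothetical sketch of an AI-interaction disclosure in a support chatbot.
    # The wording and structure are illustrative assumptions, not prescribed text.
    AI_DISCLOSURE = (
        "You are chatting with an automated assistant, not a human agent. "
        "Type 'agent' at any time to reach a member of our team."
    )

    def start_chat_session(send_message) -> None:
        """Open a session by disclosing the AI interaction up front."""
        send_message(AI_DISCLOSURE)

    # Example: printing to stdout in place of a real messaging channel.
    start_chat_session(print)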

Timeline

The rules applying to high-risk use cases generally apply from 2 August 2026. Other aspects of the Act apply sooner. For example, obligations on providers and deployers regarding AI literacy apply from 2 February 2025.

AI literacy means the skills, knowledge and understanding that allow providers, deployers and other affected persons to make informed use of AI systems and to be aware of both the opportunities of AI and the risks of potential harm. The practical consequence is that EU firms will need to train staff on AI and on how the firm uses it.

