The regulator’s finding that only limited friction exists between new artificial intelligence (AI) obligations and established banking and payments rules paves the way for coordinated oversight as implementation begins.
In November, the European Banking Authority (EBA) published the results of a mapping exercise comparing obligations under the EU’s AI Act with requirements arising from existing sector-specific financial services legislation.
The review centred on AI systems classified as “high-risk” under points 5(b) and (c) of Annex III to the AI Act.
These include AI models used for credit scoring and creditworthiness assessments, as well as those supporting risk assessments and pricing in life and health insurance.
Taking into account key regulations such as the revised Payment Services Directive (PSD2) and the Digital Operational Resilience Act (DORA), the study found “no significant contradictions” between the AI Act and these other frameworks.
“The AI Act is complementary to EU banking and payment sector legislation, which already provides a comprehensive framework to manage risks,” said the EBA.
The authority cautioned that firms will still need to invest effort in aligning and integrating the various frameworks in practice.
Planning ahead
The EBA’s mapping exercise examined how key AI Act obligations interact with, and could be integrated into, firms’ existing regulatory duties.
Topics assessed included quality management, serious-incident reporting, the consumer’s right to an explanation, risk-management processes, technical documentation, record-keeping and human oversight.
The regulator found that most requirements for high-risk systems under the AI Act are either “fully aligned”, “partially aligned” or “complementary” with the requirements of other sectoral legislation.
In the mapping exercise, a “fully aligned” interaction was defined as one where the AI Act obligation is “substantively identical” to requirements under EU financial services legislation.
Only one interaction scored “fully aligned”: the interaction between the consumer’s right to explanation under both the AI Act and the Consumer Credit Directive (CCD).
Most remaining interactions were classed as “partially aligned” or “complementary”. Roughly a third were left unrated due to an absence of comparable provisions.
Rationale for the mapping exercise
The EBA conducted the mapping exercise in order to inform the European Commission’s upcoming guidelines on the classification of high-risk use cases under the AI Act.
These guidelines, which are mandated to be issued by February 2, 2026, are expected to provide further practical examples of high-risk use cases and how firms can develop and deploy such systems in compliance with the act.
In a letter to the European Commission, EBA chair José Manuel Campa said the mapping exercise provided a “comprehensive overview” of how existing EU financial services law addresses relevant requirements under the AI Act.
Campa added that the information provided by the EBA will help to “facilitate management of potential overlaps” and to ensure a “smooth implementation” of the AI Act in the EU banking and payments sector.
The EBA also said the mapping exercise highlighted the importance of “supervisory cooperation” within and between member states to ensure effective implementation of the AI Act.
In 2026–27, the EBA plans additional work to support a common supervisory approach across financial sector and market surveillance authorities.
The authority will also provide input to the Commission’s AI Office, and will participate in discussions of the AI Board Subgroup on Financial Services.
European Parliament wary of ‘high-risk’ financial sector AI
Four days after the EBA published the results of its mapping exercise, the European Parliament passed a resolution setting out MEPs’ priorities for the use of AI in the financial sector.
The resolution acknowledged that most current AI deployments focus on back-office efficiency and represent incremental rather than high-risk innovation.
However, it also recognised that the use of AI to evaluate creditworthiness or to establish credit scores is “prevalent and increasing”.
The MEPs who voted in favour of the resolution said existing sectoral legislation is “mainly sufficient” to cover the deployment of AI in its current form, and that no further legislation is required.
However, given the rapidly evolving nature of the technology, they want to see greater coordination and cooperation between the national competent authorities tasked with supervising AI, in line with the EBA’s proposals.
The resolution notes that the AI Act “explicitly” seeks to avoid duplication of requirements with current financial services legislation by allowing for “limited derogations”.
MEPs nonetheless warned that unclear overlaps and insufficient interpretative guidance could increase compliance burdens and slow AI adoption.
They therefore urged the Commission to issue clear, practical guidance that supports ethical, responsible and transparent use of AI in financial services.
The takeaway
The EBA’s findings signal that firms should not expect major structural conflict between the AI Act and existing financial services regulation.
The more pressing challenge is operational: mapping system-level obligations, demonstrating coherent governance and ensuring that AI risk-management processes can withstand supervisory scrutiny across multiple regimes.
Institutions that begin this integration early will be better positioned to scale AI use cases without creating avoidable compliance challenges.