Regulatory Influencer: The EU AI Act - From Principles-Based Guidance to Sector-Specific Supervision

February 13, 2026

Introduction

The European Union’s AI Act, Regulation (EU) 2024/1689, represents the first comprehensive, binding framework for the development, deployment and use of artificial intelligence across the European Union. Having entered into force on August 1, 2024, the regulation is subject to a phased implementation timetable. While certain provisions are already applicable, the majority of obligations most relevant to the financial services sector are scheduled to apply from 2026. As a result, financial services firms have a narrowing window to assess their AI use cases, align governance frameworks and prepare for supervisory scrutiny.

The bigger picture 

Artificial intelligence remains a competitive differentiator, but it is now also treated as a regulated risk category requiring formal governance and supervisory oversight. For banks and payment service providers (PSPs), compliance will hinge less on technological capability and more on governance, accountability and control frameworks. Across Europe, supervisory authorities are increasingly moving beyond principles-based guidance and toward enforceable, auditable expectations. This trend is visible not only in the EU AI Act, but also in regimes such as Regulation (EU) 2022/2554 (the Digital Operational Resilience Act, or DORA) and strengthened third-party and outsourcing requirements. In light of this, the AI Act translates the use of AI into risk-based regulatory requirements and, once regulators such as the European Banking Authority (EBA) assume oversight responsibilities, AI will become embedded within the financial services supervisory framework, subject to the same prudential, conduct and operational risk expectations as other regulated activities.

Supervisory Intent

Supervisory expectations for artificial intelligence are undergoing a clear and deliberate shift, with what began as high-level, principles-based guidance rapidly evolving into structured, sector-specific supervision. This shift sits alongside a broader EU policy agenda which combines regulatory control with measures to enable responsible AI adoption. In October 2025, the European Commission published its Apply AI Strategy, signalling that the AI Act is intended not only as a control on AI use, but also as a framework to support uptake through infrastructure, testing facilities and access to funding.

Within this policy context, supervisory authorities are now turning their attention to integration. In November 2025, the EBA released a newsletter positioning itself as the coordinating authority for how the AI Act will be applied within financial services. In doing so, the EBA signalled a move away from generic AI compliance toward sector-tailored supervisory expectations for banks and PSPs. In the newsletter, the EBA set out the following plans:

  • Promote a common supervisory approach to the AI Act across national competent authorities and market surveillance authorities, reducing fragmentation in how AI obligations are interpreted and enforced across member states.
  • Act as an interface between the AI Act and banking regulation, providing input to the European AI Office and participating in the AI Board’s Financial Services subgroup.
  • Map AI Act requirements against existing frameworks to identify overlaps, synergies and regulatory gaps, and to clarify how AI obligations integrate with prudential, conduct and operational rules.

These signals are reinforced at the supervisory level. In the European Central Bank’s 2026–28 supervisory priorities, AI and digital innovation remain a stated focus, indicating that supervisors will continue to monitor AI systems as part of routine prudential oversight, including firms’ AI strategies, governance arrangements, risk controls and exposure to emerging risks.

These actions indicate that European authorities intend to translate the AI Act’s generic requirements into banking- and payments-specific supervisory expectations, rather than treating AI compliance as a standalone, separate requirement.

Against this supervisory backdrop, the practical focus for banks and PSPs will increasingly centre on how specific AI use cases are classified, governed and controlled in practice. From August 2, 2026, when the AI Act’s high-risk obligations are set to become applicable, regulatory scrutiny is expected to intensify, including the prospect of supervisory reviews, conformity assessments and enforcement action, such as fines, for non-compliant systems. 

Although the European Commission’s “digital omnibus” proposal in November 2025 signals a potential postponement of full enforcement of the strictest high-risk provisions to December 2027, this does not alter the supervisory trajectory, as preparatory supervision, convergence and integration of AI into prudential oversight are already underway and are a priority for EU regulators. As supervisory expectations converge and enforcement approaches take shape, firms will need to demonstrate not only awareness of the AI Act’s requirements, but also effective implementation across high-risk systems, internal governance arrangements and third-party dependencies.

Why should you care?

High-Risk AI Systems

The EU AI Act adopts a risk-based framework, with obligations that scale according to the level of risk posed by an AI system. High-risk AI systems are subject to the strictest governance, transparency and monitoring requirements, reflecting the potential impact on consumers, financial stability and operational integrity. AI systems are classified as high risk when they materially affect customers or critical processes, and therefore within the banking and payments sectors, high-risk AI systems include the following:

  • Credit and lending systems.
  • Creditworthiness assessment and credit scoring tools.
  • Loan approval or rejection systems.
  • Limit-setting and pricing algorithms.
  • Collections, default prediction and forbearance tools.
  • AI systems that influence access to services or pricing decisions.
  • Certain anti-money laundering (AML) or fraud detection tools where outcomes materially affect customers.
  • Biometric identification systems.
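
To make the classification exercise concrete, the following is a minimal, hypothetical Python sketch of an internal AI use-case register that flags entries falling into Annex III-style categories like those listed above. The class, field names and example entries are illustrative assumptions, not a legal test; actual classification requires case-by-case analysis under the act.

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """One entry in an internal AI use-case register (illustrative only)."""
    name: str                        # e.g. "Retail credit scoring model"
    business_area: str               # e.g. "Lending", "Payments", "AML"
    affects_customers: bool          # does the output materially affect customers or critical processes?
    annex_iii_category: str | None   # assumed label for the relevant Annex III category, if any

    @property
    def high_risk(self) -> bool:
        # Working assumption based on the article: treat a system as high risk when it
        # maps to an Annex III category and materially affects customers or critical processes.
        return self.annex_iii_category is not None and self.affects_customers


register = [
    AIUseCase("Retail credit scoring model", "Lending", True, "Creditworthiness assessment"),
    AIUseCase("Internal document search assistant", "Operations", False, None),
]

for use_case in register:
    status = "HIGH RISK" if use_case.high_risk else "not high risk (subject to review)"
    print(f"{use_case.name}: {status}")
```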

Banks and PSPs using high-risk AI must be able to demonstrate the following:

  1. Purpose and functionality: why the system is needed, the business or operational problem it addresses and how the system works (Articles 11, 13 and 14).
  2. Data governance: what data is used, why it is appropriate and measures to ensure quality (Article 10).
  3. Bias and error mitigation: processes for detecting, preventing and addressing bias or errors (Articles 10 and 15).
  4. Monitoring and oversight: how outputs are tracked, validated and updated over time (Articles 12, 13, 14, 15 and 61).
  5. Third-party oversight: where vendors or external AI providers are used, firms must demonstrate robust oversight and governance (Articles 16 and 17).
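
A practical way to keep this evidence organised is one structured record per high-risk system, keyed to the five areas above. The sketch below is a minimal, hypothetical illustration; the field names and the gap-checking logic are assumptions about what such a record might capture, not requirements taken from the act.

```python
from dataclasses import dataclass, field


@dataclass
class HighRiskAIEvidence:
    """Illustrative evidence record for one high-risk AI system (all field names are assumptions)."""
    system_name: str
    # 1. Purpose and functionality (Articles 11, 13 and 14, as cited above)
    intended_purpose: str
    functional_description: str
    # 2. Data governance (Article 10)
    data_sources: list[str] = field(default_factory=list)
    data_quality_measures: list[str] = field(default_factory=list)
    # 3. Bias and error mitigation (Articles 10 and 15)
    bias_testing_reports: list[str] = field(default_factory=list)
    # 4. Monitoring and oversight (Articles 12, 13, 14, 15 and 61, as cited above)
    monitoring_plan: str = ""
    human_oversight_plan: str = ""
    # 5. Third-party oversight (where a vendor or external provider is used)
    provider_name: str | None = None
    provider_due_diligence_ref: str | None = None

    def missing_evidence(self) -> list[str]:
        """List the evidence areas that are still empty, for internal follow-up."""
        gaps = []
        if not self.data_sources or not self.data_quality_measures:
            gaps.append("data governance")
        if not self.bias_testing_reports:
            gaps.append("bias and error mitigation")
        if not self.monitoring_plan or not self.human_oversight_plan:
            gaps.append("monitoring and oversight")
        return gaps


record = HighRiskAIEvidence(
    system_name="Retail credit scoring model",
    intended_purpose="Assess creditworthiness of retail loan applicants",
    functional_description="Supervised model over bureau and transaction data",
)
print(record.missing_evidence())  # ['data governance', 'bias and error mitigation', 'monitoring and oversight']
```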

At this stage, the high-risk AI obligations are set to apply from August 2, 2026 and are expected to trigger intensified regulatory scrutiny, which could include compliance audits, conformity assessments and enforcement actions. Banks and PSPs should therefore view high-risk AI not merely as a compliance exercise, but as a core supervisory expectation, with preparedness, documentation and governance being key to avoiding both regulatory and operational risks.

Governance and BAU Integration

AI is increasingly used in banking and payments for credit scoring, loan approval, pricing, customer risk profiling, anti-money laundering (AML) and fraud detection. According to the EBA’s November 2025 newsletter on AI’s Impact on Banking, these areas are among the most common high-risk AI use cases under the EU AI Act.

Given the high-risk classification, banks and PSPs must implement robust governance, documentation, transparency and human oversight. Supervisors are expected to scrutinise model logic, data quality, monitoring and risk controls, reinforcing that high-risk AI is now a core supervisory expectation.

As mentioned above, for banks and PSPs, high-risk AI obligations are not abstract requirements but instead directly intersect with business-as-usual (BAU) processes. Most core AI use cases in banking and payments fall into the EU AI Act’s high-risk category (Annex III), meaning that compliance touches day-to-day operations, risk management and internal controls.

Key actions for integrating AI into BAU include:

  • Mapping AI use cases across the organisation, identifying which systems may fall into high-risk categories.
  • Classifying AI systems to ensure all high-risk systems are clearly documented and understood.
  • Documenting and maintaining transparency disclosures which could include training data descriptions, testing procedures, risk-mitigation measures, human oversight plans and audit logs.
  • Planning for conformity assessments, particularly for AI systems in regulated domains, such as credit scoring, lending decisions and automated risk assessment.
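
These actions can be tracked per system as part of routine compliance reporting. The sketch below is a hypothetical illustration of such tracking; the step names mirror the list above, while the function and its output format are assumptions.

```python
from enum import Enum


class BAUStep(Enum):
    MAPPED = "use case mapped across the organisation"
    CLASSIFIED = "risk classification documented"
    DISCLOSED = "transparency documentation maintained"
    CONFORMITY = "conformity assessment planned or completed"


def outstanding_steps(completed: set[BAUStep]) -> list[str]:
    """Return the BAU integration steps still outstanding for one AI system."""
    return [step.value for step in BAUStep if step not in completed]


# Example: a credit scoring system that has been mapped and classified, but still
# needs transparency documentation and a conformity assessment plan.
print(outstanding_steps({BAUStep.MAPPED, BAUStep.CLASSIFIED}))
```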

Firms that successfully integrate high-risk AI into BAU processes are likely to gain a significant competitive and regulatory advantage. They will be more trusted by regulators, better positioned to deploy advanced AI safely, have the ability to bring AI-driven products to market more quickly, and be more resilient to both operational and regulatory risks. Additionally, these firms will potentially be better insulated from enforcement actions, such as fines, system suspensions or mandated remedial measures, once the AI Act’s high-risk obligations come into force. This highlights that robust AI governance is not just a compliance requirement but a strategic differentiator.

With the AI Act’s high-risk obligations set to take effect from August 2026, banks and PSPs that have not already done so should integrate these governance and compliance processes now, ensuring they are fully prepared well before supervisory scrutiny intensifies.

Third-Party and Vendor Oversight

Under Articles 3(2) and 28 of the EU AI Act, using a third-party AI system does not remove a firm’s own obligations: banks and PSPs remain legally responsible as deployers of third-party AI systems, and these responsibilities must be integrated into procurement, governance and compliance frameworks.

The AI Act distinguishes between providers that develop or place AI on the market and deployers that use AI systems, including banks relying on third-party tools. Both roles carry direct legal obligations: providers are responsible for ensuring the AI system meets regulatory requirements before deployment, while deployers are responsible for using the system compliantly and demonstrating ongoing adherence to the act. 

Therefore, it is critical for banks and PSPs to document roles and responsibilities clearly, identifying whether they act as a provider, deployer, importer or distributor for a given system, as each role carries different obligations.
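
One straightforward way to make these role determinations auditable is to record them per system, with a short rationale. The sketch below is a hypothetical illustration; the roles mirror those named above, and the register itself is an assumption rather than anything prescribed by the act.

```python
from enum import Enum


class AIActRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"


# Illustrative role register: system name -> (role, rationale). Entries are hypothetical.
role_register = {
    "In-house credit scoring model": (AIActRole.PROVIDER, "developed internally and placed into service by the bank"),
    "Vendor-supplied AML screening tool": (AIActRole.DEPLOYER, "licensed from a third party and used under the vendor's intended purpose"),
}

for system, (role, rationale) in role_register.items():
    print(f"{system}: {role.value} ({rationale})")
```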

As deployers, banks must ensure that any high-risk third-party AI system is fully compliant. This includes verifying that the provider has conducted conformity assessments, that the system meets requirements for data quality, governance, human oversight and robustness, and that it is correctly CE-marked where applicable. Deployers cannot rely solely on vendor assurances; they must be able to demonstrate compliance to supervisors.

Banks must also use the AI system strictly as intended by the provider, avoiding unapproved modifications or use-case drift. Material changes in how the system is deployed may reclassify the bank as a provider, triggering additional obligations.

The EBA has signalled increased supervisory attention on third-party and general-purpose AI systems, including cloud-based tools, vendor models, embedded AI in platforms such as CRM or AML systems, and foundation models used for decision support or analytics. Banks must actively monitor these systems to ensure that vendor compliance does not become a gap in the bank’s own legal obligations.
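
In practice, deployers may want to capture this verification in a structured vendor record that also watches for use-case drift. The sketch below is a minimal, hypothetical illustration of the checks described above (conformity assessment, CE marking where applicable, and use consistent with the provider’s intended purpose); every field name is an assumption and the comparison logic is deliberately simplistic.

```python
from dataclasses import dataclass


@dataclass
class ThirdPartyAICheck:
    """Illustrative deployer-side checks for a vendor-supplied high-risk AI system."""
    vendor: str
    system: str
    conformity_assessment_verified: bool   # evidence of the provider's conformity assessment obtained
    ce_marked: bool                        # CE marking confirmed, where applicable
    intended_purpose: str                  # purpose stated by the provider
    actual_use: str                        # how the bank actually uses the system

    def issues(self) -> list[str]:
        """Flag gaps that would need escalation before or during deployment."""
        problems = []
        if not self.conformity_assessment_verified:
            problems.append("no evidence of provider conformity assessment")
        if not self.ce_marked:
            problems.append("CE marking not confirmed")
        if self.actual_use.strip().lower() != self.intended_purpose.strip().lower():
            # Use-case drift: deviating from the provider's intended purpose may
            # reclassify the bank as a provider, triggering additional obligations.
            problems.append("use deviates from the provider's intended purpose")
        return problems


check = ThirdPartyAICheck(
    vendor="ExampleVendor",
    system="Transaction fraud detection model",
    conformity_assessment_verified=True,
    ce_marked=False,
    intended_purpose="Flag potentially fraudulent card transactions for review",
    actual_use="Automatically block transactions without human review",
)
print(check.issues())
```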

Third-party AI oversight must therefore be integrated into existing BAU risk management and compliance processes alongside established outsourcing rules, such as the EBA Guidelines on outsourcing and DORA’s ICT requirements, creating a new regulatory pressure point where these regimes overlap.

Conclusion

The EU AI Act marks a significant shift in how AI is regulated in financial services in the European Union, but delays and the proposed “digital omnibus” package introduce real uncertainty, particularly for high-risk systems. Under the original timeline, high‑risk AI obligations were set to apply from August 2, 2026; however, the European Commission’s digital omnibus proposals would defer the application of these obligations until later in 2027 (for systems listed in Annex III) and potentially into 2028 for AI embedded in regulated products, linking compliance triggers to the availability of harmonised standards and support tools.

Although the postponement to December 2027 may ease immediate compliance pressure, it also raises questions around consistency, fairness, market competition and risk exposure. Firms and regulators developing compliance frameworks could face a prolonged period of regulatory uncertainty, implementing interim measures now and potentially adjusting again when final technical standards are issued. This uncertainty is compounded by the fact that the European Commission has also missed its statutory deadline to issue guidelines clarifying the practical implementation of Article 6, including a comprehensive list of examples of high-risk and non-high-risk use cases. Although delayed, these guidelines are still expected in early 2026 and are likely to become a critical reference point for how high-risk classification is applied in practice.

Finally, regulatory and supervisory capacity varies across member states, as national competent authorities and EU-level governance bodies are still being organised. This creates the potential for uneven enforcement standards, making it essential for banks and PSPs to adopt proactive, risk-based compliance strategies rather than relying solely on external clarification.

Firms that proactively integrate high-risk AI governance into BAU processes, document roles and responsibilities and establish robust compliance frameworks will not only mitigate operational and regulatory risk but also gain a strategic advantage when the AI Act’s high-risk obligations come fully into force, whenever that may be.
