As the next phase of artificial intelligence (AI) begins to affect the payments sector, regulators will need to adapt and evolve existing frameworks to manage the risk to consumers.
Agentic AI refers to AI systems composed of autonomous agents that can make decisions and take actions with minimal human intervention to achieve goals.
Unlike traditional AI, which is reactive, agentic AI is proactive. It can plan, adapt and even collaborate with other agents or systems to complete complex tasks.
Across the payments industry, a variety of agentic AI products and services have recently begun to reach the market.
In April 2025, Mastercard announced Agent Pay, an agentic-payments programme introducing “Agentic Tokens” that allow AI agents to transact on behalf of consumers and merchants. The initiative includes partnerships with Microsoft and IBM.
Visa has launched its own frameworks for agentic commerce, notably a Trusted Agent Protocol designed to recognise AI agents making purchases. This protocol aims to let merchants and payment providers securely identify authorised AI agents, pass payment credentials and support agent-initiated transactions in a controlled and traceable way.
In addition, Google has introduced the Agent Payments Protocol (AP2) and is collaborating with PayPal on agentic commerce experiences.
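The full internals of these protocols are not described here, but the core idea they share, letting a merchant verify that a purchase request comes from an authorised AI agent acting for a known consumer, can be sketched generically. The HMAC scheme, function names and identifiers below are invented for illustration and do not reflect Mastercard's, Visa's or Google's actual specifications:

```python
import hashlib
import hmac

# Illustrative only: a shared secret stands in for whatever key
# infrastructure a real agent-payments network would use.
SHARED_SECRET = b"demo-secret-known-to-network"

def sign_agent_token(agent_id: str, consumer_id: str) -> str:
    """Issue a signature binding an agent to the consumer it acts for."""
    payload = f"{agent_id}:{consumer_id}".encode()
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify_agent_token(agent_id: str, consumer_id: str, signature: str) -> bool:
    """Merchant-side check: is this agent authorised for this consumer?"""
    expected = sign_agent_token(agent_id, consumer_id)
    return hmac.compare_digest(expected, signature)

token = sign_agent_token("shopping-agent-42", "consumer-7")
print(verify_agent_token("shopping-agent-42", "consumer-7", token))  # True
print(verify_agent_token("rogue-agent", "consumer-7", token))        # False
```

The point of the sketch is traceability: the merchant can tie each agent-initiated transaction to a specific agent-consumer pairing, rather than accepting anonymous automated traffic.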
The next chapter in AI
“There’s been a lot of buzz around Agentic AI lately, and while some of that’s overhyped, the potential for real change in financial services is hard to ignore,” said Rahul Chawda, product manager at Mastercard.
Offering views in a personal capacity, Chawda explained that agentic AI does more than just sit in the background analysing data.
“It can actually take action, follow through on tasks, and make decisions within the limits businesses set. That opens the door to a very different customer experience,” he said.
“For consumers, this is where things start to shift. Instead of simply getting balance alerts or fraud warnings, an AI agent can automatically take steps to protect your account, adjust your payment preferences or move funds where they need to be, all in real time, without you manually intervening. We’re already seeing bits of this with smart budgeting apps or investment platforms that automatically rebalance portfolios, but Agentic AI takes it to another level.”
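The bounded delegation Chawda describes, an agent that acts on its own but only "within the limits businesses set", can be sketched in a few lines. Every name, limit and action here is hypothetical, not any provider's real API:

```python
from dataclasses import dataclass, field

@dataclass
class Mandate:
    """The consumer's standing permission: what the agent may do, and how much."""
    per_payment_limit: float
    allowed_actions: set = field(default_factory=set)
    revoked: bool = False

    def revoke(self):
        """The consumer can withdraw delegation at any time."""
        self.revoked = True

def agent_act(mandate: Mandate, action: str, amount: float) -> str:
    """Execute only inside the mandate; escalate or block otherwise."""
    if mandate.revoked:
        return "blocked: mandate revoked"
    if action not in mandate.allowed_actions:
        return "blocked: action not permitted"
    if amount > mandate.per_payment_limit:
        return "escalated: needs human approval"
    return f"executed: {action} for {amount:.2f}"

m = Mandate(per_payment_limit=100.0, allowed_actions={"pay_bill", "move_funds"})
print(agent_act(m, "pay_bill", 40.0))   # executed: pay_bill for 40.00
print(agent_act(m, "pay_bill", 500.0))  # escalated: needs human approval
m.revoke()
print(agent_act(m, "pay_bill", 40.0))   # blocked: mandate revoked
```

The design choice worth noting is that the agent never decides its own limits: the mandate is set, and can be revoked, outside the agent.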
Marius Galdikas, the CEO of ConnectPay, told Vixio that “agentic AI marks the next chapter in how we engage with financial services.”
The Lithuania-based fintech professional argued that “this shift challenges the long-standing assumptions that the user must manually review every transaction or decision. Instead, it introduces a model where AI can act within defined boundaries, streamlining financial interactions while keeping users in control.”
“It’s not about micromanaging every action, but about setting clear boundaries and staying in control, with the ability to review or revoke access when needed,” he said.
“In other words, agentic AI may redefine trust, not by removing user agency, but by enabling smarter delegation within secure, consent-driven frameworks.”
What about regulation?
An ongoing challenge for regulators is that, soon after a law takes effect, a new product or technology can emerge that transforms the market and leaves the existing rules out of date.
For example, the EU’s second Payment Services Directive (PSD2) did not account for the rise of Apple Pay, which is now in widespread use across the bloc.
This scenario could be repeated with agentic AI: there is a risk that the Instant Payments Regulation (IPR) and PSD3 could be undermined by the technology upending the payments sector.
“Banking systems were never designed for real-time, device-based, AI-augmented authentication,” said Galdikas.
He noted that evolving rules such as the IPR and the emergence of agentic AI-driven payments are pushing banks to modernise.
“But current authentication systems are not designed for these new demands, leading to experiences that feel out of step with users’ everyday interactions,” he said.
“Changing this is a massive task. However, it’s becoming necessary as new technologies and expectations leave traditional models behind,” he said.
According to Galdikas, the key to this evolution is shared accountability.
“This means clear rules for when devices, users, or institutions are liable in case of mistakes or fraud, and robust technical standards to keep everything secure,” he said.
“Regulation isn’t an obstacle to innovation, it’s one of the critical mechanisms that will make this shift possible and keep the balance between user empowerment and security.”
Managing risk
Regulation such as the IPR may even offer opportunities for agentic AI. For example, agentic AI could use the instant payment rails enabled by the new framework to automate payments, manage liquidity dynamically by transferring funds when balances drop, or select the most cost-effective payment route at checkout for merchants.
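Two of those behaviours, topping up a balance when it falls below a floor and picking the cheapest rail at checkout, reduce to simple rules an agent could apply continuously. The rails, fees and thresholds below are invented for illustration:

```python
def top_up_if_low(balance: float, floor: float, reserve: float) -> float:
    """Return the transfer needed to restore the floor (0.0 if none),
    capped by what the reserve account can supply."""
    if balance >= floor:
        return 0.0
    return min(floor - balance, reserve)

def cheapest_route(amount: float, rails: dict) -> str:
    """Pick the rail with the lowest total fee for this amount."""
    return min(rails, key=lambda r: rails[r]["fixed"] + amount * rails[r]["pct"])

rails = {
    "instant_sepa": {"fixed": 0.20, "pct": 0.000},  # flat fee, no percentage
    "card":         {"fixed": 0.10, "pct": 0.015},  # low fixed, 1.5% of amount
}
print(top_up_if_low(balance=35.0, floor=100.0, reserve=500.0))  # 65.0
print(cheapest_route(50.0, rails))  # instant_sepa (0.20 vs 0.85 for card)
```

Note that the route choice flips with the amount: for small payments the card's percentage fee can undercut a flat instant-payment fee, which is exactly the kind of per-transaction optimisation an agent could do that a static configuration cannot.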
Merchants could use agentic systems for real-time cash-flow management and immediate settlement, one of the key benefits of the IPR.
However, autonomy creates risk: faster payments enabled by AI could also mean faster fraud and more volatility.
Regulators such as the European Banking Authority (EBA) will need to ensure that they remain aware of the risks and opportunities of agentic AI for consumers.
Agentic AI could also significantly enhance open banking and finance by continuously analysing accounts across multiple institutions to optimise user outcomes, such as reallocating funds, automating transfers or tailoring investment choices.
However, these benefits and potential revenue opportunities again come with challenges. For example, firms must guarantee clear consent and data protection that adheres to both the open banking regime and the General Data Protection Regulation (GDPR).
The question is whether regimes still being negotiated, such as PSD3 and the Payment Services Regulation (PSR), as well as the Financial Data Access (FiDA) Regulation, can deliver stronger governance for the payments industry.
Prioritising customer outcomes
The use of agentic AI will also interact with broader regulatory frameworks such as the EU’s Digital Operational Resilience Act (DORA) and the UK Financial Conduct Authority’s (FCA) Consumer Duty.
Autonomous agents would introduce interdependencies between systems and third-party AI providers, placing increased emphasis on supply-chain and governance risk.
Institutions would need to adapt their DORA compliance processes to cover such systems, likely retaining human oversight to guard against operational weaknesses.
The same is likely with the Consumer Duty. The FCA, even with its focus on growth, is likely to be concerned by uses of agentic AI that could put consumers at risk or fail to drive good outcomes.
Firms will need to design agentic AI products that have consumer outcomes as their primary focus, and are transparent and easy for their customers to manage and understand.
For example, an AI agent that draws on open finance data to autonomously switch investments should operate within clear limits and provide risk warnings and controls when doing so.
Consumers would also need to clearly understand what the agent can do and the potential consequences of its actions.
For instance, if an AI agent executes a payment based on a preset rule, the consumer should receive a clear notification detailing the reasoning, the amount and the impact on other accounts.
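That preset-rule scenario, a trigger fires, a payment executes, and the consumer receives a notification stating the reasoning, the amount and the balance impact, can be sketched as follows. The rule structure and message format are invented for this illustration:

```python
def execute_preset_rule(rule: dict, balance: float, event: str):
    """If the triggering event matches the rule, execute the payment and
    build the consumer notification; otherwise change nothing."""
    if event != rule["trigger_event"]:
        return balance, None
    new_balance = balance - rule["amount"]
    note = (f"Rule '{rule['name']}' paid {rule['amount']:.2f} to "
            f"{rule['payee']} (trigger: {event}). "
            f"Balance: {balance:.2f} -> {new_balance:.2f}")
    return new_balance, note

rule = {"name": "auto-bill", "trigger_event": "electricity_bill_due",
        "payee": "Energy Co", "amount": 60.00}
balance, note = execute_preset_rule(rule, 200.00, "electricity_bill_due")
print(note)
# Rule 'auto-bill' paid 60.00 to Energy Co (trigger: electricity_bill_due).
# Balance: 200.00 -> 140.00
```

The transparency requirement is carried by the notification itself: every executed payment produces a human-readable record of which rule acted, why, and with what effect, rather than a silent balance change.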
Answering these sorts of questions will be essential for the companies that see agentic AI as the future.