Regulatory Influencer: AI In Banking Through A Consumer Protection Lens – What US Banks Must Know In An Evolving Regulatory Landscape

September 16, 2025

The use of artificial intelligence (AI) in the US banking sector is evolving rapidly. New tools are being deployed across diverse use cases, creating both opportunities and potential compliance risks for financial institutions.

Banks are using AI in areas such as credit underwriting, servicing, marketing, customer service, fraud detection and compliance. New tools can help monitor fraud and money laundering threats, automate manual processes in both the first and second lines of defense and streamline lending processes with quicker credit decisions. They also enable banks to offer innovative services such as personalized banking and digital assistants.

Amid the opportunities, there are also compliance challenges, chief among them the risk that banks do not fully understand how their existing regulatory obligations apply to the use of AI.

The regulatory framework for the use of AI in US financial services has not been fully defined, with no single piece of legislation, such as the EU’s AI Act, to provide clarity. However, financial institutions’ use of AI may put them in violation of existing consumer protection laws. 

Banks need to ensure they are fully aware of how AI tools are being used within their organizations. They must also understand the capabilities of the technology, and how its use may lead to breaches of established banking laws and regulations. 

This year, tentative steps have been taken toward building a federal framework for the use of AI, including a series of executive orders setting out the administration's goals in this area.

In addition, an early draft of the One Big Beautiful Bill Act included a ten-year moratorium on states issuing AI laws, a sweeping provision that would have stripped states of the ability to develop legislation addressing consumer protection around AI use.

However, the Senate overwhelmingly struck the AI moratorium in a 99–1 vote, preserving states’ authority to regulate AI.

The removal of the moratorium leaves more than 1,000 active AI-related bills moving through state legislatures, creating a patchwork of state-by-state regulation.

In addition, several states have clarified that state consumer protection regulations also apply to AI use. As federal deregulation continues and the Consumer Financial Protection Bureau (CFPB) is scaled back, states are increasingly leading consumer-protection regulation and enforcement.

Although there are not yet federal laws specifically addressing banks' use of AI, organizations should keep both state laws and existing federal consumer protection laws, such as fair lending requirements, at the forefront of their AI deployment and develop compliance strategies around usage.

The bigger picture

Regardless of federal direction on AI usage and consumer protection, banks must understand and manage risks in areas such as explainability, privacy and data protection laws, and federal and state consumer protection laws such as fair lending, the Fair Credit Reporting Act (FCRA) and the prohibition on unfair, deceptive, or abusive acts or practices (UDAAP).

Explainability means ensuring that bank employees can understand and communicate how their bank is using AI to make decisions. Using AI without properly understanding the process could lead to poor outcomes for customers and, in turn, compliance failures, with no individual having a clear view of how decisions such as credit approvals and denials are reached. 

There is also the risk of not being aware of or able to explain a third-party vendor’s AI-driven decision, which could lead to FCRA, fair lending and UDAAP violations.
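
To make explainability concrete, the sketch below is a minimal illustration of one long-standing approach for linear credit models: ranking each feature's contribution to a denial so the bank can cite principal reasons on an FCRA adverse action notice. The model weights, feature names, reason codes and threshold are hypothetical, not drawn from any real underwriting system.

```python
import numpy as np

# Hypothetical logistic credit model; weights, features and threshold
# are illustrative only, not taken from any real underwriting system.
FEATURES = ["credit_utilization", "payment_history", "income_to_debt", "account_age_years"]
WEIGHTS = np.array([-2.1, 3.4, 1.8, 0.6])  # learned coefficients
BIAS = -0.5
APPROVAL_THRESHOLD = 0.5

REASON_CODES = {
    "credit_utilization": "High revolving credit utilization",
    "payment_history": "Limited or adverse payment history",
    "income_to_debt": "Insufficient income relative to debt",
    "account_age_years": "Short credit history",
}

def score(x: np.ndarray) -> float:
    """Probability of approval under the hypothetical model."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS)))

def adverse_action_reasons(x: np.ndarray, baseline: np.ndarray, top_k: int = 2) -> list[str]:
    """Rank features by how far they pulled this applicant's score below a
    baseline applicant -- a simple, auditable explanation for a linear model."""
    contributions = WEIGHTS * (x - baseline)   # per-feature effect on the logit
    worst = np.argsort(contributions)[:top_k]  # most negative contributions first
    return [REASON_CODES[FEATURES[i]] for i in worst]

applicant = np.array([0.9, 0.2, 0.3, 1.0])  # normalized applicant inputs
baseline = np.array([0.4, 0.7, 0.6, 5.0])   # e.g., portfolio averages

if score(applicant) < APPROVAL_THRESHOLD:
    print("Denied. Principal reasons:", adverse_action_reasons(applicant, baseline))
```

For more complex models, the ranking step would typically come from a post-hoc attribution tool rather than raw coefficients, but the compliance obligation is the same: the bank, not the model, must be able to state specific, accurate reasons for each decision.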

Privacy risk arises where weak security in AI processes exposes sensitive customer data. Privacy laws also require consumer consent to share personal data, and failures on either front could violate both federal and state privacy laws such as the Gramm–Leach–Bliley Act (GLBA) and the California and Colorado privacy acts.

Similarly, there is a risk that data which is not managed effectively may be used beyond what is necessary, potentially violating the GLBA and state privacy laws. AI systems also retain large amounts of data, increasing the risk of breaches involving customers' sensitive financial and personal information if banks lack strong data governance frameworks.

It is clear that states are not waiting for the federal government to create an AI governance framework or enforce consumer protection laws around AI use. Many have clarified that their existing consumer protection and anti-discrimination statutes, such as UDAAP laws, already apply to AI use, even in the absence of AI-specific legislation. 

For example, in April 2024, Massachusetts issued an advisory affirming that its existing state laws apply equally to AI systems.

In July 2025, the state followed through on its commitment to enforcing fair lending laws, sanctioning Earnest Operations LLC for deploying AI underwriting models that caused disparate impact, particularly against Black, Hispanic and non-citizen applicants, without adequate testing or oversight. 

The student loan lender paid a $2.5m settlement and agreed to implement extensive changes to its business practices, including taking steps to mitigate risks of unfair lending and ensure compliance with state and federal laws.

Elsewhere, Colorado was the first state to enact comprehensive AI legislation. Its AI Act passed in 2024 and, after lawmakers delayed its original February 2026 effective date in a 2025 special session, is now scheduled to take effect on June 30, 2026, introducing requirements to prevent algorithmic discrimination in sensitive sectors such as finance.

Also in 2024, Utah passed its Artificial Intelligence Policy Act, which created the Office of Artificial Intelligence Policy. The act establishes liability where the use of AI violates consumer protection laws and is not properly disclosed, and sets up a regulatory AI analysis program.

Beyond these bank-focused examples, other states have introduced rules governing AI use, reflecting the broader shift toward state-driven regulation.

For example, Tennessee introduced the Ensuring Likeness, Voice, and Image Security (ELVIS) Act in 2024, which restricts harmful deepfakes and AI cloning. And in May 2025, Montana passed House Bill 178, which limits government use of AI, requiring transparency and human oversight.

Across the country, the trends in state-level AI regulation point toward increased supervisory focus on AI explainability, bias mitigation and model risk management. 

Banks will need to plan adoption of new tools and processes accordingly and ensure they remain in compliance with relevant laws and regulations at both the federal and state levels.

Why should you care?

As US banks adopt AI into their day-to-day functions, they must implement robust governance and compliance practices to ensure they are harnessing the benefits of new technology safely and effectively.

Banks that deploy AI poorly face a range of negative outcomes, including regulatory fines and penalties, reputational damage, operational failures and bias and discrimination claims.

Despite the lack of a single federal framework, some of the actions that banks need to take are clear. These include implementing AI model governance, conducting bias testing, ensuring transparency, training staff and emphasizing third-party oversight.

Banks accelerating their use of AI should consider the following steps to ensure they are managing the risks and operating within the rules:

  • Mapping the regulatory terrain – monitoring AI laws across all the states in which they operate, including identifying divergent requirements and enforcement expectations.
  • Establishing formal AI governance processes – assigning oversight, setting clear policies and defining accountability for all AI initiatives. This may include creating internal AI governance committees that span compliance, risk, legal and IT and are capable of predeployment review, ongoing monitoring and audit readiness.
  • Mitigating fair lending risk – conducting regular bias and disparate impact testing on AI tools (see the first sketch after this list), and banning or retooling models that use inherently biased variables such as “cohort default rate.”
  • Ensuring fairness and transparency – assessing bias, maintaining explainability and protecting customers from discriminatory outcomes.
  • Maintaining strong data protection practices – securing high-quality data and ensuring compliance with privacy laws at both the federal and state levels.
  • Engaging with regulators – following guidance from state-level authorities, as well as the Office of the Comptroller of the Currency (OCC), the Federal Reserve and the CFPB, and keeping records demonstrating compliance.
  • Training staff and managing model risk – educating teams on AI risks and best practices, and monitoring models continuously post-deployment (see the second sketch after this list) to document, test, validate and ensure their accuracy and reliability.
  • Overseeing third-party vendors – vetting external providers for compliance, security and regulatory alignment. This may include requiring explainability, documentation and oversight of third parties.
  • Building adaptive compliance infrastructure – integrating AI oversight into broader governance, risk and compliance (GRC) systems to ensure policies, approvals and audit trails are centralized and robust.
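
As a first sketch of the bias and disparate impact testing mentioned above, the following minimal example computes the adverse impact ratio (AIR), a standard first screen that compares each group's approval rate against the most favored group, with the common four-fifths threshold flagging results for deeper fair lending review. The decision data and group labels are hypothetical; a real review would draw on the bank's decision logs and a full statistical analysis.

```python
from collections import Counter

# Hypothetical approval outcomes by demographic group; in practice these
# would come from the bank's decision logs under a fair lending review.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = Counter(group for group, ok in decisions if ok)
totals = Counter(group for group, _ in decisions)
rates = {group: approved[group] / totals[group] for group in totals}

best = max(rates.values())  # approval rate of the most favored group
for group, rate in rates.items():
    air = rate / best  # adverse impact ratio vs. most favored group
    flag = "review for disparate impact" if air < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, AIR {air:.2f} -> {flag}")
```

A low AIR is not itself proof of illegal discrimination, but it tells the bank where deeper testing of model inputs, such as proxy variables, is warranted.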
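
The second sketch illustrates one common post-deployment monitoring technique: the population stability index (PSI), which measures how far the live distribution of a model input or score has drifted from the distribution the model was validated on. The score distributions and the 0.25 alert threshold here are illustrative; in practice, thresholds should come from the bank's own model risk management policy.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a validation-time distribution
    and a live one; values above ~0.25 are a common rule of thumb for
    material drift warranting model review."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
validation_scores = rng.normal(650, 50, 10_000)  # hypothetical scores at validation
live_scores = rng.normal(620, 60, 10_000)        # incoming population has shifted

value = psi(validation_scores, live_scores)
print(f"PSI = {value:.3f} -> {'investigate drift' if value > 0.25 else 'stable'}")
```

Whatever the metric, the point for compliance is the audit trail: documented thresholds, scheduled checks and a defined escalation path when a model drifts.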

Taking these steps should help banks innovate with AI confidently while managing the regulatory, operational and reputational risks.

Financial institutions should keep in mind that federal regulatory clarity is unlikely in the near future. With the moratorium removed from the One Big Beautiful Bill Act, states will continue to lead the way on AI regulation, potentially diverging in their approaches. 

This provides an opportunity for leadership: those banks that build robust, consumer protection-focused AI governance systems now will be better positioned for compliance, innovation and public trust going forward.
