The initiatives contrast in approach and intent, but together they signal a transition from abstract artificial intelligence (AI) principles towards concrete regulatory implementation and sector-specific oversight.
In March 2026, the Monetary Authority of Singapore (MAS) released the MindForge AI Risk Management Toolkit, which features an operational handbook and case studies designed to help financial institutions manage risks across traditional, generative and emerging agentic AI. In the US, the White House introduced a National Policy Framework that calls for federal preemption of state laws and a focus on US AI dominance.
The MAS developed its toolkit in collaboration with a consortium of 24 industry partners. The toolkit consists of an AI Risk Management Operationalisation Handbook, which offers detailed, practical guidance on how institutions can implement AI risk controls, and a Supplement of AI Case Studies that document the real-world experiences and lessons learned by various financial institutions.
The handbook is structured around four main sections that align with MAS’ proposed guidelines:
- Scope and oversight, which focuses on establishing a clear AI governance framework and defining the roles and responsibilities required for AI oversight.
- AI risk management, which details how to identify AI usage, assess risk materiality and maintain an inventory of AI through organisational systems, policies and procedures.
- AI lifecycle management, which guides the implementation of active controls covering the entire lifecycle of an AI system's use.
- Enablers, which emphasises developing the organisational capabilities, infrastructure and resources needed to support ongoing, responsible AI use and effective risk management.
Together, these resources are intended to shift the financial sector away from broad, theoretical guidelines towards active, operational AI governance.
The framework proposed by the Trump administration has six key objectives:
- Protecting Children and Empowering Parents: Giving parents effective tools, such as account controls, to manage their children's device use and protect their privacy.
- Safeguarding and Strengthening American Communities: Using AI to drive economic growth and energy dominance to support communities and small businesses.
- Respecting Intellectual Property Rights and Supporting Creators: Ensuring AI models can make “fair use” of information to continue advancing.
- Preventing Censorship and Protecting Free Speech: Preventing AI systems from being used to censor or silence lawful political dissent and expression.
- Enabling Innovation and Ensuring American AI Dominance: Removing regulatory barriers to accelerate the deployment of AI across industry sectors.
- Educating Americans and Developing an AI-Ready Workforce: Expanding skills training and workforce development programmes to create new jobs in the modern economy.
The White House approach focuses on enabling US industry to innovate and win the global AI race without being hindered by what the administration considers excessive regulation, while ensuring that the benefits of the technology are shared by all Americans.
Despite their differences, both frameworks are shifting the regulatory focus from basic automation to agentic AI and the governance required to control new technologies.
Consensus vs decree
There are clearly challenges in comparing the approaches of jurisdictions that vary as markedly as a city-state in Asia-Pacific and a global superpower with a complex federal system of government.
However, the respective priorities of the two frameworks and the intentions behind them can tell us something about the likely trajectory of AI regulation around the world.
The MAS introduced its AI toolkit primarily to transition Singapore’s financial sector from theoretical, principle-based compliance towards active, operational AI governance.
Although many local financial institutions already have broad AI policies on paper, they face challenges in practically managing the complex operational risks that will arise as generative and agentic AI systems move from pilot projects to core operations.
By collaborating with industry, the regulator has sought to develop a practical, consensus-driven resource rather than merely issuing top-down rules. This collaborative approach is intended to avoid AI risk management being seen as a regulatory burden.
Kenneth Gay, MAS’ chief fintech officer, said, “We are committed to fostering a culture of continuous engagement and strengthening of AI governance and risk management practices across the industry.”
The goal is to ensure that financial institutions can safely scale their innovations while managing the risks of emerging technologies effectively.
The US framework focuses explicitly on trying to win the global AI race, while promoting national security and economic competitiveness.
A key goal is to preempt state-level regulation and avoid the development of a patchwork of AI laws, instead introducing a single, minimally burdensome national standard. The administration argues that “conflicting state laws would undermine American innovation and our ability to lead in the global AI race”.
Another aim of the US framework is to build public trust in how AI is developed and integrated into daily life by addressing specific everyday concerns such as protecting children, defending free speech and managing electricity costs.
This broadens the approach significantly compared to MAS’ focus on financial services.
The divergence between the US and Singaporean frameworks suggests that the future of global AI regulation could sit on a spectrum between operational agility and strategic sovereignty.
Many jurisdictions and regulators will see the MAS’ toolkit as the more attractive blueprint. By focusing on implementation-led governance, they can create sector-specific frameworks that ensure AI can be safely integrated into their economies without stifling productivity.
However, a handful of more powerful nations may favour the US approach, which signals that AI governance is inseparable from industrial policy, and that regulation is a tool for national security and domestic harmony.
Clear guidance vs regulatory uncertainty
Payment service providers (PSPs) and fintechs operating in Singapore should shift their focus from theoretical compliance to active, operational AI governance. The handbook’s detailed, step-by-step guidance can help them structure that governance around scope and oversight, risk management, lifecycle controls and organisational enablers.
Firms should also expect more concrete, legally binding regulation once MAS concludes its consultation on AI risk guidelines.
In the medium term, they may consider participating in BuildFin.ai, an MAS initiative that brings together technology providers, research institutes and financial institutions to address complex industry challenges.
This would enable them to collaborate on developing frameworks for advanced technologies such as agentic AI and to share knowledge with industry peers.
PSPs operating in the US must navigate a more uncertain and legally complex environment. The Trump administration’s framework is likely to be implemented piecemeal over time, with different elements appearing in separate regulations and pieces of legislation.
Until federal preemption is enacted by Congress or resolved in the courts, state AI statutes remain fully operative, so firms must continue to comply with existing state laws, such as those in California and Colorado.
They should also prepare to be caught between conflicting mandates: state laws may require algorithmic discrimination mitigations for racial or gender bias, whereas the White House’s framework explicitly targets the prevention of “ideological bias”.
In the short term, financial institutions should watch for new policy statements and reporting standards from existing federal agencies and evaluate engagement opportunities, such as through industry trade associations, to help shape these specific proposals as they move forward.
PSPs and fintechs operating across these jurisdictions face the dual challenge of implementing concrete operational controls in Singapore while simultaneously navigating a complex, shifting legal landscape in the US.
In the short term, firms that can demonstrate technical compliance in the former while maintaining flexibility in the latter will be best positioned to lead as AI moves from experimental pilots to the backbone of global finance.
In the medium term, these models of AI governance could influence the development of regulation in jurisdictions around the world, so financial institutions should monitor their implementation closely.