Singapore Unveils World’s First Framework for Agentic AI Governance

In January 2026, Singapore unveiled the Model AI Governance Framework for Agentic AI at the World Economic Forum, positioning itself at the forefront of AI regulation. Unlike typical AI tools that respond to prompts, agentic AI systems are capable of independently reasoning, planning, and executing multi-step tasks on behalf of humans. While the framework is not legally binding, it provides a strong signal to businesses about how regulators expect these autonomous systems to be used responsibly.

For organisations operating in Singapore or engaging with Singapore-based clients, the framework offers more than theoretical guidance. It provides a structured approach for embedding governance practices into AI deployment, helping companies manage both operational and reputational risks while maximising the potential of agentic AI.

Singapore’s approach builds on over half a decade of AI governance initiatives, including the 2019 Model AI Governance Framework, AI Verify, and the Global AI Assurance Pilot launched in 2025. What makes the new framework distinctive is its focus on the risks introduced by agents that act autonomously, particularly around decision-making, data access, and system interactions. This emphasis is critical for businesses considering how to operationalise AI in environments where mistakes can carry real-world consequences.

Why the Framework Matters for Businesses

Businesses are increasingly using agentic AI to automate tasks that were previously thought to require human judgment. This can range from administrative activities, such as scheduling meetings and updating customer records, to more complex functions like managing supply chains, initiating financial transactions, or coordinating multi-agent systems across departments.

These systems promise significant efficiency gains, but they also introduce unique risks. Unlike generative AI tools that produce outputs in response to prompts, agentic AI can directly interact with internal databases, third-party APIs, and external platforms. It can execute actions that have tangible consequences, and errors can propagate quickly through connected workflows.

Singapore’s framework acknowledges this operational reality. It is not designed to restrict innovation but to ensure that organisations implement sufficient safeguards to manage potential errors, unauthorised actions, biased outcomes, and data breaches. 

Assessing and Bounding Risk

At the heart of the framework is the principle of proactive risk management. Before deploying an agentic AI system, organisations are expected to evaluate the appropriateness of the use case by considering both the potential impact of errors and the likelihood that they will occur.

This evaluation should take into account factors such as the sensitivity of the data accessed by the system, the degree of autonomy granted to the agent, and the reversibility of its actions. For example, an agent capable of scheduling meetings in a marketing department may pose minimal risk, whereas an agent authorised to process payments or alter client data carries far greater consequences.

Once risks are identified, organisations are encouraged to “bound” them through careful design. This might involve limiting an agent’s access to only the tools and data essential for its tasks, restricting its ability to modify critical records, or creating fail-safe mechanisms that allow human supervisors to disable the system in the event of unexpected behaviour. By embedding these controls at the design stage, businesses can reduce exposure and increase confidence in the safe operation of their AI systems.
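
To make this concrete, the sketch below shows one way a deploying organisation might bound an agent in code: a deny-by-default tool allow-list combined with a kill switch a human supervisor can engage. The wrapper class, tool names, and dispatch logic are hypothetical illustrations, not anything prescribed by the framework.

```python
# Hypothetical sketch: bounding an agent's capabilities at design time.
# The BoundedAgent wrapper, tool names, and kill switch are illustrative only.

ALLOWED_TOOLS = {"read_calendar", "create_meeting"}  # scheduling tasks only

class KillSwitch:
    """Fail-safe a human supervisor can engage to halt the agent immediately."""
    def __init__(self):
        self.engaged = False

class BoundedAgent:
    def __init__(self, allowed_tools, kill_switch):
        self.allowed_tools = allowed_tools
        self.kill_switch = kill_switch

    def invoke(self, tool_name, **kwargs):
        if self.kill_switch.engaged:
            raise RuntimeError("Agent disabled by human supervisor")
        if tool_name not in self.allowed_tools:
            # Deny by default: anything outside the allow-list is refused.
            raise PermissionError(f"'{tool_name}' is outside this agent's action-space")
        return self._call_tool(tool_name, **kwargs)

    def _call_tool(self, tool_name, **kwargs):
        ...  # dispatch to the real tool implementation
```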

Defining Action-Space and Autonomy

Two interrelated levers shape how agentic AI behaves in practice: the action-space, meaning what an agent is allowed to do, and the degree of autonomy it possesses.

Action-space relates to the system’s permissions. An agent that can only retrieve information is far less risky than one that can execute transactions, make decisions, or alter databases. Similarly, access to external systems and APIs amplifies exposure and potential consequences.

Autonomy determines how much independent decision-making the agent can exercise. Some agents operate under strict, step-by-step instructions, requiring human approval at every stage, while others can define their own workflows and take actions unless a threshold for human oversight is triggered. Both levers are under the control of the organisation deploying the system, and the framework emphasises that ultimate responsibility remains with the business.
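
These two levers can be made explicit in configuration rather than left implicit in prompts or documentation. The sketch below, with hypothetical autonomy levels and threshold values, illustrates how a scheduling agent and a payments agent might be given deliberately different action-spaces and degrees of independence.

```python
# Hypothetical configuration making both levers explicit. Autonomy levels and
# threshold values are illustrative assumptions, not values set by the framework.
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    STEP_APPROVAL = 1  # a human approves every individual step
    PLAN_APPROVAL = 2  # a human approves the plan; the agent executes it
    SUPERVISED = 3     # the agent acts freely below a defined risk threshold

@dataclass(frozen=True)
class AgentPolicy:
    allowed_tools: frozenset    # the action-space: what the agent may do
    autonomy: Autonomy          # how independently it may do it
    approval_threshold: float   # estimated risk above which a human must sign off

# A low-risk scheduling agent versus a tightly constrained payments agent:
scheduler = AgentPolicy(frozenset({"read_calendar", "create_meeting"}),
                        Autonomy.SUPERVISED, approval_threshold=0.8)
payments = AgentPolicy(frozenset({"create_payment_draft"}),
                       Autonomy.STEP_APPROVAL, approval_threshold=0.0)
```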

Human Accountability Remains Central

A defining feature of Singapore’s framework is its insistence that human responsibility cannot be delegated to AI. Organisations are expected to clearly allocate accountability across leadership teams, technical teams, cybersecurity experts, and operational users. Leadership should define permitted use cases, while product teams ensure safe design and implementation. Cybersecurity teams protect systems from exploitation, and operational users maintain oversight over AI actions.

Human accountability also requires active checkpoints. For sensitive or irreversible actions such as executing payments, deleting records, or engaging external systems, human approval is essential. Beyond compliance, this structure addresses a more subtle challenge: automation bias. Even well-trained employees can become over-reliant on autonomous systems, so ongoing training and auditing are critical to maintaining meaningful oversight.
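
As an illustration, such a checkpoint can be enforced in code so that sensitive actions simply cannot proceed without a recorded human decision. The sketch below is hypothetical: the Decision type and request_human_approval() stand in for whatever review channel an organisation actually uses.

```python
# Hypothetical approval gate for sensitive or irreversible actions. The
# Decision type and request_human_approval() stand in for the organisation's
# real review channel (ticketing system, chat approval, signed workflow step).
from dataclasses import dataclass

SENSITIVE_ACTIONS = {"execute_payment", "delete_record", "call_external_system"}

@dataclass
class Decision:
    granted: bool
    reviewer: str

def request_human_approval(action, payload) -> Decision:
    # Placeholder: in practice, block until a named reviewer decides.
    return Decision(granted=False, reviewer="pending")

def record_approval(action, payload, reviewer):
    ...  # append to an immutable audit log so decisions stay traceable

def perform(agent, action, payload):
    if action in SENSITIVE_ACTIONS:
        decision = request_human_approval(action, payload)
        if not decision.granted:
            return {"status": "rejected", "reviewer": decision.reviewer}
        record_approval(action, payload, decision.reviewer)
    return agent.invoke(action, **payload)
```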

Technical Safeguards Across the Lifecycle

The framework encourages businesses to embed technical controls throughout the AI lifecycle. During development, safeguards should focus on how agents plan, select tools, and communicate with other systems. This includes limiting tool access, standardising communication protocols, and using sandboxed environments to reduce the risk of cascading errors.

Before deployment, agents should undergo rigorous testing, not only for individual accuracy but also for how they interact in multi-agent workflows and real-world scenarios. After deployment, continuous monitoring, activity logging, and rapid escalation processes are essential to detect unexpected behaviours before they cause harm.

This lifecycle perspective aligns closely with established risk management practices, treating AI systems not as static tools, but as evolving components that require ongoing governance and oversight.

Empowering End Users and Maintaining Trust

The framework recognises that trust in AI is not only about how the system is built, but also how it is used. Internally, employees should understand when they are interacting with an agent, what it is authorised to do, and how to intervene if necessary. Externally, customer-facing agents must be accompanied by transparency about capabilities, limitations, and channels for human escalation.

Additionally, as agents take over routine tasks, organisations must ensure employees continue to develop core skills rather than relying solely on automation. By fostering informed engagement with AI, businesses can prevent over-reliance and reinforce the integrity of their workflows.

Translating Guidance Into Business Practice

While the Singapore framework is non-binding, it carries practical weight. Organisations, particularly in regulated sectors like finance, healthcare, and telecommunications, should view it as a roadmap for responsible deployment. Following the framework can help businesses align internal policies, establish effective oversight, and demonstrate due diligence to regulators and clients.

Practical steps might include reviewing current AI use cases, mapping agent permissions and autonomy, confirming human approval checkpoints, evaluating monitoring and incident response capabilities, and aligning vendor agreements with governance expectations. By embedding these practices early, organisations not only mitigate risk but also build trust with stakeholders and prepare for evolving regulatory requirements.

How to Keep a Human in the Loop

One of the core principles of Singapore’s Model AI Governance Framework for Agentic AI is that humans remain ultimately accountable for the behaviour and decisions of autonomous agents. Even though these systems can act independently, plan across multiple steps, and adapt dynamically to changing circumstances, responsibility does not shift away from the organisations and individuals that deploy and oversee them.

In practice, maintaining meaningful accountability can be complex. Agentic AI often involves multiple stakeholders across a value chain, from model developers and platform providers to internal product teams and end users. Each party contributes in different ways, and without clear allocation of responsibilities, accountability can become diffuse. Moreover, automation bias, the tendency to place undue trust in automated systems, particularly those that have performed reliably in the past, can weaken oversight and increase the likelihood of errors going unnoticed.

To address these challenges, organisations are encouraged to establish clear chains of responsibility across all actors involved. Within a business, leaders such as board members and department heads should define high-level goals, governance policies, and acceptable use cases. Product teams and engineers translate these policies into operational controls, ensuring that agents are designed, tested, and deployed safely. Cybersecurity teams safeguard both the agents and the systems they interact with, implementing preventative measures and monitoring for potential threats. End users, whether employees or customers, also play a role in exercising oversight and using the systems responsibly.

Accountability extends beyond internal teams to external partners as well. When working with vendors, model providers, or third-party APIs, businesses should formalise expectations in contracts. These agreements should clarify responsibilities for data protection, security, and operational compliance, and ensure that the organisation retains sufficient control and visibility over the agent’s actions.

Ultimately, human accountability is not a box ticked once but a continuous practice. Organisations should embed mechanisms for human supervision at critical decision points, audit approvals regularly, and maintain adaptive governance structures that evolve alongside the technology. This approach ensures that even as agents gain capabilities, oversight remains robust, traceable, and enforceable.

Continuous Monitoring: Keeping Agents Under Oversight

Beyond establishing accountability, the Singapore framework emphasises the need for continuous monitoring and testing. Unlike traditional software systems, agentic AI is inherently dynamic, executing multi-step workflows that can interact unpredictably with internal and external systems. Without ongoing observation, errors ranging from unauthorised actions to biased decision-making can propagate rapidly.

Before deployment, agents should undergo rigorous testing that evaluates not just the final output but the entire workflow, including how the agent reasons, selects tools, and interacts with other agents or systems. This testing should cover individual agents as well as multi-agent setups, identifying risks such as task execution errors, failure to follow approval processes, or unintended interactions between agents. Realistic testing environments, including sandboxes that mirror production systems, help ensure that agents respond appropriately under conditions similar to real-world operations.
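
In code, such a test might assert properties of the entire recorded workflow rather than only the final answer. The sketch below assumes a hypothetical sandbox harness that captures an agent's tool-call trace; the helper names and trace fields are illustrative, not a real API.

```python
# Hypothetical pre-deployment test over a sandboxed agent run. The
# run_in_sandbox() harness and the trace fields are assumed, not a real API.

ALLOWED_TOOLS = {"look_up_order", "create_refund_draft", "execute_payment"}

def run_in_sandbox(agent, task, environment):
    ...  # provided by the (assumed) test harness; returns a recorded trace

def test_refund_workflow_stays_in_bounds():
    trace = run_in_sandbox(agent="refund_agent",
                           task="refund order 1234",
                           environment="staging-mirror")
    # Evaluate the whole workflow, not only the final output:
    assert trace.final_status == "completed"
    # Every tool call stayed inside the agent's declared action-space.
    assert all(call.tool in ALLOWED_TOOLS for call in trace.tool_calls)
    # The sensitive step actually passed through a human approval checkpoint.
    payments = [c for c in trace.tool_calls if c.tool == "execute_payment"]
    assert all(c.approved_by is not None for c in payments)
```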

Once agents are deployed, monitoring cannot be a one-time effort. Organisations should implement logging and observability systems to track agent behaviour across all tasks. Key considerations include defining what events to log, particularly high-risk actions like database modifications or financial transactions, and establishing thresholds for alerts. Alerts can be triggered programmatically when certain conditions are met, or through anomaly detection systems that identify unusual patterns indicative of malfunctions or security breaches.
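
A minimal version of this pattern, structured audit logging plus a programmatic alert threshold for high-risk events, might look like the following sketch. The event names, the one-minute window, and the escalation hook are all assumptions for illustration.

```python
# Hypothetical sketch: structured audit logging with a programmatic alert
# threshold. Event names, the one-minute window, and the escalation hook are
# illustrative assumptions, not values set by the framework.
import json
import logging
import time

logger = logging.getLogger("agent_audit")
HIGH_RISK_EVENTS = {"db_modification", "financial_transaction"}
ALERT_THRESHOLD = 5      # alert if more than 5 high-risk actions in one minute
_recent_high_risk = []   # timestamps of recent high-risk events

def log_agent_event(agent_id, event_type, detail):
    record = {"ts": time.time(), "agent": agent_id,
              "event": event_type, "detail": detail}
    logger.info(json.dumps(record))  # structured, machine-searchable audit trail
    if event_type in HIGH_RISK_EVENTS:
        _recent_high_risk.append(record["ts"])
        cutoff = record["ts"] - 60
        _recent_high_risk[:] = [t for t in _recent_high_risk if t > cutoff]
        if len(_recent_high_risk) > ALERT_THRESHOLD:
            escalate_to_human(agent_id, list(_recent_high_risk))

def escalate_to_human(agent_id, event_times):
    ...  # page the on-call supervisor; optionally pause the agent
```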

Monitoring strategies can also involve agents supervising other agents, automatically flagging unexpected or unsafe behaviours. When anomalies occur, interventions should be clearly defined and proportionate to the risk. For routine issues, human review can suffice, whereas significant failures may require pausing or terminating agent workflows to prevent harm.
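
One possible shape for such proportionate intervention is a supervisor loop that scores each observed action and responds accordingly, as in this hypothetical sketch; the severity scoring and pause hook are assumptions.

```python
# Hypothetical supervisor loop: one agent reviews another's actions and
# responds in proportion to risk. The severity scoring and pause hook are
# assumptions for illustration.

def assess_severity(action):
    ...  # rules- or model-based scoring: "routine", "suspicious" or "critical"

def flag_for_human_review(action):
    ...  # queue the action for a human supervisor to inspect

def supervise(action_stream, worker_agent):
    for action in action_stream:
        severity = assess_severity(action)
        if severity == "suspicious":
            flag_for_human_review(action)   # human review suffices
        elif severity == "critical":
            worker_agent.pause()            # halt the workflow before harm occurs
            flag_for_human_review(action)
        # routine activity requires no intervention
```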

Finally, monitoring should not be static. Agentic AI models may evolve over time due to changes in data inputs, connected systems, or model drift. Continuous post-deployment testing ensures that agents maintain their intended performance, comply with governance policies, and remain aligned with organisational risk tolerance. By combining human oversight with automated monitoring, organisations can create a robust safety net that allows them to leverage the benefits of agentic AI while mitigating potential risks.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 

We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations, without the burden of keeping up with ever-changing digital requirements.

We are here to help you. Get in contact with us today for more information.


