Managing Agentic AI in Real‑World Use: From Outputs to Actions
Agentic artificial intelligence (AI) is the next frontier for organizations adopting AI. Agentic AI can select and carry out actions on a user’s behalf based on instructions, context, and the permissions it has been configured to use. As organizations integrate these systems and capabilities, they face an additional layer of legal risk and governance concerns.
As companies begin to use agentic AI, they should consider key risk management practices to ensure responsible adoption. This includes aligning with emerging best practices and standards being studied and promoted by the National Institute of Standards and Technology (NIST) around agentic AI, including the Center for AI Standards and Innovation (CAISI) AI Agent Standards Initiative and the National Cybersecurity Center of Excellence (NCCoE) project addressing Software and AI Agent Identity and Authorization. For example, organizations utilizing agentic AI should look more closely at how the authority of AI agents is defined, constrained, and supervised, and how actions taken by AI agents are documented, traceable, and attributable. Organizations should also account for how agentic AI use cases may implicate existing obligations and internal controls across areas like cybersecurity, privacy, recordkeeping, and third-party risk management.
This post highlights practical steps organizations can take proactively to address these considerations in deploying agentic AI.
What Makes an AI System “Agentic”?
In its January 2026 Request for Information on AI agent security, NIST’s CAISI characterizes “AI agent systems” as those “capable of taking autonomous actions that impact real-world systems or environments,” typically combining a generative model with additional software that equips the model with tools to act. These systems can, for example, draft and send emails, update records, submit filings or forms, and initiate refunds or credits.
As a practical matter, a key feature of agentic AI is that it has some authority to act, as opposed to other types of AI—like chatbots—that provide information. Once an AI system crosses the line into executing actions, companies must consider not only output quality, but also key issues for AI agents, like the scope of delegated authority, appropriate supervision, auditability, and the ability to reconstruct events and demonstrate reasonable controls if something goes wrong. While those issues are not unique to AI, agentic deployments can make them more acute because AI agents can act quickly, across systems, and with limited human touchpoints.
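To illustrate what "documented, traceable, and attributable" can mean in practice, the following is a minimal sketch of a structured record an agent runtime might append for each action it takes. It is hypothetical: the field names, tools, and identifiers are assumptions for illustration, not drawn from NIST guidance or any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json
import uuid

@dataclass
class AgentActionRecord:
    """One append-only log entry per action the agent takes.

    Captures who acted (agent identity), what it did (tool and arguments),
    on whose behalf (principal), and under what authority, so events can
    be reconstructed and attributed after the fact.
    """
    agent_id: str                  # stable identifier for the acting system
    principal: str                 # user or service the agent acted for
    tool: str                      # e.g., "send_email", "issue_refund"
    arguments: dict                # parameters passed to the tool
    authority_ref: str             # pointer to the authority definition in force
    approved_by: str | None = None # human approver, if a stop point applied
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize for an append-only audit log."""
        return json.dumps(self.__dict__, sort_keys=True)

# Example: recording a refund the agent issued after human approval.
record = AgentActionRecord(
    agent_id="support-agent-v2",
    principal="customer-8421",
    tool="issue_refund",
    arguments={"order_id": "A-1001", "amount_usd": 49.99},
    authority_ref="policies/support-agent-v2.yaml",
    approved_by="j.smith",
)
print(record.to_json())
```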
Defining the Agent’s Authority: What Is It Allowed to Do?
“Authority” is becoming a central concept in enterprise discussions about agents, with active work across industry and government in areas including agent security and identity, as previewed in NCCoE’s recent Concept Paper on this topic. In practical terms, these concepts map to whether an organization can identify the acting system and define what it is permitted to do. A clear authority definition helps reduce ambiguity and may be important if the organization needs to explain how it has structured oversight and what meaningful boundaries it has set. Once an organization has articulated what an agent is permitted to do, it is better positioned to evaluate the core risks that arise when those permissions are exercised.
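As an illustration of what a documented authority definition might look like, consider the minimal sketch below. The tool names, thresholds, and policy structure are hypothetical assumptions, not a format prescribed by NIST or NCCoE; the point is that permissions become explicit, reviewable, and enforceable by the runtime.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAuthority:
    """Declarative statement of what one agent is permitted to do."""
    agent_id: str
    allowed_tools: frozenset         # tools the agent may invoke at all
    approval_required: frozenset     # tools gated behind human sign-off
    prohibited: frozenset            # never allowed, even if requested
    max_transaction_usd: float       # hard financial ceiling per action

# Hypothetical authority definition for a customer-support agent.
SUPPORT_AGENT = AgentAuthority(
    agent_id="support-agent-v2",
    allowed_tools=frozenset({"draft_email", "send_email", "issue_refund"}),
    approval_required=frozenset({"send_email", "issue_refund"}),
    prohibited=frozenset({"delete_record", "change_pricing"}),
    max_transaction_usd=100.0,
)

def check(authority: AgentAuthority, tool: str, amount_usd: float = 0.0) -> str:
    """Return "deny", "needs_approval", or "allow" for a proposed action."""
    if tool in authority.prohibited or tool not in authority.allowed_tools:
        return "deny"
    if amount_usd > authority.max_transaction_usd:
        return "deny"
    if tool in authority.approval_required:
        return "needs_approval"
    return "allow"

print(check(SUPPORT_AGENT, "issue_refund", amount_usd=49.99))  # needs_approval
print(check(SUPPORT_AGENT, "delete_record"))                   # deny
```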
Key Risks to Manage When Addressing Agentic AI
1. Human Oversight Is Needed Despite Automation
When an AI system can take actions—rather than simply suggest them—organizations still need people to supervise the process. That is particularly important for significant actions that create risks for the organization. Actions that affect consumers, create meaningful financial exposure, involve sensitive information, or are hard to undo all create legal and regulatory risks, and are good candidates for structured “stop points,” where a person reviews and approves before the agent proceeds (a minimal sketch of such a stop point appears at the end of this item).
The principle of human oversight is central to risk management frameworks. For example, the OECD AI Principles emphasize mechanisms and safeguards that support human agency and oversight, as well as accountability of AI actors across the AI lifecycle. The NIST AI Risk Management Framework (AI RMF) likewise addresses human-AI interaction. The principle is also reflected in recent regulatory developments both in the U.S. and abroad. For example:
- California has updated its privacy regulations to regulate the use of “automated decision-making technology”—defined as “any technology that processes personal information and uses computation to replace human decision-making or substantially replace human decision-making” to make a significant decision concerning a consumer.
- The EU AI Act similarly reflects an expectation that certain high-risk systems be overseen by individuals during use, with oversight calibrated to risk and context.
Focusing on adequate human oversight helps to manage risk and get in front of questions that regulators and courts are likely to examine if there is an issue with AI agent deployment: what safeguards existed, how the organization identified and tried to prevent problems, and ultimately who was responsible.
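The sketch below makes the “stop point” concept concrete in code. It is illustrative only: the high-risk tool list, function names, and approval mechanism are assumptions, and a production deployment would route approvals through a review queue or ticketing workflow rather than a console prompt.

```python
# Hypothetical stop-point wrapper: high-risk tool calls pause for a human.
HIGH_RISK_TOOLS = {"issue_refund", "submit_filing", "send_bulk_email"}

def request_human_approval(tool: str, args: dict) -> bool:
    """Stand-in for a real review queue (ticket, dashboard, paging, etc.)."""
    answer = input(f"Approve {tool} with {args}? [y/N] ")
    return answer.strip().lower() == "y"

def execute_with_stop_points(tool: str, args: dict, tools: dict) -> str:
    """Run low-risk actions directly; pause high-risk ones for review."""
    if tool in HIGH_RISK_TOOLS and not request_human_approval(tool, args):
        return f"BLOCKED: {tool} was not approved by a reviewer"
    return tools[tool](**args)

# Example wiring with a trivial tool implementation.
tools = {
    "issue_refund": lambda order_id, amount_usd:
        f"refunded ${amount_usd} on order {order_id}",
}
print(execute_with_stop_points(
    "issue_refund", {"order_id": "A-1001", "amount_usd": 49.99}, tools))
```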
2. Organizations Should Identify Governance Gaps and Ensure that Internal AI Policies and Approaches Account for Agentic Activities
Many existing organizational AI policies may be focused on generative AI outputs rather than agentic AI capabilities. Using agentic systems changes the practical question from “Is the output acceptable?” to “What is this system allowed to do on our behalf, and under what conditions?”
Internal AI agent governance and risk management processes should address this question, and can incorporate a number of steps. For example, companies can tailor pre‑deployment risk assessments to the agent’s purpose, the specific actions it can take, and the operational environment in which it will run. Companies also can make accountability explicit by assigning a clear internal owner responsible for the agent’s behavior and performance in production, reducing the risk that responsibility becomes diluted across product, engineering, compliance, and vendor teams.
3. Monitoring for Malicious Activity and Cascading Errors Is Critical
When execution is automated across systems, small mistakes may propagate faster than traditional detection and control processes can catch them. One concern is that the system can repeat a mistake many times before human processes detect it, at a scale that is difficult to undo.
Separately, use of AI agents can involve new security risks. For example, as CAISI’s work on agent security illustrates, one risk is “agent hijacking,” a form of indirect prompt injection where hidden or malicious instructions embedded in content the agent reads can steer it into taking unintended actions.
Where an agent can act, organizations benefit from monitoring and logging in sufficient detail to detect anomalies—both unintentional and malicious—and from the ability to pause or disable agent activity quickly when needed. Organizations can also attempt to leverage AI to help detect and alert humans to unintended consequences. These safeguards can also help the organization show it acted responsibly if something goes wrong.
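As one hypothetical illustration of a quick pause-or-disable capability, the sketch below trips a circuit breaker when an agent's action rate exceeds a threshold. The thresholds and class design are assumptions; a real deployment would also watch for semantic anomalies (unusual tools, recipients, or amounts), not just volume.

```python
import time
from collections import deque

class AgentCircuitBreaker:
    """Pauses an agent when its action rate looks anomalous."""

    def __init__(self, max_actions: int = 20, window_seconds: float = 60.0):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self.timestamps = deque()
        self.tripped = False

    def record_action(self) -> bool:
        """Log one action; return False if the agent should be halted."""
        if self.tripped:
            return False
        now = time.monotonic()
        self.timestamps.append(now)
        # Drop actions that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) > self.max_actions:
            self.tripped = True  # kill switch: refuse all further actions
            return False
        return True

breaker = AgentCircuitBreaker(max_actions=5, window_seconds=1.0)
results = [breaker.record_action() for _ in range(8)]
print(results)  # the sixth and later actions are refused
```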
4. Agentic AI Interactions with Consumers Must Account for Existing Legal Frameworks
As we explained here, the use of AI agents to provide customer-facing capabilities is expanding at a rapid pace. For example, service providers are increasingly pairing advanced texting capabilities with AI agents to deliver personalized and interactive messaging for consumers. While such cutting-edge combinations have the potential to empower consumers to communicate directly with companies about the precise products and services they are looking for, companies must be mindful of how existing consumer protection laws—including the Telephone Consumer Protection Act and state laws governing outbound calling and texting—may apply to agentic AI use cases. Further, a growing number of state laws regulate the ways that AI may interact with consumers. Organizations rolling out consumer-facing AI agents should thus identify potential legal and regulatory risks up front, plan for compliance, and design their systems to account for potentially applicable federal, state, and local legal frameworks.
Strategies for Addressing Risks Before Deployment
To mitigate these risks, organizations should build in controls and governance for agentic AI deployment at the outset. Based on the potential significance and risks of a given deployment, organizations can evaluate the following practices as a starting point (an illustrative sketch of how some of these checks might be encoded follows the list):
- Run a pre-deployment risk assessment tailored to the agent’s purpose, system access, data exposure, and capabilities, including “reasonably foreseeable misuse” scenarios (e.g., prompt injection leading to unintended tool use).
- Define and document the agent’s scope and authority prior to deployment. Specify goals, allowed system tools, allowed data sources, prohibited actions, and oversight gates (which actions require human approval, which can run under supervision, and what triggers escalation).
- Assign a clear internal owner accountable for the agent’s behavior, performance, and monitoring, with authority and guidance to seek separate legal and compliance review as appropriate.
- Implement technical and operational controls like monitoring, audit logs, and a tested ability to pause/disable the agent and revoke credentials.
- Conduct an inventory of the data sets the agent can access, and implement a continuous review process for privacy, confidentiality, security, and data minimization.
- Review contractual terms and obligations if the agent will interact with third parties or access third-party data (including relevant restrictions, audit rights, security requirements, and liability allocation).
- Ensure the overall governance structure contains input from individuals across the organization with different perspectives, including those in business, technical, legal, and compliance roles.
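For illustration only, the sketch below encodes a few of the practices above as a simple pre-deployment readiness gate. The field names and required checks are hypothetical assumptions; the point is that launch criteria can be made explicit and machine-checkable.

```python
# Hypothetical pre-deployment readiness gate for an agent launch.
deployment_record = {
    "agent_id": "support-agent-v2",
    "risk_assessment_completed": True,
    "authority_documented": True,        # scope, tools, prohibited actions
    "owner": "jane.doe@example.com",     # accountable internal owner
    "kill_switch_tested": True,          # pause/disable verified in staging
    "audit_logging_enabled": True,
    "third_party_terms_reviewed": False, # still pending legal review
}

REQUIRED_CHECKS = [
    "risk_assessment_completed",
    "authority_documented",
    "kill_switch_tested",
    "audit_logging_enabled",
    "third_party_terms_reviewed",
]

def ready_to_deploy(record: dict) -> list:
    """Return the list of unmet checks; an empty list means the gate passes."""
    missing = [c for c in REQUIRED_CHECKS if not record.get(c)]
    if not record.get("owner"):
        missing.append("owner")
    return missing

gaps = ready_to_deploy(deployment_record)
print("deploy" if not gaps else f"blocked: {gaps}")
```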
***
Wiley’s Artificial Intelligence Practice counsels clients on AI compliance, risk management, and regulatory and policy approaches, and we engage with key government stakeholders in this quickly moving area. Please reach out to the authors with any questions.