Enabling and Protecting Agentic AI: What Security and IT Leaders Need to Know
This article first appeared on March 4, 2026 as the second edition of the Risk Realist Newsletter.
Those of us who were leading information security and IT risk management programs in the early 2010s remember the shadow IT explosion. Employees adopted consumer tools like Dropbox, WeTransfer, iCloud, Slack, and dozens of other cloud SaaS products because the approved alternatives were slow, clunky, or non-existent. We spent years trying to get visibility into what was being used, where data was going, and who had access to what. Many of us, myself included, chose to block unapproved tools and push people toward the approved ones. But blocking and hoping that our enterprise file share tools were sufficient did not work; they were not. So the puck just moved to smartphones and tablets. Or even worse, many companies were providing access from ANY device so that employees could log in to email, calendar, and... yes, the whole SharePoint file ecosystem.
The consumerization of IT and shadow IT challenges are back. But this time it’s running on AI, connected to your email, your CRM, and your local files, and it can take autonomous actions by spinning up numerous agents, with or without asking for permission at every step (depending on the user’s choice).
I want to be clear about where I stand: I am very bullish on AI. I have spent the last three weeks deep inside Claude Cowork (Anthropic’s agentic AI desktop tool), building automations, testing use cases, connecting plugins, and exploring what’s possible for both my business and eventually my clients. The productivity gains are real. The innovation potential is extraordinary for those who know what they are doing. I want, and I know employees want, their companies to adopt these tools aggressively, but safely.
But I also know what happens when powerful technology gets into the hands of every employee without policy, controls, education, and governance. I have seen this movie before, and the sequel is 100 times more complex, action-packed, and most likely with some horrific scenes.
Shadow AI Is Shadow IT in “Ludicrous Speed” Mode
In the last 24 months, AI went from “a chatbot that writes emails for you” to fully autonomous agents that browse the web, execute code, read and write files on your local machine, and connect to dozens of enterprise applications through API plugins.
Claude Cowork, which launched in January 2026, can access your local file system and connect to Slack, Jira, Salesforce, HubSpot, Google Drive, email, and hundreds of other tools through its MCP (Model Context Protocol) ecosystem. OpenAI, Google, and Microsoft are all racing toward the same destination. Let’s also put aside Open Claw (less mainstream, but less controlled, with fewer restrictions on what it can take control of natively). But if you aren’t trying it, you might be missing out, right? (It’s called FOMO.)
Here is why this is DIFFERENT from the shadow IT era:
Shadow IT was about data storage and collaboration. Shadow AI is about autonomous agents that can read, analyze, transform, AND act on your data. A rogue Dropbox folder back in the day could leak files or bypass detection controls. A rogue AI agent connected to your CRM, email, and file system could synthesize data across all three, make decisions based on that synthesis, and take actions that no one authorized. Identity and access management is becoming more complex, moving beyond basic “machine” identities to agentic identities that evolve much faster than user access or static machine/service account access ever did.
The numbers are staggering, and these stats are already outdated:
(Sources are linked for each stat below.)
- AI usage at work has grown 61x in two years, and nearly 40% of all data flowing into AI tools is sensitive.
- Over 80% of employees are using unapproved AI tools at work. Nearly half say they would continue using them even if their company explicitly banned them.
- 32.3% of ChatGPT usage happens through personal accounts, bypassing all corporate logging and visibility.
- Only 17% of organizations have implemented technical AI governance frameworks, meaning 83% are flying blind on governance.

This is not classic shadow IT. This is shadow IT with a brain and the keys to your filing cabinet. What makes it worse is that most of your workforce (employees and contractors) do not realize how much access they have or are about to hand over.
Real Vulnerabilities, Not Hypotheticals
The workplace benefits AND concerns with these tools are both absolutely real. The cyber governance/policy, workforce education, and technical controls to protect against and detect threats while enabling the benefits are critical. But as the stats above show, cyber teams are falling behind and, in the worst cases, not even at the table.
In October 2025, researchers demonstrated that Claude’s code interpreter could be used through prompt injection to silently exfiltrate chat histories, uploaded documents, and data from integrated services. A DNS-based data exfiltration vulnerability (CVE-2025-55284) allowed API keys and credentials to leak from developer machines. The MCP ecosystem itself has supply chain risks, with a critical remote code execution vulnerability affecting over 437,000 downloads. With MCP connections available for an endless variety of SaaS tools, I was personally so excited after two weeks with Claude Code that I wanted to connect everything immediately, before I thought about stepping back and enabling that level of progress with the right controls. Many executives feel this immediate rush of excitement and “need for speed” as AI jockeys for position in competitive strategies.
How to Enable Innovation Without Losing Control
I committed in Edition #1 of The Risk Realist that I would not raise a problem without offering recommended solutions and paths forward. So, here’s how companies that ARE letting these tools into their environments can protect themselves without slowing down progress.

1. Govern the front door. Don’t board it up. Create an AI acceptable use policy that is role-specific, education-first, and fast. If your approval process takes 6 weeks, your employees will just use personal accounts (and you’ve lost all visibility). Aim for sub-2-week approval cycles for low and medium-risk use cases. The companies that banned ChatGPT in 2023 are now 12-18 months behind the companies that created governed paths early. Oh, and don’t forget that 32.3% stat of employees using personal ChatGPT accounts (and that only includes those who were honest in a survey).
2. Treat AI agents like contractors with badge access. Every AI agent connecting to your systems should have a defined scope of access, monitored activity, and a clear owner. Apply the classic cyber control strategies and principles: zero trust, least privilege, continuous verification, and default-deny for sensitive data. Don’t give an AI agent access to your ENTIRE file system when it only needs one project folder. That is over-provisioning, and we know better (but our broader workforce probably does not).
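The “contractor with badge access” idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any vendor’s API: the agent IDs, owner, and folder paths are invented, and a real deployment would enforce scoping at the platform or proxy layer rather than in application code.

```python
from pathlib import Path

# Hypothetical per-agent scope registry. Every agent gets a named human
# owner and an explicit, narrow set of allowed folders (least privilege).
AGENT_SCOPES = {
    "crm-summarizer": {
        "owner": "jane.doe@example.com",                   # clear owner
        "allowed_roots": [Path("/projects/q3-pipeline")],  # one folder, not the whole drive
    },
}

def is_access_allowed(agent_id: str, requested: str) -> bool:
    """Default-deny: unknown agents and out-of-scope paths are refused."""
    scope = AGENT_SCOPES.get(agent_id)
    if scope is None:
        return False
    target = Path(requested).resolve()
    return any(target.is_relative_to(root.resolve())
               for root in scope["allowed_roots"])
```

The point of the sketch is the shape of the policy: default-deny, one owner per agent, and an allowlist measured in project folders rather than drives.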
3. Know where your data is flowing. This is where most companies are completely blind. Deploy DLP that understands AI data flows (not just email, internet, and endpoint). Classify your data: public data can go to approved AI tools, internal data requires enterprise-tier accounts with contractual protections, and restricted and regulated data stays off AI platforms entirely unless you have validated, controlled pipelines with the right controls in place. Monitor for employees using personal AI accounts with corporate data. That is your biggest leak, and it is often invisible when employees work from personal devices rather than the corporate devices you think you have locked down.
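The classification tiers above boil down to a default-deny policy table. Here is a minimal sketch under assumed names (the tier and destination labels are illustrative, and a real DLP product would enforce this inline on the data flow, not in a lookup function):

```python
# Hypothetical policy table mapping data classification to the AI
# destinations it may flow to. Anything not listed is denied.
POLICY = {
    "public":     {"personal_ai", "enterprise_ai"},  # approved tools OK
    "internal":   {"enterprise_ai"},                 # enterprise-tier accounts only
    "restricted": set(),                             # off AI platforms by default
}

def may_send(classification: str, destination: str) -> bool:
    """Default-deny: unknown classifications or destinations are blocked."""
    return destination in POLICY.get(classification, set())
```

Note the deliberate asymmetry: restricted data maps to an empty set, so exceptions (validated, controlled pipelines) have to be granted explicitly rather than slipping through by omission.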
4. Vet your AI supply chain. MCP servers and plugins are the new third-party risk. Do not enable all available plugins by default. Explicitly allowlist only servers from trusted providers. If your team is writing custom MCP integrations, apply the same code review and security testing standards you would apply to any production code. This requires process development and ensuring processes work quickly and efficiently, so they do not get side-stepped.
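Explicit allowlisting of MCP servers can look as simple as pinning any other dependency. A hypothetical sketch: the server names and version pins are invented, and in practice you would manage this in your MCP client’s configuration and your third-party risk process rather than in code.

```python
# Hypothetical MCP server allowlist: only explicitly approved servers,
# pinned to reviewed versions, may be enabled. Nothing else runs.
ALLOWED_MCP_SERVERS = {
    "trusted-vendor/crm-connector":   "1.4.2",
    "trusted-vendor/drive-connector": "2.0.1",
}

def vet_server(name: str, version: str) -> bool:
    """Approve only an exact, allowlisted server + version pair."""
    return ALLOWED_MCP_SERVERS.get(name) == version
```

Pinning the version matters as much as naming the server: a later release of a previously trusted connector is a new supply-chain artifact and should go back through review.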
5. Build your people strategy before your tool strategy. This is where my OCM (Organizational Change Management) bias comes through, and I make no apologies for it. Create “AI Ambassadors / Champions” within departments who understand both the capabilities AND the risks. Run practical workshops that show employees how to use AI tools safely. Frame governance as an enabler (“here’s how to use these powerful tools effectively”) rather than a restriction (“here’s what you can’t do”). We often sink money into tools and 1-time trainings but really fall short in investing in people and process, and this is where good intentions get sideways.
6. Plan for when something goes wrong. Build an AI-specific incident response playbook. Your traditional IR playbook does not account for AI prompt injection, autonomous agent behavior, or behavioral drift. You need the ability to pause agents immediately, audit trails that trace what data the agent accessed and what actions it took, and tabletop exercises for AI data exfiltration scenarios. If this is not on your Q2 calendar, put it there.
The Bottom Line
AI is not going back in the box. Your employees are already using it. The CISOs and business leaders who figure out how to enable innovation while maintaining control will create a real competitive advantage. The ones who just say “no” will watch their best people find workarounds. And the ones who say “yes” without guardrails will end up as cautionary tales, sliding down the chute back to square one, or worse.
Neither outcome is acceptable. The path forward is governance that moves at the speed of innovation. It’s not easy. But it’s achievable. And it starts with accepting that this is fundamentally an organizational change management problem wrapped in a technology problem.
If you’re wrestling with how to govern AI adoption, or need help building an AI acceptable use policy, risk assessment framework, or incident response playbook for agentic AI scenarios, let’s talk.
Aaron Pritz