By Jason Rader / 5 Aug 2025 / Topics: Artificial Intelligence (AI), Cybersecurity, Data protection

In my role as Global CISO at Insight, I help my team ensure that our environment is ready to support agentic AI securely, responsibly, and at scale. And, in true Insight fashion, I thought I’d share the things we’re doing so other like-minded people who are on this journey can benefit as well. (And whether you think you are or not, you’re probably on the journey.)
Truthfully, I wanted to be farther ahead with agentic AI than we were six months ago. Some days, it feels like we're racing to secure AI's advancements in real time.
As security professionals, we tend to have things sorted well in advance of their actual introduction into the environment. But AI has been different. The demand to incorporate new technologies into our environment has never been greater, and the availability and accessibility of these tools raise the stakes for getting it right.
To enable the business to accelerate with agentic AI, we need the appropriate controls in place to protect, monitor, report, and respond. Anything less, and the company is unwittingly exposed. Anything more, and we become a barrier to the business.
So, here’s a breakdown of what we’re doing — or ensuring is being done — to secure agentic AI in the enterprise.
“NASA didn’t start with rockets. They started with governance. Because when lives are on the line, you don’t build first and govern later. That mindset applies to AI and security too.” – Me, just now.
Governance isn’t just an InfoSec thing. It’s a team sport. Legal, compliance, finance, audit, enterprise risk, and operations all need a seat at the table. And with agentic AI, you’re likely introducing concepts that most of those folks haven’t encountered before. That’s okay. Bring in external experts if needed, and don’t be afraid to admit that we’re all learning as we go.
Start with a solid AI policy that holds people accountable, and pair it with practical AI guidelines that help them make smart decisions. Don’t reinvent the wheel — reference your existing data governance and acceptable use policies. And yes, I did try to add “AI shouldn’t break the law” to our policy, but legal reminded me that’s already covered. Fair enough.
The key is awareness. Get these policies in front of every teammate, especially developers and AI power users. And unlike your dusty password policy, review these often. AI moves fast; your governance should too.
Agentic AI shines a spotlight on your data practices. If your classification, labeling, and access controls weren't solid before, they're being tested now. Agents don't just access data — they hunt for it. And they're not picky about whether it's financials, IP, or that spreadsheet someone forgot to lock down in 2019.
You’re probably already seeing requests for data from parts of the business that never needed it before. That’s not a red flag — it’s a bonfire. Use this moment to double down on enterprise data management.
And remember: You can’t control data access without knowing who (or what) is accessing it.
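To make that concrete, here's a rough sketch (in Python, with labels and documents invented for illustration) of the principle: the classification label travels with the data, and an agent only retrieves what its clearance allows.

```python
from enum import Enum

# Illustrative only: the labels, clearances, and documents are made up for this sketch.
class Label(int, Enum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

DOCUMENTS = [
    {"id": "press-release-2025", "label": Label.PUBLIC},
    {"id": "q3-financials", "label": Label.RESTRICTED},
    {"id": "forgotten-2019-spreadsheet", "label": Label.INTERNAL},
]

def visible_to(clearance: Label) -> list[dict]:
    """Return only the documents an agent with this clearance may retrieve."""
    return [d for d in DOCUMENTS if d["label"] <= clearance]

print([d["id"] for d in visible_to(Label.INTERNAL)])
# -> ['press-release-2025', 'forgotten-2019-spreadsheet']
```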
If an agent is acting on your behalf, it needs an identity. We treat agents like users: they get scoped roles, least-privilege access, and a full audit trail. Every action is logged, every credential is rotated, and every Application Programming Interface (API) key is monitored. No free-range agents here.
This isn’t just about control. It’s about accountability. If something goes sideways, you need to know who (or what) did it, when, and why. And if you can’t answer that, you’re not ready for agentic AI.
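Here's a simplified sketch of what treating an agent as a first-class identity can look like in code. The AgentIdentity class, the scope names, and the 30-day rotation window are assumptions for illustration, not a vendor API or a prescription.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=30)  # assumed rotation window for this sketch

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str                      # the human team accountable for this agent
    allowed_scopes: frozenset[str]  # least-privilege scopes, e.g. "invoices.read"
    key_rotated_at: datetime

    def can(self, scope: str) -> bool:
        """Deny by default: the agent may only do what it was explicitly scoped for."""
        return scope in self.allowed_scopes

    def key_is_stale(self, now: datetime) -> bool:
        """Flag credentials that have outlived the rotation window."""
        return now - self.key_rotated_at > MAX_KEY_AGE

invoice_agent = AgentIdentity(
    agent_id="agent-invoicing-01",
    owner="finance-automation-team",
    allowed_scopes=frozenset({"invoices.read", "invoices.summarize"}),
    key_rotated_at=datetime(2025, 7, 15, tzinfo=timezone.utc),
)

assert invoice_agent.can("invoices.read")
assert not invoice_agent.can("payroll.read")  # out of scope, so denied
print(invoice_agent.key_is_stale(datetime.now(timezone.utc)))
```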
If you don’t give your teams a secure, governed platform to build on, they’ll find one that isn’t. That’s how you end up with rogue agents in public ChatGPT or some software-as-a-service (SaaS) tool that no one hears about until there’s a breach.
We centralize development on platforms like Copilot Studio and Azure OpenAI, where we can enforce security without stifling creativity. We monitor agent activity with Microsoft Purview, Entra, and Defender XDR. And we make sure agents inherit our existing security stack — because reinventing controls for every use case is a fast track to burnout.
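As one small example of what inheriting the stack can look like, the Azure OpenAI SDK can authenticate with Entra ID instead of a long-lived API key, so agent traffic rides on the same identities, access policies, and logging as everything else. The endpoint, deployment name, and API version below are placeholders to swap for your own.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Use an Entra ID token instead of a static key; the credential resolves from the
# environment (managed identity, workload identity, developer login, etc.).
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",  # confirm the version your resource supports
)

response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name, which may differ from the model family
    messages=[{"role": "user", "content": "Summarize our open invoices for Q3."}],
)
print(response.choices[0].message.content)
```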
Even the best-behaved agents need boundaries. We use runtime constraints to limit which tools and APIs they can access. Policy enforcement points, like Open Policy Agent (OPA) and Azure Policy, help us dynamically gate behavior. And we apply Data Loss Prevention (DLP) and sensitivity labels to everything agents touch.
If an agent tries to generate content with sensitive business data or personally identifiable information, we block it — or at least flag it. Because “oops” is not an acceptable incident response.
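Here's a deliberately simple stand-in for that kind of policy enforcement point: a per-agent tool allow-list plus a crude pattern check on generated content. In production, tools like OPA, Azure Policy, and Purview DLP play this role; this sketch only shows where the gate sits, and the agent names, tools, and patterns are invented.

```python
import re

TOOL_ALLOW_LIST = {
    "agent-invoicing-01": {"search_invoices", "summarize_invoice"},
}

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-like pattern
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # rough card-number-like pattern
]

def gate_tool_call(agent_id: str, tool: str) -> None:
    """Deny by default: an agent may only call tools explicitly allowed for it."""
    if tool not in TOOL_ALLOW_LIST.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not permitted to call {tool}")

def flag_sensitive(text: str) -> bool:
    """Return True if generated content matches a sensitive-data pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

gate_tool_call("agent-invoicing-01", "search_invoices")      # allowed
# gate_tool_call("agent-invoicing-01", "delete_customer")    # would raise PermissionError

if flag_sensitive("Customer SSN: 123-45-6789"):
    print("Blocked: output matches a sensitive-data pattern")
```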
Prompt hygiene isn’t just about clean inputs — it’s about secure thinking. We sanitize prompts, use classifiers to catch risky content, and train our teams to write prompts that don’t accidentally leak confidential info.
Think of it as secure coding for the AI era. If you’re not teaching prompt engineering, you’re leaving the door wide open.
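A minimal sketch of that idea, with illustrative regex patterns standing in for real classifiers: redact obvious secrets and PII before the prompt ever leaves your boundary.

```python
import re

# Illustrative patterns only; a real deployment pairs redaction with trained classifiers.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"(?i)(password|api[_ ]?key|token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def sanitize_prompt(prompt: str) -> str:
    """Scrub known-sensitive patterns before the prompt is sent to any model."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

raw = "Reset notes: password: Hunter2! Contact jane.doe@example.com, SSN 123-45-6789"
print(sanitize_prompt(raw))
# -> "Reset notes: password=[REDACTED] Contact [EMAIL], SSN [SSN]"
```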
If an agent makes a decision that impacts the business, we need to know about it immediately and in detail. We log every action, store those logs centrally, and integrate telemetry into our Security Information and Event Management (SIEM) and Extended Detection and Response (XDR) platforms.
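For illustration, here's what emitting an agent action as a structured JSON event might look like, so the existing log pipeline can forward it to the SIEM. The field names are invented for this sketch, not a vendor schema.

```python
import json
import logging
import sys
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("agent.audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

def audit_event(agent_id: str, action: str, target: str, outcome: str) -> None:
    """Emit one structured audit record per agent action for SIEM/XDR ingestion."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "correlation_id": str(uuid.uuid4()),  # ties the action to a wider trace
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "outcome": outcome,                   # e.g. "allowed", "blocked", "error"
    }
    logger.info(json.dumps(event))

audit_event("agent-invoicing-01", "summarize_invoice", "INV-10492", "allowed")
```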
And we don’t stop there. We run tabletop exercises to simulate agent misuse or compromise. Because when it comes to incident response, “we’ll figure it out” is not a strategy.
Technology is only half the battle; the other half is people. We train developers and business users on secure agent design. We promote a culture of responsible AI, where security is everyone’s job.
And we celebrate wins. Horizon AI, Insight’s internal hub for AI agents and applications, is a great example of how secure platforms can drive adoption and innovation without sacrificing control. Share those stories. They matter.
We don’t just secure agentic AI for ourselves. We help our clients do it, too.
We’ve documented our internal model, and we offer our frameworks as services. We stay aligned with standards like NIST’s AI Risk Management Framework and ISO/IEC 42001 because credibility matters — and so does being able to sleep at night.
Agentic AI is a force multiplier, but only if we secure it properly. This security goes beyond protecting data; it protects decisions, reputations, and the trust our organizations are built on.
We’re all figuring this out in real time, and that’s okay. What matters is that we share what we learn, support each other, and keep moving forward.
Together, we’re building the future of AI. Let’s do so securely and responsibly.