Agentic AI is reshaping how organizations think about security, compliance, and identity. Alex Bovee, CEO of ConductorOne, recently sat down with Brad Thies, founder and president of BARR Advisory, a cybersecurity and compliance professional services firm, to explore what this means for the world of cybersecurity compliance and auditing. BARR specializes in audits, consulting, and standing up cybersecurity programs for organizations navigating cloud-first environments.
AI introduces a new layer of identity complexity to the audit process. AI agents can act on behalf of humans, accessing data and automating workflows, and each of these agents becomes a new non-human identity that must be governed. Without visibility and guardrails, organizations risk losing control over who (or what) has access to their most sensitive systems.
Their conversation dove into the opportunities, risks, and practical realities of agentic AI adoption, and how leaders can prepare.
Raising the compliance floor with AI
Brad frames agentic AI as a chance to “raise the floor” in professional services. “To raise your ceiling, you also have to raise your floor in what you say no to. AI gives us the ability to raise that floor.”
He compares the auditor’s role to that of a doctor: auditors act as both the lab (gathering evidence) and the doctor (diagnosing and providing professional judgment). AI, he says, can take on more of the “lab work,” freeing auditors to focus on their real value: independence, judgment, and professional opinion.
Brad has found that BARR’s clients—just like most enterprises—are in varying stages of AI adoption. Some are still experimenting with surface-level AI wrappers marketed as innovation, while others are exploring real adoption of AI agents to bypass traditional workflows.
But that adoption often requires a mindset shift. The initial excitement of “AI can do everything for me” often gives way to the reality that it takes real effort, strategy, and new ways of working to integrate AI effectively.
Risks: Same fundamentals, magnified
While AI introduces unique challenges such as explainability (understanding how an AI system arrives at its decisions), the fundamentals of risk management remain the same:
- Do you know where your data is?
- Do you know where it’s going?
- Do you have controls in place?
AI simply accelerates and magnifies existing risks, making governance even more critical.
Standards bodies are beginning to respond. ISO 42001 emphasizes risk-based, impact-driven management systems, while HITRUST’s AI certification takes a prescriptive approach with detailed controls. Both ultimately reinforce the importance of access management and change control in an AI context. At the same time, frameworks like SOC 2, widely used for cloud security, are poorly suited for AI governance.
Still, frameworks are only the starting point. They provide a baseline, not a ceiling. The more important step is for companies to define their own AI strategies and acceptable use policies. And central to that strategy must be identity security: ensuring visibility and governance across every user, service account, and AI agent.
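An acceptable use policy becomes enforceable once it is expressed as data that tooling can evaluate. The sketch below is one hedged way to do that; the tool names, data classes, and rules are invented for illustration, not drawn from any real policy.

```python
# An acceptable use policy expressed as data, evaluated in code.
POLICY = {
    "approved_tools": {"internal-copilot", "ticket-summarizer"},
    "forbidden_data": {"customer-pii", "source-code-secrets"},
}

def evaluate_request(tool: str, data_classes: set) -> tuple:
    """Return (allowed, reason) for a proposed AI use case."""
    if tool not in POLICY["approved_tools"]:
        return False, f"tool '{tool}' is not on the approved list"
    blocked = data_classes & POLICY["forbidden_data"]
    if blocked:
        return False, f"forbidden data classes: {sorted(blocked)}"
    return True, "allowed"

print(evaluate_request("internal-copilot", {"support-tickets"}))
print(evaluate_request("shadow-chatbot", {"customer-pii"}))
```

Keeping the policy as data means stakeholders can review and amend it without touching the enforcement logic.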
Strategy-first approach to AI implementation
One of the core messages from their conversation is that strategy must come before technology.
If you don’t have a clear strategy of where you’re going, it’s hard to address the risk. The starting point is always: what’s your acceptable use policy?
Rather than chasing AI hype, organizations should define their AI strategy, communicate it to stakeholders, and then layer on the right technologies and frameworks.
Process over controls
Traditional auditing focused on sampling outputs: checking where data sits or testing specific systems. But with AI, agents and the systems they touch often spin up and down too quickly for point-in-time sampling to be meaningful. Instead, Brad advocates for focusing on processes upstream:
- Assessing CI/CD pipelines
- Ensuring proper configuration of systems like AWS S3
- Lowering friction to avoid shadow IT while keeping governance intact
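A process-first check can be as simple as validating configuration before deployment rather than sampling live data after the fact. The sketch below checks an S3-style bucket configuration; the config keys loosely mirror real S3 settings (public access block, encryption, versioning) but the shape and bucket names are simplified assumptions for the example.

```python
def bucket_findings(config: dict) -> list:
    """Return a list of misconfiguration findings for one bucket."""
    findings = []
    if not config.get("block_public_access", False):
        findings.append("public access is not blocked")
    if not config.get("encryption_at_rest", False):
        findings.append("encryption at rest is disabled")
    if not config.get("versioning", False):
        findings.append("versioning is disabled")
    return findings

buckets = {
    "audit-evidence": {"block_public_access": True,
                       "encryption_at_rest": True,
                       "versioning": True},
    "scratch-exports": {"block_public_access": False,
                        "encryption_at_rest": True},
}

for name, cfg in buckets.items():
    for finding in bucket_findings(cfg):
        print(f"{name}: {finding}")
```

Run in a CI/CD pipeline, a check like this catches drift upstream, before it becomes an audit finding.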
This process-first lens ensures resilience and reduces the need to dig organizations out of trouble after the fact.
Advice for security and compliance teams
Brad’s guidance for enterprises navigating AI implementation is clear:
- Be transparent with auditors. Think of them as doctors: the more honest you are, the better they can help.
- Establish an acceptable use policy for AI early.
- Pick a framework and do it well. Use it as your floor, not your ceiling.
- Engage stakeholders to align on risks and responsibilities.
How BARR auditors are using AI
On the audit front, BARR is experimenting with AI in two ways:
- Front stage: Empowering auditors with real-time analysis tools to enhance client interactions, streamline readiness assessments, and cut through noise in control sets.
- Backstage: Structuring internal datasets and prompts to train future agents, ensuring outputs are explainable and reliable.
Agentic AI is already reshaping security and compliance. For Brad, the opportunity isn't just new technology; it's using AI to elevate professional judgment, magnify risk management fundamentals, and build more resilient, trusted organizations.
“Compliance has always been about building a floor. AI raises that floor, so we can spend more time where it matters most: diagnosis, judgment, and building trust.”
Want to learn more about the impact of AI on compliance? See a demo of how ConductorOne simplifies identity governance for the AI era.