When Abraham Ingersoll stepped into the security leadership role at THG Ingenuity in the U.K., he brought a unique perspective with him: that of an American technologist navigating deeply entrenched European attitudes toward data protection, regulation, and security culture. In a recent conversation with ConductorOne CEO Alex Bovee, Abraham shared ground-level observations about the cultural and operational differences between the U.S. and U.K. when it comes to AI, security, and governance.
Here are some of the key takeaways from their conversation.
Security Culture Is Shaped by Broader National Norms
In the U.K., Ingersoll notes, there is a pronounced deference to authority and a culture of following directives precisely. That can be a tough adjustment for an American used to startup environments where proactivity is expected. He shares, “They will often only do what they’re told to be done, which is troubling for an American entrepreneur who kind of just expects people to take an idea and drive towards it.”
That same mindset shows up in how people think about innovation. In the U.S., employees may be encouraged to take risks and experiment. In the U.K., people are more likely to ask for permission before deviating from the norm.
The Regulatory Divide: U.S. Pragmatism vs. EU Precision
Both Abraham and Alex agreed that one of the biggest differences between the U.S. and U.K. is the regulatory environment. In the U.S., there are few barriers to adoption, and companies can deploy tools with minimal oversight. That freedom can drive innovation, but it also introduces risk.
By contrast, Europe’s regulatory landscape is far more structured. Ingersoll described the AI governance model in the EU as being deeply inspired by GDPR, with specific roles like AI implementers and AI builders mirroring the data controller and data processor designations. He admits, “I live and breathe that now, which is weird.”
The U.K., post-Brexit, occupies a gray zone. While the government has signaled openness to AI innovation, Ingersoll notes that THG Ingenuity, as an enterprise service provider, still faces pressure from customers to meet EU-like data protection expectations. That includes scrutiny of AI policies, training data usage, and data sovereignty.
AI Adoption: U.S. Leads in Speed, U.K. Plays Catch-Up
There’s a clear delta in AI adoption. According to Abraham, “The states are probably six months ahead, which in AI years is like a millennium.” In the U.S., AI use is encouraged across the board, and often seen as a performance expectation. Alex shared, “If you’re not using it, it’s a performance problem.”
In the U.K., adoption has been slower. While technologists are experimenting with tools like Claude and OpenAI, many employees are still dabbling or hesitant. A big part of that hesitation stems from cultural anxiety. “There’s this visceral reaction of ‘Oh my God, this is going to take my job,’” Ingersoll explains.
Still, he sees exponential growth. Once employees are shown what modern AI models can do, many become enthusiastic adopters. Others remain skeptical, particularly legal professionals who are wary of AI-generated errors in high-stakes environments.
Privacy, HR, and the Right to Human Judgment
In Europe, AI cannot make consequential decisions about people without offering a right to human review. Ingersoll points out that under EU law, employees can demand access to, or deletion of, their personal data, and that this extends to performance reviews, HR communications, and Slack messages.
He shared awkward cases where employees used AI to write messages to HR, only to be surprised that the AI origin had to be disclosed. “You can’t subject other humans to AI output without telling them it’s AI output,” he said.
Identity Governance Needs to Evolve
As the conversation turned toward identity and access management, both Abraham and Alex agreed that just-in-time access and eliminating birthright permissions are becoming critical. Ingersoll described incidents in the U.K. where threat actors disrupted major retailers by gaining access through third-party contractors, and he recommended that companies invest in dynamic access control so long-standing entitlements don’t become entry points.
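To make the idea concrete, here is a minimal, hypothetical sketch of the just-in-time model Ingersoll advocates: access is issued as a short-lived grant and checked at request time, so there are no standing entitlements to exploit. All names (`Grant`, `AccessStore`, the users and resources) are illustrative, not any specific product’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class Grant:
    """A time-bounded permission for one user on one resource."""
    user: str
    resource: str
    expires_at: datetime


class AccessStore:
    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def grant(self, user: str, resource: str, ttl: timedelta) -> None:
        # Just-in-time: access is issued for a bounded window only.
        expiry = datetime.now(timezone.utc) + ttl
        self._grants.append(Grant(user, resource, expiry))

    def is_allowed(self, user: str, resource: str) -> bool:
        # No birthright permissions: absent a live grant, deny by default.
        now = datetime.now(timezone.utc)
        return any(
            g.user == user and g.resource == resource and g.expires_at > now
            for g in self._grants
        )


store = AccessStore()
store.grant("contractor-1", "billing-db", ttl=timedelta(hours=1))
print(store.is_allowed("contractor-1", "billing-db"))    # allowed while the grant is live
print(store.is_allowed("contractor-1", "prod-secrets"))  # denied: no grant was ever issued
```

The key design choice is that denial is the default state: once a grant expires, access disappears without anyone having to remember to revoke it.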
Ingersoll’s vantage point reveals what it really means to operationalize AI and security in a multinational context. Culture, regulation, and tooling all intersect in complex ways, and the speed of AI adoption varies drastically across regions.
For companies navigating these global waters, the lesson is clear: success with AI and identity security isn’t just about choosing the right tools. It’s about understanding the people, the policies, and the cultural norms that shape how those tools are used.
Want the full story? Listen to the complete All Aboard conversation with Abraham Ingersoll and Alex Bovee for a candid look at how AI, regulation, and identity security are evolving across borders.