Your auditor is already preparing to ask about AI agents. If your SOC 2, ISO 27001, or DORA audit lands in 2026, you can assume agent access will be in scope, even if the framework itself hasn't caught up yet.
That was the clearest takeaway from a recent session between Johannes Keienburg, Founder and CEO at Cakewalk, and Julie Gibelin, Senior Information Security Officer at Talon.One. Julie spent years building access governance at N26, one of Europe's most scrutinized fintechs. Now she is doing it again at Talon.One, this time with AI agents actively in play across the stack.
The conversation dug into what actually changes when agents enter the access picture, why most compliance frameworks were not designed for this, and how to prepare for your next audit without pretending the tooling landscape is further along than it is.
Key takeaways
- EU auditors will ask about agentic access under DORA, HIPAA, and ISO 27001 within your next audit cycle.
- Current compliance frameworks cannot catch runtime dependencies. The checkbox model is breaking, and the recent fake SOC 2 reports scandal is one early signal of what that looks like.
- Most companies still cannot reliably produce access control evidence for human users. Agent access makes that gap 10x wider.
- Three properties compound each other: static long-lived credentials, autonomous behavior at machine speed, and catch-all broad permissions.
- Human-in-the-loop breaks down in practice because of decision fatigue. People stop reading the prompts after a while.
- Start with toxic combinations and security design principles, not granular permission policies; those can come later. Define what a role should never do before you define what it can.
Agent access is already a bigger problem than human access
Johannes opened with a forced choice. Which is worse: walking into your next audit with no access control evidence for your employees, or knowing your teams are wiring AI agents into production systems without any oversight at all?
Julie did not hesitate. Option two. Agent adoption is already widespread in regulated industries, fintechs included, and regulators are scrutinizing it. The tooling to monitor agents at runtime is not really there yet, so for now, teams are stuck with manual configs and manual access decisions to keep any governance layer in place.
Her point was clear: companies have not fully solved human access governance. Asking them to bolt on agent governance without changing the model is setting them up to fail on both.
"You can be very good at the paperwork, but that doesn't mean you're necessarily very good at security." — Julie Gibelin, Talon.One
Why agent access is not just a new flavor of IAM
Julie broke down why agents are not just another identity type. The same pattern kept showing up across her answers: agents act at runtime, across chains of dependencies, and change context while they are doing it.
You can control whether your agent has access to a specific resource. You cannot control the whole chain. Once the agent is running and has an objective to hit, it will take whatever path works. That is not malicious. That is the point of an agent. But it does mean predicting the blast radius is hard, especially on a developer's laptop where the agent is also "negotiating" with the terminal, the kernel, and local files.
There are three compounding properties that make this different.
Static long-lived access
When a team member connects Claude or any other agent to GitHub, the CRM, or an HR system, they approve it once and move on. The access stays. Nobody rotates it. Nobody revokes it. The static credential was designed for a machine identity, not for something that can improvise.
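As an illustration, here is a minimal sketch of what the alternative looks like: short-lived, scoped tokens that expire by construction instead of living until someone remembers to revoke them. The names (`AgentToken`, `issue_agent_token`) are hypothetical, not any vendor's API.

```python
import time
import secrets
from dataclasses import dataclass

# Hypothetical sketch: short-lived, scoped tokens instead of a static credential.

@dataclass
class AgentToken:
    value: str
    scopes: frozenset        # e.g. {"repo:read"} -- never a catch-all
    expires_at: float        # epoch seconds; forces rotation by construction

def issue_agent_token(scopes: set, ttl_seconds: int = 900) -> AgentToken:
    """Mint a token that dies on its own instead of living until revoked."""
    return AgentToken(
        value=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )

def is_valid(token: AgentToken, required_scope: str) -> bool:
    return time.time() < token.expires_at and required_scope in token.scopes

token = issue_agent_token({"repo:read"})
assert is_valid(token, "repo:read")
assert not is_valid(token, "repo:delete")   # out of scope, even before expiry
```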
Autonomy at machine speed
Behind that static credential is an agent that only unlocks its value when it has the autonomy to act. An agent doing what it was asked will happily go down the wrong path, share data, or delete a table it decided was not needed. Again: not malicious, just trying to complete the task.
Catch-all permissions
Connect Claude to GitHub via MCP and the agent inherits the full tool surface. No job-specific scope. No differentiation between a CTO's use case and a sales team's use case. Static plus autonomous plus broad is the combination that breaks conventional access models.
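A minimal sketch of the countermeasure, assuming a role-based allowlist: instead of exposing the full inherited tool surface, expose only the intersection of what exists and what the job needs. The tool names and the `ROLE_ALLOWLISTS` mapping are illustrative, not a real MCP server's listing.

```python
# Illustrative inherited tool surface; a real server exposes its own listing.
AVAILABLE_TOOLS = [
    "list_repos", "read_file", "create_pull_request",
    "delete_repo", "manage_webhooks", "update_org_settings",
]

# Job-specific scopes: a sales use case is not a CTO use case.
ROLE_ALLOWLISTS = {
    "sales_assistant": {"list_repos"},
    "coding_agent": {"list_repos", "read_file", "create_pull_request"},
}

def tools_for(role: str) -> list:
    """Expose only the intersection of what exists and what the role needs."""
    allowed = ROLE_ALLOWLISTS.get(role, set())        # default: nothing
    return [t for t in AVAILABLE_TOOLS if t in allowed]

print(tools_for("coding_agent"))   # no delete_repo, no org settings
```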
A fourth property is worth naming alongside those three: most agent deployments today produce no accessible audit trail on the client side. If the action happened at 3am via MCP and nothing wrote it down, good luck reconstructing the decision for your auditor.
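Even a crude client-side log beats nothing. A sketch, assuming a JSON-lines format and a hypothetical `audited` wrapper: every tool call leaves a record, whether it succeeds or throws.

```python
import json
import time
from typing import Any, Callable

# Hypothetical sketch: wrap every tool invocation so a 3am action still
# leaves an auditable record. Not a specific product's audit API.

def audited(tool_name: str, fn: Callable[..., Any], agent_id: str,
            log_path: str = "agent_audit.jsonl") -> Callable[..., Any]:
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "tool": tool_name,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }
        try:
            result = fn(*args, **kwargs)
            entry["outcome"] = "ok"
            return result
        except Exception as exc:
            entry["outcome"] = f"error: {exc}"
            raise
        finally:
            # Append-only, one JSON object per line; written on success or failure.
            with open(log_path, "a") as f:
                f.write(json.dumps(entry) + "\n")
    return wrapper

delete_table = audited("delete_table", lambda name: f"dropped {name}", "claude-ci-01")
delete_table("staging_events")   # the call happens AND the record exists
```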
Human-in-the-loop works, until it doesn't
The obvious answer is to keep a human in the approval loop. Julie has tried this. It works as a guardrail, and it is probably the best thing most teams can do today. But there is a psychological tax that is easy to underestimate.
When you are excited about an agent solving a real problem, you approve faster. You stop reading the details. You say yes to the terminal access, yes to the file system, yes to the API call. Not every user has the knowledge to make those calls correctly in the first place, and even the ones who do get fatigued.
Johannes pointed to his own behavior with coding agents. Early on, every prompt gets scrutiny. Three weeks in, it is yes, yes, yes.
The fix is not to remove humans from the loop. It is to stop asking them to rubber-stamp the routine stuff and save their attention for the cases the system genuinely cannot resolve.
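One way to operationalize that, sketched here with assumed risk tiers: decide the clear cases automatically and reserve the prompt for the ambiguous middle. The tier contents are examples, not a recommendation.

```python
# Illustrative tiers; each team derives its own from its toxic combinations.
ROUTINE = {"read_file", "list_repos", "search_docs"}
ALWAYS_DENY = {"delete_repo", "update_org_settings"}

def triage(requested_action: str) -> str:
    """Decide automatically where possible; reserve humans for the middle."""
    if requested_action in ALWAYS_DENY:
        return "deny"        # no prompt, no fatigue, just no
    if requested_action in ROUTINE:
        return "allow"       # pre-approved and logged, no prompt
    return "escalate"        # the only case a human ever sees

for action in ["read_file", "delete_repo", "create_pull_request"]:
    print(action, "->", triage(action))
```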
Start with toxic combinations, not policies
When Johannes asked how Julie would build access governance from scratch in a company that treated compliance as a priority, her answer was not "buy a tool." It was "define your toxic combinations first."
This sits upstream of any specific policy. Before you get granular on permission schemes, you write down the things a role should never be able to do. CEOs do not get access to production. That one is easy (CEOs might disagree). But there are subtler combinations across tools that, when stacked, create real risk.
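Toxic combinations translate naturally into deny rules you can check mechanically. A sketch with made-up example pairs; every company derives its own from what it actually does.

```python
# Pairs of entitlements no single identity (human or agent) should hold
# at once. These rules are illustrative examples only.
TOXIC_PAIRS = [
    ({"deploy:production"}, {"approve:own_changes"}),   # classic separation-of-duties violation
    ({"crm:export_all"}, {"email:send_external"}),      # bulk exfiltration path
    ({"payments:initiate"}, {"payments:approve"}),
]

def toxic_violations(entitlements: set) -> list:
    """Return every toxic pair fully contained in one identity's entitlements."""
    return [(a, b) for a, b in TOXIC_PAIRS
            if a <= entitlements and b <= entitlements]

agent = {"deploy:production", "approve:own_changes", "repo:read"}
print(toxic_violations(agent))   # flags the deploy + self-approve combination
```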
Julie floated a wish here: she would love a tool that could derive contextual, company-specific security design principles automatically. Attribute-based access control, but driven by the context of what the company actually does. A fintech has different toxic combinations than a content startup. Nobody is fully solving that yet.
What Talon.One is actually doing
Talon.One is not regulated the way a bank is, but it supports large, demanding enterprise customers. Julie's approach has been iterative by design.
Rather than green-lighting every AI tool at once, her team picked a provider, started connecting internal resources (not production, not yet), and built connectors one at a time against tested user needs. Every agent creation still goes through central review. Configs get checked. Guidelines exist and are enforced.
She also talked about a simple but useful experiment: running an internal agent to scan company resources for exposed secrets, API keys, credentials, and PII. When it finds something, she takes it back to the relevant team and asks the obvious question: do we actually need this published? It has already reduced the surface of exposed sensitive data in public and internal resources.
"You can have all the sophisticated tools in the universe. It will never replace the human factor." — Julie Gibelin, Talon.One
The human factor kept coming up. Security leaders who rely only on centralized policy will miss what is actually happening in product and engineering. You have to be in the RFCs. You have to be on the team channels. In regulated tech specifically, you cannot block your teams from using AI. Market pressure will not let you.
What EU auditors will ask about agentic access
Julie was direct here. Talon.One has a SOC 2 audit coming up. She already knows AI will be in scope.
She does not yet know how granular auditors will get. Nobody does. But the direction is clear:
- Some form of AI governance has to be in place.
- You need an inventory of agents. How many exist, what they are, where they sit.
- You need logs you can retrieve and show. Not eventually.
- You need a plan for how you will manage all of this going forward.
This maps to what DORA, HIPAA, and ISO 27001 already require in principle, even if not by name. Granular access controls, auditability, and evidence that the controls are enforced. Agents inherit all of those requirements the moment they touch anything regulated.
Europe's AI regulation: the honest take from a security practitioner
Johannes asked Julie whether Europe's approach to AI regulation was on the right track. Her answer was nuanced in a way worth capturing properly.
She supports regulating and enforcing security. The question is not whether there is too much or too little compliance. It is the how. DORA, in her view, is an important piece, but the people writing these frameworks often lack hands-on operational experience. That creates friction with the innovation and speed European tech companies need to stay competitive.
She has lived this from inside. In her earlier role in a second line of defense control function, she was too far from the teams actually building things. The controls were not reflective of operational reality. They were checkboxes derived from frameworks written a long time ago.
Her prescription: get practitioners into the room when regulation is being written. Modernize the frameworks. Be willing to throw some of them out and start again from the operational realities of how modern companies run. Compliance needs to shift toward by-design controls, embedded in the systems themselves, not extracted manually by a team every audit cycle.
Johannes added a harder edge to this. Europe arguably has more AI regulation than AI right now. And the checkbox approach creates its own failure mode. The recent case of a compliance company selling fake SOC 2 reports, despite having millions in funding, is what you get when compliance and security become separate disciplines (and when founders are reckless). If you are only asked "do you have an access control policy?" and not "show me what your agents actually have access to right now," the policy is just paper.
Julie agreed. Incentivizing companies to produce mountains of paperwork pulls resources away from actual security work. AI, ironically, makes the paperwork part trivial. Ask a modern model to do a DORA gap analysis and it will, and the result will be pretty good. Which only strengthens the case that auditors need to start asking about the things AI cannot fake: live dependencies, runtime behavior, actual logs.
How to prepare for your next audit
A few concrete actions for security and compliance teams in regulated EU environments, drawn from the session:
- Inventory your agents. Who introduced them, what they are connected to, which credentials they are using. Most teams have no clean list. This is the single biggest gap between current reality and what an auditor will ask for (see the sketch after this list).
- Start with toxic combinations. Before you debate permission schemes, write down what roles and agents should never be able to do. This becomes the frame for every policy decision downstream.
- Treat tool execution as the security boundary. Focusing on what the model says misses the point. The risk is what it does through its tools.
- Capture logs now, even if imperfect. Something is better than nothing. An agent action without a log is an incident you cannot investigate.
- Get closer to the builders. Security leaders who sit in central teams miss the real risk. Be in the product RFCs. Be on the dev channels. Educate, do not block.
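To make the first item concrete, here is a minimal sketch of an inventory record, with illustrative field names; the point is that every agent has a named owner, a credential reference, and a review date on file.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative record shape; field names are assumptions, not a standard.

@dataclass
class AgentRecord:
    agent_id: str
    owner: str               # who introduced it and answers for it
    connected_systems: list  # e.g. ["github", "crm"]
    credential_ref: str      # pointer into the secrets manager, never the secret itself
    last_reviewed: str       # ISO date of the last access review

inventory = [
    AgentRecord("claude-ci-01", "jane@example.com",
                ["github"], "vault://agents/claude-ci-01", "2025-11-01"),
]
print(json.dumps([asdict(r) for r in inventory], indent=2))
```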
Govern agent access before auditors start asking
If your next SOC 2, ISO 27001, or DORA audit includes questions about AI agent access, you want to be showing evidence, not explaining why you do not have it.
Cakewalk's AI Agent Access Management is built for this. Dynamic, job-scoped access for AI agents. Real-time policy enforcement. A complete audit trail for every agent action. Built for security teams operating in EU regulated environments.