OpenClaw & Moltbook: Why AI Agent Access Is The Next Identity Crisis [Webinar Summary]
AI agents are spreading across company stacks faster than most security teams realize.
They connect to email systems, code repositories, cloud infrastructure, CRM platforms and internal databases. They act autonomously, can trigger workflows and move data between systems. All of that at machine speed and with zero accountability.
Many run with long-lived tokens and very broad permissions. In fact, when you connect an agent to an application, the agent today gets access to all scopes (skills) available through the MCP server. That is very risky.
Then OpenClaw and Moltbook happened.
Those incidents did not reveal a broken AI model. They exposed something more dangerous: AI agents have broad, persistent access to company systems with literally no governance. And LLM-inherent guardrails won't fix this, because they are probabilistic models that can always be tricked.
This article summarizes the key lessons from our recent session “OpenClaw & Moltbook: How To Control Access Of AI Agents.”
It also connects those lessons to practical Identity & Access Management principles shared during the discussion with Mackenzie Jackson, Security Advocate at Aikido Security.
The takeaway is simple.
The next identity problem is not human access. It is agent access.
Key Takeaways
- AI agents are becoming a new category of identity inside companies.
- Many agents run with standing permissions and broad scopes.
- Incidents like OpenClaw exposed how easily these agents can become attack surfaces.
- Traditional Identity & Access Management models assume human users.
- AI agents require a dedicated access management and governance approach - treating them as privileged accounts.
- The companies that control agent access early will scale AI safely.
What OpenClaw And Moltbook Actually Revealed
Most headlines framed OpenClaw and Moltbook as incidents where the AI went rogue. That is true, but it only touches the surface of the problem:
Agents have to act autonomously. That is where their true power comes from. And an autonomous agent will do whatever it takes to complete the task it is given. If we work under that assumption, the access permissions agents receive (access to other company systems) have to go through an authorization process, ideally in real time and for each agent task.
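One way to picture per-task authorization is a gate the agent must pass before every single action. This is a minimal illustrative sketch, not the pattern from the session; all names (agents, systems, actions) are hypothetical:

```python
# Minimal sketch of per-task authorization: before an agent executes a
# task, a policy function decides whether that specific action, on
# that specific system, is allowed. All names here are hypothetical.

POLICY = {
    # (agent, system) -> set of actions the agent may perform there
    ("support-bot", "crm"): {"read_ticket", "reply_ticket"},
}

def authorize_task(agent: str, system: str, action: str) -> bool:
    """Real-time check: may this agent take this action on this system?"""
    return action in POLICY.get((agent, system), set())

def run_task(agent: str, system: str, action: str) -> str:
    """Execute a task only after it clears the authorization gate."""
    if not authorize_task(agent, system, action):
        return f"denied: {agent} may not {action} on {system}"
    return f"executed: {action} on {system}"

print(run_task("support-bot", "crm", "read_ticket"))    # executed
print(run_task("support-bot", "crm", "delete_ticket"))  # denied
```

The point is the placement of the check: authorization happens per task at execution time, not once at connection time.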
AI agents have:
- Long-lived access tokens
- Broad permissions across systems (depending on the scopes of each MCP they get connected to)
- No clear ownership (who’s behind an agent)
- No lifecycle management
This already makes one thing clear: AI agents have an access governance problem.
AI Agents Are Becoming Non-Human Identities
Security teams already manage multiple types of human and non-human identities:
- Employees
- Contractors
- Service accounts
- API Keys & Tokens
- Workload Identities
AI agents add a new category: autonomous non-human identities.
And that makes a massive difference: unlike, say, API keys, agents can act with huge freedom to get their job done.
An agent, as a non-human identity, might access:
- Slack conversations
- GitHub repositories
- Customer databases
- Cloud infrastructure
All through one initial authentication, often using the authentication method of a human. That token often holds broad permissions so the agent can "just work". Meaning: read data, modify data, share data. All of this at machine speed.
It is the same pattern behind many historical breaches: privileged access grows quietly until it becomes impossible to manage.
Mackenzie described the pattern well during the session. Access creation spreads across teams while governance for these new non-human identities is missing.
The Hidden Risk: Agents Multiply Human Access
One overlooked risk with AI agents is how they expand human access indirectly.
Imagine an AI agent connected to a company’s GitHub organization.
The agent holds broad repository access.
In theory, multiple engineers can control the prompts or workflows behind that agent.
Now multiple humans effectively share that privilege, so they can potentially expand their own permissions through the agent.
From an identity perspective, the system sees one identity. In reality, it represents multiple humans.
Why Traditional IAM Models Break With AI Agents
Most Identity & Access Management systems were designed for humans.
The core assumptions look like this:
- A user requests access.
- A manager approves it.
- IT provisions it.
- Periodic reviews validate it.
AI agents do not follow this lifecycle.
They are often created by engineers, product or GTM teams. They connect directly to MCPs. They receive static tokens with wide permissions.
Many organizations do not even know how many agents exist.
That creates three structural problems.
1. Discovery
You cannot govern what you cannot see.
AI agents today live outside traditional identity systems. They exist in developer tools, automation platforms or third party AI services.
2. Ownership
Many agents have no clear owner.
If an agent breaks or behaves unexpectedly, it can be difficult to determine who is responsible.
3. Lifecycle
Human access follows a lifecycle.
- Joiners
- Movers
- Leavers
Agents often live forever.
Tokens remain active long after the project that created them ends.
In other words: Where is the access review or the offboarding process for agents?
AI Agents Should Be Treated Like Privileged Identities
The principle is simple. The implementation is extremely complex.
This is what companies need to establish, even though it is a complex task: treat every AI agent (in fact, every agent acting on behalf of a human) as a unique identity. Meaning: "Agent A acting on behalf of Julia!"
Ask the same questions security teams already ask for human access.
- What human identity is behind this agent?
- What systems can it access?
- What permissions does it hold?
- Who governs the agent?
- How long should that access last?
If those answers are unclear, the agent should not be live.
Mackenzie described a similar mindset when discussing agent access management. If AI agents enter an environment they should follow the same scrutiny applied to privileged employees and privileged accounts.
This principle prevents most AI access risks before they appear.
How Fast-Growing Companies Should Manage AI Agent Access
Fast-growing companies need a clear framework to start managing agent access. Four simple rules:
1. Create A Complete Inventory Of Agents
The first step is visibility.
Security teams should maintain a list of:
- All agentic identities
- All machine-to-machine connections each of these agents has, including the permission scopes
Every agent should have an owner.
2. Introduce An Approval Path For New AI Tools
Innovation should not stop.
Teams should still experiment with AI agents.
The key is governance.
At Nudge, engineers propose new AI initiatives through a structured process so access decisions remain visible and documented.
This keeps experimentation safe without slowing teams down.
3. Apply Least Privilege By Default
Agents rarely need broad system access.
Limit permissions to the smallest possible scope.
Short-lived credentials are safer than static tokens.
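The two rules above (smallest possible scope, short-lived credentials) can be combined in one token issuer. This is a hedged in-memory sketch with invented names, not a production credential service:

```python
import secrets
import time

# Hypothetical in-memory token issuer: every token is narrowly scoped
# and expires quickly, so a leaked credential has a limited blast radius.
TOKENS = {}

def issue_token(agent_id: str, scopes: list, ttl_seconds: int = 900) -> str:
    """Mint a short-lived token restricted to the requested scopes."""
    token = secrets.token_urlsafe(32)
    TOKENS[token] = {
        "agent_id": agent_id,
        "scopes": set(scopes),
        "expires_at": time.time() + ttl_seconds,
    }
    return token

def authorize(token: str, required_scope: str) -> bool:
    """Allow an action only if the token is valid, unexpired and scoped for it."""
    grant = TOKENS.get(token)
    if grant is None or time.time() >= grant["expires_at"]:
        return False
    return required_scope in grant["scopes"]

# The agent gets read access for 15 minutes, nothing more
t = issue_token("support-bot", ["repo:read"], ttl_seconds=900)
print(authorize(t, "repo:read"))   # True
print(authorize(t, "repo:write"))  # False: outside the granted scope
```

The design choice is that denial is the default: an unknown token, an expired token or a missing scope all fail closed.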
4. Monitor Access Like Any Other Identity
Agents should appear in the same access reviews as employees.
If an agent is inactive or unnecessary, revoke its permissions.
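A review like this can be partially automated. As a rough sketch (the 30-day threshold and agent names are assumptions for illustration), flag every agent whose last activity is older than a set limit:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical review rule: any agent with no activity for 30 days
# gets flagged for revocation in the next access review.
INACTIVITY_LIMIT = timedelta(days=30)

def stale_agents(last_seen: dict, now: datetime) -> list:
    """Return agent ids whose last activity is older than the limit."""
    return sorted(
        agent for agent, seen in last_seen.items()
        if now - seen > INACTIVITY_LIMIT
    )

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
activity = {
    "support-bot": now - timedelta(days=2),      # active: keep
    "old-demo-agent": now - timedelta(days=90),  # stale: flag for revocation
}
print(stale_agents(activity, now))  # ['old-demo-agent']
```

Feeding the flagged list into the same access-review process used for employees keeps agents inside the normal governance loop rather than in a parallel one.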
Why Non-Human Access May Become The Bigger Risk
At the end of the discussion, Mackenzie was asked what worries him most going into the next phase of AI adoption.
His answer was immediate.
Non-human access.
Humans operate within policy and context. AI agents execute instructions without understanding consequences.
If misconfigured, an agent can interact with systems continuously and at machine speed.
Mackenzie explained the risk clearly. AI agents expand the attack surface because they introduce external vendors and additional complexity into the environment. That complexity increases the chance of misconfiguration or exploitation.
Of course, human access still matters. After all, it's humans who introduce agents.
Access Management Must Expand To Cover AI Agents
AI agents will become a normal part of company operations.
The real question is whether companies will govern them properly.
OpenClaw and Moltbook offered a preview of what happens when agent access grows faster than access management.
The lesson cannot be to slow down or block AI adoption.
The lesson must be to properly govern AI agents.
Once you see them that way, the path forward becomes clear.
Discover them.
Assign ownership.
Control their permissions.
Review them like any other privileged account.
That is how modern companies scale AI safely.
See How Modern Teams Control AI Agent Access
If your company is starting to experiment with AI agents, the best time to fix access governance is now.
Cakewalk helps IT and Security teams:
- Discover all identities, apps and AI agents
- Control permissions across the stack
- Automate joiner, mover and leaver workflows
- Generate audit-ready access evidence automatically
Book a demo to see how fast-moving companies run Access Management without growing their IT team.
Make Access Management a piece of cake.

