AI in Cybersecurity & Identity Governance: The Bitter and the Sweet
Well, Skynet isn't quite here yet, but it feels awfully close, doesn't it? We've yet to see a robotic revolution, but it seems like each passing week brings a new AI tool storming the market and simplifying mundane tasks. Companies naturally jump at the chance to streamline tedium and reduce costs, but that raises a few nagging questions:
How much, if at all, are companies taking security and compliance risks into account?
What even are those risks?
AI has already proliferated across nearly every industry and team, making it a massive struggle for IT and security experts to control. On top of that, bad actors are using AI to enhance their attacks (e.g., AI-generated phishing scams that look authentic).
So let's take a deeper dive into AI's role(s) in cybersecurity and identity governance.
The Bad
A Double-Edged Sword: AI is for Hackers Too!

A crowbar can help you pry open a crate, or it can help you do a smash-and-grab. It's all in how you use it.
According to a report published by Darktrace, "around 78% of CISOs surveyed believe that AI is having an impact on cyber threats today and are moving quickly to protect themselves... but around 45% of them feel they are not ready."
And many governments are already legislating to address these concerns: NIS2 tackles cybersecurity and privacy broadly, and now the EU AI Act will govern AI specifically.
AI is just a tool that can be wielded for good or for bad. Yes, it can shut down anomalous behavior on a dime, but it can also be used to generate official-looking phishing emails, assist social engineering efforts through deepfakes and impersonation, and even find and exploit unpatched vulnerabilities at massive scale.
It's a never-ending game of cat-and-mouse, but we (the cat) need to stay ahead (of the mouse). Recently, we interviewed Rawad El Khoury, Engineering Director for Security at Aircall, and he explained the dichotomy this way:
"Attackers are leveraging AI more and more. As part of your internal framework, you need to validate and be sure that it's not impacting your performance or results, while from an attacker perspective it's trial and error. He tries it, if it doesn't work, he doesn't care. We as security professionals and security leaders need to put more effort into this side in order to get ahead of the attackers, not stay behind them."
No Visibility: Shadow AI is the New Shadow IT
Shadow IT has long been a problem for CISOs because it creates security risks and compliance issues: unapproved systems may store data insecurely and can violate data protection laws. Shadow AI, however, makes that look tame. Because AI adoption has skyrocketed, companies introduce (and build) new AI tools at breakneck speed. These tools host sensitive data and often have access to other company systems via non-human identities (NHIs). Many of them are genuinely useful day-to-day, but just as with shadow IT, security teams can't monitor what they don't know about.
On top of that, NHIs may behave in unpredictable ways, since they are guided by their own internal logic. That makes oversight and control more crucial than ever if you want to avoid opening cans of worms around ethics and automated decision-making.
The Good
Authentication and Enhancing Zero-Trust Systems
Through behavioral analytics, AI can log and categorize human behaviors, converting them into biometric and behavioral data that is used to build a profile for each user. The data analyzed spans everything from typing patterns and mouse movements to how each person interacts with a touchscreen mobile device (preferred orientation, typing vs. swiping, etc.) and typical usage times. Any deviation from that norm is immediately flagged.
Once suspicious activity is flagged, it can automatically trigger multi-factor authentication and notify the proper authority.
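As a minimal sketch of that flow, the toy example below compares a session's behavioral signals against a per-user baseline using z-scores and triggers step-up MFA past a threshold. The signals, the threshold, and the `requires_step_up` helper are all illustrative assumptions, not any particular product's API; real systems use far richer models.

```python
from dataclasses import dataclass

@dataclass
class SessionFeatures:
    """Hypothetical behavioral signals for one session."""
    typing_interval_ms: float  # mean delay between keystrokes
    mouse_speed_px_s: float    # mean pointer speed
    login_hour: int            # 0-23, local time

@dataclass
class UserProfile:
    """Per-user baseline: mean and standard deviation of each signal."""
    mean: SessionFeatures
    std: SessionFeatures

def anomaly_score(profile: UserProfile, session: SessionFeatures) -> float:
    """Average absolute z-score across signals; higher = more unusual."""
    def z(mean: float, std: float, value: float) -> float:
        return abs(value - mean) / std if std else 0.0
    scores = [
        z(profile.mean.typing_interval_ms, profile.std.typing_interval_ms,
          session.typing_interval_ms),
        z(profile.mean.mouse_speed_px_s, profile.std.mouse_speed_px_s,
          session.mouse_speed_px_s),
        z(profile.mean.login_hour, profile.std.login_hour,
          float(session.login_hour)),
    ]
    return sum(scores) / len(scores)

MFA_THRESHOLD = 2.0  # illustrative; tuned per deployment in practice

def requires_step_up(profile: UserProfile, session: SessionFeatures) -> bool:
    """Trigger MFA when the session deviates too far from the baseline."""
    return anomaly_score(profile, session) >= MFA_THRESHOLD
```

A session close to the baseline passes silently; one with unusual typing cadence, pointer speed, and login hour crosses the threshold and forces re-authentication.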
AI Detects. AI Prevents.
If there's one thing AI is good at, it's analyzing large volumes of data in real time to detect patterns and understand behaviors, which extends security well beyond authentication. As you'd expect, that's ideal for cybersecurity. Manual anomaly detection is a slow and painstaking process, but an AI system can detect and flag abnormal patterns, such as unapproved data transfers, as they happen. AI can even draw correlations between low-risk events that may add up to a more serious attack down the line.
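To make the "correlating low-risk events" idea concrete, here is a minimal sketch of a sliding-window correlator: each event is harmless alone, but several distinct ones from the same identity within a short window escalate to an alert. The event names, window size, and threshold are illustrative assumptions.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical event taxonomy; real systems have far richer signals.
LOW_RISK_EVENTS = {"failed_login", "new_device", "off_hours_access", "bulk_download"}
WINDOW = timedelta(hours=24)   # how far back to correlate
ESCALATE_AT = 3                # distinct low-risk events before alerting

class EventCorrelator:
    """Escalates when one identity accumulates several distinct low-risk
    events inside the window: individually benign, suspicious together."""

    def __init__(self) -> None:
        # identity -> deque of (timestamp, event_name), oldest first
        self._events: dict[str, deque] = defaultdict(deque)

    def observe(self, identity: str, event: str, at: datetime) -> bool:
        """Record an event; return True if this identity should be escalated."""
        if event not in LOW_RISK_EVENTS:
            return False
        window = self._events[identity]
        window.append((at, event))
        # Drop events that have aged out of the correlation window.
        while window and at - window[0][0] > WINDOW:
            window.popleft()
        distinct = {name for _, name in window}
        return len(distinct) >= ESCALATE_AT
```

A failed login at 9 a.m., a new device an hour later, and a bulk download after lunch each pass individually, but the third one trips the correlation threshold.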
Enhancing Access Controls
AI can enable pattern- and context-based access privileges and suggest access rights accordingly. This streamlines role-based access control (RBAC) and attribute-based access control (ABAC), enabling a more granular approach to access management.
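The sketch below shows both halves in miniature, under stated assumptions: a toy ABAC rule that gates access on user, resource, and context attributes, and a pattern-based suggester that proposes entitlements held by most peers in the same role. Attribute names, the 80% threshold, and both function signatures are hypothetical; a production system would learn these patterns rather than hard-code them.

```python
from collections import Counter

def abac_allows(user: dict, resource: dict, context: dict) -> bool:
    """Toy ABAC rule: department must own the resource, the device must
    be managed, and access must fall within business hours."""
    return (
        user.get("department") == resource.get("owning_dept")
        and context.get("device_managed", False)
        and 8 <= context.get("hour", -1) < 19
    )

def suggest_access(peers_access: dict[str, list[str]],
                   min_share: float = 0.8) -> list[str]:
    """Suggest entitlements held by at least `min_share` of same-role peers,
    i.e. the 'pattern-based' suggestion, here via simple counting."""
    counts = Counter(app for apps in peers_access.values() for app in apps)
    n = len(peers_access)
    return sorted(app for app, c in counts.items() if c / n >= min_share)
```

A new hire in finance would be denied a marketing document from an unmanaged laptop at midnight, while the suggester proposes only the apps nearly all of their peers already hold.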
Automating Policy Creation & Review
AI tools can draft, update, and help standardize your security policies based on your industry's best practices, new legislation, and even your individual needs. For instance, you could have it:
- Generate policy templates and suggestions
- Improve and tailor existing policies through comparative analysis
- Find gaps between regulatory standards and your current policies
- Highlight vague or inconsistent language within policies that may not align with intent
- Automate reviews of policies likely to be affected by incoming legislation
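The gap-finding step above can be sketched in miniature. The example below flags required clauses whose keywords never appear in a policy document; the clause list and keywords are invented for illustration, and a real tool would use an LLM or NLP pipeline rather than keyword matching.

```python
# Hypothetical regulatory checklist: clause -> keywords that should appear.
REQUIRED_CLAUSES = {
    "incident reporting": ["report", "incident"],
    "access review": ["access", "review", "quarterly"],
    "encryption at rest": ["encrypt", "at rest"],
}

def find_policy_gaps(policy_text: str) -> list[str]:
    """Return required clauses with no keyword coverage in the policy.
    Keyword matching stands in for the semantic analysis an AI tool
    would actually perform."""
    text = policy_text.lower()
    return sorted(
        clause
        for clause, keywords in REQUIRED_CLAUSES.items()
        if not all(keyword in text for keyword in keywords)
    )
```

Fed a policy that covers encryption and quarterly access reviews but never mentions incidents, the checker surfaces "incident reporting" as the gap to close before the next audit.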
Automated Identity Governance is a Piece of Cake
Cakewalk is the new standard in identity governance, designed to get you up-to-speed on state-of-the-art access management tools and cybersecurity requirements.
We help you achieve full visibility over all employee apps, control and restrict access, and auto-remove 100% of seats during offboarding.
How has AI affected your industry? How can we as an InfoSec community stay ahead of cybercriminality? Join the conversation on LinkedIn and tell us your thoughts!