The double-edged reality of agentic AI and cyber security

Artificial Intelligence is rapidly reshaping cyber security, introducing both new risks and opportunities for organisations. Agentic AI systems, capable of orchestrating multiple security tools, provide a faster, more coordinated approach to defence. However, the same technology is also enabling adversaries to attack with unprecedented speed and sophistication. In this exclusive contribution for The Executive Magazine, Professor Simon Parkinson discusses how this growing imbalance places pressure on organisations not only to think about how they adopt AI, but how they govern it and ensure it operates in an environment with good cyber hygiene. Ultimately, success will depend on combining agentic AI with strong security controls.

Simon Parkinson

Professor of Cyber Security at University of Huddersfield


Developments in Artificial Intelligence (AI) technologies have had a profound impact across the business world, and cyber security is certainly no exception. Prior to the release of Large Language Models (LLMs), AI systems were often regarded as ‘narrow’ AI agents: able to perform well-defined tasks, and embedded in security tools such as those providing authentication and monitoring functionality, the core areas where my research lies. Cyber security analysts interact with these tools to configure security controls and monitor for any concerns.

The importance of the human analyst in driving and understanding these tools is firmly established; however, the growing complexity of digital systems (new products and new vulnerabilities) means that analysts’ time and expertise are at a premium when maintaining a strong security posture. I have spent much of the past decade examining how AI reshapes cyber security, and the current shift towards agentic systems represents a significant turning point.

With agentic AI, where multiple security tools work together, orchestrated with LLM support, higher-level cyber security operations become possible. ‘Higher-level’ here means that the cyber analyst can orchestrate security operations spanning multiple security controls without engaging as heavily as before with narrow AI tools. In other words, agents can work together to understand and configure a system without the analyst handling the low-level interaction. In my experience, this significantly enhances the capability of cyber analysts and will enhance system security.

I would argue that this capability will define future security outcomes for many organisations. However, it also introduces new risks. While it reduces pressure on human analysts and improves the ability to identify vulnerabilities, adversarial use often advances at a faster pace. This stems from the dual-use nature of AI, where the same tools designed to strengthen systems can be exploited to identify and attack weaknesses. This article will examine both sides, beginning with defensive applications, before considering adversarial innovation and the continuing importance of cyber security fundamentals.

What this means, from my perspective, is that cyber security is becoming less about individual tools and more about how effectively organisations can orchestrate them at scale.

Agentic AI in defence

On the defensive side, agentic AI can provide significant benefit in terms of response speed and consistency. I have observed that agentic AI systems are increasingly able to coordinate multi-stage defensive workflows across distributed security tools, enabling unified reasoning and response across network, endpoint, and identity layers.

As an example, with intrusion detection systems, agents can support the identification of alerts from heterogeneous security information sources, before deliberating over escalation and remedial actions. This can change the human analyst’s workflow from manually inspecting different dashboards and analytical outputs to validating and authorising corrective actions where agents have coordinated the analysis and interpretation across multiple security systems. 
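As a minimal sketch of the correlation-then-escalation pattern described above (every name, source label, and threshold here is an illustrative assumption, not any specific product), the control flow an orchestrating agent might follow could look like:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # heterogeneous origins, e.g. "network-ids", "endpoint-edr"
    asset: str       # the affected system
    severity: int    # 1 (low) .. 5 (critical)

def correlate(alerts):
    """Group alerts by affected asset, standing in for an agent that
    fuses heterogeneous security information sources."""
    grouped = {}
    for a in alerts:
        grouped.setdefault(a.asset, []).append(a)
    return grouped

def propose_action(asset, alerts):
    """Deliberate over escalation: any critical alert, or independent
    corroboration from two or more sources, is escalated for the human
    analyst to validate and authorise (illustrative policy)."""
    sources = {a.source for a in alerts}
    max_severity = max(a.severity for a in alerts)
    if max_severity >= 5 or len(sources) >= 2:
        return ("escalate", asset)
    return ("log", asset)

alerts = [
    Alert("network-ids", "srv-01", 3),
    Alert("endpoint-edr", "srv-01", 4),
    Alert("identity", "laptop-07", 2),
]
actions = [propose_action(asset, grp) for asset, grp in correlate(alerts).items()]
```

The point of the sketch is the division of labour: the agents aggregate and interpret, while the escalation path preserves the analyst as the authorising step rather than the one inspecting every dashboard.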

This can greatly improve the handling of both routine security events, where a known attack is taking place, and novel attacks where patterns are unclear. Agentic AI systems, through learning from prior incidents and analyst feedback, can refine their detection approaches and remediation actions, improving resilience against adaptive adversaries. In addition to the motivating factor of these systems helping to address the shortage and cost of security analysts, they are also consistent and may reduce some human errors.

I have observed that agentic AI systems are increasingly used for security tasks beyond the configuration and monitoring of security systems. The ability to review software, discover vulnerabilities, and suggest mitigations can help strengthen digital systems, especially those classed as Operational Technology (OT) and regarded as essential to critical industries. This includes software controlling transport, energy, and water infrastructure, among others. OT is often required to comply with safety and security standards, and this presents a significant challenge to security professionals.

Examining software artefacts and comparing them against known vulnerabilities, international standards, and best practices is incredibly time-consuming and knowledge-intensive, which results in increased costs and slow approval processes. In this context, agentic AI systems can be used to perform diverse tasks with a common goal. This could, for example, include reviewing source code, retrieving vulnerability information from threat intelligence sources, analysing architecture diagrams, and performing reasoning to identify vulnerabilities and suggest mitigation action. These agents would be orchestrated by an LLM.
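To make the orchestration idea concrete, here is a deliberately simplified sketch in which plain functions stand in for LLM-backed agents; the agent names, the weak-pattern check, and the vulnerability feed contents are all hypothetical examples, not a real tool or dataset:

```python
def code_review_agent(artefact):
    """Stand-in for an agent scanning source code for weak patterns."""
    findings = []
    if "md5(" in artefact["source"]:  # MD5 is a well-known weak hash
        findings.append("weak hash function (MD5) in use")
    return findings

def threat_intel_agent(artefact):
    """Stand-in for an agent querying a threat intelligence feed for
    the artefact's declared dependencies."""
    known_vulnerable = {"libfoo==1.2"}  # illustrative feed snapshot
    return [f"vulnerable dependency: {dep}"
            for dep in artefact["dependencies"] if dep in known_vulnerable]

def orchestrate(artefact, agents):
    """The orchestrator reduced to its control flow: dispatch the
    artefact to each specialist agent, merge their findings, and flag
    whether mitigation is required."""
    report = {"findings": [], "mitigation_required": False}
    for agent in agents:
        report["findings"].extend(agent(artefact))
    report["mitigation_required"] = bool(report["findings"])
    return report

artefact = {
    "source": "digest = md5(data)",
    "dependencies": ["libfoo==1.2", "libbar==2.0"],
}
report = orchestrate(artefact, [code_review_agent, threat_intel_agent])
```

In a real agentic system the orchestrator would be an LLM reasoning over richer inputs (architecture diagrams, standards documents), but the shape is the same: diverse specialist tasks funnelled into a common goal.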

Agentic AI systems are not without their faults. From the analyst’s point of view, trust in and over-reliance on automation may result in incorrect decisions being made, or in mistakes by the AI system going unspotted. For example, hallucination may result in poor reasoning and proposed remedial actions that would be stopped if spotted. However, spotting errors within the system may be challenging, as it can be difficult to trace decisions in multi-agent systems.

In practice, I find that the constraint is rarely access to AI capability, but the ability to integrate these systems coherently into existing security operations. In my view, organisations that succeed will be those that treat agentic AI as a capability to work alongside cyber analysts. 

Rapidly evolving adversaries

The asymmetry between defence and offence makes it easier for adversaries to innovate quickly. Adversaries only need one success, whereas defenders must secure everything. Adversaries also operate outside of legal, ethical, and regulatory frameworks, which gives them the flexibility to experiment with and deploy new techniques far more quickly. They also have a higher incentive for creativity.

Organisations, on the other hand, are often slow to adopt new technologies as they wait for higher confidence to emerge over their capabilities and how they fit into the organisation. This difference results in the adversary being able to leverage a technology advantage, and this is certainly true of AI.

AI is being widely used in offensive cyber security. One of the more relatable and rapidly developing attacks that many are familiar with is phishing. The use of generative AI to produce credible text, images, and audio has made it easier for an attacker to generate credible and better-informed emails. Attacks extend beyond the social engineering of users to include accelerated reconnaissance and vulnerability identification.

AI systems employed in a defensive capacity might themselves become an attack surface. Agentic AI systems connect models, knowledge bases, and software tools, which together can deliver productivity gains. These integrations create new pathways for an attacker to influence model behaviour or attempt to extract sensitive data.

Another concern is supply chain exposure. As is commonplace in digital solutions, there is a reliance on third-party technologies, such as software libraries, and it can be incredibly problematic if a vulnerability is discovered in a third-party dependency. AI is no exception: third-party models will be relied upon for AI capabilities. Where AI capabilities are embedded into core services, the attack surface expands beyond traditional endpoints and networks to include prompts, retrieval logic, and agent permissions, which require their own assurance and monitoring.

I have observed a growing imbalance, where the speed of attack development increasingly outpaces the rate at which organisations can adapt their defences. I see this asymmetry as the defining pressure shaping cyber security over the next five years.

The basics matter more than ever

Even with agentic AI systems in cyber security, the fundamentals remain imperative, because attacks will occur. The use of agentic systems for offensive cyber security is likely to remain ahead of defensive systems due to the asymmetry mentioned earlier, and organisations without advanced defensive systems will continue to rely on well-established cyber security controls to eliminate easy vulnerabilities and maintain good cyber security hygiene.

Some of the most important security approaches continue to be diligent and robust security configurations, principled access control, appropriate monitoring, and rehearsed incident response. Agentic AI can enhance these practices and reduce manual effort, but it cannot compensate for weak configurations, insufficient monitoring, or untested recovery procedures. Furthermore, if step-by-step responses to security attacks (widely named playbooks) are out of date or incomplete, automation can accelerate an incorrect remedial response to an attack that a human analyst might have caught earlier.

Across organisations I have worked with, this shifts the priority from adopting the latest technology to ensuring that existing controls are consistently applied and properly governed.

Getting the basics right also applies to AI-specific controls. Organisations need to think carefully about what data an agent can access, what security tools it can invoke, how permissions are reviewed, and how changes are tested and monitored. Prompts, retrieval pipelines, model updates, and tool integrations should be treated as governed components with versioning and auditability. This is necessary to maintain oversight of what the agents are doing and to minimise risk.
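A minimal sketch of this governance idea, treating agent permissions as a versioned, audited allow-list (the agent names, tool names, and policy version are all invented for illustration):

```python
# Illustrative policy: which tools each agent may invoke, plus a
# version tag so changes to the policy itself can be tracked.
ALLOWED_TOOLS = {
    "triage-agent": {"read_alerts", "query_threat_intel"},
    "remediation-agent": {"read_alerts", "isolate_host"},
}
POLICY_VERSION = "2024-06-01"  # hypothetical version identifier

audit_log = []

def invoke_tool(agent, tool):
    """Permit the invocation only if the agent's policy allows it;
    either way, record the decision for later review."""
    allowed = tool in ALLOWED_TOOLS.get(agent, set())
    audit_log.append({"policy": POLICY_VERSION, "agent": agent,
                      "tool": tool, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent} may not invoke {tool}")
    return f"{tool} executed"

invoke_tool("triage-agent", "read_alerts")      # within the allow-list
try:
    invoke_tool("triage-agent", "isolate_host")  # outside it: refused
except PermissionError:
    pass
```

The design choice worth noting is that refusals are logged as well as successes, so the audit trail shows what the agents attempted, not merely what they achieved.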

Organisations that adopt agentic AI without strengthening core cyber security controls risk amplifying existing weaknesses rather than addressing them. The strategic advantage lies not in wholesale automation, but in the disciplined integration of these systems alongside robust governance, clear oversight, and well-established security fundamentals.

In my view, the implication is clear: competitive advantage in cyber security will depend less on adopting AI first, and more on deploying it responsibly and effectively within a well-managed system.


About the author: Simon Parkinson is a Professor of Cyber Security at the University of Huddersfield, UK. He is also a member of both the UK Government’s Cyber Security Advisory Board and the Department for Science, Innovation and Technology’s College of Experts.

Simon’s research expertise and interests span the intersection of cyber security and artificial intelligence, with a particular focus on identity and access management. Over the last decade, he has led a range of research and knowledge exchange projects to successful completion, funded by organisations such as the UK’s Engineering and Physical Sciences Research Council (EPSRC), the Defence Science and Technology Laboratory (DSTL), and Innovate UK. Through his research, he has developed new methods for detecting security weaknesses in access control systems, identifying intrusions in enterprise IT systems, and uncovering unauthorised access using novel biometric approaches, ranging from keystroke dynamics to Wi-Fi signal distortion.
