
Artificial intelligence has entered a new and transformative phase with the rise of agentic AI—systems capable of acting autonomously, making decisions, and executing multi-step tasks with minimal human oversight. Unlike traditional AI tools that respond to direct prompts, agentic AI operates with a degree of independence, often pursuing goals, adapting to changing conditions, and interacting with other systems in complex ways.

While the technological promise is immense—ranging from productivity gains to entirely new forms of digital collaboration—there is an increasingly urgent need to examine the human-centric security challenges that accompany this evolution. The risks are no longer confined to system vulnerabilities or data breaches; instead, they extend deeply into human behaviour, trust, cognition, and organisational culture.

This article explores the security implications of agentic AI through a human lens, highlighting the emerging threats, psychological dynamics, and governance gaps that organisations and individuals must address.

Understanding Agentic AI

Agentic AI refers to systems that can independently plan, decide, and act to achieve objectives. These systems often:

  • Maintain memory over time
  • Interact with external tools and APIs
  • Execute sequences of tasks autonomously
  • Adapt strategies based on feedback

Examples include autonomous customer service agents, AI-powered research assistants, automated financial trading bots, and digital “co-workers” capable of managing workflows.
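
To make this loop concrete, here is a minimal sketch of an agent cycle in Python: plan a step, act through a tool, record the observation in memory, and repeat. The model call is stubbed out, and the tool names (search_web, send_email) are hypothetical rather than references to any particular framework.

    from typing import Callable

    def llm_next_action(goal: str, memory: list[str]) -> dict:
        """Stub for the language-model call that chooses the next action."""
        # A real system would prompt a model with the goal and memory here.
        return {"tool": "finish", "input": goal}

    # Hypothetical tools the agent may invoke.
    TOOLS: dict[str, Callable[[str], str]] = {
        "search_web": lambda query: f"results for {query!r}",
        "send_email": lambda body: "sent",
    }

    def run_agent(goal: str, max_steps: int = 5) -> list[str]:
        memory: list[str] = []                               # persists across steps
        for _ in range(max_steps):
            action = llm_next_action(goal, memory)           # plan
            if action["tool"] == "finish":
                break
            result = TOOLS[action["tool"]](action["input"])  # act
            memory.append(f"{action['tool']} -> {result}")   # observe and remember
        return memory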

The shift from reactive AI to proactive agents fundamentally changes the threat landscape. Humans are no longer simply users—they become collaborators, supervisors, and sometimes even targets of AI-driven manipulation.

The Human Attack Surface Expands

Traditionally, cybersecurity has focused on protecting systems, networks, and data. However, with agentic AI, the human attack surface grows significantly.

Social Engineering at Scale

Agentic AI can generate highly personalised and context-aware communications. This makes phishing and social engineering attacks far more convincing. Instead of generic scam emails, attackers can deploy AI agents that:

  • Analyse a target’s online presence
  • Mimic writing styles and tone
  • Engage in prolonged, realistic conversations

Humans are naturally inclined to trust communication that feels authentic. Agentic AI exploits this tendency, blurring the line between genuine interaction and malicious intent.

Emotional Manipulation

Agentic systems can be trained to recognise and respond to emotional cues. In a security context, this opens the door to emotionally intelligent attacks. For example:

  • An AI posing as a colleague might express urgency or distress
  • A fake support agent could build rapport before requesting sensitive information
  • Persistent interaction may lead to psychological influence over time

These tactics target human empathy, not just logic.

Overreliance and Automation Bias

One of the most significant human-centric risks is automation bias—the tendency to trust and defer to automated systems, even when they are incorrect.

The Illusion of Competence

Agentic AI often presents outputs with confidence and coherence, which can create an illusion of reliability. Users may:

  • Accept decisions without verification
  • Delegate critical tasks without oversight
  • Ignore warning signs due to perceived authority

This becomes particularly dangerous in high-stakes environments such as healthcare, finance, and critical infrastructure.

Reduced Human Vigilance

As AI agents take on more responsibilities, humans may become less engaged and less attentive. This “out-of-the-loop” problem reduces the ability to detect anomalies or intervene when something goes wrong.

In security terms, complacency is a vulnerability.

Insider Threats Reimagined

Agentic AI complicates the concept of insider threats. Traditionally, insiders are employees or trusted individuals with access to systems. With AI agents:

  • The definition of “insider” becomes blurred
  • AI tools may act with privileged access
  • Misconfigured or compromised agents can behave unpredictably

Delegated Authority Risks

When humans grant AI agents permission to access systems, send communications, or execute transactions, they effectively extend their authority. If an agent is manipulated or behaves incorrectly, the consequences can be severe.

For instance:

  • An AI assistant with email access could be exploited to distribute malware
  • A financial agent might execute fraudulent transactions
  • A project management agent could leak sensitive data

The risk is not just malicious intent—it is also misalignment.
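
One way to bound delegated authority is to make it explicit, scoped, and time-limited, so that an agent's borrowed permissions expire rather than persisting indefinitely. The sketch below is illustrative only: the field names are assumptions, and a real system would cryptographically sign such tokens rather than trust a plain dictionary.

    import time

    # Illustrative delegation token: authority bounded in scope and time.
    def issue_token(agent_id: str, scopes: set[str], ttl_seconds: int) -> dict:
        return {
            "agent": agent_id,
            "scopes": scopes,
            "expires_at": time.time() + ttl_seconds,
        }

    def token_allows(token: dict, action: str) -> bool:
        # Deny once the token has expired or if the action is out of scope.
        return time.time() < token["expires_at"] and action in token["scopes"]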

Prompt Injection and Human Susceptibility

Prompt injection attacks exploit the fact that AI systems struggle to separate trusted instructions from untrusted data in the content they process. While often discussed as a technical vulnerability, they also have a strong human dimension.

Indirect Manipulation

Attackers can craft inputs that influence an AI agent’s behaviour, even when the human user is unaware. For example:

  • Malicious content embedded in documents
  • Hidden instructions on web pages
  • Subtle cues in data sources

Humans may unknowingly expose AI agents to these inputs, effectively becoming unwitting intermediaries.
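
One simple, partial defence is to treat everything an agent retrieves as untrusted data and screen it before it enters the agent's context. The sketch below is a heuristic with obvious limits: the patterns are illustrative, and determined attackers can evade keyword matching, so this complements rather than replaces architectural defences.

    import re

    # Heuristic screen for instruction-like text in untrusted content.
    # Illustrative patterns only; real injections can evade simple matching.
    SUSPICIOUS = [
        r"ignore (all |any )?previous instructions",
        r"you are now",
        r"system prompt",
    ]

    def quarantine_if_suspicious(untrusted_text: str) -> str:
        """Flag likely injections for human review; otherwise wrap the
        content so downstream prompts treat it as data, not instructions."""
        for pattern in SUSPICIOUS:
            if re.search(pattern, untrusted_text, re.IGNORECASE):
                raise ValueError("possible prompt injection; escalate to a human")
        return f"<untrusted_data>\n{untrusted_text}\n</untrusted_data>"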

Trusting the Output

Even when an AI agent is compromised via prompt injection, users may continue to trust its outputs. This creates a feedback loop where human confidence amplifies the impact of the attack.

Identity, Authenticity, and Deepfakes

Agentic AI accelerates the erosion of trust in digital identity.

Hyper-Realistic Impersonation

AI systems can now generate highly convincing text, voice, and even video. Combined with agentic behaviour, this enables:

  • Persistent impersonation of individuals
  • Real-time adaptive conversations
  • Multi-channel deception campaigns

Humans rely heavily on cues such as tone, language, and familiarity to establish trust. Agentic AI can replicate these cues with alarming accuracy.

The Collapse of Verification Norms

Traditional verification methods, such as recognising a voice or writing style, become unreliable. This forces organisations to rethink authentication and trust frameworks, placing an additional cognitive burden on individuals.

Cognitive Overload and Decision Fatigue

As AI agents proliferate, humans are required to manage, supervise, and interpret their outputs. This leads to cognitive overload.

Too Many Signals

Users may be confronted with:

  • Multiple AI recommendations
  • Continuous alerts and updates
  • Complex decision pathways

This abundance of information can lead to decision fatigue, increasing the likelihood of errors.

Security Implications

When overwhelmed, individuals are more likely to:

  • Ignore security warnings
  • Approve requests without scrutiny
  • Default to AI recommendations

In essence, the human becomes the weakest link—not due to lack of skill, but due to systemic overload.

Ethical Ambiguity and Responsibility Gaps

Agentic AI introduces ambiguity around responsibility and accountability.

Who Is Responsible?

When an AI agent makes a harmful decision, it is not always clear who is accountable:

  • The developer who designed the system?
  • The organisation that deployed it?
  • The user who authorised it?

This uncertainty can delay response times and complicate incident management.

Moral Disengagement

Humans may distance themselves from decisions made by AI agents, leading to reduced ethical scrutiny. This phenomenon, sometimes referred to as “moral crumple zones”, places humans in difficult positions where they are blamed for outcomes they did not fully control.

Cultural and Organisational Challenges

The integration of agentic AI requires shifts in organisational culture.

Trust Calibration

Organisations must strike a balance between:

  • Encouraging AI adoption
  • Maintaining healthy scepticism

Overtrust leads to vulnerability, while undertrust limits the benefits of AI.

Training and Awareness

Traditional security training is insufficient. Employees need to understand:

  • How agentic AI behaves
  • What risks it introduces
  • How to interact with it safely

This includes recognising manipulation, verifying outputs, and maintaining oversight.

Mitigation Strategies

Addressing human-centric security challenges requires a multi-layered approach.

Human-in-the-Loop Design

AI systems should be designed to keep humans meaningfully involved, particularly in critical decisions. This includes:

  • Clear escalation pathways
  • Transparency in decision-making
  • Opportunities for intervention
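
As a sketch of what these intervention points can look like in code, the wrapper below blocks high-risk actions until a human approves them. The risk tiers and the console prompt are assumptions chosen for brevity; a production system might page an operator or open a ticket instead.

    # Illustrative human approval gate for high-risk agent actions.
    HIGH_RISK = {"send_email", "execute_payment", "delete_records"}

    def gated_execute(tool_name: str, payload: str, execute) -> str:
        if tool_name in HIGH_RISK:
            # Escalation pathway: a human must explicitly approve.
            answer = input(f"Agent requests {tool_name}({payload!r}). Approve? [y/N] ")
            if answer.strip().lower() != "y":
                return "blocked: human declined"
        return execute(payload)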

Robust Access Controls

Limit the permissions granted to AI agents. Apply the principle of least privilege to reduce potential damage.
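
A minimal sketch of least privilege for agents follows: each agent is issued an explicit allowlist of tools, and everything else is denied by default. The agent and scope names are hypothetical.

    # Deny-by-default authorisation: agents use only allowlisted tools.
    AGENT_SCOPES = {
        "support_bot": {"read_faq", "draft_reply"},   # hypothetical scopes
        "finance_bot": {"read_ledger"},               # read-only by design
    }

    def authorise(agent_id: str, tool_name: str) -> None:
        allowed = AGENT_SCOPES.get(agent_id, set())
        if tool_name not in allowed:
            raise PermissionError(f"{agent_id} may not use {tool_name}")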

Continuous Monitoring

Monitor both AI behaviour and human interaction patterns to detect anomalies early.
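
One monitoring signal, sketched below, is to alert when an agent's mix of actions drifts sharply from its recent baseline. The distance measure and the 0.5 threshold are placeholders, not recommended values.

    from collections import Counter

    def drift_score(baseline: Counter, recent: Counter) -> float:
        """L1 distance between two action-frequency distributions."""
        actions = set(baseline) | set(recent)
        b_total = sum(baseline.values()) or 1
        r_total = sum(recent.values()) or 1
        return sum(abs(baseline[a] / b_total - recent[a] / r_total)
                   for a in actions)

    def should_alert(baseline: Counter, recent: Counter) -> bool:
        return drift_score(baseline, recent) > 0.5   # placeholder threshold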

Psychological Resilience

Invest in training that enhances critical thinking, scepticism, and awareness of manipulation tactics.

Authentication Reinvention

Adopt stronger verification methods, such as multi-factor authentication and cryptographic identity systems, to counter impersonation risks.
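
To illustrate cryptographic verification of a message's origin, the sketch below uses Python's standard hmac module with a shared secret. This is a simplified stand-in: production identity systems would typically use asymmetric signatures, and key distribution is out of scope here.

    import hashlib
    import hmac

    def sign(key: bytes, message: bytes) -> str:
        return hmac.new(key, message, hashlib.sha256).hexdigest()

    def verify(key: bytes, message: bytes, tag: str) -> bool:
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(sign(key, message), tag)

    key = b"example-shared-secret"                     # illustrative only
    tag = sign(key, b"wire 1000 EUR to account 42")
    assert verify(key, b"wire 1000 EUR to account 42", tag)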

The Future Outlook

Agentic AI is not a temporary trend—it represents a fundamental shift in how humans interact with technology. As these systems become more capable, the human-centric challenges will only intensify.

The key question is not whether agentic AI will introduce new risks—it already has. The real challenge lies in how effectively we adapt our behaviours, institutions, and security models to address them.

Conclusion

The rise of agentic AI reshapes the cybersecurity landscape by placing humans at the centre of risk. From social engineering and automation bias to identity erosion and cognitive overload, the challenges are deeply intertwined with human psychology and behaviour.

Addressing these issues requires more than technical solutions. It demands a holistic approach that considers human factors, organisational culture, and ethical responsibility.

In the age of agentic AI, security is no longer just about protecting systems—it is about understanding and safeguarding the human element within increasingly autonomous digital ecosystems.
