Securing the Future in the Age of Generative and Agentic AI

July 18, 2025

The advent of artificial intelligence (AI) has driven rapid, exponential growth across all industries. However, the very qualities that make AI indispensable today are also reshaping security dynamics: 73% of enterprises report security breaches, at an average cost of $4.8 million per incident, highlighting a disconnect between enterprise-scale AI adoption and corresponding security investment.

The AI revolution demands not just rethinking security but embracing agility and rapid adaptation to keep pace with evolving threats. Managing AI safely and effectively is essential to adopting these technologies securely across the organization. To do so, it’s important to understand the key challenges posed by generative AI, large language models (LLMs), and, most critically, agentic AI.

The Expanding Threat Landscape of AI-Driven Attacks

Some well-known cybersecurity threats that have been weaponized through AI include identity theft, which can combine real personal data with fabricated details, and deepfakes, which use synthetic images, video, and audio to impersonate individuals. AI makes these attacks more realistic, faster to execute, and ultimately more effective, extending their impact beyond financial fraud to reputational and operational damage.

As offensive security practices evolve, Red Teams are beginning to explore AI-driven attack vectors that go beyond traditional methods. Model attacks—where adversaries manipulate machine learning systems to produce incorrect or harmful outputs—and AI bias, which can lead to unfair or discriminatory outcomes, are becoming increasingly relevant. Techniques like overwhelming human-in-the-loop (HITL) processes expose how attackers can exploit operational dependencies on human oversight. These developments illustrate the rapidly shifting threat landscape and underscore the need for security strategies that evolve alongside AI capabilities.

Managing GenAI and LLM Security: Emerging Risks and Controls

Securing generative AI and large language models (LLMs) requires a multi-layered approach that spans the full AI lifecycle: from the attack surface where these services are deployed, through the integration of security controls during development via a Secure Software Development Life Cycle (SSDLC), to ongoing monitoring for potential incidents. While these measures align with standard security practices, the unique nature of GenAI technologies introduces additional risks and control requirements that organizations must proactively address.

Some of the emerging risks associated with GenAI and LLMs include prompt injection, where malicious prompts are crafted to manipulate model behavior or access restricted data; data breaches, often resulting from unintended exposure of sensitive information; and the generation of misinformation or disinformation, as LLMs may produce content that is misleading yet appears credible. Other significant threats include model theft, where adversaries replicate or exfiltrate proprietary models, and adversarial attacks, which involve feeding the model crafted inputs designed to trigger incorrect or harmful outputs. Notably, Gartner forecasts that by 2026, 40% of data breaches will be directly linked to the lack of security in GenAI systems.
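To make the prompt injection risk concrete, the sketch below shows a minimal input guardrail that screens user-supplied text for common injection phrases before it is concatenated into an LLM prompt. The pattern list, function names, and blocking policy are illustrative assumptions rather than a production defense; in practice such checks are layered with model-side safeguards, least-privilege tool access, and output filtering.

```python
import re

# Illustrative (assumed) patterns often associated with prompt injection attempts.
# A real guardrail would use a maintained ruleset and/or a classifier, not a short list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the )?(system|previous) prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
    r"you are now (in )?developer mode",
]

def screen_user_input(user_text: str) -> tuple[bool, list[str]]:
    """Return (is_suspicious, matched_patterns) for a piece of user input."""
    matches = [p for p in SUSPICIOUS_PATTERNS
               if re.search(p, user_text, flags=re.IGNORECASE)]
    return bool(matches), matches

def build_prompt(system_prompt: str, user_text: str) -> str:
    """Block or flag suspicious input before it is concatenated into the prompt."""
    suspicious, matched = screen_user_input(user_text)
    if suspicious:
        # Depending on policy: block, sanitize, or route to human review.
        raise ValueError(f"Potential prompt injection detected: {matched}")
    return f"{system_prompt}\n\nUser: {user_text}"
```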

These realities underscore the urgent need for robust governance, well-defined policies, and embedded security practices within organizations adopting generative AI. For instance, attacks such as prompt injection not only compromise the integrity of responses but may also result in outputs that perpetuate harmful stereotypes, offensive language, or discriminatory content, highlighting the importance of ethical and safe model behavior alongside technical safeguards.

Agentic AI: Designing for Safety in Autonomous Systems

Agentic AI introduces a distinct set of security risks due to its ability to interact with systems, process data, and trigger actions across complex environments. These agents often operate with elevated privileges, which can be exploited to bypass controls, manipulate systems, or access restricted data.

Emerging threats include decision-making vulnerabilities, where flawed inputs or logic lead agents to take incorrect actions; memory poisoning, where attackers insert misleading or false information into the agent’s memory or training data; and communication poisoning, which targets the integrity of information exchanged between agents or components. A particularly serious risk is the creation of rogue agents—entities that behave outside defined operational boundaries, potentially disrupting workflows or compromising security.
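As one illustration of defending against communication poisoning, the sketch below signs inter-agent messages with an HMAC so that forged or tampered messages can be rejected before an agent acts on them. The message format, key handling, and function names are assumptions for illustration; a production system would add key rotation, replay protection, and authenticated transport such as mutual TLS.

```python
import hashlib
import hmac
import json

# Assumed shared secret; in practice this would come from a secrets manager
# and be rotated regularly, ideally with one key per agent pair or channel.
SHARED_KEY = b"replace-with-managed-secret"

def sign_message(payload: dict) -> dict:
    """Attach an HMAC-SHA256 signature to an inter-agent message."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_message(message: dict) -> dict:
    """Reject messages whose content does not match the attached signature."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["signature"]):
        raise ValueError("Message integrity check failed: possible communication poisoning")
    return message["payload"]
```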

To address these threats, organizations must integrate security at every level of the agentic infrastructure. This begins with a clear threat model that identifies system components, attack surfaces, and corresponding safeguards. Securing data at rest, in transit, and in use, along with encrypting communications between components, is essential to maintaining integrity and confidentiality. Enforcing role-based access control (RBAC) helps ensure agents can only access what is necessary for their intended function.
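A minimal sketch of what role-based access control can look like for agent tool use follows; the roles, tool names, and policy structure are hypothetical and would be derived from each organization’s own threat model.

```python
from dataclasses import dataclass

# Hypothetical role-to-tool policy; real deployments would load this from
# centrally managed configuration and audit every authorization decision.
ROLE_PERMISSIONS = {
    "report_agent": {"read_documents", "summarize"},
    "ops_agent": {"read_documents", "create_ticket"},
}

@dataclass
class Agent:
    name: str
    role: str

def authorize_tool_call(agent: Agent, tool: str) -> None:
    """Enforce least privilege: an agent may only invoke tools granted to its role."""
    allowed = ROLE_PERMISSIONS.get(agent.role, set())
    if tool not in allowed:
        raise PermissionError(f"{agent.name} ({agent.role}) is not allowed to call {tool}")

# Example: a reporting agent may summarize documents but not open tickets.
authorize_tool_call(Agent("daily-digest", "report_agent"), "summarize")       # permitted
# authorize_tool_call(Agent("daily-digest", "report_agent"), "create_ticket") # raises PermissionError
```

Keeping the policy outside the agent’s prompt or memory matters here: an agent that is manipulated into requesting a forbidden tool is still stopped by a check it cannot rewrite.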

Ongoing monitoring, validation, and testing are critical to maintaining a secure environment. By embedding security throughout the architecture—across communication layers, decision processes, and memory handling—organizations can reduce exposure and support the safe deployment of Agentic AI.
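As a simple illustration of the monitoring piece, the sketch below wraps agent actions in an audit log entry so that decisions can be reviewed and anomalies investigated after the fact. The logger setup, field names, and example function are assumptions; in practice these events would feed a SIEM or a dedicated observability pipeline.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

def audited(action_name: str):
    """Decorator that records every agent action, its keyword arguments, and its outcome."""
    def wrapper(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            entry = {"action": action_name, "timestamp": time.time(), "kwargs": kwargs}
            try:
                result = func(*args, **kwargs)
                entry["status"] = "success"
                return result
            except Exception as exc:
                entry["status"] = "error"
                entry["error"] = str(exc)
                raise
            finally:
                audit_log.info(json.dumps(entry))
        return inner
    return wrapper

@audited("fetch_customer_record")
def fetch_customer_record(customer_id: str) -> dict:
    # Placeholder for a real data-access call.
    return {"customer_id": customer_id}

fetch_customer_record(customer_id="C-1001")  # emits a structured audit entry
```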

Building a Secure and Responsible Future for AI

As AI technologies continue to evolve, organizations must take proactive steps to ensure their secure and responsible integration. Security leaders are increasingly focused on how to protect their companies from emerging AI threats while leveraging these tools to enhance business performance. A comprehensive approach requires establishing clear security standards, safeguarding organizational assets, and equipping teams with the necessary training to recognize and respond to AI-driven risks. This strategy must address security across People, Processes, and Technology (PPT) to ensure resilience at every level.

At the same time, effective AI governance is essential to guide ethical, safe, and compliant use of technologies such as generative AI, LLMs, and AI agents. Organizations should implement internal policies, standards, and procedures that align with evolving global regulations, including the EU AI Act. This legislation introduces a risk-based classification system, defines compliance obligations, and promotes responsible innovation by balancing safety, transparency, and fundamental rights. Looking ahead, the secure adoption of AI depends not only on robust technical controls but also on clear governance frameworks that define how these systems are developed, deployed, and monitored within the enterprise.

At Globant, we focus on developing AI cybersecurity solutions that help organizations mitigate attacks and strengthen their response capabilities. As AI adoption accelerates, our goal is to enable responsible integration of these technologies by embedding clear safeguards, informed governance, and continuous risk monitoring. Through our Cybersecurity Studio, we offer a practical, end-to-end approach designed to evolve alongside emerging threats, ensuring businesses can innovate securely while remaining resilient and compliant in an increasingly complex digital landscape.
