What is Agentic AI?
Definition, Case Studies, and Risks

AI is evolving beyond static automation and reactive models. The next wave, agentic AI, introduces autonomous systems capable of making decisions and adapting to dynamic environments without human intervention.

Agentic AI can execute complex, multi-step workflows, but its use poses serious risks to security and data privacy. How can organizations safely address these challenges while still extracting the full value of agentic AI?

In this comprehensive resource, we’ll cover agentic AI and its use cases across different industries like healthcare and finance. We’ll also look at the risks of building and deploying AI agents and ways to address them.

What is Agentic AI?

Agentic AI refers to AI-driven autonomous systems that execute complex, multi-step processes with minimal human intervention. Unlike traditional AI models that require explicit prompts, agentic AI operates continuously, adapting to real-time data, making independent decisions, and optimizing workflows based on predefined objectives.

Three core features distinguish agentic AI from traditional AI systems:

  • Autonomy: It executes repetitive tasks end-to-end and solves complex workflows without human involvement.
  • Adaptability: It continuously learns from interactions and refines decision-making accordingly.
  • Goal orientation: It operates with a defined objective, optimizing for efficiency and accuracy.

Traditional AI models like ChatGPT rely on users prompting large language models (LLMs) to generate novel outputs, but such models are rapidly evolving. Gartner predicts that one-third of interactions with gen AI will use autonomous agents for task completion by 2028. Understanding how agentic AI works is key for organizations to leverage its capabilities effectively.

How does Agentic AI work?

Agentic AI combines advanced technologies and decision-making algorithms with goal-oriented behavior to operate without human intervention.

Agentic AI Four-Step Process

Diagram showing how agentic AI works, step by step

These systems follow a four-step process to solve multi-step problems:

  1. Perceive: Agentic AI systems gather data from various sources, such as databases, sensors, and Internet of Things (IoT) devices. They identify patterns, extract insights, and recognize objects to understand their environment.
  2. Reason: LLMs are key components of agentic AI systems, acting as a reasoning engine—a system designed to mimic human-like decision-making capabilities. They “understand” customer queries and generate responses for tasks like content creation.
  3. Act: Agentic AI systems connect to external systems via APIs and execute tasks based on formulated plans. Guardrails place boundaries on what AI-powered agents can and cannot do. An example is requiring human interaction for loan approvals exceeding certain amounts.
  4. Learn: Agentic AI improves through a continuous feedback loop. As the system collects and processes more data, it refines its decision-making capabilities. For example, an AI agent can analyze outcomes from customer interactions and improve how it handles similar issues in the future.
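The four-step loop above can be sketched in a few lines of code. This is a minimal illustration, not a real framework: the `Agent` class, its method names, and the two-actions-per-cycle guardrail are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # feedback loop for the "learn" step

    def perceive(self, sources):
        """Gather raw observations from data sources (databases, sensors, etc.)."""
        return [obs for src in sources for obs in src]

    def reason(self, observations):
        """Stand-in for the LLM reasoning step: map observations to planned actions."""
        return [f"handle:{obs}" for obs in observations]

    def act(self, plan, max_actions=2):
        """Execute the plan within a guardrail (here, a cap on actions per cycle)."""
        executed = plan[:max_actions]   # within the guardrail boundary
        escalated = plan[max_actions:]  # anything beyond goes to a human
        return executed, escalated

    def learn(self, executed):
        """Record outcomes so later cycles can adapt."""
        self.memory.extend(executed)

agent = Agent(goal="resolve support tickets")
observations = agent.perceive([["ticket-1", "ticket-2", "ticket-3"]])
plan = agent.reason(observations)
executed, escalated = agent.act(plan)
agent.learn(executed)
print(executed)   # actions taken autonomously this cycle
print(escalated)  # actions deferred to a human reviewer
```

In a real system the `reason` step would call an LLM and `act` would invoke external APIs; the structure of the loop, however, stays the same.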

Agentic AI at Darktrace

Some banking and financial institutions are using agentic AI solutions for autonomous threat detection. For example, Darktrace continuously monitors network traffic and detects cyber threats in real time. Its autonomous response system uses AI to take targeted action against attacks, such as blocking connections to certain endpoints.

Darktrace tracked 6.7 billion network events and autonomously investigated 23 million of those events over a 30-day period for Aviso, a Canadian wealth management firm. The system learns on an organization’s own data, so it can identify and flag unusual behaviors that deviate from normal patterns. It performs these steps autonomously, eliminating human bottlenecks and reducing fraud response times from hours to seconds.

Let’s look at how agentic AI differs from other AI technologies.

Agentic AI vs generative AI vs LLM: What are the differences?

Interest in AI has surged dramatically over the last few years, thanks to ChatGPT. Even Apple has put its own spin on the tech, calling it “Apple Intelligence.”

Despite growing interest, AI isn’t new. The underlying technologies behind AI—machine learning and data science—have been around for decades. An early example is the nearest neighbor algorithm in 1967, which used pattern recognition to improve route optimization. Agentic AI builds on these early foundational technologies.

Agentic AI represents a leap forward in AI capabilities. It can act independently, carrying out complex tasks with clear objectives. Another distinction is that these systems are proactive; they don’t need to be prompted by an end user to make decisions. They also continuously learn from previous interactions and improve over time.

Generative AI, or gen AI for short, is a subset of AI that can generate content from the data it’s trained on. This includes text, images, audio, and videos. It requires “prompts” or language inputs to generate responses. Gen AI has limited autonomy and decision-making capabilities. It can process inputs and produce outputs, but it can’t execute actions or learn from past experiences.

LLMs are neural networks trained on vast datasets to understand and generate contextually relevant responses similar to what a human might write. LLMs are designed for text-based tasks like content generation, summarization, and translation. They can also perform certain types of reasoning through pattern recognition and statistical relationships. An example is revealing connections between scientific datasets. Agentic AI systems use LLMs to enhance their language processing capabilities and perform multi-step workflows to achieve complex goals.

The graphic below shows how gen AI and Agentic AI approach a task:

A Gen AI approach to task completion:

Diagram showing how gen AI approaches a task

An Agentic, “human-like” approach to task completion:

Diagram showing how agentic AI approaches a task
Both models serve distinct functions: gen AI excels at one-time tasks like producing new outputs, while agentic AI is designed for autonomous, goal-driven execution. For example, a financial institution might use gen AI to draft compliance reports and an agentic AI system to monitor transactions for fraud and take action based on predefined risk parameters.

Let’s look at more use cases for agentic AI.

Agentic AI use cases and examples

Agentic AI has the potential to reshape and automate workflows across various industries.
Agentic AI use cases across industries
Some of the use cases for agentic systems include:

Finance Case Studies

Agentic AI is transforming the financial sector. Advanced agents can analyze market trends and identify potential investment opportunities. They can even create personalized investment plans for individual clients based on their financial goals. Another use case is fraud detection.

HSBC partnered with Google to create Dynamic Risk Assessment (DRA)—an AI system with agentic-like capabilities—to fight financial crime. The Anti Money Laundering AI (AML AI) ingests data from different sources and creates customer risk scores. Cases are then automatically escalated based on risk tolerance.

Here’s how AML AI works:

Graphic showing how Google's AML AI works
The built-in feedback mechanism enables the system to continuously learn from validated suspicious activity reports (SARs) and refine its detection models.

Healthcare Case Studies

Agentic AI systems can process vast datasets, such as patient histories, clinical notes, and diagnostic imaging. They can help doctors analyze medical records, automate data entry, and streamline decision-making.

Autonomous agents like Hippocratic AI can speak with patients and provide support through empathetic responses. For example, a radiotherapy patient can receive AI-generated messages explaining the treatment and reviewing appointment details. By handling patient-facing tasks, these agents can help reduce administrative burdens.

AI has the potential to have a powerful impact in healthcare, but it also raises ethical concerns about healthcare access (or lack thereof). Insurance companies increasingly use AI to process claims, but a lawsuit alleges insurers used “faulty AI” to deny coverage. This demonstrates the real-world consequences that algorithmic decision-making can have without human checks.

Customer Service Case Studies

Companies have relied on basic chatbots to handle customer interactions for years. But these chatbots are often limited in their capabilities. They’re pre-programmed to respond to specific questions or requests (e.g., “What are your business hours?”).

Agentic AI enables more robust customer service capabilities beyond answering basic questions. It can understand customer intents, resolve complex issues, and anticipate customer needs.

For example, if a customer asks, “What’s my order status?” a traditional chatbot may simply provide a tracking number for them to check. But an agentic AI system can go further by automatically escalating the ticket to the logistics team and routing it through a faster shipping partner.
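The contrast in that example can be made concrete with a short sketch. The function names and the logistics actions here are hypothetical stand-ins for real API integrations, not actual service calls.

```python
def escalate_to_logistics(order_id):
    """Hypothetical API call that opens a ticket with the logistics team."""
    return f"escalated:{order_id}"

def reroute_shipment(order_id, partner):
    """Hypothetical API call that reroutes the order through another carrier."""
    return f"rerouted:{order_id}:{partner}"

def chatbot_reply(order):
    # A traditional chatbot stops at retrieving and returning information.
    return f"Your tracking number is {order['tracking']}."

def agentic_reply(order):
    # An agentic system acts toward the underlying goal (getting the order
    # delivered), not just the literal question that was asked.
    actions = []
    if order["status"] == "delayed":
        actions.append(escalate_to_logistics(order["id"]))
        actions.append(reroute_shipment(order["id"], partner="express"))
    return f"Your tracking number is {order['tracking']}.", actions

order = {"id": "A42", "tracking": "1Z999", "status": "delayed"}
print(chatbot_reply(order))
reply, actions = agentic_reply(order)
print(actions)  # ['escalated:A42', 'rerouted:A42:express']
```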

AT&T is using autonomous agents in its call centers to analyze customer accounts and provide relevant menu options. If a customer has wireless services, the agents can coordinate with other AI assistants and determine eligibility for additional services like AT&T Fiber and offer bundle deals. This is an example of a multi-agent workflow.

Though agentic AI systems can streamline customer service departments, organizations need to establish escalation paths for customers to speak with a human. A cautionary example is Air Canada’s chatbot, which gave a customer inaccurate information about a bereavement fare, misleading him into buying a full-price ticket. Air Canada was ordered to pay the difference between what the customer paid and the discounted bereavement fare. Incidents like these highlight the importance of maintaining human oversight.

Retail Case Studies

With its ability to handle multi-step workflows, agentic AI can help retailers optimize their operations and deliver better customer experiences through automated inventory management. The capabilities of agentic AI in retail become more evident when looking at real-world cases.

Saks deployed Agentforce, Salesforce’s agentic AI platform, to enhance its shopping experience. A video demonstration shows a customer taking a photo of a dress and asking for a recommendation. The agent replies with suggestions and confirms the order. It later assists with coordinating an exchange and even scheduling an in-store appointment.

Customer communicating with agentic ai customer service to exchange clothes
Another use case is an agentic AI system that monitors inventory levels in real time and automatically places orders based on predefined thresholds. AI agents can work with other agents—known as multi-agent systems—to coordinate deliveries and avoid supply chain bottlenecks.
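A threshold-based reordering rule like the one described above can be sketched as follows. The SKUs, thresholds, and buffer policy are illustrative assumptions, not a real retailer's configuration.

```python
# Predefined reorder thresholds per SKU (assumed values for illustration).
REORDER_THRESHOLDS = {"sku-100": 20, "sku-200": 5}

def check_inventory(stock_levels):
    """Return purchase orders for every SKU that falls below its threshold."""
    orders = []
    for sku, on_hand in stock_levels.items():
        threshold = REORDER_THRESHOLDS.get(sku)
        if threshold is not None and on_hand < threshold:
            # Reorder enough to restore twice the threshold as a safety buffer.
            orders.append({"sku": sku, "quantity": threshold * 2 - on_hand})
    return orders

# sku-100 (12 on hand) is below its threshold of 20, so one order is placed;
# sku-200 (9 on hand) is above its threshold of 5, so it is skipped.
print(check_inventory({"sku-100": 12, "sku-200": 9}))
```

In a multi-agent setup, the orders returned here would be handed to a separate delivery-coordination agent rather than placed directly.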


Risks of Agentic AI and mitigation strategies

Agentic AI offers powerful capabilities to help organizations improve efficiency. However, its autonomous nature has serious implications for how enterprises implement AI and build oversight mechanisms. For example, if your organization uses agentic AI to monitor company accounts, how do you ensure it handles sensitive data responsibly?

As businesses grapple with these challenges, the adoption of agentic AI is rapidly accelerating. Gartner predicts that 33% of enterprise applications will use agentic AI by 2028, up from less than 1% in 2024.

Stat on organizations using AI

As enterprises increasingly deploy agentic AI systems, governance frameworks must evolve to address security, compliance, and operational risks. Plus, with low-code tools like Microsoft’s Copilot Studio for building intelligent agents, it won’t be long before more companies implement agentic AI in their processes—without the necessary expertise to make them secure. Understanding the risks of agentic AI is key to developing safe AI strategies.

Sensitive data exposure

Agentic AI systems continuously ingest and process data, often across multiple integrated platforms through APIs. However, this interconnectedness increases the risk of data breaches, unintentional data leaks, unauthorized access, and regulatory non-compliance.

APIs enable agentic AI systems to connect to different systems and perform complex, multi-step workflows. These connections are vulnerable to a range of attacks. PandaBuy, an international e-commerce platform, experienced a significant breach that exposed the data of over 1.3 million customers due to vulnerabilities in its API.

Security researchers demonstrated that Copilot is susceptible to prompt injections—a method of feeding AI systems malicious prompts. Attackers can use this attack method to steal sensitive data and manipulate the system into making potentially harmful decisions. In one case, researchers manipulated Copilot into directing users to a phishing site to steal their credentials.

Mitigation strategies:

  • Zero trust architecture: Limit AI access to sensitive data using a zero trust architecture based on least privilege principles.
  • Data privacy vaults: Tokenize or encrypt personally identifiable information (PII) before AI models process it. This de-identifies sensitive data, protecting the privacy of personal information. A vault also governs access to sensitive data, ensuring only authorized people can access the data they need for their roles.
  • Continuous monitoring: Implement real-time anomaly detection to flag unauthorized or unexpected AI behaviors.

Read more: Privacy in the Age of Generative AI

LLM AI sensitive data deidentification architecture diagram
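The tokenization idea behind a privacy vault can be sketched in miniature. This is a simplified illustration only: it handles a single PII type (email addresses) with an in-memory store, whereas a production vault also covers storage, access policies, and detokenization audit trails.

```python
import re
import secrets

_vault = {}  # token -> original value (in practice, stored inside the vault)

def tokenize_pii(text):
    """Replace email addresses with opaque tokens before model processing."""
    def _swap(match):
        token = f"<EMAIL_{secrets.token_hex(4)}>"
        _vault[token] = match.group(0)
        return token
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", _swap, text)

def detokenize(text):
    """Restore original values, for authorized consumers only."""
    for token, value in _vault.items():
        text = text.replace(token, value)
    return text

masked = tokenize_pii("Contact jane.doe@example.com about the claim.")
assert "jane.doe@example.com" not in masked  # the model never sees raw PII
assert detokenize(masked) == "Contact jane.doe@example.com about the claim."
```

The key property is that the LLM only ever receives the opaque token; the mapping back to the real value stays inside the vault, behind access controls.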

Unpredictable decision-making and model drift

Agentic AI operates with autonomy, which means that, over time, models may develop behaviors that deviate from their intended objectives, especially in environments with dynamic inputs.

For example, a logistics company uses agentic AI to optimize delivery routes. Over time, the system may prioritize cost efficiency over customer service, delaying shipments without factoring in customer preferences.

Without explainable AI (XAI) mechanisms—insights that allow humans to understand an AI system’s outputs—tracing the cause of these decisions becomes difficult.

Mitigation strategies:

  • Explainable AI (XAI): Maintain decision and audit logs that allow for backtracking and analysis.
  • Automated model validation: Continuously test AI outputs to ensure they align with business rules.
  • Human oversight for edge cases: Ensure human workers can review and reverse high-impact decisions that agentic AI systems make. Provide granular control over who can access sensitive data and maintain comprehensive audit logs.
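A decision-and-audit log of the kind these mitigations call for can be sketched as follows. The field names and the `reverse_decision` workflow are illustrative assumptions, not a specific XAI product.

```python
import json
import time

AUDIT_LOG = []

def log_decision(agent_id, decision, inputs, rationale):
    """Record every decision with the inputs and rationale behind it,
    so humans can backtrack and analyze unexpected behavior later."""
    entry = {
        "timestamp": time.time(),
        "agent": agent_id,
        "decision": decision,
        "inputs": inputs,
        "rationale": rationale,
        "reversed": False,
    }
    AUDIT_LOG.append(entry)
    return entry

def reverse_decision(entry, reviewer):
    """Human oversight: mark a high-impact decision as overridden."""
    entry["reversed"] = True
    entry["reviewer"] = reviewer
    return entry

entry = log_decision(
    agent_id="routing-agent",
    decision="delay shipment B7",
    inputs={"cost_saving": 140, "customer_tier": "premium"},
    rationale="cost efficiency outweighed delivery-time target",
)
reverse_decision(entry, reviewer="ops-lead")
print(json.dumps(entry, indent=2))
```

Logging the inputs and rationale alongside the decision is what makes drift diagnosable: reviewers can see not just what the agent did, but why.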

Ethical and compliance imperatives

With agentic AI making autonomous decisions in regulated industries like finance, healthcare, and insurance, compliance violations and ethical concerns become critical risks. Among these concerns is bias in decision-making.

For example, if an AI agent is trained on data where insurance claims are disproportionately denied, it could perpetuate that bias in future decisions. In fact, insurance providers UnitedHealthcare, Humana, and Cigna faced lawsuits after their AI-driven claims processing systems were found to disproportionately deny claims.

The lack of oversight mechanisms led to systemic discrimination, exposing the providers to legal liability. Two families of deceased beneficiaries of UnitedHealthcare have filed a lawsuit against the company, claiming its algorithm denied medically-necessary care to elderly patients. Another case was brought by a student who claimed the company denied coverage for drugs that were deemed “not medically necessary” despite doctors recommending continued treatment.

Mitigation strategies:

  • Bias detection and model auditing: Regularly audit AI decision-making to ensure fairness and compliance.
  • AI explainability mandates: Implement governance policies that require AI agents to justify key decisions.
  • Regulatory alignment: Adapt AI governance to frameworks like GDPR, HIPAA, and ISO 42001.

Best practices for building and deploying Agentic AI

Agentic AI can prove game-changing for organizations that successfully adopt it. However, training these systems on existing data sets raises privacy and compliance risks.

Simplify data governance using a data privacy vault

Agentic AI systems rely on LLMs to perform a wide range of natural language processing (NLP) tasks. However, LLMs can’t be easily governed, something Samsung discovered when its employees inadvertently leaked sensitive data while using ChatGPT.

The solution isn’t to keep sensitive data out of LLMs; organizations often need to train models with sensitive data for workflows like analytics. Instead, organizations must establish data governance policies—rules that specify who is allowed to access and use certain types of data. Proper access controls ensure only authorized individuals can access sensitive data (and even then, those permissions must be carefully managed).
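A least-privilege policy check of the kind such governance rules describe can be sketched in a few lines. The role names and data classes here are assumptions for illustration; real governance engines are policy-driven rather than hard-coded.

```python
# Each role's policy explicitly lists the data classes it may access.
POLICIES = {
    "support_agent": {"order_history"},
    "analyst": {"order_history", "transaction_totals"},
    "vault_admin": {"order_history", "transaction_totals", "pii"},
}

def can_access(role, data_class):
    """Deny by default: allow only if the role's policy grants the data class."""
    return data_class in POLICIES.get(role, set())

assert can_access("vault_admin", "pii")
assert not can_access("support_agent", "pii")  # least privilege in action
```

The deny-by-default shape matters: an unknown role or an unlisted data class gets no access, which is the principle of least privilege applied to AI data pipelines.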

Data privacy vaults like Skyflow’s use an architecture that isolates and secures sensitive data. Our Detect Invoker Role simplifies data governance by allowing Vault Administrators to set and enforce access control levels for sensitive data.

The graphic below shows an example of an LLM Privacy Vault:

Graphic shows an example of an LLM Privacy Vault

With proper data governance, organizations can prevent unauthorized access to sensitive data and mitigate privacy risks of LLM-based AI agents by adhering to the principle of least privilege.

Read more:  Generative AI Data Privacy with Skyflow LLM Privacy Vault

Require human oversight

Almost two-thirds of respondents in a survey indicated that human-in-the-loop (HITL) oversight is critical for responsible AI use. For agentic AI, HITL is essential when AI actions impact compliance, security, or high-value transactions.

Organizations must determine where to place human “checkpoints” to balance efficiency with risk mitigation. Examples include:

  • Dynamic oversight: AI agents operate independently but escalate critical decisions (e.g., approving a $1M transaction) to human reviewers.
  • Adaptive thresholds: HITL triggers can be adjusted dynamically based on risk levels and contextual factors.
  • Auditability: Human reviewers can intervene and override AI decisions where necessary.

For example, an AI agent reviewing loan applications in financial services may approve low-risk applications autonomously but escalate high-risk cases for manual review.
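The loan-review checkpoint above can be sketched as a routing function with an adaptive threshold. The dollar limits and the risk formula are illustrative assumptions, not real underwriting rules.

```python
def review_route(amount, risk_score, base_threshold=50_000):
    """Approve autonomously below the threshold; escalate high-risk cases.

    risk_score is assumed to be in [0, 1], produced upstream by the agent.
    """
    # Adaptive threshold: tighten the autonomous limit as risk rises.
    threshold = base_threshold * (1 - risk_score)
    if amount <= threshold and risk_score < 0.5:
        return "auto-approve"
    return "escalate-to-human"

print(review_route(amount=10_000, risk_score=0.1))     # auto-approve
print(review_route(amount=1_000_000, risk_score=0.2))  # escalate-to-human
```

Every escalated case becomes a human checkpoint, while low-risk, low-value decisions flow through without adding reviewer load.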

Implement AI guardrails

Given the autonomous nature of agentic systems, outputs can stray from desired results. Guardrails are frameworks that ensure agents operate within set boundaries and execute routine tasks correctly. For example, a guardrail could allow agents to process refunds up to a certain amount but require human approval above that amount.

Agentic AI systems can make questionable decisions and produce unexpected outcomes despite their advanced capabilities. Organizations can counter these risks by maintaining audit trails that log all system actions and the data used to make decisions. Humans can then review these logs and reverse decisions if necessary.

Data stack for AI and LLM data sources privacy security

Build secure and privacy-preserving AI agents

The rise of agentic AI marks a significant leap forward in artificial intelligence that offers more advanced decision-making capabilities.

However, as companies implement agentic AI into more of their business processes, they need to address risks like data privacy head-on. Without the proper guardrails, organizations risk their AI systems exposing sensitive data.

AI-ready data vaults ensure agentic AI systems operate within a zero-trust framework, securing PII while enabling real-time decision-making without exposing raw data.

Watch this video explaining how to build privacy-preserving AI agents and how to architect AI systems that prioritize user privacy.