AI is evolving beyond static automation and reactive models. The next wave – agentic AI – introduces autonomous systems capable of making decisions and adapting to dynamic environments without human intervention.
Agentic AI can execute complex, multi-step workflows, but its use poses serious risks to security and data privacy. How can organizations safely address these challenges while still extracting the full value of agentic AI?
In this comprehensive resource, we’ll cover agentic AI and its use cases across different industries like healthcare and finance. We’ll also look at the risks of building and deploying AI agents and ways to address them.
Agentic AI refers to AI-driven autonomous systems that execute complex, multi-step processes with minimal human intervention. Unlike traditional AI models that require explicit prompts, agentic AI operates continuously, adapting to real-time data, making independent decisions, and optimizing workflows based on predefined objectives.
Three core features distinguish agentic AI from traditional AI systems: autonomy (operating with minimal human intervention), adaptability (responding to real-time data), and goal-oriented behavior (optimizing workflows toward predefined objectives).
Traditional AI models like ChatGPT rely on users prompting large language models (LLMs) to generate novel outputs, but such models are rapidly evolving. Gartner predicts that one-third of interactions with gen AI will use autonomous agents for task completion by 2028. Understanding how agentic AI works is key for organizations to leverage its capabilities effectively.
Agentic AI combines advanced technologies and decision-making algorithms with goal-oriented behavior to operate without human intervention.
These systems follow a four-step process to solve multi-step problems:

1. Perceive: Gather and process real-time data from sources like APIs, databases, and user interactions.
2. Reason: Interpret that data (often using an LLM) and decide on the next action toward the goal.
3. Act: Execute tasks through integrations and API calls.
4. Learn: Incorporate feedback from outcomes to improve future decisions.
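To make this loop concrete, here's a minimal, illustrative sketch of such a control loop in Python. The `Agent` class and its method names are hypothetical placeholders, not any specific framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A minimal perceive-reason-act-learn loop (illustrative only)."""
    objective: str
    memory: list = field(default_factory=list)

    def perceive(self, environment: dict) -> dict:
        # Gather real-time data from connected sources (APIs, databases, sensors).
        return {"signals": environment.get("signals", [])}

    def reason(self, observation: dict) -> str:
        # Decide the next step toward the objective; in practice an LLM
        # or planning model would produce this decision.
        return "investigate" if observation["signals"] else "idle"

    def act(self, action: str) -> dict:
        # Execute the chosen action through integrations (e.g., API calls).
        return {"action": action, "status": "completed"}

    def learn(self, result: dict) -> None:
        # Record outcomes so future decisions can improve.
        self.memory.append(result)

agent = Agent(objective="flag unusual network activity")
for step in range(3):  # a bounded loop here, instead of running continuously
    obs = agent.perceive({"signals": ["login_anomaly"]})
    agent.learn(agent.act(agent.reason(obs)))
```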
Some banking and financial institutions are using agentic AI solutions for autonomous threat detection. For example, Darktrace continuously monitors network traffic and detects cyber threats in real time. Its autonomous response system uses AI to take targeted action against attacks, such as blocking connections to certain endpoints.
Darktrace tracked 6.7 billion network events and autonomously investigated 23 million of those events over a 30-day period for Aviso, a Canadian wealth management firm. The system learns from an organization’s own data, so it can identify and flag unusual behaviors that deviate from normal patterns. It performs these steps autonomously, eliminating human bottlenecks and reducing fraud response times from hours to seconds.
Agentic AI is transforming the financial sector. Advanced agents can analyze market trends and identify potential investment opportunities. They can even create personalized investment plans for individual clients based on their financial goals. Another use case is fraud detection.
HSBC partnered with Google to create Dynamic Risk Assessment (DRA)—an AI system with agentic-like capabilities—to fight financial crime. The Anti Money Laundering AI (AML AI) ingests data from different sources and creates customer risk scores. Cases are then automatically escalated based on risk tolerance.
Here’s how AML AI works:

1. Ingest: The system pulls transaction, account, and customer data from multiple sources.
2. Score: It generates a risk score for each customer based on observed patterns.
3. Escalate: Cases that exceed the bank’s risk tolerance are automatically flagged.
4. Investigate: Human analysts review escalated cases and take action.
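As a rough illustration of this kind of flow (not HSBC's or Google's actual implementation), a risk-scoring and escalation pipeline might look something like the sketch below. The feature weights and escalation threshold are invented for the example:

```python
# Illustrative only: a toy risk-scoring and case-escalation flow.
# Feature weights and thresholds are invented for the example.
RISK_WEIGHTS = {
    "high_volume_transfers": 0.5,
    "new_jurisdictions": 0.3,
    "structuring_pattern": 0.7,
}
ESCALATION_THRESHOLD = 0.8  # set according to the institution's risk tolerance

def risk_score(features: dict) -> float:
    """Combine behavioral signals into a single customer risk score."""
    score = sum(RISK_WEIGHTS[f] for f, present in features.items()
                if present and f in RISK_WEIGHTS)
    return min(score, 1.0)

def triage(customer_id: str, features: dict) -> str:
    score = risk_score(features)
    if score >= ESCALATION_THRESHOLD:
        return f"Customer {customer_id}: score {score:.2f} -> escalate to investigator"
    return f"Customer {customer_id}: score {score:.2f} -> continue monitoring"

print(triage("C-1042", {"high_volume_transfers": True, "structuring_pattern": True}))
```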
Agentic AI systems can process vast datasets, such as patient histories, clinical notes, and diagnostic imaging. They can help doctors analyze medical records, automate data entry, and streamline decision-making.
Autonomous agents like Hippocratic AI can speak with patients and provide support through empathetic responses. For example, a radiotherapy patient can receive AI-generated messages explaining the treatment and reviewing appointment details. By handling patient-facing tasks, these agents can help reduce administrative burdens.

AI has the potential for powerful impact in healthcare, but it also raises ethical concerns about healthcare access (or lack thereof). Insurance companies increasingly use AI to process claims, but a lawsuit alleges insurers used “faulty AI” to deny coverage. This demonstrates the real-world consequences that algorithmic decision-making can have without human checks.
Companies have relied on basic chatbots to handle customer interactions for years. But these chatbots are often limited in their capabilities. They’re pre-programmed to respond to specific questions or requests (e.g., “What are your business hours?”).
Agentic AI enables more robust customer service capabilities beyond answering basic questions. It can understand customer intents, resolve complex issues, and anticipate customer needs.
For example, if a customer asks, “What’s my order status?” a traditional chatbot may simply provide a tracking number for them to check. But an agentic AI system can go further by automatically escalating the ticket to the logistics team and routing it through a faster shipping partner.
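Here's a simplified sketch of what that kind of workflow might look like in code. The helper functions are hypothetical stand-ins for real order-management and logistics APIs:

```python
# Illustrative sketch; the helper functions are hypothetical stand-ins
# for real order-management and logistics APIs.

def get_order(order_id: str) -> dict:
    # Stub: a real agent would call an order-management API here.
    return {"status": "delayed", "tracking_number": "1Z999"}

def escalate_to_logistics(order_id: str, reason: str) -> str:
    # Stub: open a ticket with the logistics team.
    return f"TICKET-{order_id}"

def reroute_shipment(order_id: str, partner: str) -> None:
    # Stub: request rerouting through a faster shipping partner.
    pass

def handle_order_status(order_id: str) -> str:
    order = get_order(order_id)
    if order["status"] != "delayed":
        return f"Your order is on schedule. Tracking: {order['tracking_number']}"
    # Instead of just reporting status, the agent takes corrective action.
    ticket = escalate_to_logistics(order_id, reason="shipment delayed")
    reroute_shipment(order_id, partner="faster-carrier")
    return (f"Your order was delayed, so I escalated {ticket} "
            "and rerouted it through a faster shipping partner.")

print(handle_order_status("A1001"))
```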
AT&T is using autonomous agents in its call centers to analyze customer accounts and provide relevant menu options. If a customer has wireless services, the agents can coordinate with other AI assistants and determine eligibility for additional services like AT&T Fiber and offer bundle deals. This is an example of a multi-agent workflow.
Though agentic AI systems can streamline customer service departments, organizations need to establish escalation paths for customers to speak with a human. A cautionary example is Air Canada’s chatbot, which gave a customer inaccurate information about a bereavement fare, misleading him into buying a full-price ticket. Air Canada was ordered to pay the difference between what the customer paid and the discounted bereavement fare. Incidents like these highlight the importance of maintaining human oversight.
With its ability to handle multi-step workflows, agentic AI can help retailers optimize their operations through automated inventory management and deliver better customer experiences. These capabilities become more evident when looking at real-world cases.
Saks deployed Agentforce, Salesforce’s agentic AI platform, to enhance its shopping experience. A video demonstration shows a customer taking a photo of a dress and asking for a recommendation. The agent replies with suggestions and confirms the order. It later assists with coordinating an exchange and even scheduling an in-store appointment.
Agentic AI offers powerful capabilities to help organizations improve efficiency. However, its autonomous nature has serious implications for how enterprises implement AI and build oversight mechanisms. For example, if your organization uses agentic AI to monitor company accounts, how do you ensure it handles sensitive data responsibly?
As businesses grapple with these challenges, the adoption of agentic AI is rapidly accelerating. Gartner predicts that 33% of enterprise applications will use agentic AI by 2028, up from less than 1% in 2024.
As enterprises increasingly deploy agentic AI systems, governance frameworks must evolve to address security, compliance, and operational risks. Plus, with low-code tools like Microsoft’s Copilot Studio for building intelligent agents, it won’t be long before more companies implement agentic AI in their processes, often without the necessary expertise to secure them. Understanding the risks of agentic AI is key to developing safe AI strategies.
Agentic AI systems continuously ingest and process data, often across multiple integrated platforms through APIs. However, this interconnectedness increases the risk of data breaches, unintentional data leaks, unauthorized access, and regulatory non-compliance.
APIs enable agentic AI systems to connect to different systems and perform complex, multi-step workflows. These connections are vulnerable to a range of attacks. PandaBuy, an international e-commerce platform, experienced a significant breach that exposed the data of over 1.3 million customers due to vulnerabilities in its API.
Security researchers demonstrated that Microsoft Copilot is susceptible to prompt injection, a technique that feeds AI systems malicious instructions. Attackers can use it to steal sensitive data and manipulate the system into making potentially harmful decisions. In one case, researchers manipulated Copilot into directing users to a phishing site to steal their credentials.
Mitigation strategies:

- Secure API connections with strong authentication, least-privilege access scopes, and regular vulnerability testing.
- Validate inputs and filter agent outputs to limit what a prompt injection can trigger downstream (a simple version is sketched below).
- Encrypt sensitive data in transit and at rest, and monitor integrations for anomalous access patterns.
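As one concrete illustration, the sketch below enforces a least-privilege tool allow-list and a basic input check before an agent action executes. The tool names and checks are invented for the example; real deployments would use far more robust injection defenses:

```python
# Illustrative sketch: enforce least-privilege tool access for an agent.
# Tool names and checks are hypothetical, not a specific framework's API.

ALLOWED_TOOLS = {"read_order", "create_ticket"}  # no payment or admin tools

def invoke_tool(tool_name: str, params: dict) -> str:
    if tool_name not in ALLOWED_TOOLS:
        # Even if a prompt injection convinces the model to request a
        # dangerous tool, the runtime refuses to execute it.
        raise PermissionError(f"Tool '{tool_name}' is not permitted for this agent")
    # A naive payload check; production systems need layered defenses.
    if any(s in str(params).lower() for s in ("ignore previous", "system prompt")):
        raise ValueError("Potential prompt-injection payload detected in parameters")
    return f"Executed {tool_name} with {params}"

print(invoke_tool("read_order", {"order_id": "A1001"}))
```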
Read more: Privacy in the Age of Generative AI
Agentic AI operates with autonomy, which means that, over time, models may develop behaviors that deviate from their intended objectives, especially in environments with dynamic inputs.
For example, suppose a logistics company uses agentic AI to optimize delivery routes. Over time, the system may come to prioritize cost efficiency over customer service, delaying shipments without factoring in customer preferences.
Without explainable AI (XAI) mechanisms—insights that allow humans to understand an AI system’s outputs—tracing the cause of these decisions becomes difficult.
Mitigation strategies:

- Define measurable objectives that include customer-facing metrics, not just cost efficiency.
- Continuously monitor agent decisions against baseline behavior and alert on drift (see the sketch below).
- Build in explainable AI (XAI) mechanisms so decisions can be traced and audited.
- Periodically retrain or recalibrate models as operating conditions change.
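Here's a minimal sketch of what drift monitoring might look like for the logistics example above, with the baseline and tolerance values invented for illustration:

```python
# Illustrative drift check: alert when the agent's outcomes drift from a
# baseline objective. Metrics and thresholds are invented for the example.

BASELINE_ON_TIME_RATE = 0.95   # expected share of on-time deliveries
DRIFT_TOLERANCE = 0.05         # alert if we fall more than 5 points below

def check_for_drift(recent_deliveries: list) -> str:
    on_time = sum(d["on_time"] for d in recent_deliveries) / len(recent_deliveries)
    if on_time < BASELINE_ON_TIME_RATE - DRIFT_TOLERANCE:
        # Route to human review before the behavior compounds.
        return f"DRIFT ALERT: on-time rate {on_time:.0%} below baseline; pause and review"
    return f"OK: on-time rate {on_time:.0%} within tolerance"

deliveries = [{"on_time": True}] * 85 + [{"on_time": False}] * 15
print(check_for_drift(deliveries))  # 85% on-time -> triggers the alert
```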
With agentic AI making autonomous decisions in regulated industries like finance, healthcare, and insurance, compliance violations and ethical concerns become critical risks. Among these concerns is bias in decision-making.
For example, if an AI agent is trained on data where insurance claims are disproportionately denied, it could perpetuate that bias in future decisions. In fact, insurance providers UnitedHealthcare, Humana, and Cigna have faced lawsuits alleging that their AI-driven claims processing systems disproportionately denied claims.

These suits claim that a lack of oversight mechanisms led to systemic discrimination, exposing the providers to legal liability. Two families of deceased UnitedHealthcare beneficiaries filed a lawsuit claiming the company’s algorithm denied medically necessary care to elderly patients. Another case was brought by a student who claimed the company denied coverage for drugs deemed “not medically necessary” despite doctors recommending continued treatment.
Mitigation strategies:

- Audit training data and model outputs for bias across demographic groups (a simple version is sketched below).
- Keep humans in the loop for consequential decisions such as claim denials.
- Maintain audit trails that document how each decision was made, supporting regulatory compliance.
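A simple version of such a bias audit might compare denial rates across groups, as sketched below. The data and disparity threshold are invented for the example:

```python
# Illustrative fairness audit: compare denial rates across groups.
# Data and the disparity threshold are invented for the example.

from collections import defaultdict

def denial_rates(decisions: list) -> dict:
    totals, denials = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        denials[d["group"]] += d["denied"]
    return {g: denials[g] / totals[g] for g in totals}

def audit(decisions: list, max_disparity: float = 0.10) -> str:
    rates = denial_rates(decisions)
    disparity = max(rates.values()) - min(rates.values())
    if disparity > max_disparity:
        return f"FLAG: denial-rate disparity {disparity:.0%} exceeds threshold; review model"
    return f"OK: disparity {disparity:.0%} within threshold"

sample = ([{"group": "A", "denied": 1}] * 30 + [{"group": "A", "denied": 0}] * 70
          + [{"group": "B", "denied": 1}] * 12 + [{"group": "B", "denied": 0}] * 88)
print(audit(sample))  # 30% vs. 12% denial rates -> flagged for review
```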
Agentic AI can prove game-changing for organizations that successfully adopt it. However, training these systems on existing data sets raises privacy and compliance risks.
Agentic AI systems rely on LLMs to perform a wide range of natural language processing (NLP) tasks. However, LLMs can’t be easily governed, something Samsung discovered when its employees inadvertently leaked sensitive data while using ChatGPT.
The solution isn’t to keep sensitive data out of LLM workflows entirely; organizations often need to train models with sensitive data for workflows like analytics. Instead, they must establish data governance policies: rules that specify who is allowed to access and use certain types of data. Proper access controls ensure only authorized individuals can access sensitive data (and even then, those permissions must be carefully managed).
Data privacy vaults like Skyflow’s use an architecture that isolates and secures sensitive data. Our Detect Invoker Role simplifies data governance by allowing Vault Administrators to set and enforce access control levels for sensitive data.
The graphic below shows an example of an LLM Privacy Vault:
With proper data governance, organizations can prevent unauthorized access to sensitive data and mitigate privacy risks of LLM-based AI agents by adhering to the principle of least privilege.
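Conceptually, the vault pattern works by swapping sensitive values for opaque tokens before a prompt ever reaches the LLM, then re-identifying them only for authorized roles. The sketch below illustrates the idea; it is not Skyflow's actual API, and the names and logic are heavily simplified:

```python
# Conceptual sketch of the privacy-vault pattern: de-identify PII before it
# reaches the LLM, then re-identify only for authorized roles. This is not
# Skyflow's actual API; names and logic are simplified for illustration.

import re
import uuid

VAULT = {}  # token -> original value (stands in for a secure, isolated vault)

def tokenize(text: str) -> str:
    """Replace email addresses with opaque tokens stored in the vault."""
    def _swap(match):
        token = f"<TOKEN:{uuid.uuid4().hex[:8]}>"
        VAULT[token] = match.group(0)
        return token
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", _swap, text)

def detokenize(text: str, role: str) -> str:
    """Re-identify tokens only for roles authorized by governance policy."""
    if role != "support_manager":  # least privilege: most roles see tokens only
        return text
    for token, value in VAULT.items():
        text = text.replace(token, value)
    return text

prompt = tokenize("Summarize the complaint from jane.doe@example.com")
print(prompt)                              # the LLM only ever sees the token
print(detokenize(prompt, role="analyst"))  # unauthorized role: still tokenized
```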
Read more: Generative AI Data Privacy with Skyflow LLM Privacy Vault
Almost two-thirds of respondents in a survey indicated that human-in-the-loop (HITL) oversight is critical for responsible AI use. For agentic AI, HITL is essential when AI actions impact compliance, security, or high-value transactions.
Organizations must determine where to place human “checkpoints” to balance efficiency with risk mitigation. Examples include:

- Approving high-value transactions before execution
- Reviewing actions that affect regulatory compliance
- Authorizing security-sensitive operations, such as changes to access permissions
For example, an AI agent reviewing loan applications in financial services may approve low-risk applications autonomously but escalate high-risk cases for manual review.
Given the autonomous nature of agentic systems, outputs can stray from desired results. Guardrails are frameworks that ensure agents operate within set boundaries and execute routine tasks correctly. For example, a guardrail could allow agents to process refunds up to a certain amount but require human approval above that amount.
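A minimal sketch of that refund guardrail, with the approval limit invented for illustration:

```python
# Illustrative guardrail: the agent handles routine refunds autonomously but
# must hand anything above a set limit to a human. The limit is invented.

AUTO_REFUND_LIMIT = 100.00  # dollars; set by organizational policy

def process_refund(order_id: str, amount: float) -> str:
    if amount <= AUTO_REFUND_LIMIT:
        # Within the guardrail: execute the routine task directly.
        return f"Refund of ${amount:.2f} for {order_id} issued automatically"
    # Outside the guardrail: pause and require human approval.
    return f"Refund of ${amount:.2f} for {order_id} queued for human approval"

print(process_refund("A1001", 45.00))
print(process_refund("A1002", 480.00))
```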
Agentic AI systems can make questionable decisions and produce unexpected outcomes despite their advanced capabilities. Organizations can counter these risks by maintaining audit trails that log all system actions and the data used to make decisions. Human reviewers can examine these logs and reverse problematic actions if necessary.
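Here's an illustrative sketch of what such an audit log entry might capture; the schema is invented for the example:

```python
# Illustrative audit trail: log every action with the inputs behind it so
# humans can trace and, if needed, reverse a decision. Schema is invented.

import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice, an append-only, tamper-evident store

def record_action(agent_id: str, action: str, inputs: dict, outcome: str) -> None:
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,      # the data the decision was based on
        "outcome": outcome,
        "reversible": True,    # flag actions that a reviewer can roll back
    })

record_action("refund-agent-1", "issue_refund",
              {"order_id": "A1002", "amount": 480.0}, "queued_for_approval")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```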
The rise of agentic AI marks a significant leap forward in artificial intelligence, offering more advanced decision-making capabilities.
However, as companies implement agentic AI into more of their business processes, they need to address risks like data privacy head-on. Without the proper guardrails, organizations risk their AI systems exposing sensitive data.
AI-ready data vaults ensure agentic AI systems operate within a zero-trust framework, securing PII while enabling real-time decision-making without exposing raw data.
Watch this video explaining how to build privacy-preserving AI agents and how to architect AI systems that prioritize user privacy.