Securing GenAI RAG Data Using Azure AI Search



Large Language Models (LLMs) and Generative AI have inherent limitations, such as outdated knowledge, lack of access to private data, and the potential for hallucinations. In this session, we will introduce a strategy for overcoming these challenges: Retrieval-Augmented Generation (RAG). Attendees will see how a GenAI RAG application can provide access to real-time, private data stored in an external knowledge base without fine-tuning the base LLM.
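As a conceptual illustration (not the session's implementation), the core RAG flow can be sketched in plain Python: retrieve the most relevant documents from a private knowledge base, then augment the LLM prompt with them before generation. The keyword-overlap scorer and the sample documents below are invented for this example; a production system would use a semantic/vector index such as Azure AI Search for the retrieval step.

```python
# Minimal RAG sketch. The retrieval step uses naive keyword overlap in
# place of a real vector/semantic index (illustrative only).
from typing import List

# Hypothetical private documents the base LLM was never trained on.
KNOWLEDGE_BASE = [
    "Q3 revenue for the internal widget division was $4.2M.",
    "The VPN root certificate rotates on the first Monday of each month.",
    "Employee travel reimbursements are capped at $150 per day.",
]

def retrieve(query: str, k: int = 2) -> List[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Augment the user query with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the cap on travel reimbursements?")
```

Because the private data is injected at query time, the knowledge base can be updated continuously without retraining or fine-tuning the model.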


With an understanding of the GenAI RAG application, we will explore an example cloud infrastructure hosting the application using Azure AI Search, Azure Storage, and Azure Container Apps. The cloud architecture review will uncover new attack vectors and cloud security misconfigurations that can unintentionally leak RAG data to an attacker. Attendees will see how these vulnerabilities can be used to gain unauthorized access to AI data. Then, we will look at the cloud security controls needed to authorize access to the RAG data.
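As an illustrative sketch (not material from the session), one class of misconfiguration can be expressed as a simple policy check over an Azure AI Search service configuration: a service that allows public network access and still accepts API-key authentication exposes the RAG index to anyone who obtains a leaked key, whereas Entra ID RBAC with local auth disabled scopes access to authorized identities. The field names below mirror real Azure AI Search ARM properties (`publicNetworkAccess`, `disableLocalAuth`), but the checker itself is invented for this example.

```python
# Hypothetical policy check flagging risky Azure AI Search settings that
# can leak RAG data; field names mirror the ARM resource properties.
from typing import Dict, List

def audit_search_service(props: Dict[str, object]) -> List[str]:
    """Return a list of findings for a search service's properties."""
    findings = []
    if str(props.get("publicNetworkAccess", "enabled")).lower() == "enabled":
        findings.append("Service is reachable from the public internet; "
                        "prefer private endpoints or IP restrictions.")
    if not props.get("disableLocalAuth", False):
        findings.append("API-key auth is still enabled; a leaked admin key "
                        "grants full read access to the index. Prefer "
                        "Entra ID RBAC with local auth disabled.")
    return findings

# A service left on defaults trips both checks; a hardened one trips none.
risky = {"publicNetworkAccess": "enabled", "disableLocalAuth": False}
hardened = {"publicNetworkAccess": "disabled", "disableLocalAuth": True}
```

The same idea generalizes to the other services in the architecture, such as anonymous blob access on the Azure Storage account backing the knowledge base.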


Attendees will walk away with an understanding of GenAI RAG applications, the underlying cloud infrastructure powering these AI systems, and the security controls needed to protect sensitive RAG data.


Learning Objectives:

  • Review GenAI RAG application architecture
  • Identify misconfigurations in GenAI RAG cloud infrastructure
  • Learn GenAI RAG cloud security controls

Relevant Government Agencies

Other Federal Agencies, Federal Government, State & Local Government




Event Type
Webcast


When
Tue, Dec 9, 2025, 12:00pm - 1:00pm ET


Cost
Complimentary: $0.00




Organizer
SANS Institute

