Preventing LLM Hallucinations



This session explored how to make AI truly dependable in critical operations. Attendees were joined by John Bohannon, VP of Data Science at Primer, for an in-depth exploration of RAG-Verification (RAG-V), a system designed to reduce hallucination rates in Large Language Models (LLMs) by a factor of 100.

 

In this session, attendees learned:

  • How RAG-V detects and corrects factual errors using a fact-checking approach tailored to complex, high-stakes environments.
  • How Primer breaks down intricate questions into individual claims, enabling real-time, automated fact-checking you can trust (a minimal sketch of the idea follows this list).
  • How to implement trustworthy AI practices within your own data operations.
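
As a rough illustration of the claim-level approach described in the list above, here is a minimal Python sketch of a decompose-then-verify loop. Every name in it (the stubbed llm client, the prompts, the helper functions) is a hypothetical placeholder for illustration, not Primer's actual RAG-V implementation:

    # Hypothetical sketch: split an LLM answer into atomic claims, then
    # check each claim against the retrieved source passages. The llm()
    # call is a stub; plug in any real model client.
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        claim: str
        supported: bool
        evidence: str

    def llm(prompt: str) -> str:
        """Placeholder for a real LLM call."""
        raise NotImplementedError("plug in your model client here")

    def decompose(answer: str) -> list[str]:
        # Ask the model to split its own answer into checkable claims.
        raw = llm(f"Split into one factual claim per line:\n{answer}")
        return [line.strip() for line in raw.splitlines() if line.strip()]

    def verify(claim: str, sources: list[str]) -> Verdict:
        # Judge a single claim against the retrieved passages only.
        context = "\n---\n".join(sources)
        reply = llm(
            "Answer SUPPORTED or UNSUPPORTED, then cite the passage.\n"
            f"Claim: {claim}\nPassages:\n{context}"
        )
        return Verdict(claim, reply.startswith("SUPPORTED"), reply)

    def checked_answer(answer: str, sources: list[str]) -> tuple[str, list[Verdict]]:
        # Only claims grounded in the sources survive into the final answer.
        verdicts = [verify(c, sources) for c in decompose(answer)]
        return " ".join(v.claim for v in verdicts if v.supported), verdicts

Checking claims one at a time, rather than accepting or rejecting the whole answer, is what makes per-claim correction and auditing possible.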

If you’re ready to explore the future of trustworthy AI and elevate your data operations, view the on-demand webinar today! 

Speaker and Presenter Information

John Bohannon, VP Data Science, Primer.ai

Relevant Government Agencies

Other Federal Agencies, Federal Government, State & Local Government


Event Type
Webcast


This event has no exhibitor/sponsor opportunities


When
Tue, Nov 12, 2024, 2:00pm ET


Cost
Complimentary: $0.00





Organizer
Primer.ai Government Team at Carahsoft




