Scaling Generative AI with Cloudera and NVIDIA: Deploying LLMs with AI Inference



In this session, discover how to deploy scalable GenAI applications with NVIDIA NIM using the Cloudera AI Inference service. Learn how to manage and optimize AI workloads during the critical deployment phase of the AI lifecycle, focusing on Large Language Models (LLMs).

 

Why You Should Attend:

  • Understand how Cloudera AI Inference with NVIDIA enables scalable GenAI applications.
  • Gain insights into the deployment phase of the AI lifecycle, which is critical for operationalizing AI workloads.
  • See practical demos on deploying LLMs with AI Inference.
  • Learn how NVIDIA’s GPU-accelerated infrastructure enhances performance for AI applications.
  • Join an interactive Q&A session to address your specific needs.

You'll leave this session with hands-on knowledge and strategies to implement AI solutions that accelerate your organization's innovation and efficiency.

 

Can’t make it? Register now and you’ll be granted access to all content on-demand to view at your convenience.

Speaker and Presenter Information

Peter Ableda
Director, Product Management

Relevant Government Agencies

Other Federal Agencies, Federal Government, State & Local Government


Event Type
Webcast


This event has no exhibitor/sponsor opportunities


When
Wed, Jan 15, 2025, 1:00pm ET


Cost
Complimentary: $0.00


Website
Click here to visit event website


Organizer
Cloudera




