AI Scenarios in Which Small Language Models Outshine Large Language Models



While increasing scale has been the core driving trend in the development of large language models (LLMs), a contrarian trend has recently emerged: the development of small language models (SLMs). Although LLMs have traditionally dominated language model development, SLMs offer potential solutions to key challenges identified by functional leaders, including budget constraints, data protection, privacy concerns and AI-related risk mitigation. In this complimentary Gartner IT webinar, we compare SLMs to LLMs in four areas: generic language understanding and generation, in-context learning capabilities, computational requirements for serving, and computational requirements for fine-tuning. We then discuss five scenarios in which SLMs outshine LLMs: multiple task-specialized models, high user interaction volumes, organizational language models, sensitive data or regulatory restrictions, and edge use cases. You will walk away from this session with answers to your vital questions, a copy of the research slides and recommended actions to help you achieve your goals.

  • Understand what small language models are
  • Determine how small language models compare to large language models
  • Explore scenarios where small language models outshine large language models

Contact us with questions about viewing this webinar.

Relevant Government Agencies

Other Federal Agencies, Federal Government, State & Local Government


Event Type
Webcast


This event has no exhibitor/sponsor opportunities


When
Thu, Sep 12, 2024, 10:00am - 11:00am ET


Cost
Complimentary: $0.00




Organizer
Gartner




