
Serving LLMs on Kubernetes

(Red Hat), (Red Hat)
Language: English
Time: 16:45 - 17:30

What are the key hurdles in running Large Language Models (LLMs) efficiently on Kubernetes? This session is crafted for MLOps and Platform Engineers seeking effective strategies for LLM integration. It will survey the current landscape of LLM deployment options and discuss how well Kubernetes suits these models.

The talk will dissect the complexities associated with the size, tuning, and scaling of LLMs, and explore technologies such as KServe, vLLM, Kubeflow Model Registry, and Ray.
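As a rough illustration of the serving pattern the talk touches on, a KServe InferenceService can wrap vLLM's OpenAI-compatible server as a custom predictor container. The sketch below is not from the talk; the service name, image tag, model, and resource request are placeholders chosen for illustration:

```yaml
# Hypothetical KServe InferenceService running vLLM as a custom predictor.
# All names and the model are placeholders, not taken from the session.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-demo
spec:
  predictor:
    containers:
      - name: vllm
        image: vllm/vllm-openai:latest  # vLLM's OpenAI-compatible server image
        args:
          - --model
          - facebook/opt-125m           # small placeholder model for testing
        ports:
          - containerPort: 8000
        resources:
          limits:
            nvidia.com/gpu: "1"         # LLM serving typically needs a GPU
```

Once applied, KServe exposes the predictor behind an HTTP endpoint, and clients can call it with any OpenAI-compatible SDK; larger models mainly change the model argument and the GPU/memory requests.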