Upcoming Webinar

AI Observability: How to Evaluate and Improve LLM Performance

60 minutes

The rapid growth of generative AI introduces new risks and complexity for organizations. This session explores AI observability and evaluation techniques for large language models (LLMs) in production.

We will cover strategies to ensure performance, accuracy, and reliability through evaluation metrics, guardrails, and user feedback. Participants will learn to implement observability across environments, drawing on insights from real-world use cases.

Speakers

Atalia Horenshtien

AI/ML Lead - Americas Channels, DataRobot

Scott Munson

VP of Data Science and AI, Evolutio