Discover a real-world blueprint for running thousands of ML models safely and efficiently. Predictive AI powers critical systems such as fraud detection, demand forecasting, and supply chains, but scaling from a few models to thousands remains a major challenge. This episode presents a scalable blueprint for operating thousands of ML models on Kubernetes with cloud-native pipelines, GitOps, and automation, based on practical, real-world use cases.
We shift the unit of scale from individual models to pipelines as first-class citizens, with a demo showing pipeline steps, Git-driven onboarding, continuous training with OpenShift and Kubeflow Pipelines, data and artifact versioning, model scanning and containerization, and GitOps promotion via Argo CD. We’ll also cover monitoring, drift detection, and model registry lineage, highlighting how cloud-native patterns extend across the ML lifecycle. Viewers will learn why pipelines, not individual models, are the true unit of scale, and how Git and automation make it safe to operate at 9,000+ models.
In this episode, we’ll cover:
- Overcome scaling challenges: why traditional model-centric approaches fail
- Adopt pipeline-first MLOps: streamline onboarding, training, and deployment
- Automate model lifecycles: orchestrate with Tekton, Kubeflow Pipelines, DVC, and Argo CD
- Monitor and manage drift: maintain reliability, compliance, and trust
- Operate safely at scale: introduce the tools and practices that keep growth controlled
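
The pipeline-first idea above can be sketched in plain Python. This is an illustrative sketch only, not the episode's actual code: the class, step names, and model names are hypothetical. The point it demonstrates is that when every model shares one declarative pipeline definition, onboarding another model is a configuration change rather than new plumbing.

```python
from dataclasses import dataclass, field

# Hypothetical default step sequence, loosely mirroring the lifecycle the
# episode describes (training, scanning, containerization, promotion).
DEFAULT_STEPS = ["fetch_data", "train", "evaluate", "scan", "containerize", "promote"]

@dataclass
class PipelineRun:
    """One run of the shared pipeline for a single model."""
    model_name: str
    steps: list = field(default_factory=lambda: list(DEFAULT_STEPS))
    completed: list = field(default_factory=list)

    def execute(self):
        # In the setup the episode covers, these steps would run on Kubeflow
        # Pipelines / Tekton, with promotion handled by Argo CD via GitOps.
        # Here each step is a no-op so the pattern stays self-contained.
        for step in self.steps:
            self.completed.append(step)
        return self.completed

# Scaling to many models means iterating over declarative configs
# (in practice, sourced from Git), not writing per-model pipelines.
configs = ["fraud-detector", "demand-forecast"]
runs = {name: PipelineRun(name).execute() for name in configs}
```

Because the pipeline, not the model, is the unit of scale, adding model 9,001 only extends the config list; the steps, scanning, and promotion machinery stay identical.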
Cansu Kavili Oernek | Principal AI Consultant, AI Customer Adoption and Innovation, Red Hat
An expert in Red Hat technologies with a proven track record of delivering value quickly, creating customer success stories, and achieving tangible outcomes. Experienced in building high-performing teams across sectors such as finance, automotive, and public services, and currently helping organizations build machine learning platforms that accelerate the model lifecycle and support smart application development. A firm believer in the innovative power of Open Source, driven by a passion for creating customer-focused solutions.
Robert Lundberg | Principal AI Consultant, AI Customer Adoption and Innovation, Red Hat
A seasoned AI/ML practitioner focused on AI platforms, customer collaboration, and shaping product direction with real-world insight. With more than 10 years of experience building models and platforms, and a background founding an ML startup, he helps teams stand up AI platforms that shorten the path from idea to impact and power intelligent applications.