Operationalizing AI with containerized environments, CI pipelines, and model servers
AI and machine learning are becoming ubiquitous. While these exciting technologies are opening up to the general public, they remain complex underneath. Expert data scientists, who excel at building the core of intelligent models, sometimes struggle with deploying and integrating those models, and with the infrastructure required to train and run them. In this session, we will explain how MLOps brings DevOps principles, which the software industry has applied with great success, to the AI landscape. We will discuss how Red Hat OpenShift AI provides an ideal platform for implementing MLOps practices and managing machine learning projects, with an emphasis on automation, consistency, and agility. You will learn how to train ML models in pre-configured, containerized working environments. Next, we will explore how to automate the training process by using data science pipelines. Finally, you will discover how easy it is to deploy a model to production.
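OpenShift AI's data science pipelines are built on Kubeflow Pipelines, so a pipeline can be defined in Python with the kfp SDK. The following is a minimal sketch, not taken from the session itself; the component names, base image, and placeholder values are illustrative assumptions:

# A minimal data science pipeline sketch using the Kubeflow Pipelines
# (kfp v2) SDK. Component bodies and names are illustrative.
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def ingest_data() -> str:
    # Fetch training data; here we just return a hypothetical path.
    return "/tmp/dataset.csv"

@dsl.component(base_image="python:3.11")
def train_model(dataset: str, epochs: int) -> float:
    # Real training code (scikit-learn, PyTorch, ...) would go here.
    print(f"Training on {dataset} for {epochs} epochs")
    return 0.95  # placeholder validation accuracy

@dsl.pipeline(name="train-and-evaluate")
def training_pipeline(epochs: int = 10):
    dataset_task = ingest_data()
    train_model(dataset=dataset_task.output, epochs=epochs)

if __name__ == "__main__":
    # Compile to a YAML package that a pipeline server can import.
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")

Once deployed, a model is typically exposed through a model server. Assuming a KServe-style REST endpoint following the v2 inference protocol (the URL, model name, and input shape below are hypothetical), querying it might look like:

# Hypothetical request against a deployed model's REST endpoint.
import requests

response = requests.post(
    "https://my-model.apps.example.com/v2/models/my-model/infer",
    json={"inputs": [{"name": "input-0", "shape": [1, 4],
                      "datatype": "FP32",
                      "data": [5.1, 3.5, 1.4, 0.2]}]},
)
print(response.json())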
Speakers
Jaime Ramírez Castillo | Senior Content Architect, Product and Technical Learning, Red Hat
Jaime is a Senior Content Architect at Red Hat, where he creates developer-focused training on Red Hat products. His work includes courses such as “Developing and Deploying AI/ML Applications on Red Hat OpenShift AI” and “Red Hat OpenShift Developer II: Building and Deploying Cloud-native Applications”. Jaime is also a seasoned software engineer who has previously worked on a variety of Node.js and Python projects. Additionally, Jaime is passionate about emerging technologies and is currently pursuing a PhD in applied machine learning.