AI inference at the edge using OpenShift AI
Many organizations are turning to edge deployments of AI to gain real-time insights from their data. OpenShift AI can be used to serve models that predict failures, detect anomalies, and perform quality inspection in low-latency environments in near real time. This demo shows how a model can be packaged into an inference container image, and how data science pipelines can fetch models and build, test, deploy, and update them within a GitOps workflow. ArgoCD detects changes and updates the image on edge devices as needed. Observability into the model's health and performance is provided by gathering metrics from edge devices and reporting them back to centralized management.
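As a rough illustration of the GitOps piece described above, an ArgoCD `Application` that watches a Git repository and syncs inference-deployment manifests to an edge cluster might look like the sketch below. This is an assumption-laden example, not the demo's actual configuration: the repository URL, paths, and resource names are all placeholders.

```yaml
# Hypothetical ArgoCD Application: syncs inference-deployment manifests
# from Git to an edge cluster. All names, URLs, and paths are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-inference              # placeholder application name
  namespace: openshift-gitops       # namespace where ArgoCD runs on OpenShift
spec:
  project: default
  source:
    repoURL: https://example.com/org/edge-inference-manifests.git  # placeholder repo
    targetRevision: main
    path: manifests/edge            # placeholder path to the deployment manifests
  destination:
    server: https://kubernetes.default.svc   # the target (edge) cluster's API server
    namespace: inference            # placeholder target namespace
  syncPolicy:
    automated:
      prune: true                   # remove resources that were deleted from Git
      selfHeal: true                # revert manual drift on the edge device
```

With `automated` sync enabled, a pipeline that pushes a new inference image tag to the manifests repository is enough to trigger the rollout to the edge, which is the update loop the demo relies on.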
Speakers
Myriam Fentanes Gutierrez | Product Manager, Red Hat
Myriam Fentanes is a Principal Product Manager with over 15 years of experience working with customers in mission-critical verticals. In various capacities, Myriam has helped them on their journey to add intelligence to business applications, from rules and automated processes to AI and foundation models at the edge. Her main focus is always solving the pain points around integrating innovative technologies into existing live ecosystems.
Landon LaSmith | Principal Software Engineer, Red Hat
Landon LaSmith is a Principal Software Engineer at Red Hat. Landon's focus is on integrating OpenShift AI with GitOps tools to deliver reproducible, accelerated AI/ML workflows to distributed edge environments.