Understanding edge-to-core data pipelines to train future AI/ML models

Because much of today's data is generated at the edge, capturing the relevant pieces and transporting them to a cloud environment is vital for training next-generation artificial intelligence and machine learning (AI/ML) models. Building a flexible set of data pipelines to handle this transport becomes critical for any organization in this scenario. In this episode, we’ll cover a simple use case in which collected data is captured, transformed, and transported using an event-driven architecture deployed across edge servers and the cloud environment (a minimal code sketch of this pattern follows the list below). Our presenter will cover:

- Capturing, transforming, and transporting data across edge servers and the cloud environment to train AI/ML models
- Feeding data into a distributed architecture that lets data engineers build flexible data pipelines
- How Red Hat® OpenShift® provides a platform to build and deploy data pipelines and intelligent applications for AI/ML use cases such as edge analytics
- How Red Hat provides architecture design patterns that customers can use to implement edge-to-core data pipelines for AI/ML
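To make the pattern concrete, here is a minimal sketch of the edge side of such a pipeline: a process captures sensor readings, filters and transforms the relevant ones, and transports them to the core as events. It assumes an Apache Kafka broker reachable from the edge server (for example, one exposed by a Kafka cluster running on OpenShift) and the kafka-python client; the broker address, topic name, sensor function, and threshold are all hypothetical placeholders, not part of the webinar content.

```python
# Hypothetical edge-to-core pipeline sketch: capture -> transform -> transport.
import json
import random
import time

from kafka import KafkaProducer  # pip install kafka-python

# Assumption: a Kafka broker reachable from the edge server, e.g. exposed
# by a Kafka cluster running in the cloud-side OpenShift environment.
producer = KafkaProducer(
    bootstrap_servers="edge-gateway.example.com:9092",  # hypothetical address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

TEMPERATURE_THRESHOLD_F = 75.0  # hypothetical relevance filter


def read_sensor() -> dict:
    """Stand-in for a real edge sensor read."""
    return {
        "device_id": "edge-001",
        "temp_f": random.uniform(60.0, 90.0),
        "ts": time.time(),
    }


while True:
    reading = read_sensor()
    # Only events worth training on leave the edge, keeping the
    # transport to the core environment lightweight.
    if reading["temp_f"] > TEMPERATURE_THRESHOLD_F:
        # Transform at the edge before transport.
        reading["temp_c"] = round((reading["temp_f"] - 32) * 5 / 9, 2)
        producer.send("edge.sensor.events", value=reading)  # hypothetical topic
    time.sleep(1.0)
```

On the cloud side, a consumer of the same topic could land these events in storage used for model training, which is the edge-to-core flow the session walks through.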

Presenter

Hugo Guerrero | Senior Principal Product Marketing Manager, Red Hat

Hugo Guerrero is a developer advocate for application programming interfaces (APIs) and event-driven architecture. In this role, he assists organizations by creating, editing, and curating product content shared with the community through webinars, conferences, and other activities. He works on open source software with major private-sector and federal public-sector clients looking to connect and extend their system architectures. Hugo has more than 20 years of experience as a developer, consultant, architect, and software development manager. He is an AsyncAPI ambassador and contributes to open source initiatives such as Microcks, where he maintains the Docker Desktop extension.