AI inferencing for developers and administrators

Developers are increasingly being asked to add artificial intelligence (AI) to existing applications. In this session, we’ll explore open source tools from Red Hat that can help developers and administrators address AI inference, including:

- How to add AI to your application using Podman Desktop and AI recipes.
- How to deploy AI tools to edge devices.
- How to use, develop, and interact with AI models from the command line using RamaLama.
- How to use AI models with containers.

Speakers

Eric Curtin | Principal Software Engineer, Red Hat

Eric Curtin is a Principal Software Engineer at Red Hat. He maintains and contributes to projects including llama.cpp, RamaLama, inotify-tools, OSTree, and the CentOS Automotive SIG repos. Most recently, he has focused on making RamaLama a boring AI inferencing tool.

Stevan Le Meur | Senior Principal Product Manager, Red Hat

Stevan Le Meur is a Red Hat Product Manager focused on developer tools and cloud technologies, currently working on Podman Desktop. Driven by the belief that great applications can only be built in exceptional development environments, Stevan works closely with customers and upstream communities, welcoming opportunities to simplify developers’ lives.

Daniel Walsh | Senior Distinguished Engineer, Red Hat

Daniel Walsh has worked in the computer security field for over 40 years. He joined Red Hat in August 2001 and is a Senior Distinguished Engineer, currently working on the Red Hat AI team. Dan was the lead architect of image mode for Red Hat Enterprise Linux (RHEL) and part of the RHEL for Edge team, concentrating on the Red Hat In-Vehicle Operating System. Before this, he led the Container Runtime Engineering team, focusing on Podman, the CRI-O container runtime for Kubernetes, and Buildah. Dan authored the book Podman in Action. He also led the SELinux project, concentrating on the application space and policy development, and helped develop sVirt (secure virtualization) as well as the SELinux Sandbox.