Red Hat, Inc. announced Podman AI Lab, an extension for Podman Desktop that lets developers build, test and run generative artificial intelligence (GenAI)-powered applications in containers using an intuitive, graphical interface on their local workstation. This helps democratize GenAI, giving developers the convenience, simplicity and cost efficiency of a local developer experience while maintaining ownership and control over sensitive data. The recent surge of GenAI and open source large language models (LLMs) has ushered in a new era of computing that relies heavily on AI-enabled applications, and organizations are moving quickly to establish the expertise, processes and tools they need to remain relevant.

As AI and data science move into mainstream application development, tools like Podman AI Lab can help fuel developer adoption of GenAI for building intelligent applications or enhancing their workflows with AI-augmented development capabilities. AI Lab features a recipe catalog with sample applications that give developers a jump start on some of the more common use cases for LLMs, including:

Chatbots, which simulate human conversation, using AI to comprehend user inquiries and offer suitable responses. These capabilities are often used to augment applications that provide self-service customer support or virtual personal assistance.

Text summarizers, which provide versatile capabilities across many applications and industries, delivering effective and efficient information management. Using this recipe, developers can build applications to assist with content creation and curation, research, news aggregation, social media monitoring, and language learning.

Code generators, which empower developers to concentrate on higher-level design and problem-solving by automating repetitive tasks like project setup and API integration, or by producing code templates.

Object detection, which identifies and locates objects within digital images or video frames. It is a fundamental component of applications including autonomous vehicles, retail inventory management, precision agriculture, and sports broadcasting.

Audio-to-text transcription, which automatically transcribes spoken language into written text, facilitating documentation, accessibility, and analysis of audio content.

These examples provide an entry point for developers, who can review the source code to see how an application is built and learn best practices for integrating their own code with an AI model. For developers, containers have traditionally provided a flexible, efficient and consistent environment for building and testing applications on their desktops without worrying about conflicts or compatibility issues. Today, they are looking for the same simplicity and ease of use for AI models.

Podman AI Lab helps meet this need by giving developers the ability to provision local inference servers, making it easier to run a model locally, get an endpoint, and start writing code to wrap new capabilities around the model. In addition, Podman AI Lab includes a playground environment that allows users to interact with models and observe their behavior. This can be used to test, experiment and develop prototypes and applications with the models.
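To illustrate the "get an endpoint and start writing code" workflow, here is a minimal sketch of a client for a locally provisioned inference server. It assumes the server exposes an OpenAI-compatible chat completions API on localhost; the port and model name are illustrative assumptions, not values from the announcement (the actual port is shown in the Podman AI Lab UI when the service starts).

```python
import json
import urllib.request

# Assumed endpoint of a locally provisioned, OpenAI-compatible
# inference server; the port is a placeholder, not a documented default.
ENDPOINT = "http://localhost:10434/v1/chat/completions"


def build_payload(prompt, model="local-model"):
    """Build an OpenAI-style chat completion request body.

    The model name is a placeholder; use whatever model the local
    server was started with.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt):
    """POST the prompt to the local inference server and return the reply text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice, message content.
    return body["choices"][0]["message"]["content"]
```

Because the endpoint follows a widely used API shape, the same wrapper code can later be pointed at a remote inference service by changing only the endpoint URL.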

An intuitive prompt interface helps developers explore the capabilities and accuracy of various models and find the best model and settings for the use case in their application. As AI becomes more ubiquitous in the enterprise, Red Hat is leading the way in unlocking the potential for AI to drive innovation, efficiency and value through its portfolio of consistent, trusted and comprehensive AI platforms for the hybrid cloud. Podman AI Lab builds on the strength of Podman Desktop, an open source project founded at Red Hat that now has more than one million downloads.

It also offers tight integration with image mode for Red Hat Enterprise Linux, a new deployment method for the world's leading enterprise Linux platform that delivers the operating system as a container image. This integration enables developers to more easily go from prototyping and working with models on their laptop to turning the new AI-infused application into a portable, bootable container that can easily be run anywhere across the hybrid cloud, from bare metal to a cloud instance, using Red Hat OpenShift. For more than 30 years, open source technologies have paired rapid innovation with greatly reduced IT costs and lowered barriers to innovation.
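For context on the image mode integration described above, a bootable container image is typically defined with a Containerfile built on a bootc base image. The sketch below is a hedged illustration: the application name, file paths, and systemd unit are hypothetical, and only the base image reference reflects the actual image mode for RHEL packaging.

```
# Hypothetical Containerfile sketch for image mode for RHEL (bootc).
# Base image: the bootable-container variant of RHEL 9.
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Copy the AI-infused application into the OS image
# (name and destination path are placeholders).
COPY myapp /usr/local/bin/myapp

# Enable a hypothetical systemd unit so the app starts on boot.
COPY myapp.service /usr/lib/systemd/system/myapp.service
RUN systemctl enable myapp.service
```

The resulting image can be built and pushed with the usual podman build/push workflow, then deployed as a bootable OS image rather than a conventional application container.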

Red Hat has been leading this charge for nearly as long, from delivering open enterprise Linux platforms with RHEL in the early 2000s to driving containers and Kubernetes as the foundation for open hybrid cloud and cloud-native computing with Red Hat OpenShift. This drive continues with Red Hat powering AI/ML strategies across the open hybrid cloud, enabling AI workloads to run where data lives, whether in the datacenter, multiple public clouds or at the edge. More than just the workloads, Red Hat's vision for AI brings model training and tuning down this same path to better address limitations around data sovereignty, compliance and operational integrity.

The consistency delivered by Red Hat's platforms across these environments, no matter where they run, is crucial in keeping AI innovation flowing.