Catenaa, Monday, March 02, 2026 - Red Hat unveiled Red Hat AI Enterprise, a new AI platform designed to enable high-performance AI inference, model tuning, and agent deployment across any hardware and environment.
Built on Red Hat OpenShift and leveraging Kubernetes, the platform promises scalable and secure AI operations while supporting existing enterprise tools and workflows.
It aims to help organizations move from fragmented experimentation to repeatable, governed AI deployment.
Red Hat AI 3.3 introduces enhancements across the company’s AI portfolio, offering a “metal-to-agent” stack that integrates Linux infrastructure with advanced agentic and inference capabilities.
The company also launched Red Hat AI Factory, co-engineered with Nvidia, which provides a unified foundation for AI deployments on-premises, in the cloud, or at the edge. The platform covers inference, model tuning, customization, and agent management, with an emphasis on enterprise-grade security and lifecycle governance.
Red Hat executives said the platform addresses the challenges of scaling AI from pilot projects to full enterprise operations.
Chris Wright, CTO of Red Hat, said the solution allows companies to manage AI with the same rigor as core IT platforms, while Joe Fernandes, vice president of Red Hat's AI Business Unit, noted that it bridges the gap between infrastructure and business logic.
Nvidia’s VP of Enterprise AI Platforms, Justin Boitano, added that the co-engineered platform supports agentic AI applications at scale, enabling firms to turn data into actionable intelligence.
IDC predicts that enterprise AI spending could surpass $1 trillion by 2029, driven largely by agentic AI. Red Hat's new offerings position the company to support this expansion while accelerating AI adoption across hybrid cloud environments.
