Red Hat OpenShift AI
Red Hat OpenShift AI is an artificial intelligence platform that runs on top of Red Hat OpenShift and provides tools across the AI/ML lifecycle.

What’s new in Red Hat OpenShift AI
Red Hat OpenShift AI is an artificial intelligence (AI) platform that provides tools to rapidly develop, train, serve, and monitor machine learning models on-site, in the public cloud, or at the edge.

Build smarter with Red Hat OpenShift AI
OpenShift AI gives data scientists and developers a powerful AI/ML platform for building AI-enabled applications. Data scientists and developers can collaborate to move quickly from experiment to production in a consistent environment.
OpenShift AI is available as an add-on cloud service for Red Hat OpenShift Service on AWS or Red Hat OpenShift Dedicated, or as a self-managed software product. It provides an AI platform built on popular open source tooling. Familiar tools and libraries, such as Jupyter, TensorFlow, and PyTorch, along with MLOps components for model serving and data science pipelines, are integrated into a flexible UI.
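As a minimal sketch of the kind of work a data scientist might run in one of these Jupyter workbenches, the following trains a small PyTorch classifier. The model architecture and the synthetic data are illustrative assumptions, not part of OpenShift AI itself.

    import torch
    from torch import nn, optim

    # Synthetic data standing in for a real dataset (illustrative only).
    X = torch.randn(256, 4)
    y = (X.sum(dim=1) > 0).long()

    # A small feed-forward classifier of the kind typically prototyped in a notebook.
    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

    print(f"final training loss: {loss.item():.4f}")
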
Introducing Red Hat AI Inference Server
Deploy your preferred models faster and more cost-effectively across the hybrid cloud with Red Hat AI Inference Server. Its vLLM runtime maximizes inference throughput and minimizes latency. A pre-optimized model repository ensures rapid model serving, while the LLM compressor reduces compute costs without sacrificing accuracy. Experience fast, accurate inference for a wide range of applications.
Red Hat AI Inference Server is included in Red Hat OpenShift AI and Red Hat Enterprise Linux AI and supported on Red Hat OpenShift and Red Hat Enterprise Linux.
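As a rough sketch of what the vLLM runtime looks like from a developer's perspective, the snippet below uses vLLM's offline Python API to generate text. The model identifier and prompt are placeholders; a production deployment through Red Hat AI Inference Server would typically serve the model behind an endpoint rather than loading it in-process like this.

    from vllm import LLM, SamplingParams

    # Placeholder model identifier; substitute the model you intend to serve.
    llm = LLM(model="facebook/opt-125m")

    # Sampling settings chosen for illustration.
    sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=64)

    prompts = ["Summarize the benefits of hybrid cloud inference in one sentence."]
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.outputs[0].text)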
