Red Hat Introduces NVIDIA NIM Integration on Red Hat OpenShift AI

Red Hat, Inc. has announced integration support for NVIDIA NIM microservices on Red Hat OpenShift AI, bringing optimized inference for dozens of artificial intelligence (AI) models to a consistent, open-source hybrid cloud AI/ML platform. With this support, enterprises can use Red Hat OpenShift AI together with NVIDIA NIM, a set of easy-to-use inference microservices included in the NVIDIA AI Enterprise software platform, to accelerate the delivery of generative AI (GenAI) applications and shorten time to value.

NVIDIA NIM support on Red Hat OpenShift AI builds on the existing optimization of NVIDIA AI Enterprise for Red Hat's industry-leading open hybrid cloud technologies, including Red Hat Enterprise Linux and Red Hat OpenShift. As part of this latest collaboration, NVIDIA is contributing to KServe, a Kubernetes-based open source project for highly scalable AI use cases that sits at the core of Red Hat OpenShift AI. This work will ensure interoperability between KServe and NIM, enabling continuous interoperability for NVIDIA NIM microservices in future releases of Red Hat OpenShift AI.
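In practice, KServe interoperability means a NIM microservice could be deployed through KServe's standard InferenceService resource. The sketch below is illustrative only: the service name, container image tag, and secret name are assumptions for the example, not details from the announcement.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llama3-8b-nim            # hypothetical service name
spec:
  predictor:
    containers:
      - name: kserve-container
        image: nvcr.io/nim/meta/llama3-8b-instruct:latest  # illustrative NIM image tag
        env:
          - name: NGC_API_KEY
            valueFrom:
              secretKeyRef:
                name: ngc-api-key-secret   # assumed pre-created pull/API-key secret
                key: NGC_API_KEY
        resources:
          limits:
            nvidia.com/gpu: "1"  # NIM containers require GPU resources
```

Deploying through an InferenceService, rather than a bare Deployment, is what lets OpenShift AI apply the same scaling and monitoring machinery to NIM that it uses for other model servers.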

This integration enables businesses to increase productivity by expanding customer service with virtual assistants, summarizing IT support cases, and accelerating business operations with domain-specific copilots.

By using Red Hat OpenShift AI with NVIDIA NIM, organizations can benefit from:

  • Streamlined integration, allowing NVIDIA NIM to be used in a common workflow alongside other AI offerings for greater consistency and easier management.
  • Integrated scaling and monitoring for NVIDIA NIM deployments, in coordination with other AI model deployments across hybrid cloud environments.
  • Enterprise-grade security, support, and stability, ensuring a smooth transition from prototype to production for businesses that run on AI.

NVIDIA NIM microservices are designed to accelerate GenAI deployment in enterprises. NIM supports a wide variety of AI models, including open source community models, NVIDIA AI Foundation models, and custom models, through industry-standard application programming interfaces (APIs). It offers seamless, scalable AI inference on-premises or in the cloud.
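Because NIM microservices expose OpenAI-compatible, industry-standard APIs, an application can query a deployed model with a plain HTTP request. The sketch below uses only Python's standard library; the endpoint URL and model name are illustrative placeholders, not details from the article.

```python
import json
from urllib import request

# Hypothetical in-cluster endpoint of a NIM microservice on OpenShift AI;
# the URL and model name are illustrative placeholders.
NIM_URL = "http://llama3-8b-nim.example.svc.cluster.local/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completion payload of the kind NIM accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query_nim(payload: dict) -> dict:
    """POST the payload to the NIM endpoint and return the parsed JSON response."""
    req = request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("meta/llama3-8b-instruct", "Summarize this IT ticket.")
print(payload["model"])
```

The same client code works whether the microservice runs on-premises or in the cloud, which is the portability point the paragraph above makes.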

Chris Wright, Red Hat senior vice president and chief technology officer of Global Engineering, commented: “Red Hat’s collaboration with NVIDIA is focused on removing the hurdles and complexities involved in quickly building, managing, and deploying AI-enabled applications. Red Hat OpenShift AI provides a flexible, scalable foundation to extend the reach of NIM microservices, giving developers pre-built containers and industry-standard APIs, all powered by open source innovation.”

NVIDIA Vice President of Enterprise Products Justin Boitano offered the following assessment: “Every enterprise development team wants to get its generative AI applications into production as quickly, safely, and securely as possible. Integrating NVIDIA NIM into Red Hat OpenShift AI marks a new milestone in our collaboration, as it will help developers rapidly build, deploy, and scale modern enterprise applications in any cloud or data center using performance-optimized foundation and embedded models.”
