At its GTC conference, Nvidia today announced Nvidia NIM, a new software platform designed to streamline the deployment of custom and pre-trained AI models into production environments. NIM takes the software work Nvidia has done around inferencing and optimizing models and makes it easily accessible by combining a given model with an optimized inferencing engine and then packing this into a container, making that accessible as a microservice.
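A containerized inference microservice of this kind is typically consumed over a plain HTTP API. As a minimal sketch of what a client request might look like (the endpoint URL, port, path, and model name here are illustrative assumptions, not confirmed NIM specifics):

```python
import json
import urllib.request

# Hypothetical local endpoint exposed by an inference microservice container;
# the host, port, and path below are assumptions for illustration.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "example/model") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion POST request for a local microservice."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The request object could then be sent with urllib.request.urlopen(req)
# against a running container.
req = build_request("Summarize today's GTC announcements.")
```

The appeal of this packaging model is that the application code above stays the same no matter which underlying model or optimized engine sits inside the container.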
Typically, it would take developers weeks — if not months — to ship similar containers, Nvidia argues — and that is if the company even has any in-house AI talent. With NIM, Nvidia clearly aims to create an ecosystem of AI-ready containers that use its hardware as the foundational layer with these curated microservices as the core software layer for companies that want to speed up their AI roadmap.
NIM currently includes support for models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock as well as open models from Google, Hugging Face, Meta, Microsoft, Mistral AI and Stability AI. Nvidia is already working with Amazon, Google and Microsoft to make these NIM microservices available on SageMaker, Kubernetes Engine and Azure AI, respectively. They’ll also be integrated into frameworks like Deepset, LangChain and LlamaIndex.
“We believe that the Nvidia GPU is the best place to run inference of these models on […], and we believe that NVIDIA NIM is the best software package, the best runtime, for developers to build on top of so that they can focus on the enterprise applications — and just let Nvidia do the work to produce these models for them in the most efficient, enterprise-grade manner, so that they can just do the rest of their work,” said Manuvir Das, the head of enterprise computing at Nvidia, during a press conference ahead of today’s announcements.
As for the inference engine, Nvidia will use the Triton Inference Server, TensorRT and TensorRT-LLM. Some of the Nvidia microservices available through NIM will include Riva for customizing speech and translation models, cuOpt for routing optimizations and the Earth-2 model for weather and climate simulations.
The company plans to add additional capabilities over time, including, for example, making the Nvidia RAG LLM operator available as a NIM, which promises to make building generative AI chatbots that can pull in custom data a lot easier.
This wouldn’t be a developer conference without a few customer and partner announcements. Among NIM’s current users are the likes of Box, Cloudera, Cohesity, DataStax, Dropbox and NetApp.
“Established enterprise platforms are sitting on a goldmine of data that can be transformed into generative AI copilots,” said Jensen Huang, founder and CEO of NVIDIA. “Created with our partner ecosystem, these containerized AI microservices are the building blocks for enterprises in every industry to become AI companies.”