NVIDIA Brings Business Intelligence to Chatbots, Copilots and Summarization Tools With Enterprise-Grade Generative AI Microservice

“Generative AI applications with RAG capabilities are the next killer app of the enterprise,” said Jensen Huang, founder and CEO of NVIDIA. “With NVIDIA NeMo Retriever, developers can create customized generative AI chatbots, copilots and summarization tools that can access their business data to transform productivity with accurate and valuable generative AI intelligence.”

Cadence serves companies across hyperscale computing, 5G communications, automotive, mobile, aerospace, consumer and healthcare markets. It is working with NVIDIA to develop RAG features for generative AI applications in industrial electronics design.

“Generative AI introduces innovative approaches to address customer needs, such as tools to uncover potential flaws early in the design process,” said Anirudh Devgan, president and CEO of Cadence. “Our researchers are working with NVIDIA to use NeMo Retriever to further boost the accuracy and relevance of generative AI applications to reveal issues and help customers get high-quality products to market faster.”

Unlike open-source RAG toolkits, NeMo Retriever supports production-ready generative AI with commercially viable models, API stability, security patches and enterprise support.

NVIDIA-optimized algorithms power the high-accuracy embedding models at the core of NeMo Retriever. These embedding models map words and passages into numerical vectors that capture semantic relationships, enabling LLMs to find and analyze the textual data most relevant to a query.
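To illustrate the idea behind embedding-based retrieval, here is a minimal sketch using a toy bag-of-words "embedding" and cosine similarity. The real NeMo Retriever models produce learned dense vectors; the vectorizer, documents and query below are purely illustrative stand-ins.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding": token counts stand in for the dense
    # vectors a trained embedding model would produce.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity: the standard relevance score between embeddings.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "quarterly revenue grew in the enterprise segment",
    "the office cafeteria menu changes weekly",
]
query = "enterprise revenue growth"

# Rank documents by similarity to the query; the semantically related
# revenue document scores highest.
scores = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(scores[0])
```

The same ranking step, run over learned embeddings and a vector index at scale, is what lets a retriever surface the right business data for an LLM.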

Using NeMo Retriever, enterprises can connect their LLMs to multiple data sources and knowledge bases, so that users can easily interact with data and receive accurate, up-to-date answers using simple, conversational prompts. Businesses using Retriever-powered applications can give users secure access to information spanning numerous data modalities, such as text, PDFs, images and videos.
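The overall flow just described — retrieve relevant snippets from a knowledge base, then ground the LLM's prompt in them — can be sketched as follows. This is a hypothetical pipeline for illustration, not NeMo Retriever's API: the keyword-overlap ranking stands in for dense embedding similarity, and the knowledge base and query are invented.

```python
def retrieve(query: str, knowledge_base: list[str], top_k: int = 1) -> list[str]:
    # Rank knowledge-base snippets by naive keyword overlap with the query.
    # A production retriever would use embedding similarity over a vector
    # index instead.
    q = set(query.lower().split())
    ranked = sorted(knowledge_base,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, snippets: list[str]) -> str:
    # Ground the LLM prompt in retrieved business data so the generated
    # answer stays accurate and up to date.
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Q3 revenue was $2.1M, up 12% year over year.",
    "The annual offsite is scheduled for March.",
]
question = "What was Q3 revenue?"
prompt = build_prompt(question, retrieve(question, kb))
print(prompt)  # grounded prompt, ready to send to an LLM
```

The grounded prompt would then be passed to the connected LLM, which answers from the retrieved context rather than from its training data alone.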

Enterprises can use NeMo Retriever to achieve more accurate results with less training, speeding time to market and supporting energy efficiency in the development of generative AI applications.

Companies can deploy NeMo Retriever-powered applications to run during inference on NVIDIA-accelerated computing on virtually any data center or cloud. NVIDIA AI Enterprise supports accelerated, high-performance inference with NVIDIA NeMo, NVIDIA Triton Inference Server™, NVIDIA TensorRT™, NVIDIA TensorRT-LLM and other NVIDIA AI software.

To maximize inference performance, developers can run their models on NVIDIA GH200 Grace Hopper Superchips with TensorRT-LLM software.

Developers can sign up for early access to NVIDIA NeMo Retriever.