Fully Connected — Weights & Biases, the AI developer platform, today announced an expanded integration with NVIDIA NIM microservices to enable enterprises to build custom AI applications and optimize inference for production. Building on the initial NIM integration unveiled last month at NVIDIA GTC, the additional customization capabilities announced today, currently in private preview, give developers using the Weights & Biases platform a more comprehensive and accessible way to customize and deploy domain-specific, enterprise-grade AI applications.
“Enterprises today want to use LLMs to deploy custom applications trained on their own data, such as a customer support agent that quickly and correctly answers customers’ questions,” said Lukas Biewald, CEO at Weights & Biases. “Our expanded integration with NVIDIA NIM will give our customers the higher LLM performance they want with models that have been fine-tuned on their business data and optimized for performance and low latency. Enterprise AI also becomes faster to deploy since we’re closing the operational gap between training and inference.”
“Across industries, businesses are seeking an engine to supercharge their generative AI strategies,” said Manuvir Das, vice president of enterprise computing at NVIDIA. “Weights & Biases’ integration with NVIDIA NIM marks the debut of a massive boost in efficiency for AI development and deployment for companies adding enterprise-grade generative AI to their operations.”
Speed and Scale Development and Deployment with Weights & Biases and NVIDIA NIM
The Weights & Biases AI platform provides a comprehensive suite of tools for machine learning experiment tracking, model optimization, and automated model management with model lineage and dataset versioning. NVIDIA NIM microservices expand on these capabilities, offering optimized inference for more than two dozen popular AI models from NVIDIA and its partner ecosystem and enabling developers to run custom AI models across clouds, data centers, workstations and PCs.
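As an illustration of the experiment tracking and dataset versioning described above, here is a minimal sketch using the public `wandb` Python SDK; the project name, metric values, and data file are placeholders for this example, not part of the announced integration.

```python
import json
import wandb

# Start a tracked run; the project name and config values are placeholders.
run = wandb.init(project="support-agent-finetune", config={"lr": 1e-5, "epochs": 3})

# Log metrics per epoch (a stand-in loss curve instead of real training).
for epoch in range(run.config["epochs"]):
    run.log({"epoch": epoch, "loss": 1.0 / (epoch + 1)})

# Version the training data as an artifact so the run records exactly what it used.
with open("tickets.jsonl", "w") as f:
    f.write(json.dumps({"question": "How do I reset my password?"}) + "\n")

dataset = wandb.Artifact("support-tickets", type="dataset")
dataset.add_file("tickets.jsonl")
run.log_artifact(dataset)

run.finish()
```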
Together, the Weights & Biases AI platform and NVIDIA NIM give developers a more comprehensive and simpler path to building generative AI applications with enterprise-level support, security and stability. NIM is included with the NVIDIA AI Enterprise software platform for the development and deployment of production-grade generative AI applications.
W&B Launch, part of the Weights & Biases AI developer platform, lets users seamlessly deploy AI workloads across compute clusters, scaling training up and out without infrastructure friction or permission sprawl. W&B Launch supports access to the high-performance compute required for building and deploying AI models on AWS, Google Cloud, and Microsoft Azure.
The Weights & Biases private preview of the NVIDIA NIM integration allows developers to customize a broader variety of AI models, including NVIDIA AI Foundation models and custom models. It enables seamless, scalable AI inference, on premises or in the cloud, through industry-standard APIs.
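NIM microservices expose an OpenAI-compatible REST API, so a deployed endpoint can be queried with standard client libraries. The following is a minimal sketch, assuming a NIM container serving a Llama 3 model locally on port 8000; the base URL and model name are illustrative defaults rather than specifics from this announcement.

```python
from openai import OpenAI

# Point the OpenAI client at a locally deployed NIM endpoint (assumed URL).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # example NIM model name
    messages=[{"role": "user", "content": "How do I reset my password?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```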
This new integration between Weights & Biases and NVIDIA NIM will allow enterprise developers to:
- Create a custom NIM with an optimized model and deploy it in the customer's infrastructure
- Track lineage of the optimized model for version control (a brief sketch follows this list)
- Enhance LLM performance through domain adaptation and optimization, delivering superior output quality and reduced latency
- Accelerate deployment timelines by bridging the gap between training and inference processes
- Implement enterprise-grade security measures, ensuring data protection and compliance, even with custom models
- Access versatile deployment options, as both Weights & Biases and NIM are infrastructure-agnostic, supporting cloud and on-premises high-performance compute environments alike
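As a sketch of the lineage tracking mentioned above, an optimization job can declare the fine-tuned checkpoint as an input with `use_artifact` and log the optimized model as a new versioned artifact, so each NIM-deployed version traces back to the checkpoint that produced it. The artifact names, paths, and optimization step below are hypothetical.

```python
import wandb

# An optimization job that records lineage from base checkpoint to optimized model.
run = wandb.init(project="support-agent-finetune", job_type="optimize")

# Declare the fine-tuned checkpoint as an input; W&B records the dependency.
base = run.use_artifact("fine-tuned-llm:latest")  # hypothetical artifact name
base_dir = base.download()

# ... build the optimized engine for NIM here (out of scope for this sketch) ...

# Log the optimized model as a new, versioned artifact derived from the input.
optimized = wandb.Artifact("nim-optimized-llm", type="model")
optimized.add_dir("optimized_model/")  # hypothetical output directory
run.log_artifact(optimized)
run.finish()
```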
Availability
To discover more about the Weights & Biases AI developer platform and its new integration with NVIDIA NIM, currently in private preview, please sign up at wandb.me/nim.
The Weights & Biases AI developer platform integrates seamlessly with NVIDIA NIM microservices, which developers can also experiment with at ai.nvidia.com. For enterprises looking to deploy production-grade NIM microservices, access is available through NVIDIA AI Enterprise on leading cloud platforms, ensuring enhanced performance and compatibility within the Weights & Biases ecosystem.
About Weights & Biases
Weights & Biases is the leading AI developer platform supporting end-to-end MLOps and LLMOps workflows. It is used by more than 30 foundation model builders and 1,000 companies, including teams at OpenAI, Toyota, and Microsoft, to productionize machine learning at scale. Weights & Biases is part of the new standard of best practices for machine learning.
View source version on businesswire.com: https://www.businesswire.com/news/home/20240418060357/en/