Snowflake and Nvidia have partnered to give businesses a platform for building customized generative artificial intelligence (AI) applications within the Snowflake Data Cloud using their own proprietary data. The announcement came today at Snowflake Summit 2023.
Integrating Nvidia’s NeMo platform for large language models (LLMs) and its GPU-accelerated computing with Snowflake’s capabilities will let enterprises use the data in their Snowflake accounts to develop LLMs for generative AI services such as chatbots, search and summarization.
Manuvir Das, Nvidia’s head of enterprise computing, told VentureBeat that this partnership distinguishes itself from others by enabling customers to customize their generative AI models in the cloud to meet their specific enterprise needs. They can “work with their proprietary data to build … leading-edge generative AI applications without moving them out of the secure Data Cloud environment. This will reduce costs and latency while maintaining data security.”
Jensen Huang, founder and CEO of Nvidia, emphasized the importance of data in developing generative AI applications that understand each company’s unique operations and voice.
“Together, Nvidia and Snowflake will create an AI factory that helps enterprises turn their valuable data into custom generative AI models to power groundbreaking new applications — right from the cloud platform that they use to run their businesses,” Huang said in a written statement.
According to Nvidia, the collaboration will provide enterprises with new opportunities to utilize their proprietary data, which can range from hundreds of terabytes to petabytes of raw and curated business information. They can use this data to create and refine custom LLMs, enabling business-specific applications and service development.
Streamlining generative AI development through the cloud
Nvidia’s Das asserted that enterprises using customized generative AI models trained on their proprietary data will maintain a competitive advantage over those relying on vendor-specific models.
He said that employing fine-tuning or other techniques to customize LLMs produces a personalized AI model that enables applications to leverage institutional knowledge — the accumulated information pertaining to a company’s brand, voice, policies, and operational interactions with customers.
“One way to think about customizing a model is to compare a foundational model’s output to a new employee that just graduated from college, compared to an employee who has been at the company for 20+ years,” Das told VentureBeat. “The long-time employee has acquired the institutional knowledge needed to solve problems quickly and with accurate insights.”
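For readers who want a concrete picture of what “customizing” a model can look like in practice, the minimal sketch below fine-tunes a small open-source causal language model on a company’s exported text. It is purely illustrative: the base model, file path and hyperparameters are hypothetical placeholders, and it uses the generic Hugging Face stack rather than the Nvidia NeMo tooling described in this announcement.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # small stand-in for a pretrained foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Proprietary documents exported as plain text (hypothetical file).
data = load_dataset("text", data_files={"train": "company_docs.txt"})
tokenized = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="custom-llm",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized["train"],
    # mlm=False means a standard next-token (causal) language-modeling objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                   # adapts the base model to the company's text
trainer.save_model("custom-llm")  # the "customized" model applications would use
```

The same idea scales up in the Nvidia/Snowflake scenario, where the base model is far larger and training runs on GPU-accelerated infrastructure next to the data rather than on exported files.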
Creating an LLM involves training a predictive model using a vast corpus of data. Das said that to achieve optimal results, it is essential to have abundant data, a robust model and accelerated computing capabilities. The new collaboration encompasses all three factors.
“More than 8,000 Snowflake customers store exabytes of data in Snowflake Data Cloud. As enterprises look to add generative AI capabilities to their applications and services, this data is fuel for creating custom generative AI models,” said Das. “Nvidia NeMo running on our accelerated computing platform and pre-trained foundation models will provide the software resources and compute inside Snowflake Data Cloud to make generative AI accessible to enterprises.”
Nvidia’s NeMo is a cloud-native enterprise platform that empowers users to build, customize and deploy generative AI models with billions of parameters. Snowflake intends to host and run NeMo within the Snowflake Data Cloud, allowing customers to develop and deploy custom LLMs for generative AI applications.
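As a rough illustration of the end result, the hypothetical snippet below queries a customized model (such as the one produced in the earlier sketch) for a summarization-style task. The model path, prompt and serving approach are placeholders and do not represent the actual NeMo or Snowflake APIs.

```python
from transformers import pipeline

# Load the customized model saved by the fine-tuning sketch above.
generator = pipeline("text-generation", model="custom-llm")

prompt = (
    "Summarize the key points of the support ticket below in two sentences:\n"
    "<ticket text goes here>\n\nSummary:"
)
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```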
“Data is the fuel of AI,” said Das. “By creating custom models using their data on Snowflake Data Cloud, enterprises will be able to leverage the transformative potential of generative AI to advance their businesses with AI-powered applications that deeply understand their business and the domains they operate within.”
What’s next for Nvidia and Snowflake?
As part of the collaboration, Nvidia also committed to providing accelerated computing and a comprehensive suite of AI software. The company said substantial co-engineering efforts are underway to integrate the Nvidia AI engine into Snowflake’s Data Cloud.
Das said that generative AI is one of the most transformative technologies of our time, potentially impacting nearly every business function.
“Generative AI is a multi-trillion-dollar opportunity and has the potential to transform every industry as enterprises begin to build and deploy custom models using their valuable data,” said Das. “As a platform company, we are currently helping our partners and customers leverage the power of AI to solve humanity’s greatest problems with accelerated computing and full-stack software designed to serve the unique needs of virtually every industry.”