Written by Anna Tong and Stephen Nellis
SAN JOSE (Reuters) – Nvidia on Monday expanded its software offerings with tools aimed at making it easier for companies to incorporate artificial intelligence systems into their operations.
Joel Hellermark, chief executive of enterprise AI assistant maker Sana, emphasized that the release expands Nvidia's presence in inference, the stage of AI execution where the company's chips do not dominate the market.
Nvidia is best known for providing chips used to train so-called foundational models, such as OpenAI's GPT-4. Training involves ingesting large amounts of data and is primarily done by large technology companies focused on AI.
Businesses of all sizes are now scrambling to incorporate these foundational models into their operations, which can be complex. The Nvidia tools released on Monday are designed to make it easy to modify and run a variety of AI models on Nvidia hardware.
“It's like buying a ready-made meal instead of going out and buying the ingredients yourself,” said Ben Metcalfe, a venture capitalist who founded Monochrome Capital.
“Google, DoorDash, and Uber can do all this on their own, but now that Nvidia has more GPUs available, it needs to enable more companies to extract value from them,” he said. Even less tech-savvy companies can use such “prepared recipes” to get a system up and running, he added.
For example, ServiceNow, a company that provides software used by technical support staff within large enterprises, said it used Nvidia's tools to create a “copilot” that helps companies solve IT problems.
Nvidia has some big-name partners for the new tools. Microsoft, Alphabet's Google, and Amazon will offer them as part of their cloud computing services, and companies offering models include Google, Cohere, Meta, and Mistral. But OpenAI, whose financial backer is Microsoft, and Anthropic, among the largest providers of underlying models, are noticeably missing from the list.
Nvidia's tools also offer the potential for increased revenue for the chipmaker. They are part of an existing software suite that costs $4,500 per Nvidia chip per year when used in private data centers and $1 per chip per hour in cloud data centers.
(Reporting by Stephen Nellis in San Jose and Anna Tong in San Francisco; Editing by Peter Henderson and Leslie Adler)