Welcome to insideBIGDATA AI News Briefs. This timely new feature provides the latest industry insights and perspectives on AI areas such as deep learning, large language models, generative AI, transformers, and more. We work tirelessly to unearth the most timely and interesting information underlying the day's most popular technologies. We recognize that this field is evolving rapidly and want to provide a regular resource to keep you up to date with the latest developments. Enjoy!
Chaotic OpenAI
The turmoil at industry high-flyer OpenAI began last Friday (November 17, 2023) when the company's board fired CEO and co-founder Sam Altman and named chief technology officer (CTO) Mira Murati as interim CEO. The move shocked the industry and prompted employees and investors to push for Altman's return. The board approached former GitHub CEO Nat Friedman to take over as CEO, but he declined the offer; shortly after, former Twitch CEO Emmett Shear agreed to take on the role. Early Monday morning, before the stock market opened, Microsoft announced it was hiring Altman and several others to spearhead a new advanced AI research group. The drama didn't end there!
“Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company said. “The board no longer has confidence in his ability to continue leading OpenAI.”
By Monday afternoon, about 710 of OpenAI's 770 employees had revolted, signing a letter calling for the board members to resign and demanding the reinstatement of Sam Altman and former president Greg Brockman. The letter said employees were prepared to follow Altman and Brockman to Microsoft if the two did not return. Board member and chief scientist Ilya Sutskever expressed regret for taking part in Altman's dismissal. Microsoft stock hit an all-time high on Monday after the Altman hiring announcement, adding more than $115 billion in value compared to Friday's market low.
The upheaval continued Monday, with a majority of the company's 24 key leaders either signing the letter or resigning. The letter criticized the board for undermining OpenAI's mission and its ability to oversee the development of AI technology. The board has declined to share the circumstances that led to Altman's firing. Rumor has it there was tension between Altman and the OpenAI board over AI safety, the pace of technology development, and the company's commercialization strategy.
Altman's side ventures are believed to have contributed to the rift with the board. Before his ouster, he had been raising money in the Middle East for a new chip venture to rival NVIDIA, and he was also working with former Apple design chief Jony Ive to raise funding for an AI-powered hardware device.
Fast forward to Tuesday, when Salesforce's Marc Benioff announced that his company would immediately match the full cash and equity compensation of any OpenAI researcher who had submitted a resignation and wanted to join the Salesforce Einstein Trusted AI research team, led by Silvio Savarese. Benioff is currently soliciting resumes. A similar offer was extended by Microsoft CTO Kevin Scott to OpenAI employees willing to join Altman at Microsoft's new AI research lab. Meanwhile, OpenAI's board appears to have approached competitor Anthropic about merging the two companies in an effort to shore up its capabilities. According to Steve Sloan of Menlo Ventures (an investor in Anthropic), all signs seemed to point to Altman returning to OpenAI by Wednesday.
“I think startups that have been coasting as wrappers or relying on OpenAI with advanced integrations could be at risk or at the mercy of Microsoft if the company goes out of business,” says Sarah Nagy, CEO and co-founder of Seek AI, who adds that building private models could create a world where only companies with advanced AI teams win.
Speculation in Silicon Valley suggests the board believed Altman was moving too quickly and placing too much emphasis on rapid commercial deployment, at odds with OpenAI's nonprofit mission of AGI “for the benefit of humanity” and with widespread concerns about the safety of consumer products. The push toward AGI may have been the underlying source of the conflict. Altman has told Congress that he takes the existential risks posed by AI seriously, but actions speak louder than words, and there may also have been political pressure on the board to rein him in if government officials weren't buying it.
Coming full circle late Tuesday night, OpenAI announced in a tweet that it had reached an agreement in principle for Sam Altman to return as CEO, with a new initial board consisting of Bret Taylor (chair), Larry Summers, and Adam D'Angelo. Phew! What a whirlwind.
Kyutai raises $330 million to open source everything
Based in Paris, Kyutai is a non-profit research lab dedicated entirely to open research in AI. Its aim is to develop large multimodal models that work not only with text but also with audio, images, and other modalities, and to invent new algorithms to improve their capability, reliability, and efficiency. To do this, the lab uses computing power provided by Scaleway, a subsidiary of the iliad Group, whose supercomputers offer some of the highest-performance AI computing capacity in Europe.
At a time when AI applications are being deployed across Europe, and working resolutely toward the democratization of AI, Kyutai has positioned itself as a champion of open science in AI. Its goal is to share its progress with the entire AI ecosystem: the scientific community, developers, businesses, civil society, and public decision-makers. Kyutai will also contribute to developing future AI experts by offering internships for master's students and supervising doctoral students and postdocs.
Microsoft adds generative AI models to Azure model catalog
Microsoft has introduced new models to the Azure AI model catalog, including Nemotron-3 8B, Code Llama, and Mistral. The company also introduced “Models as a Service” (MaaS), which makes it easier to integrate and customize AI models.
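As a rough illustration of what MaaS-style consumption can look like, here is a minimal sketch of calling a hosted model through a plain REST endpoint from Python. The endpoint URL, environment variable names, and payload fields are illustrative assumptions, not the documented Azure API; consult the official Azure AI documentation for the actual request schema.

```python
# Minimal sketch of calling a hosted "Models as a Service" endpoint over REST.
# The URL, headers, and payload schema below are illustrative assumptions,
# not the exact Azure AI contract -- check the official docs for specifics.
import os
import requests

ENDPOINT = os.environ["MAAS_ENDPOINT"]  # hypothetical: deployment-specific inference URL
API_KEY = os.environ["MAAS_API_KEY"]    # hypothetical: key issued when the model is deployed

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this week's AI news in two sentences."},
    ],
    "max_tokens": 200,
    "temperature": 0.7,
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```

The appeal of the MaaS model is that the provider hosts and scales the model; the consumer only manages an endpoint and a key, as sketched above.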
NVIDIA Introduces Nemotron-3 8B to Revolutionize Enterprise AI Development
NVIDIA introduced the Nemotron-3 8B family, a set of generative foundation models within the NeMo framework. This family of models is designed to power enterprise AI applications and includes base, chat, and Q&A checkpoints. The Nemotron-3 8B models are available in the Azure AI model catalog, on Hugging Face, and in the NVIDIA AI Foundation Models hub, and can be fine-tuned for custom use cases.
NVIDIA's NeMo framework allows enterprises to quickly deploy AI applications across a variety of environments. The Nemotron-3 8B family gives developers an easy way to take the base models, or fine-tuned versions of them, and adapt them to company-specific requirements.
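For a sense of what working with an 8B-class checkpoint looks like in practice, here is a minimal text-generation sketch using the Hugging Face transformers library. The model ID is a placeholder: published Nemotron-3 8B checkpoints are distributed in NeMo format and may require NVIDIA's NeMo tooling rather than loading directly with transformers, so treat this as a generic pattern rather than NVIDIA's documented workflow.

```python
# Minimal sketch: load a causal-LM checkpoint and generate text with transformers.
# "some-org/some-8b-chat-model" is a hypothetical ID; actual Nemotron-3 8B
# checkpoints ship in NeMo format and may need NVIDIA's NeMo tooling instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-8b-chat-model"  # placeholder, transformers-compatible checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what a foundation model is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```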
Trillion Parameter Consortium launches with dozens of founding partners around the world
A global consortium of scientists from federal laboratories, research institutes, academia, and industry has been formed to address the challenges of building large-scale artificial intelligence (AI) systems and advancing trustworthy and reliable AI for scientific discovery.
The Trillion Parameter Consortium (TPC) brings together teams of researchers engaged in creating large-scale generative AI models to address key challenges in advancing AI for science. These challenges include developing scalable model architectures and training strategies, organizing and curating scientific data for model training, optimizing AI libraries for current and future exascale computing platforms, and developing deep evaluation platforms to assess progress on scientific task learning, reliability, and trust.
TPC represents a practical approach to overcoming existing limitations in AI model training and data processing. By focusing on optimizing AI libraries for exascale computing and developing effective evaluation methods, the consortium addresses some of the key technical challenges in applying AI to scientific research.
SambaNova comments on NVIDIA chips and disruptors
Ahead of Q3 earnings and the launch of NVIDIA's H200 chip (scheduled to be available by next summer), Rodrigo Liang, CEO and co-founder of SambaNova, commented:
“While NVIDIA continues to perform well and the demand for AI is clearly surging, end users are demanding choice and suddenly have more options for training and running generative AI models. For example, Microsoft just announced two alternatives to NVIDIA's chips for the Azure cloud.
As AI workloads become more prevalent, larger, and more complex, new chips, models, and solutions are emerging to help customers run their workloads more efficiently. As the market shows that people want more innovation to choose from, lanes will begin to emerge within the industry where competitors with disruptive technologies offer customers more choice and expertise.
In a technology that NVIDIA CEO Jensen Huang himself has said will be bigger than the internet, innovators will come out on top, and there are plenty of competitors looking to follow in NVIDIA's footsteps.”
Additionally, with a chip shortage looming, NVIDIA isn't the only option. SambaNova Systems, a maker of purpose-built, full-stack AI platforms, recently announced its latest-generation chip, the SN40L. This “truly intelligent” chip combines dense and sparse compute with both large and fast memory. Here are the details on the SN40L:
- Serves a 5-trillion-parameter model, with sequence lengths of 256k+ possible on a single system node, enabling higher-quality models with faster inference and training at a lower total cost of ownership.
- Greater memory capacity unlocks true multimodal capabilities for LLMs, allowing businesses to easily search, analyze, and generate data across modalities.
- Runs LLM inference more efficiently, reducing the total cost of ownership (TCO) of AI models.
AI Ready
As competition from Microsoft intensifies, Amazon plans to offer free “AI Ready” courses to teach AI skills to 2 million people by 2025.
Sign up for the free insideBIGDATA newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideBIGDATANOW