After OpenAI launched ChatGPT in late 2022, 2023 proved to be a watershed year for AI, marked by advances in generative AI, fierce competition, and growing concerns about ethics and safety.
The sector's rapid growth this year has brought both technological innovations and significant challenges. From a change in leadership at OpenAI to challenges from new players like Google's Gemini and Anthropic's Claude, the year saw many big changes in the generative AI landscape. Alongside these developments, the industry grappled with cybersecurity risks and debated the ethical implications of fast-paced advances in AI.
OpenAI fires and rehires CEO Sam Altman
One of the most surprising pieces of news this year was the abrupt firing of OpenAI co-founder and CEO Sam Altman by the company's board on November 17, which cited a lack of candor in his communications. Shortly after Altman's departure, Microsoft announced it would hire him and OpenAI president and co-founder Greg Brockman to lead a new AI research division.
Altman's departure and the tumultuous period that followed sparked widespread backlash within OpenAI, with 95% of employees threatening to resign in protest of the board's decision. Within a week of the initial firing, OpenAI reinstated Altman as CEO following extensive board negotiations and overwhelming employee support.
While Altman's return was a relief to many, the episode also highlighted fundamental challenges within OpenAI: the tension between its dual incentives as a profit-driven and mission-driven organization, and the extent to which the company's viability is tied to Altman himself. These events, along with the appointment of new business-minded board members such as former Harvard University president Larry Summers and former Salesforce co-CEO Bret Taylor, raise questions about the company's future direction.
ChatGPT competitors emerge
ChatGPT kicked off the generative AI hype when it launched in November 2022, and OpenAI dominated the headlines again this year. But 2023 also saw the rise of many competitors.
While its DeepMind lab has historically been an AI pioneer, Google initially lagged in generative AI, with its Bard chatbot plagued by inconsistencies and hallucinations after its launch earlier this year. However, the company's outlook could change in 2024 following the release of its multimodal model Gemini earlier this month. Gemini, which Google has said will power Bard and other Google applications, integrates text, image, audio and video capabilities and could reinvigorate Google's position in the generative AI field.
Meanwhile, Anthropic, an AI startup founded by former OpenAI staff, launched Claude 2, a large language model that aims to address security and data privacy concerns while performing at levels competitive with ChatGPT. Features such as the ability to analyze large files and Anthropic's safety-focused approach set Claude apart from rivals like ChatGPT and Bard.
And IBM rebranded its long-standing Watson AI system and entered the fray with Watsonx, a generative AI platform targeted at enterprise needs, with a focus on data governance, security and model customization. But despite its differentiated approach and focus on hybrid cloud, IBM must overcome challenges related to speed to market and competition from both startups and established technology giants.
Open source AI becomes increasingly useful
In addition to the wide range of commercial options, the open source AI landscape is also expanding. Open source AI models offer an alternative to the generative AI services of large cloud providers and allow businesses to customize models using their own data. Training and customizing open source models offers greater control and potential cost savings, but it can also present challenges for enterprises.
In February, AWS partnered with Hugging Face, a prominent hub for open source AI models. The partnership makes training, fine-tuning and deploying LLMs and vision models more accessible, and represents Amazon's strategic response to the generative AI moves of competitors Microsoft and Google. In turn, the partnership gives Hugging Face access to AWS' extensive infrastructure and developer ecosystem.
Also in February, Meta entered the generative AI market with its own LLM, Llama. Llama was originally intended for research use under a noncommercial license and was designed to be a smaller, easier-to-manage foundation model. However, despite Meta's plans to restrict access to academics, government agencies and other research institutions, Llama was leaked online shortly after its release.
Meta's release of the upgraded Llama 2 in July marked a significant development in the generative AI market as an open source LLM available for both research and commercial purposes. Meta partnered with Microsoft to make Llama 2 available in the AI Model Catalog on Azure, and optimized the model for Windows, increasing its enterprise appeal.
OpenAI expands its products and commercial footprint
Following the highly successful release of ChatGPT in 2022, OpenAI introduced several new products in 2023. Some of the most notable include:
- The introduction of paid tiers: ChatGPT Plus for individuals and small teams in February, followed by an Enterprise tier for large organizations in August. Both offer increased service availability and advanced features such as plugins and internet browsing.
- An upgrade to OpenAI's flagship LLM in March. GPT-4 is a multimodal version of the GPT model with superior performance compared to its predecessor, GPT-3.5, which powers the free version of ChatGPT.
- A new data privacy feature for ChatGPT in April: an option for users to disable chat history, preventing OpenAI from using their conversations to retrain AI models.
- The integration of OpenAI's image generation model Dall-E 3 into ChatGPT Plus and Enterprise in October.
- Several announcements at OpenAI's first Dev Day conference in November. These included GPT-4 Turbo, a cheaper version of GPT-4 with a larger context window, and the launch of GPTs, customizable versions of ChatGPT that users can tailor to specific tasks without writing code.
Concerns emerge regarding AI safety and security
As generative AI gained traction in 2023, the debate over AI safety and security intensified. Popular media frequently highlighted concerns about artificial general intelligence (AGI), a hypothetical form of AI with capabilities that match or exceed human intelligence.
Turing Award winner Geoffrey Hinton left Google this year, citing concerns about AI safety. "Things like GPT-4 know a lot more than we do," he said at MIT Technology Review's EmTech Digital 2023 conference in May. His comments echoed similar concerns in a widely circulated open letter in March advocating a moratorium on AI development, which warned that AI systems that compete with humans could pose profound risks to society and questioned whether we risk losing control of our civilization.
However, many other AI researchers and ethicists argue that these existential risk concerns are hyperbolic because AGI remains speculative. It is currently unclear whether this technology will be realized. In this view, focusing on AGI distracts from current concrete issues such as algorithmic bias and the use of existing AI systems to generate harmful content. There is also a competitive element to the AGI debate, which serves the interests of large AI companies by presenting AI as too powerful a technology to safely extend access to smaller companies.
Among the dangers posed by existing AI, one of the most prominent is cybersecurity risk, such as ChatGPT's ability to increase the success and prevalence of phishing scams. Chester Wisniewski, director and global field CTO at security software and hardware vendor Sophos, explained in an interview earlier this year how easily ChatGPT can be manipulated for malicious purposes.
"[ChatGPT is] a lot better at writing phishing lures than real humans are, at least the humans who are creating the phishing lures," he told TechTarget editor Esther Ajao in January. "Most of the people creating phishing attacks don't have advanced English skills, so they aren't as successful at compromising people. What really concerns me is how the social aspects of ChatGPT could be used by people to attack us."
Lev Craig is the site editor for TechTarget Enterprise AI, covering AI and machine learning.