It was a busy week in AI news, with new technological developments bringing new risks and ongoing federal regulatory discussions.
Read on to find out what to look out for in the ever-changing AI landscape.
Tools and Advancements
ChatGPT users will soon be able to test new features following the recent announcement of GPT-4o, the successor to the GPT-4 model, reports The New York Times. The new model is natively multimodal, capable of analyzing and generating text, audio and images, and of responding at something close to conversational speed. The technology will make it easier for communicators to build products such as chatbots, digital assistants, search engines, and image generators.
Meanwhile, OpenAI has also come under fire for a controversial addition to its chatbot: a new voice option called “Sky,” which reads responses aloud and closely resembles the voice actress Scarlett Johansson performed in the 2013 film “Her.” OpenAI CEO Sam Altman released a statement saying that Sky’s voice is not Johansson’s and that the company never intended to imitate her, despite his own past social media posts suggesting otherwise.
This incident highlights the growing risk AI poses to personal likenesses and to an organization’s intellectual property. Work with your legal and IT teams to develop a plan for mitigating those risks, including how you would respond and communicate if a problem occurs.
The Verge reports that Apple is exploring new ways to integrate AI across its systems, with reported features including transcription, auto-generated emoji and improved search capabilities. The Voice Memos app is also rumored to be getting an AI upgrade that can record interviews and generate transcripts of presentations. Apple is reportedly set to unveil a “Smart Recap” feature as well, which will summarize missed texts, notifications, web pages, news, and other media. For busy people, the upgrade will be a convenient way to stay informed while minimizing notification “noise.”
According to Bloomberg, Apple has also inked a deal with OpenAI to integrate ChatGPT into iOS 18. Partnerships with Google and Anthropic are rumored as well, so your favorite chatbot could be built in soon.
Risks and Regulations
Google’s new “AI Overview” feature has served users misleading search results: According to an NBC News investigation, queries like “How many legs does an elephant have” and “How many Muslim presidents has the United States had” returned false or misleading answers.
“The examples we saw were generally highly unusual search queries and are not representative of the majority of people's search experiences,” a Google spokesperson said in a statement shared with NBC, but posts sharing these results have gone viral online.
Mishaps like this are the latest reminder that these tools can hallucinate, and that AI summaries and content should always be fact-checked for accuracy, even when the errors aren’t as comically obvious as these.
The latest call for federal AI regulation comes from former OpenAI board members. In an op-ed for The Economist, Helen Toner and Tasha McCauley write:
“Certainly, there are many sincere efforts in the private sector to responsibly steer the development of this technology, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will ultimately be unworkable, especially under the pressure of enormous profit incentives. Governments must play an active role.”
The pair argue that the laissez-faire approach taken with the internet in the 1990s is a poor fit for the enormous risks of AI development, which require universal constraints. Toner and McCauley envision regulations that ensure the benefits of AI are realized responsibly and broadly. Specifically, these policies could include transparency requirements, incident tracking and government visibility into the technology’s progress.
Amid a flurry of lawsuits and criticism, OpenAI is forming a safety and security committee to explore how to address the risks posed by GPT-4o and future models. “We are proud to build and release models that lead the industry in both functionality and safety, but we welcome robust discussion at this critical time,” the company said in a press release. Interested parties will be able to read the committee’s recommendations after the board reviews them in 90 days.
Given recent news that Washington is beginning to push for new AI regulations, it will be worth watching how Toner and McCauley’s op-ed affects future legislation. The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe) is a bipartisan proposal that senators aim to introduce as early as June. The legislation would ban individuals and companies from using AI to create unauthorized digital reproductions of a person’s likeness or voice.
A new report from the Bank for International Settlements (BIS) surveyed 32 of its 63 member central banks about their interest in applying AI to cybersecurity.
Seventy-one percent of respondents already use generative AI, and 26% plan to incorporate it into their operations within the next one to two years. Respondents’ biggest concerns are the risks associated with social engineering, zero-day attacks, and unauthorized data disclosure.
According to respondents, the top cybersecurity benefits include automated routine tasks, improved response times and deep-learning insights, allowing cybersecurity teams to augment traditional capabilities with AI. The data suggests experts believe AI can detect threats sooner by analyzing patterns beyond human capabilities.
Eye-catching as these features may be, the cost of deploying the tools remains a concern for some companies. And while it’s not surprising that BIS expects this shift to replace staff and “free up resources” for reallocation, communicators should understand how AI-enhanced threat detection can augment or strengthen their company’s crisis communications strategy in the event of a cyberattack.
Finally, here are some lessons from Meta about forming or reorganizing your organization's advisory groups. A new group recently formed to advise Meta on AI and technology product strategy has been criticized for its apparent lack of diversity. “If we don't have people with diverse perspectives in the process of building, developing, and using these systems, we run a significant risk of perpetuating bias,” Alyssa Lefaivre Škopac, head of global partnerships and growth at the Responsible AI Institute, told CIO.
An advisory group composed this way is hardly a business practice that reflects diversity, equity, and inclusion (DE&I) efforts. A recent Gartner study, “How to evolve AI without sacrificing diversity, equity, and inclusion,” found that rapid AI integration and the inherent bias in models are creating trade-offs for companies’ DE&I initiatives.
The Meta situation and the Gartner research demonstrate the need for diverse, representative advisory groups that truly reflect the breadth of backgrounds, identities, perspectives and lived experiences of all stakeholders. It’s a wake-up call for organizations to draw talent not only from different demographic backgrounds such as race and gender, but also from business functions such as compliance, legal, HR and tech procurement.
Labor Force
In 2017, Alibaba founder Jack Ma predicted that a robot would grace the cover of Time magazine as the top CEO in 30 years' time, but new research suggests that prediction could come true sooner than 2047.
The latest group of employees threatened by AI: CEOs. According to EdX survey data from last summer, nearly half of executives believe most or all of the CEO role should be fully automated or replaced by AI. Of course, that data is almost a year old, so take it with a grain of salt.
Remember, humans have many assets that machines don't: accountability, leadership and responsibility are three capabilities that technology doesn't yet have.
Netflix co-CEO Ted Sarandos, by contrast, says AI will augment jobs, not replace them.
In a recent interview with The New York Times, Sarandos told reporter Lulu Garcia-Navarro, “AI is not going to take your job. Someone who uses AI well may take your job,” repeating a line often heard at Ragan AI training sessions.
Last year, in the middle of the Hollywood strikes, Netflix posted a machine learning job paying up to $900,000, drawing the ire of screenwriters in the process. It’s unclear how Sarandos thinks AI will square with the protections screenwriters won during negotiations, but it’s a lingering question whose answer will send a broader signal to content creators across the industry.
What trends and news are you following in the AI space? What would you like to see covered in our 100% human-written bi-weekly AI roundup? Let us know in the comments.
Callie Krosin is a reporting and editing intern at Ragan and PRDaily. Follow her on Twitter and LinkedIn.