New York
CNN
—
Pope Francis wears a giant white puffer coat. Elon Musk walks hand in hand with rival GM CEO Mary Barra. Former President Donald Trump is detained by police in dramatic fashion.
None of these events actually happened, but AI-generated images depicting them have gone viral online over the past week.
The images ranged from the clearly fake to, in some cases, convincing enough to fool social media users. Model and television personality Chrissy Teigen, for example, tweeted that she thought the pope's puffer coat was real, saying she saw no way she would survive the future of technology. The images also garnered numerous headlines as news organizations rushed to debunk them, particularly those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not yet been arrested.
The situation illustrates a new online reality: the rise of buzzy new artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and video. And those images are likely to appear on social media more and more often.
While these AI tools may enable new avenues for creative expression, the proliferation of computer-generated media also threatens to further pollute the information ecosystem. That risks compounding the challenge of sorting truth from fiction for users, news organizations, and social media platforms, which have grappled for years with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used to harass targets or further divide already polarized internet users.
“With so much content online that is false but looks very real, I'm worried that most people will come to act on their tribal instincts, guided by what they think is real rather than by informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including on Meta's Reality Labs European Advisory Council.
CNN Photo Illustration/From Midjourney/Eliot Higgins/Twitter
Eliot Higgins, founder and creative director of the investigative research group Bellingcat, posted fake images of former President Donald Trump on Twitter last week. Higgins said he created them using Midjourney, an AI image generator.
Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group, said that compared with AI-generated text, which has also proliferated in recent years thanks to tools like ChatGPT, images can be especially powerful at evoking emotion. That can make it harder for people to slow down and assess whether what they are seeing is real or fake.
In addition, organized bad actors may try to create large amounts of fake content, or to suggest that genuine content is computer-generated, in order to confuse internet users and push them to behave in certain ways.
Ben Decker, CEO of threat intelligence group Memetica, said the anticipatory paranoia around Trump's possible arrest “created a very useful case study in understanding the potential impact. I think we got very lucky that things didn't get worse, because if more people had brought those ideas together in an organized way, we would start to see a world with an online-to-offline effect.”
From Photoshopped images of sharks swimming across flooded highways, repeatedly shared during natural disasters, to a website that four years ago began churning out largely unconvincing fake photos of people who don't exist, computer image-generation technology has advanced rapidly in recent years.
Many of the recent viral AI-generated images were created with a tool called Midjourney, a platform less than a year old that lets users create images from short text prompts. On its website, Midjourney describes itself as a “small self-funded team” with just 11 full-time staff members.
A quick look at a popular Facebook page for Midjourney users turns up AI-generated images of a clearly inebriated Pope Francis, geriatric versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit, and many creepy animal creations. And that's just from the last few days.
Robert/Adobe Stock
Midjourney has emerged as a popular tool for users to create AI-generated images.
The latest version of Midjourney is available only to some paying users, CEO David Holz told CNN in an email Friday. According to a Discord post from Holz, Midjourney suspended free-trial access to an earlier version this week, citing “extraordinary demand and trial abuse,” but Holz told CNN the move was unrelated to the viral images. The creator of the Trump arrest images also claimed to have been banned from the platform.
The rules page on the company's Discord site asks users to avoid creating offensive images, including “gore and adult content.”
“Moderation is difficult. We'll be shipping an improved system soon,” Holtz told CNN. “We're incorporating a lot of feedback and ideas from experts and the community, and we're trying to be really thoughtful.”
In most cases, the creators of the recent viral images do not appear to have been acting with malicious intent. The Trump arrest images were created by the founder of the online investigative outlet Bellingcat, who clearly labeled them as his own fabrications, even if other social media users sharing them were less discerning.
Platforms, AI technology companies, and industry groups are all working to improve transparency around computer-generated content.
Platforms including Meta's Facebook and Instagram, as well as Twitter and YouTube, have policies that restrict or prohibit sharing manipulated media that could mislead users. But as AI-generation technology spreads, even such policies risk eroding user trust. For example, if a fake image slips past a platform's detection system, “it could give people a false sense of trust,” Ajder said. “They'll say, 'It must be real, because otherwise the detection system would have caught it.'”
Technological solutions are also in development, such as watermarking AI-generated images or embedding transparent labels in image metadata, so that anyone viewing an image on the internet can tell it was created by a computer. The Partnership on AI, working with partners including OpenAI (the creator of ChatGPT), TikTok, Adobe, Bumble, and the BBC, has developed a set of standards and responsible practices for synthetic media, including recommendations on how to disclose AI-generated images and how companies can share data about such images.
“The idea is that all of these institutions are focused on disclosure, consent, and transparency,” Leibowicz said.
A group of technology leaders, including Musk and Apple co-founder Steve Wozniak, published an open letter this week calling on artificial intelligence labs to pause training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it's unclear whether any labs will take such a step. And as the technology rapidly advances and becomes accessible beyond a relatively small group of companies committed to responsible practices, lawmakers may also need to get involved, Ajder said.
“This new age of AI cannot be left in the hands of a few large companies making their fortunes off these tools. We need to democratize this technology,” he said. “At the same time, there are very real and legitimate concerns that taking a radically open approach, such as open-sourcing tools or placing minimal restrictions on their use, could lead to massive harm... and I think legislation will probably play a dominant role when it comes to some of the more radically open models.”