As artificial intelligence (AI) becomes more prevalent online and on social media, platforms are taking steps to help users identify computer-generated content. Facebook users are reportedly running into significant problems with AI-generated content, with increasingly disturbing images appearing in their feeds.
While those posts are easy to spot on close inspection, they have drawn a surprising amount of likes and other engagement, which doesn't bode well when it comes to fake content that could influence public opinion and behavior.
Meta, which owns Facebook, requires users to add a “Made with AI” label when they upload AI-generated content. As part of the company's expanded AI disclosure policy, its automated detection tools will now tag AI-generated content as “Made with AI” as well. The label appears the same to viewers whether it is applied automatically or added manually by a user.
Meanwhile, TikTok has become the first video-sharing platform to implement the Coalition for Content Provenance and Authenticity's (C2PA) content authentication technology, a continuation of the app's efforts to “empower creators to express their creativity safely and responsibly with AI-generated content (AIGC),” according to a TikTok blog post.
“AI opens up amazing creative opportunities, but it can also confuse or mislead viewers if they don't know the content was generated by AI,” TikTok said in announcing the C2PA partnership. “Labeling helps clarify that context. That's why we label AIGC made with TikTok AI Effects and have asked creators to label realistic AIGC for over a year. We also built a first-of-its-kind tool to make this easy, which has been used by more than 37 million creators since last fall.”
Using C2PA's tools, TikTok has expanded automated labeling to AIGC created on other platforms. The technology helps instantly recognize and label such content in images and videos, and will soon be extended to audio content.
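For readers curious how that recognition works in practice, here is a minimal sketch of how a platform might decide whether to apply an AI label based on C2PA provenance metadata. It assumes a manifest has already been extracted to JSON (for example, with the C2PA reference tooling); the field names follow published C2PA/IPTC conventions for flagging AI-generated media, and the `should_label_as_ai` helper is hypothetical, not TikTok's or Meta's actual implementation.

```python
# Hypothetical sketch: inspect an extracted C2PA manifest (as a dict) and decide
# whether the asset should carry an AI-generated label. This is not TikTok's or
# Meta's actual pipeline; field names follow published C2PA/IPTC conventions.

# IPTC digital source type that C2PA manifests use to mark AI-generated media.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def should_label_as_ai(manifest: dict) -> bool:
    """Return True if any assertion in the manifest marks the asset as AI-generated."""
    for assertion in manifest.get("assertions", []):
        data = assertion.get("data", {})
        # c2pa.actions assertions can record how the asset was created.
        for action in data.get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False


if __name__ == "__main__":
    # Simplified example of the kind of manifest fragment C2PA tooling emits.
    sample = {
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {"action": "c2pa.created", "digitalSourceType": AI_SOURCE_TYPE}
                    ]
                },
            }
        ]
    }
    print("Apply 'AI-generated' label:", should_label_as_ai(sample))
```

The point of the standard is that this kind of check relies on signed provenance metadata traveling with the file, rather than on visual detection alone.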
(Photo: Getty Images)