A week is traditionally a long time in politics, but when it comes to AI it's a yawning chasm. The pace of innovation by major providers is important. The ferocity of innovation in the face of increased competition is something else entirely. But are the ethical implications of AI technology being left behind by this rapid pace?
Claude's creator, Anthropic, released Claude 3 this week, claiming it sets a “new standard in intelligence” and soars ahead of competitors such as ChatGPT and Google's Gemini. The company also says the model achieves “near-human” proficiency in a variety of tasks. Indeed, as Anthropic prompt engineer Alex Albert pointed out, during testing of Claude 3 Opus — the most powerful large language model (LLM) variant — the model showed signs of recognizing that it was being evaluated.
Moving from text to images, Stability AI announced an early preview of Stable Diffusion 3 at the end of February, days after OpenAI unveiled Sora, a new AI model capable of generating near-realistic, high-definition videos from simple text prompts.
Although progress continues, perfection remains difficult to achieve. Google's Gemini model was criticized for producing historically inaccurate images, criticism that rekindled concerns about bias in AI systems.
Getting this right is a key priority for everyone. Google responded to the concerns by pausing Gemini's generation of images of people for the time being. In a statement, the company said that Gemini's AI image generation “does generate a wide range of people… and that's generally a good thing because people around the world use it. But it's missing the mark here.” In previewing Stable Diffusion 3, Stability AI said it believes in safe and responsible AI practices. “Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment,” the statement reads. OpenAI is taking a similar approach with Sora; in January, the company announced an initiative to promote responsible AI usage among families and educators.
That is the vendor perspective, but how are major organizations tackling this issue? Consider how the BBC is approaching generative AI while putting its values first. In October, Rhodri Talfan-Davies, the BBC's director of nations, outlined a three-pronged strategy: always act in the best interests of the public; always prioritize talent and creativity; and be open and transparent.
Last week, the BBC outlined a series of pilots based on these principles that put more flesh on those bones. One example is reformatting existing content to broaden its appeal, such as taking live sports radio commentary and rapidly converting it to text. In addition, the broadcaster's editorial guidance on AI has been updated to state that “all use of AI will be subject to active human oversight.”
It is also worth noting that the BBC does not believe its data should be scraped without permission to train other generative AI models, so it blocks crawlers from the likes of OpenAI and Common Crawl. This will be another point on which stakeholders must converge going forward.
Another major company taking its responsibility for ethical AI seriously is Bosch. The manufacturer's AI code of ethics has five guidelines. First, all Bosch AI products must reflect the company's “Invented for life” ethos, which combines the pursuit of innovation with a sense of social responsibility. The second echoes the BBC's principle: AI decisions that affect people should not be made without a human arbiter. The remaining three principles call for AI products that are safe, robust, and explainable; that foster trust; and that comply with legal requirements and ethical principles.
When the guidelines were first announced, the company hoped its AI Code of Ethics would contribute to the public debate about artificial intelligence. “AI will change every aspect of our lives,” said Volkmar Denner, Bosch's CEO at the time. “For this reason, discussions like this are essential.”
It is in this spirit that the free-to-attend virtual AI World Solutions Summit, presented by TechForge Media, will take place on March 13. Sudhir Tiku, Bosch vice president for the Singapore Asia Pacific region, is a keynote speaker in a 12:45 GMT session that explores the complexities of scaling AI safely and considers the ethics, responsibility, and governance of its implementation. Another session, at 14:05 GMT, explores the longer-term impact on society and how business culture and mindsets can shift to build trust in AI.
Reserve your free pass today to access live virtual sessions.
Learn about other upcoming enterprise technology events and webinars from TechForge here.
Photo by Jonathan Chng on Unsplash