Five prominent Senate Democrats sent a letter to OpenAI CEO Sam Altman, demanding clarification about the company's safety and employment practices.
The letter, signed by Senators Brian Schatz, Ben Ray Luján, Peter Welch, Mark R. Warner, and Angus S. King Jr., comes in response to recent reports questioning OpenAI's commitment to safe and responsible AI development.
The senators emphasized that AI safety is crucial to the nation's economic competitiveness and geopolitical standing. They pointed to OpenAI's partnerships with the U.S. government and national security agencies to develop cybersecurity tools, underscoring the importance of secure AI systems.
“National and economic security is one of the U.S. government's most important responsibilities, and insecure or vulnerable AI systems are unacceptable,” the letter reads.
The lawmakers are requesting detailed information, due by August 13, 2024, on several key areas, including:
- Whether OpenAI will honor its commitment to dedicate 20% of its computing resources to AI safety research.
- The company's stance regarding non-disparagement agreements with current and former employees.
- Procedures for employees to raise concerns about cybersecurity and safety.
- Security protocols to prevent theft of AI models, research, and intellectual property.
- Whether OpenAI adheres to its own supplier code of conduct on non-retaliation policies and whistleblowing channels.
- Whether OpenAI plans to have its systems tested and evaluated by independent experts before release.
- Whether the company will commit to providing future base models to U.S. government agencies for pre-deployment testing.
- Post-release monitoring practices and learning from deployed models.
- Plans to publish a retrospective impact assessment of the deployed model.
- Documentation of how OpenAI intends to fulfill its voluntary safety and security commitments to the Biden-Harris Administration.
The senators' inquiry touches on recent controversies surrounding OpenAI, including internal disputes over safety measures and reports of alleged cybersecurity breaches. They specifically ask whether OpenAI “commits to removing other clauses from employment contracts that could be used to punish employees who publicly raise concerns about the company's practices.”
The letter arrives amid heightened debate over AI regulation and safety measures. It noted the voluntary commitments major AI companies made to the White House last year, calling them a “significant step toward building trust” in AI safety and security.
Vice President Kamala Harris could become the next US president following the election later this year. At the AI Safety Summit in the UK last year, Harris said: “Let me be clear: there are further threats that require our action. The same threats that are causing harm today are also existential threats for many. People around the world are being bombarded with AI-driven myths and disinformation, making it difficult to distinguish fact from fiction.”
Chelsea Alves, a consultant at UNMiss, commented: “As Kamala Harris enters the presidential race, her approach to regulating AI and big tech companies is timely and important. Her policies could set a new standard for how we address the complexities of modern technology and personal privacy.”
OpenAI’s response to these inquiries could have significant implications for the future of AI governance and the relationship between tech companies and government oversight bodies.
(Photo: Darren Halstead)