The Biden-Harris administration has announced that it has secured a second round of voluntary AI safety commitments from eight prominent AI companies.
Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI attended the announcement at the White House. These eight companies have committed to playing a key role in advancing the development of safe, secure, and trustworthy AI.
The administration is also developing an executive order and pursuing bipartisan legislation to help the United States lead the way in responsible AI development that manages risks while unlocking the technology's potential.
The companies' commitments revolve around three fundamental principles: safety, security, and trust. They have pledged to:
- Ensure products are safe before introducing them to the public:
The companies have committed to rigorous internal and external security testing of their AI systems before release. This includes assessments by independent experts and guards against significant AI risks in areas such as biosecurity and cybersecurity, as well as broader societal harms.
They also plan to actively share information on managing AI risks with governments, civil society, academia, and industry at large. This collaborative approach includes sharing safety best practices, information on attempts to circumvent safeguards, and technical cooperation.
- Build systems that put security first:
The companies have committed to investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights. They recognize that model weights are the most essential part of an AI system and have agreed to release them only when intended and only once security risks have been properly addressed.
The companies will also facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach allows issues to be identified and resolved quickly, even after a system has been deployed.
- Earn the public's trust:
To strengthen transparency and accountability, the companies plan to develop robust technical mechanisms, such as watermarking systems, to indicate when content has been generated by AI (a toy illustration follows after this list). The aim is to enable creativity and productivity while reducing the risk of fraud and deception.
The companies will also publicly report on their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. In addition, they have committed to prioritizing research into the societal risks posed by AI systems, including ways to address harmful bias and discrimination.
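None of the companies has published the details of its watermarking mechanism, but the general idea of machine-verifiable provenance can be sketched in a few lines. The Python snippet below is a toy illustration only; the key and helper names are hypothetical. It signs generated text with an HMAC so a provenance tag can later be verified, whereas production watermarking schemes typically embed the signal in the content itself rather than in attached metadata.

```python
import hmac
import hashlib

# Hypothetical provider-held key; a real deployment would manage this securely.
SECRET_KEY = b"provider-signing-key"

def tag_generated_content(text: str) -> dict:
    """Attach a provenance tag marking the text as AI-generated."""
    sig = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"content": text, "ai_generated": True, "signature": sig}

def verify_tag(record: dict) -> bool:
    """Return True only if the tag still matches the content."""
    expected = hmac.new(SECRET_KEY, record["content"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = tag_generated_content("A paragraph produced by an AI model.")
print(verify_tag(record))   # True: content is unchanged
record["content"] = "A paragraph edited by a human."
print(verify_tag(record))   # False: the provenance tag no longer matches
```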
The companies have also pledged to develop and deploy advanced AI systems that address society's greatest challenges, from cancer prevention to climate change mitigation, contributing to prosperity, equality, and security for all.
The administration's work on these commitments extends beyond the United States, with consultations involving many international partners and allies. The commitments complement global initiatives such as the UK's AI Safety Summit, Japan's leadership of the G7 Hiroshima Process, and India's role as chair of the Global Partnership on AI.
The announcement marks an important milestone in the push for responsible AI development, with industry leaders and governments working together to ensure that AI technologies benefit society while mitigating their inherent risks.