Microsoft, Google, xAI: Gov't Tests AI for Cybersecurity
Summary
Google, Microsoft, and xAI are providing their unreleased artificial intelligence models to the U.S. government to help mitigate cybersecurity vulnerabilities, the National Institute of Standards and Technology announced. The collaboration follows concerns raised last month by Anthropic's Mythos AI model, and the White House is now considering a formal evaluation process for AI. Under the agreements, the Center for AI Standards and Innovation (CAISI) can assess new AI models for potential impacts on national security and public welfare before deployment; CAISI also conducts post-deployment research and testing, with more than 40 evaluations already completed. CAISI Director Chris Fall said rigorous measurement science is crucial for understanding frontier AI and its national security implications, and that these partnerships boost CAISI's capacity to serve the public. Anthropic's Mythos model, which the company calls "significantly advanced" in cybersecurity, has raised concerns among government bodies and financial institutions; Anthropic is limiting access to select entities and has briefed U.S. government officials. OpenAI also announced it will offer vetted government tiers access to its advanced AI models to help prevent AI-driven threats. The partnerships are expected to strengthen CAISI's testing efforts by providing additional resources, and the White House is consulting experts on the formal evaluation process.
This is an AI-generated audio summary. Always check the original source for complete reporting.