US to Assess AI Models Before Release: New Policy Shift
Summary
The US government will now evaluate new artificial intelligence models from tech giants like Google DeepMind, Microsoft, and xAI *before* they are released to the public. This marks a significant policy change, replacing the Trump administration's earlier hands-off stance on AI regulation. These partnerships stem from agreements made under Joe Biden, now renegotiated by Donald Trump. The Center for AI Standards and Innovation (CAISI) will conduct the pre-deployment evaluations to assess AI capabilities and strengthen security.

The immediate catalyst for the shift is a powerful new AI model called Mythos, developed by Anthropic. The model can identify software vulnerabilities, and Anthropic has declined to release it publicly due to its potential impact; the National Security Agency is reportedly already testing it. The bottom line: the government wants to understand and secure powerful AI before it is unleashed, with implications for everything from cybersecurity to daily life.
This is an AI-generated audio summary. Always check the original source for complete reporting.