Tech giants agree to US government AI testing programme
CAISI has already completed 40 model evaluations as the companies join a government review process focused on cybersecurity, biosecurity and chemical weapons risks.
5 Articles
Google, xAI, and Microsoft agree to government screening before releasing updated AI models
Amid growing cybersecurity concerns, top artificial intelligence companies are now agreeing to let the federal government see what is coming before the public does. According to CNN, Google, Microsoft, and xAI will give federal officials access to unreleased AI models for cybersecurity screening, a sign that concerns about next-generation systems are rising quickly. The new agreements, announced Tuesday by the National Institute of Standards an…
Google, Microsoft and xAI will give the US government early insight into new AI models. According to the Wall Street Journal, the Center for AI Standards and Innovation (CAISI) is expected to test models before they are released to the public, assessing possible security risks and particularly high-performance capabilities. The agency is located at the US Department of C…
Center for AI Standards and Innovation (CAISI) to vet AI models!
Computerworld.com reported that “The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that would give the agency the ability to vet AI models from these organizations and others prior to their being made publicly available.” The May 6, 2026 article, entitled “US government agency to safety test frontier AI models before release” (https://w…
The New Standard: Microsoft, Google, and xAI Back Government-Led AI Model Testing
Microsoft, Google, and xAI have agreed to submit their most advanced AI systems to government-led testing in both the US and UK, marking a notable shift in how frontier models are evaluated before deployment. The collaboration will see these companies work with the US Center for AI Standards and Innovation (CAISI) and the UK’s AI Security Institute (AISI) to assess risks tied to increasingly capable AI systems. The initiative focuses on stress t…
Coverage Details
Bias Distribution
- 50% of the sources lean Left, 50% of the sources are Center