

Google reportedly opens AI models for classified Pentagon use

Joachim Høgby
28 April 2026 · 3 min read · Source: The Verge

Google has reportedly signed a classified agreement that opens its AI models for use by the U.S. Department of Defense.

The facts first: The Verge, citing a report from The Information, says the deal gives the Department of Defense access to Google’s AI systems for “any lawful government purpose”. Google says in a statement that it is part of a broad consortium of AI labs, technology and cloud providers supporting national security, and that AI should not be used for domestic mass surveillance or autonomous weapons without appropriate human oversight.

This is still a reported classified agreement, not a fully public contract. The details should therefore be treated carefully. But the strategic direction is important: frontier models are moving from productivity tools and customer service into state security environments with higher demands for control, logging and accountability.

For CIOs and boards, the management implication is not the Pentagon itself. It is that a vendor’s policy, contract terms and real-world usage boundaries are now part of the risk picture. If a model can be used in sensitive U.S. government workflows, the same technology platform will also be considered for police, defense, emergency response, healthcare and critical infrastructure in Europe. It is no longer enough to ask whether the model is “good”. Leaders need to ask who can change safety filters, what data is logged, which jurisdiction applies and whether the customer can document purpose limitation and human control.

The report also sits in a broader competitive context. OpenAI and xAI are described as already having classified U.S. AI agreements, while Anthropic reportedly ended up outside after disagreement over weapon- and surveillance-related guardrails. That makes AI vendor selection about more than capability and price. It is about supply chain, geopolitics, compliance and whether the vendor’s safety posture matches the organization’s mandate.

Assessment: Organizations should use this as a governance checkpoint. Update vendor reviews with questions about public-sector and military contracts, model policy, audit logs, data region, access control and options for independent assurance. For public-sector and regulated organizations, those requirements should be written into procurement and DPIA work before sensitive AI use cases are scaled.

The practical advice is simple: do not stop all use of large models, but classify the use. Low-risk productivity can follow one approval path. Decision support in security, healthcare, HR or critical infrastructure needs stricter contracts, logging, evaluation requirements and named human accountability. This report shows why that distinction is no longer theoretical.
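The two-tier split described above can be sketched as a minimal triage function. This is an illustrative assumption only: the tier names, sensitive domains, approval paths and control lists are hypothetical examples of what such a policy might encode, not a prescribed framework from the article or any standard.

```python
# Illustrative sketch only: a two-tier AI use-case triage, assuming the
# split described in the article (low-risk productivity vs. decision
# support in sensitive domains). All names and tiers are hypothetical.

SENSITIVE_DOMAINS = {"security", "healthcare", "hr", "critical_infrastructure"}

def triage(use_case: str, domain: str, decision_support: bool) -> dict:
    """Return the approval path and minimum controls for an AI use case."""
    if domain in SENSITIVE_DOMAINS and decision_support:
        return {
            "use_case": use_case,
            "tier": "high",
            "approval": "strict",
            "controls": [
                "contractual safeguards",      # vendor terms, jurisdiction
                "audit logging",               # what data is logged, and where
                "evaluation requirements",     # testing before deployment
                "named human accountability",  # a responsible owner by name
            ],
        }
    return {
        "use_case": use_case,
        "tier": "low",
        "approval": "standard",
        "controls": ["acceptable-use policy", "basic logging"],
    }

# Example: drafting meeting notes vs. triaging patient referrals
print(triage("meeting notes", "office", decision_support=False)["tier"])      # low
print(triage("referral triage", "healthcare", decision_support=True)["tier"])  # high
```

The point of keeping this as explicit data rather than ad-hoc judgment is that the control list for the high tier becomes auditable, which is exactly what vendor reviews and DPIA work need to reference.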

📬 Did you like this?

AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.