AI news, pre-filtered for leaders

Alphabet investors press for stronger controls over AI and cloud use
CIO · CEO · Board · AI Strategy · Security · Cloud


Joachim Høgby
29 April 2026 · 4 min read · Source: Reuters

Reuters reports that Alphabet investors are asking the Google owner to explain how it governs the risk that its AI and cloud services are used by governments for surveillance.

The trigger is an investor letter seen by Reuters, signed by 42 organizations and 14 individuals. Reuters says the signatories manage a combined $1.15 trillion in assets and own roughly $2.2 billion of Alphabet shares. They are seeking a meeting with management after Alphabet urged shareholders to vote against a resolution calling for more reporting on human-rights due diligence, surveillance risk and governance of technology use.

The facts matter. The investors want to understand how Alphabet assesses and mitigates misuse risk, and whether its government contracts give the company authority to intervene or cancel agreements if risks escalate. Reuters says the letter cites Google services for U.S. immigration authorities, Project Nimbus with Israel, and Alphabet's operations in Saudi Arabia. In its shareholder response, Alphabet argues that its existing privacy, security and transparency disclosures are sufficient. It did not immediately respond to Reuters' request for comment on the investor letter.

This is bigger than Alphabet. Reuters frames the letter as part of a broader investor push around data privacy, AI governance and public-sector use of cloud and AI platforms at companies including Microsoft, Amazon and Apple. That should matter to Nordic executives because the same hyperscalers are becoming infrastructure for case handling, security operations, analytics, customer dialogue and, increasingly, autonomous agents.

The leadership consequence is practical: AI governance has to move from principles documents into contracts, access controls and exit rights. When a single supplier provides models, the data platform, identity, logging and public-sector cloud, it is not enough to ask about data residency and a data-processing agreement. CIOs, CISOs and boards should ask which use cases the supplier can reject, how government requests are handled, how high-risk use is logged, and whether the contract gives the customer the right to suspend functionality when regulatory or geopolitical risk changes.

For European leaders, the GDPR angle is not abstract. Reuters notes the possible exposure to litigation, regulatory action and fines, including penalties of up to four percent of annual revenue under European data-protection rules. That makes AI cloud governance an enterprise-risk issue, not a procurement detail. It belongs next to concentration risk, resilience and compliance in the board pack.

The recommended next step is concrete: create a dedicated checklist for AI and cloud contracts in public-sector and regulated environments. It should cover purpose limitation, government access, military or surveillance-related use, subcontractors, model and log data, audit rights and termination rights. Do not wait for the next procurement cycle. Start with the largest hyperscaler agreements that already carry critical data and AI functions.

📬 Did you like this?

AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.