Microsoft tightens governance and security controls around Copilot
Microsoft has published a new Microsoft 365 Copilot update that is less about flashy demos and more about the thing that actually determines whether large organizations will deploy AI at scale: control.
The core of the update is stronger governance inside Microsoft Purview. Organizations can now use Data Loss Prevention to protect Copilot prompts that contain sensitive information, and Microsoft is also extending that protection to web searches in Copilot and Copilot Chat. In practice, that means companies can prevent employees from sending sensitive data into search while still allowing responses grounded in internal sources such as Work IQ.
Microsoft is also adding stronger tools to reduce oversharing. Admins can identify and remediate shared links across SharePoint at greater scale, reducing the risk that Copilot pulls in content that was exposed too broadly. Inside the admin center, teams also get better visibility into how many sensitive Copilot interactions are protected and where gaps remain.
This is not a model launch, but it is still meaningful enterprise AI news. For CIOs and security teams, this is the kind of update that helps move AI from pilot mode into real production use. Microsoft is effectively trying to remove one of the biggest blockers to enterprise AI adoption: fear of data leakage and weak governance.
The short version is simple: Microsoft is making Copilot a little less magical, and a lot more usable inside organizations that care about governance, compliance, and control.
📬 Did you like this one?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.