
Musk says xAI partly used OpenAI models to train Grok

Joachim Høgby · April 30, 2026 · 3 min read · Source: The Verge

Elon Musk said in court that xAI has partly used OpenAI models to improve its own models.

Fact: The Verge reported on April 30, 2026 from a federal courtroom in California that Musk was asked whether xAI had distilled OpenAI technology. After first saying that “generally all the AI companies” do that, he answered: “Partly.” WIRED reported the same exchange from the cross-examination and wrote that Musk argued it is standard practice to use other AI systems for validation.

Model distillation is not automatically illegal. A larger model can act as the teacher for a smaller or cheaper student model, which is trained to imitate the teacher's outputs. The risk arises when a supplier uses a competitor's service, API or outputs in a way that may breach terms of service, intellectual-property rights or security expectations. OpenAI, Anthropic and Google have already described distillation as a real competitive and security issue, especially in debates around Chinese models.
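To make the mechanics concrete, here is a minimal, self-contained sketch of the teacher-student loop that "distillation" refers to, written in PyTorch. The models, data and hyperparameters are toy stand-ins, not any vendor's actual pipeline; the point is only that the student is optimized to match the teacher's output distribution.

```python
# Minimal sketch of knowledge distillation: a student model is trained to
# match a teacher model's output distribution. All models and data are
# illustrative stand-ins, not any vendor's actual pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, T = 1000, 64, 2.0  # toy vocabulary size, hidden size, softmax temperature

teacher = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Flatten(), nn.Linear(DIM * 8, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, DIM // 4), nn.Flatten(), nn.Linear(DIM // 4 * 8, VOCAB))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

tokens = torch.randint(0, VOCAB, (32, 8))  # a fake batch of 8-token sequences

with torch.no_grad():
    # In the disputed scenario, these logits (or sampled outputs) would come
    # from a third-party model API rather than a locally owned teacher.
    teacher_logits = teacher(tokens)

student_logits = student(tokens)
# KL divergence between temperature-softened distributions is the classic
# distillation loss (Hinton et al., 2015); T*T rescales the gradients.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)
loss.backward()
opt.step()
```

Run against a real third-party API, the teacher_logits line is exactly where terms-of-service and IP exposure enters the pipeline.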

This is therefore more than another Musk courtroom drama. For executives, the story is a warning that the AI supply chain is getting harder to audit. If a model can be trained, validated or fine-tuned on outputs from another model, it is not enough to ask whether the dataset is "clean." Buyers need to ask what the supplier actually did with third-party model APIs, synthetic data, benchmark data and customer data.

The assessment: CIO and legal teams should move model provenance into procurement and governance. Contracts should require suppliers to document which third-party models were used for training, evaluation and distillation. They should also make clear who carries the risk if a model later becomes disputed because of terms-of-service or IP violations.
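One way to make that contractual requirement operational is a machine-readable provenance manifest the supplier must deliver and keep current. The schema below is a hypothetical illustration, not an established standard; the field names and example values are invented for this sketch.

```python
# Illustrative sketch of a model provenance manifest a contract could require
# from a supplier. The schema and values are hypothetical, not a standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ThirdPartyModelUse:
    model: str          # e.g. "vendor-x/model-y"
    purpose: str        # "training" | "evaluation" | "distillation" | "synthetic-data"
    terms_reviewed: bool
    notes: str = ""

@dataclass
class ProvenanceManifest:
    supplier: str
    model_name: str
    model_version: str
    third_party_uses: list[ThirdPartyModelUse] = field(default_factory=list)
    risk_owner: str = ""  # who carries the risk if the model is later disputed

manifest = ProvenanceManifest(
    supplier="ExampleVendor",
    model_name="example-agent-model",
    model_version="2026.04",
    third_party_uses=[
        ThirdPartyModelUse("other-vendor/frontier-model", "evaluation", terms_reviewed=True),
    ],
    risk_owner="supplier",
)
print(json.dumps(asdict(manifest), indent=2))
```

A structured record like this gives procurement something to verify and legal something to dispute, instead of a one-line assurance that the training data was clean.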

For companies building agents on top of external models, the advice is concrete: do not let development teams use competing model outputs as training data without policy, logging and legal review. Separate validation, prompt testing, synthetic data generation and actual model training. These are different practices with different risk levels.
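As a sketch of what "policy, logging and legal review" can look like in code, the snippet below tags every external model call with a declared purpose, logs it, and refuses training-related use of models that legal has not cleared. The purpose categories, the approval list and the call_external_model stub are all hypothetical, assumed for illustration.

```python
# Hypothetical policy gate: every call to an external model API must declare
# a purpose; training-related use requires prior legal clearance. The
# categories and the call_external_model stub are illustrative only.
import logging
from enum import Enum

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-usage")

class Purpose(Enum):
    VALIDATION = "validation"
    PROMPT_TESTING = "prompt-testing"
    SYNTHETIC_DATA = "synthetic-data"
    TRAINING = "training"

APPROVED_FOR_TRAINING: set[str] = set()  # model IDs cleared by legal review

def call_external_model(model_id: str, prompt: str, purpose: Purpose) -> str:
    if purpose in (Purpose.TRAINING, Purpose.SYNTHETIC_DATA) and model_id not in APPROVED_FOR_TRAINING:
        raise PermissionError(f"{model_id} not cleared for {purpose.value}; needs legal review")
    log.info("model=%s purpose=%s prompt_chars=%d", model_id, purpose.value, len(prompt))
    return "<model response placeholder>"  # the real API call would go here

# Validation is allowed and logged; generating training data is blocked.
call_external_model("other-vendor/frontier-model", "Check this answer.", Purpose.VALIDATION)
```

The design point is the separation the paragraph above describes: validation and prompt testing pass through with an audit trail, while the higher-risk practices fail closed until someone has reviewed the third-party terms.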

The story is still based on courtroom reporting, not a full technical postmortem from xAI or OpenAI. But the leadership consequence is clear enough: AI governance has to cover model origin and supplier method, not only privacy and prompt security.
