OpenAI launches Safety Fellowship for external AI safety research
OpenAI has launched a new Safety Fellowship for external researchers, engineers, and practitioners focused on AI safety and alignment. Announced on April 6, 2026, the program signals that OpenAI wants more safety work to happen in collaboration with experts outside the company.
The fellowship will run from September 14, 2026 to February 5, 2027. Participants will receive a stipend, mentorship, compute support, and API credits, with OpenAI expecting concrete outputs such as papers, benchmarks, or datasets. The application deadline is May 3.
OpenAI highlighted priority areas including safety evaluations, robustness, privacy-preserving safety methods, agentic oversight, and prevention of high-severity misuse across domains such as cyber and bio.
For CIOs and AI leaders, the bigger signal is the growing pressure for demonstrable model safety. OpenAI is pushing parts of its safety agenda into a broader research ecosystem, which could accelerate both better methodology and clearer expectations for responsible AI deployment.