
OpenAI turns the issue tracker into a control room for Codex agents
CIO · AI Agents · Code · Security


Joachim Høgby
April 27, 2026 · 4 min read · Source: OpenAI

OpenAI published Symphony on April 27, an open specification for orchestrating Codex agents from a project-management tool such as Linear.

The concrete change is straightforward: Symphony turns the issue tracker into the control plane. Each active task can get an isolated agent workspace, agents keep running, and humans move from constant steering to review, prioritization and approval. OpenAI says some internal teams saw a 500 percent increase in landed pull requests during the first three weeks.

That is not proof that every engineering organization will see the same result. The number comes from OpenAI’s own environment, with agent-friendly repositories, tests and guardrails. But the direction matters: the next productivity layer is not another chat window. It is an operating model for many concurrent agents.

For CIOs, this moves AI coding from individual tool adoption into software delivery architecture. If developers manually launch three to five agent sessions and babysit them in terminals, human attention becomes the bottleneck. Symphony points to a different model: work is described in Jira, Linear or GitHub Issues; workflow policy is versioned in the repo; the agent gets a bounded workspace; CI, tests and review decide whether the work moves forward.
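That operating model can be sketched in a few lines. The following Python sketch is illustrative only: the task and gate names are hypothetical, not part of the Symphony specification, but the shape matches the flow described above, where a tracker item gets a bounded workspace and CI, tests and review decide whether work moves forward.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A tracker item (Jira, Linear or GitHub Issues) picked up by an agent."""
    key: str
    workspace: str = ""     # isolated agent workspace, assigned on start
    ci_green: bool = False  # CI and tests passed
    approved: bool = False  # human review signed off

def start(task: Task) -> Task:
    # Each active task gets its own bounded workspace; agents never
    # share state or work directly on the main checkout.
    task.workspace = f"ws/{task.key}"
    return task

def can_merge(task: Task) -> bool:
    # Humans shift from steering to review: the merge decision is
    # CI + tests + explicit approval, not agent confidence.
    return bool(task.workspace) and task.ci_green and task.approved

t = start(Task(key="ENG-1042"))
t.ci_green = True
assert not can_merge(t)   # still blocked: no human approval yet
t.approved = True
assert can_merge(t)
```

The point of the sketch is that human attention sits only at the gates, not inside the loop, which is what removes the babysitting bottleneck.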

The leadership consequence is twofold. First, measurement has to change. Counting Copilot seats or prompt volume is not enough. Track cycle time for small changes, review load, rework, defect rates and how much routine implementation work senior engineers actually stop doing. Second, governance needs to become stricter, not looser. An orchestrator that can start many agents in parallel needs explicit limits for repository access, secrets, production data, spending and approval rights.
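The measurement shift can be made concrete. A minimal sketch, assuming a hypothetical pull-request record shape (adapt the fields to whatever your tracker exports), of the delivery metrics named above:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class PullRequest:
    # Hypothetical record shape; map these from your own tooling.
    hours_open: float    # opened -> merged (cycle time)
    review_rounds: int   # review iterations before merge (review load)
    reverted: bool       # merged then backed out (rework/defect proxy)

def delivery_metrics(prs: list[PullRequest]) -> dict:
    merged = len(prs)
    return {
        "median_cycle_hours": median(p.hours_open for p in prs),
        "avg_review_rounds": sum(p.review_rounds for p in prs) / merged,
        "revert_rate": sum(p.reverted for p in prs) / merged,
    }

sample = [
    PullRequest(4.0, 1, False),
    PullRequest(30.0, 3, True),
    PullRequest(6.0, 2, False),
]
m = delivery_metrics(sample)
assert m["median_cycle_hours"] == 6.0
```

Tracking these per month, split by agent-assisted versus human-only changes, answers the question seat counts cannot: whether throughput went up without review load and rework going up with it.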

OpenAI is clear that the specification does not mandate one sandbox or approval model. That matters for regulated organizations. A sensible enterprise pilot should start in a low-risk repository with strong tests, no production secrets and clear rules for what an agent may do without human confirmation.

The assessment: Symphony is more important as an architecture pattern than as a single OpenAI project. The practical next step is to create a small “agent delivery lane”: an issue template, a repo-owned WORKFLOW.md, isolated workspaces, budget limits, CI gates and mandatory human review before merge. If it works, scale it. If it does not, the organization has still learned where its codebase, tests and documentation are not ready for agentic work.
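One way to make such an agent delivery lane inspectable is to version its limits in the repo and check them before any agent session starts. A sketch under assumed names (the field names and policy shape are this sketch's invention; the Symphony specification does not mandate one):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LanePolicy:
    # Repo-owned limits, e.g. derived from a versioned WORKFLOW.md.
    allowed_repos: tuple[str, ...]
    allow_production_secrets: bool
    budget_usd_per_task: float
    require_human_review: bool

def may_start(policy: LanePolicy, repo: str, estimated_cost_usd: float) -> bool:
    # Refuse to launch an agent outside the lane: wrong repository,
    # secrets exposure, or a task whose budget exceeds the cap.
    return (
        repo in policy.allowed_repos
        and not policy.allow_production_secrets
        and estimated_cost_usd <= policy.budget_usd_per_task
        and policy.require_human_review
    )

pilot = LanePolicy(
    allowed_repos=("internal-tools",),  # low-risk repo with strong tests
    allow_production_secrets=False,
    budget_usd_per_task=25.0,
    require_human_review=True,
)
assert may_start(pilot, "internal-tools", 10.0)
assert not may_start(pilot, "payments-core", 10.0)
```

Because the policy is data rather than tribal knowledge, widening the lane later is a reviewed change to a file, not a quiet change in behavior.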

For the executive team, the question is no longer whether developers should use AI assistants. They already do. The question is who owns the control plane when agents begin working in parallel on real tasks.
