25% regression rate on the best model is the number people should be sitting with here. That means 1 in 4 commits from your agent is breaking something that used to work. In any human team that would get you a serious conversation. We keep benchmarking agents like they're taking a test but the actual failure mode in production is slow accumulation of regressions nobody catches until the whole thing is on fire.
The resolve rate numbers are interesting but I keep coming back to the regression question. In my experience doing code review on a real codebase, the hard part of maintenance is not fixing the thing that broke. It is understanding whether your fix preserves the invariants the original author had in mind but did not write down.
A benchmark that checks CI pass/fail captures the first part. It cannot capture the second. An agent that makes CI green by weakening an assertion or bypassing a check will score well here but create a time bomb.
The monorepo point from yuyuqueen hits this. When the agent can see the full dependency graph, it is less likely to fix something locally while breaking a downstream assumption. The biggest maintenance failures I have seen are not wrong logic. They are fixes that are locally correct but violate an unwritten contract between components.
CI pass/fail captures regression, but there's a layer beneath it that benchmarks can't touch: what exactly did the agent submit to each external API, and can you prove it after the fact?
In the benchmark context this doesn't matter everything runs locally. In production it does. The agent calls a third-party service at 2am, the service claims it returned an error, your agent retried and billed you twice. Your logs say one thing, their logs say another.
The integrity problem isn't just "did the code work" it's "what was the exact request/response pair, timestamped, by whom, provably." CI solves the first. Something else has to solve the second.
> It is understanding whether your fix preserves the invariants the original author had in mind but did not write down.
This may also be the limit to the quality of an automated port to another language. What isn't encoded as automated tests or manual test procedure cannot be verified.
So often I'm amazed at what it's possible to accomplish from a prompt that's certainly insufficient with insufficient context. "It should have been necessary to specify more context there," or "I would have thought that it wasn't possible to do that without reading in more context than just one source code file," and then a few prompts later, "there's where we failed for trying to skimp on context"
To prevent architectural rework as a human developer also requires substantial ahead-of-time codebase review.
Are AGENTS.md files the best place to summarize more comprehensive codebase review and useful dense context like guidelines for testing and architectural components in order to avoid rework?
Note: later GPT versions seem to be only available within openAi's proprietary codex cli, so can't be tested - and if tested via the codex cli "harness" it wouldn't be a pure model-to-model comparison any more.
---
Of course, the interesting follow-up question is: How well perform these models with added agent tooling ("harness") ?
Maybe someone has tokens to burn and can run a matrix of agent tools over the top models and provide the results?
5.3-codex is only available via the Responses API, not the Completions API. Two different APIs for model access. If you were using Completions you have to port to Responses. It's not that hard. I did this for my own agent the other week. I think it might be like that for all their new models from now on. Responses is a much more powerful API. It's more like a front to ChatGPT than the underlying models.
>if tested via the codex cli "harness" it wouldn't be a pure model-to-model comparison any more.
Well that's already not a very fair comparison, we've known for years (one of the early-ish LLM papers, maybe someone knows which one) that prompting makes an enormous difference on agent performance, and most strikingly, the same prompt that massively boosts performance on one model, can massively reduce performance on another.
So you already need to fine-tune the prompts for the model, if you want anything approaching best results.
Now what's really amusing is that if you run models without their official harness, they can actually do way better on some benchmarks! [0] e.g. On Terminal Bench 2, Claude Opus 4.6 goes from #33 (Claude Code) to #5 (custom harness). Similar results for Codex.
Now, this is "for this one very specific benchmark", but I still thought it was funny, since you'd expect "the harness made by the same company" to be the best for all tasks, but that's clearly not the case. (For specific tasks, it's actually quite trivial to outperform a general purpose harness.)
I reached the same conclusion. I tried using both for my personal investment ambient using agent-pair programming to build and agentic intelligence layer for stocks and the difference between the 2 models if astounding.
We are working on supporting agent harnesses @ www.cliwatch.com, so both 1. LLM model as well 2. LLM model + harness performance can be evaluated against your software/CLI. We also support building evals against your doc suite. End result is that you’ll feel more comfortable shipping CLIs that work for your agentic users!:)
Passing tests doesn’t mean you have a working codebase.
Benchmarks that rely on a fixed test suite create a real optimization problem agents (or/and even humans) learn to satisfy the tests rather than preserve the deeper properties that make the system maintainable. AI write test cases which it thinks is easier for it to satisfy and not adhere-ing to business logic
We see this firsthand at Prismor with auto generated security fixes. Even with the best LLMs, validating fixes is the real bottleneck our pipeline struggles to exceed 70% on an internal golden dataset (which itself is somewhat biased).
Many patches technically fix the vulnerability but introduce semantic regressions or architectural drift. Passing tests is a weak signal and proving a fix is truly safe to merge is much harder
Unfortunately the paper doesn’t include gpt 5.3 which was released around the same time as opus 4.6 and also gpt 5.4 few days back. Both are available via api
IMHO The harness must be used when running these experiments. The model vendors know best on giving the best harness with gpt 5.4 and codex or Claude code with opus 4.6 which makes a big difference if you are running any kind of agentic coding tasks.
I see both Claude and gpt to be neck and neck in coding. Every other model+harness is definitely 3-6 months behind. Right now codex seems to be the best in terms of solving complex bugs, long running tasks, much higher limits and even speed while Claude seems to do well in front end and their cli ux seems nice! Codex app is very good though (wish it wasn’t electron as a memory hog but it’s good)
I'd unironically (and privately) want to do that with the code of both myself and those around me - to maybe see who I should listen more to, as well as who maybe less (ideally down to the feature level), because everyone has opinions, sometimes loud ones, but some approaches lead to a lot of churn and issues over the years.
> The benchmark comprises 100 tasks, each corresponding on average to an evolution history spanning 233 days and 71 consecutive commits in a real-world code repository.
This seems like a really cool thing to benchmark! Technically it'd be possible to take GitHub repos that the AI orgs probably already have, cross-reference the code against the issues and regressions, and train/validate on that.
"Vibe coded stuff gets hard to maintain and will end up buggy." Yeah, so make models that deal with that better, optimize for maintainability and consistency.
The regression rates match what I saw early on with Claude Code on my monorepo. The fix was structural, not model-level: keeping everything in a single tree (packages, tests, docs, CI config) so the agent sees downstream effects of any change. When context is split across repos, agents cheerfully break imports because they literally can't see what depends on what.
Something hard to capture in benchmarks: project-level conventions. A well-maintained CLAUDE.md at the repo root — describing architecture, naming patterns, test conventions — gives the agent context it internalizes before touching code. My regression rate dropped noticeably once I started maintaining that kind of project metadata. Model choice is only half the equation — the other half is how well you've structured the information environment the agent works in.
To me using agents daily, the long term vision with maintainability in mind really makes the difference between us humans and agents, I like the idea. However evaluating long term maintainability over an average of just 500 loc changes does not sound like long term maintainability being measured here
I can't help but notice that they're benchmarking Opus 4.6 (Anthropic's latest and greatest model) against GPT-5.2 (which is three generations behind OpenAI's latest coding models: GPT-5.2-Codex, GPT-5.3-Codex and the latest GPT-5.4).
Would have loved to see a more detailed breakdown of performance by task type. The commit metadata is right there, seems straightforward to tag commits as feature vs refactor vs bug fix vs API change and report per-category numbers.
It interesting to see that the eval set becoming more and more expensive. Previously we just need to evaluate one test set, right now we need to create a lot of diffs and run a lot of tests.
Really long-term task benchmark showing significant improvements in very recent models, while also showing really bad regression rates across the board.
I've been building a similar loop with jetty.io for the last few months exclusively focused on data science workflows. I think that there's a lot of hill-climbing that can be accomplished by having a clear runbook.
reply