Claude 3 beats GPT-4 on Aider’s code editing benchmark

[Figure: benchmark results chart]

Anthropic just released their new Claude 3 models with evals showing better performance on coding tasks. With that in mind, I’ve been benchmarking the new models using Aider’s code editing benchmark suite.

Claude 3 Opus outperforms all of OpenAI’s models, making it the best available model for pair programming with AI.

To use Claude 3 Opus with aider:

pip install aider-chat
export ANTHROPIC_API_KEY=sk-...
aider --opus

Aider’s code editing benchmark

Aider is an open source command line chat tool that lets you pair program with AI on code in your local git repo.

Aider relies on a code editing benchmark to quantitatively evaluate how well an LLM can make changes to existing code. The benchmark uses aider to try to complete 133 Exercism Python coding exercises. For each exercise, Exercism provides a starting Python file with stubs for the needed functions, a natural language description of the problem to solve, and a test suite to evaluate whether the coder has correctly solved the problem.
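
As a concrete illustration, here is a made-up miniature exercise in the same spirit (the file names and tests below are hypothetical, not one of the actual 133):

# hamming.py -- the starting stub handed to the LLM
def distance(strand_a, strand_b):
    pass

# test_hamming.py -- the test suite used to grade the solution
import unittest
from hamming import distance

class HammingTest(unittest.TestCase):
    def test_identical_strands(self):
        self.assertEqual(distance("A", "A"), 0)

    def test_complete_distance(self):
        self.assertEqual(distance("GGACG", "AGGCG"), 2)

if __name__ == "__main__":
    unittest.main()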

The LLM gets two tries to solve each problem:

  1. On the first try, it gets the initial stub code and the English description of the coding task. If the tests all pass, we are done.
  2. If any tests fail, aider sends the LLM the failing test output and gives it a second try to complete the task.
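
Put together, the harness logic amounts to the loop sketched below. This is a hypothetical sketch, not Aider's actual benchmark code; driving aider non-interactively with --message and --yes and grading with pytest are assumptions made for illustration.

import subprocess

def run_aider(exercise_dir, prompt):
    # Drive aider non-interactively: --yes auto-confirms prompts,
    # --message sends a single instruction and exits.
    subprocess.run(["aider", "--yes", "--message", prompt], cwd=exercise_dir)

def run_tests(exercise_dir):
    # Run the exercise's test suite, capturing output for the second try.
    result = subprocess.run(
        ["python", "-m", "pytest"],
        cwd=exercise_dir, capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr

def benchmark_exercise(exercise_dir, instructions):
    # First try: the LLM sees only the stub code and the task description.
    run_aider(exercise_dir, instructions)
    passed, output = run_tests(exercise_dir)
    if passed:
        return "passed on first try"
    # Second try: send the failing test output back to the LLM.
    run_aider(exercise_dir, "The tests failed with this output, please fix:\n\n" + output)
    passed, _ = run_tests(exercise_dir)
    return "passed on second try" if passed else "failed"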

Benchmark results

Claude 3 Opus

Claude 3 Sonnet

Code editing

It’s highly desirable to have the LLM send back code edits as some form of diff, rather than an updated copy of the entire source file.

Weaker models like GPT-3.5 are unable to use diffs, and are stuck sending back updated copies of entire source files. Aider uses more efficient search/replace blocks with the original GPT-4 and unified diffs with the newer GPT-4 Turbo models.

Claude 3 Opus works best with the search/replace blocks, allowing it to send back code changes efficiently. Unfortunately, the Sonnet model was only able to work reliably with whole files, which limits it to editing smaller source files and costs more tokens, money, and time.
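
To make the trade-off concrete: a search/replace block quotes only the lines to change alongside their replacement, so the model never has to re-emit the rest of the file. The sketch below shows the general shape of such an edit, reusing the hypothetical Hamming stub from earlier; Aider's exact markers and matching rules may differ in detail.

hamming.py
<<<<<<< SEARCH
def distance(strand_a, strand_b):
    pass
=======
def distance(strand_a, strand_b):
    if len(strand_a) != len(strand_b):
        raise ValueError("strands must be of equal length")
    return sum(a != b for a, b in zip(strand_a, strand_b))
>>>>>>> REPLACE

Applying such an edit is essentially an exact-match substitution, which is why the SEARCH text has to match the original file verbatim:

def apply_search_replace(source, search, replace):
    # Naive applier (illustrative): replace the first verbatim match.
    if search not in source:
        raise ValueError("SEARCH block does not match the file")
    return source.replace(search, replace, 1)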

Other observations

There are a few other things worth noting: