Ollama

Aider can connect to local Ollama models.

First, install aider:

python -m pip install aider-install
aider-install
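
To verify the install, you can check the version from your shell (aider-install should leave the aider command on your PATH):

aider --version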

Then configure your Ollama API endpoint (usually the default):

export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux
setx   OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx
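
Once the Ollama server is running, you can confirm the endpoint is reachable by listing your locally available models; /api/tags is Ollama's standard model-listing route:

curl http://127.0.0.1:11434/api/tags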

Start working with aider and Ollama on your codebase:

# Pull the model
ollama pull <model>

# Start your ollama server, increasing the context window to 8k tokens
OLLAMA_CONTEXT_LENGTH=8192 ollama serve

# In another terminal window, change directory into your codebase
cd /to/your/project

aider --model ollama_chat/<model>

Using the ollama_chat/ prefix is recommended over ollama/, since it routes requests through Ollama's chat API rather than the completion API.
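
As a concrete end-to-end example, here is the same workflow with the qwen2.5-coder model used in the settings example below (any model you have pulled works the same way):

# Pull and serve the model
ollama pull qwen2.5-coder:32b-instruct-fp16
OLLAMA_CONTEXT_LENGTH=8192 ollama serve

# In another terminal window, from your project directory
aider --model ollama_chat/qwen2.5-coder:32b-instruct-fp16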

See the model warnings section for details on the warnings aider shows when it is not familiar with a model.

API Key

If you are using an Ollama server that requires an API key, you can set OLLAMA_API_KEY:

export OLLAMA_API_KEY=<api-key> # Mac/Linux
setx   OLLAMA_API_KEY <api-key> # Windows, restart shell after setx
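
For example, to point aider at a remote, key-protected Ollama server (the host below is hypothetical):

export OLLAMA_API_BASE=https://ollama.example.com # hypothetical remote host
export OLLAMA_API_KEY=<api-key>
aider --model ollama_chat/<model>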

Setting the context window size

Ollama uses a 2k context window by default, which is very small for working with aider. It also silently discards context that exceeds the window. This is especially dangerous because many users don’t even realize that most of their data is being discarded by Ollama.

By default, aider sets Ollama’s context window to be large enough for each request you send plus 8k tokens for the reply. This ensures data isn’t silently discarded by Ollama. For example, a request whose prompt takes roughly 24k tokens would be sent with a context window of about 32k.

If you’d like, you can configure a fixed-size context window instead with a .aider.model.settings.yml file like this:

- name: ollama/qwen2.5-coder:32b-instruct-fp16
  extra_params:
    num_ctx: 65536
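
Aider looks for .aider.model.settings.yml in your home directory and in the root of your git project, so place the file in either location. Launching aider with the matching model name will then request the fixed 64k window; keep in mind that larger context windows use more memory on the Ollama server:

aider --model ollama/qwen2.5-coder:32b-instruct-fp16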