# Ollama
Aider can connect to local Ollama models.
```
# Pull the model
ollama pull <model>

# Start your ollama server
ollama serve

# In another terminal window...
python -m pip install -U aider-chat

export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux
setx   OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx

aider --model ollama/<model>
```
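If aider can't connect, it can help to confirm the server is actually reachable before launching aider. A minimal check, assuming Ollama's default port and its `/api/tags` endpoint (which lists the models you have pulled):

```
curl http://127.0.0.1:11434/api/tags
```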
In particular, llama3:70b works well with aider:
```
ollama pull llama3:70b
ollama serve

# In another terminal window...
export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux
setx   OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx

aider --model ollama/llama3:70b
```
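Instead of exporting the variable in every new shell, you can put it in a `.env` file in the directory where you run aider. A minimal sketch, assuming aider's `.env` support picks up the setting:

```
# .env in the directory where you launch aider
OLLAMA_API_BASE=http://127.0.0.1:11434
```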
See the model warnings section for details on the warnings aider will display when working with models it is not familiar with.
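One way to quiet those warnings is to supply metadata for the model yourself. A hedged sketch, assuming aider reads a `.aider.model.metadata.json` file with litellm-style entries (the field names follow litellm's conventions and the values here are illustrative, not authoritative):

```
{
  "ollama/llama3:70b": {
    "max_input_tokens": 8192,
    "max_output_tokens": 4096,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "ollama",
    "mode": "chat"
  }
}
```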