Model warnings

Aider tries to sanity check that it is configured correctly to work with the LLM you specified:

  • It checks that all environment variables required by the model are set. These variables configure things like API keys and API base URLs, and they must be set correctly.
  • It checks a metadata database to look up the context window size and token costs for the model. It’s usually OK if this extra metadata isn’t available.

Sometimes one or both of these checks will fail, and aider will issue one or more of the following warnings.
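
If you want to run the same kind of checks yourself, the sketch below uses litellm, the library aider uses to talk to providers, to ask which environment variables a model needs and whether metadata is available for it. This is only an illustration of the two checks described above, not aider’s own code, and it assumes a litellm version that provides validate_environment and get_model_info.

# Illustrative sketch of the two sanity checks, run through litellm directly.
# Assumes: pip install litellm, and that these helpers exist in your version.
import litellm

model = "azure/gpt-4-turbo"

# Check 1: are the required environment variables set?
env = litellm.validate_environment(model=model)
if env.get("missing_keys"):
    print(f"Missing environment variables for {model}: {env['missing_keys']}")

# Check 2: does litellm know the context window size and token costs?
try:
    info = litellm.get_model_info(model)
    print(f"Context window: {info.get('max_input_tokens')} tokens")
    print(f"Input cost per token: {info.get('input_cost_per_token')}")
except Exception:
    print(f"No metadata for {model}; aider would warn but still mostly work.")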

Missing environment variables

Model azure/gpt-4-turbo: Missing these environment variables:
- AZURE_API_BASE
- AZURE_API_VERSION
- AZURE_API_KEY

You need to set the listed environment variables. Otherwise you will get error messages when you start chatting with the model.
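
For example, you could run a quick pre-flight check before launching aider. This is a hypothetical helper using only the Python standard library; the variable names are the ones from the Azure warning above, and the values you set them to depend on your own deployment.

# Hypothetical pre-flight check: confirm the variables from the warning are set.
import os
import sys

required = ["AZURE_API_BASE", "AZURE_API_VERSION", "AZURE_API_KEY"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    sys.exit("Set these before running aider: " + ", ".join(missing))
print("Azure environment looks complete; run: aider --model azure/gpt-4-turbo")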

Unknown which environment variables are required

Model gpt-5: Unknown which environment variables are required.

Aider is unable to verify the environment because it doesn’t know which variables are required for this model. If required variables are missing, you may get errors when you try to chat with the model. Check the litellm provider documentation to see whether the required variables are listed there.
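
If you want a programmatic hint about which provider page to read, something like the sketch below may help. It assumes litellm exposes a get_llm_provider helper that maps a model name to its provider; treat the exact return values as an assumption and fall back to the provider docs if the call fails.

# Assumed litellm helper: map a model name to its provider, so you know which
# litellm provider page lists the required environment variables.
import litellm

for name in ("azure/gpt-4-turbo", "gpt-5"):
    try:
        _, provider, _, _ = litellm.get_llm_provider(name)
        print(f"{name}: provider {provider!r}; check its page in the litellm docs")
    except Exception:
        print(f"{name}: litellm cannot determine the provider either")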

Context window size and token costs unavailable

Model foobar: Unknown model, context window size and token costs unavailable.

If you specify a model that aider has never heard of, you will get an “unknown model” warning. This means aider doesn’t know the context window size and token costs for that model. Some minor functionality will be limited when using such models, but it’s not really a significant problem.

Aider will also try to suggest similarly named models, in case you made a typo or mistake when specifying the model name.
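
The suggestions work like ordinary fuzzy string matching. A rough sketch of the idea, not aider’s actual implementation, using Python’s difflib and a hypothetical stand-in for the list of models aider knows about:

# Rough sketch of typo suggestions with stdlib fuzzy matching.
import difflib

known_models = ["gpt-4-turbo", "gpt-3.5-turbo", "claude-3-opus-20240229"]
typo = "gpt4-turbo"
# Prints the closest known names, best match first.
print(difflib.get_close_matches(typo, known_models, n=3, cutoff=0.6))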

Adding settings for missing models

You can register the model settings that aider should use for models it doesn’t already know about. Create a .aider.models.yml file in one of these locations:

  • Your home directory.
  • The root of your git repo.
  • The current directory where you launch aider.
  • Or specify a file with the --model-settings-file <filename> switch.

If the files above exist, they will be loaded in that order. Files loaded last will take priority.

The yaml file should be a list of dictionary objects, one for each model, as follows:

- name: "gpt-3.5-turbo"
  edit_format: "whole"
  weak_model_name: "gpt-3.5-turbo"
  use_repo_map: false
  send_undo_reply: false
  accepts_images: false
  lazy: false
  reminder_as_sys_msg: true
  examples_as_sys_msg: false
- name: "gpt-4-turbo-2024-04-09"
  edit_format: "udiff"
  weak_model_name: "gpt-3.5-turbo"
  use_repo_map: true
  send_undo_reply: true
  accepts_images: true
  lazy: true
  reminder_as_sys_msg: true
  examples_as_sys_msg: false
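
The sketch below shows how the load order described above turns into effective settings: entries are keyed by the model name, and a model defined in a later file replaces the same model from an earlier one. It assumes PyYAML is installed, uses an example path for the repo root, and illustrates the precedence rules rather than aider’s own loader.

# Illustrative merge of .aider.models.yml files: later files override earlier
# ones, keyed by each entry's "name" field. Assumes PyYAML (import yaml).
from pathlib import Path
import yaml

search_order = [
    Path.home() / ".aider.models.yml",            # home directory
    Path("/path/to/repo") / ".aider.models.yml",  # git repo root (example path)
    Path(".") / ".aider.models.yml",              # current directory
]

settings = {}
for path in search_order:
    if path.exists():
        for entry in yaml.safe_load(path.read_text()) or []:
            settings[entry["name"]] = entry       # last file loaded wins

print(settings.get("gpt-4-turbo-2024-04-09"))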

Specifying context window size and token costs

You can register context window limits and costs for models that aren’t known to aider. Create a .aider.litellm.models.json file in one of these locations:

  • Your home directory.
  • The root of your git repo.
  • The current directory where you launch aider.
  • Or specify a file with the --model-metadata-file <filename> switch.

If the files above exist, they will be loaded in that order. Files loaded last will take priority.

The json file should be a dictionary with an entry for each model, as follows:

{
    "deepseek-chat": {
        "max_tokens": 4096,
        "max_input_tokens": 32000,
        "max_output_tokens": 4096,
        "input_cost_per_token": 0.00000014,
        "output_cost_per_token": 0.00000028,
        "litellm_provider": "deepseek",
        "mode": "chat"
    }
}
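
The two cost fields are per-token prices in dollars, so you can estimate a request’s cost directly from them: with the numbers above, a chat that sends 10,000 input tokens and receives 1,000 output tokens costs roughly 10,000 × 0.00000014 + 1,000 × 0.00000028 ≈ $0.0017. A small sketch that reads the metadata file and does that arithmetic:

# Estimate request cost from a .aider.litellm.models.json metadata file.
import json

with open(".aider.litellm.models.json") as f:
    metadata = json.load(f)

entry = metadata["deepseek-chat"]
prompt_tokens, completion_tokens = 10_000, 1_000
cost = (prompt_tokens * entry["input_cost_per_token"]
        + completion_tokens * entry["output_cost_per_token"])
print(f"Estimated cost: ${cost:.4f}")  # about $0.0017 with the values above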

See litellm’s model_prices_and_context_window.json file for more examples.
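
If you have litellm installed, you can also dump an existing entry from that table to use as a template for your own JSON file. This assumes litellm exposes the contents of model_prices_and_context_window.json as litellm.model_cost:

# Print a known model's metadata as a starting point for your own JSON entry.
import json
import litellm

print(json.dumps(litellm.model_cost.get("deepseek-chat", {}), indent=4))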