
Model Providers

Sloppy supports multiple LLM providers. Each provider is configured as an entry in the models array inside sloppy.json (or via the Dashboard UI). At runtime, models are resolved by prefix (openai:, gemini:, anthropic:, ollama:) and routed to the corresponding provider implementation.
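The prefix routing can be sketched in shell (a minimal illustration with a hypothetical resolve_provider helper, not Sloppy's actual implementation):

```shell
#!/bin/sh
# Minimal sketch: map a prefixed model string to its provider.
# Sloppy's real resolver also infers the provider from the entry's
# title when no prefix is present.
resolve_provider() {
  case "$1" in
    openai:*)    echo "openai" ;;
    gemini:*)    echo "gemini" ;;
    anthropic:*) echo "anthropic" ;;
    ollama:*)    echo "ollama" ;;
    *)           echo "unknown" ;;
  esac
}

resolve_provider "anthropic:claude-sonnet-4-20250514"   # prints "anthropic"
```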

Supported providers

| Provider | Prefix | Default API URL | Env variable | Auth |
|---|---|---|---|---|
| OpenAI API | openai: | https://api.openai.com/v1 | OPENAI_API_KEY | API key |
| OpenAI Codex (OAuth) | openai: | https://chatgpt.com/backend-api | — | OAuth device code |
| Google Gemini | gemini: | https://generativelanguage.googleapis.com | GEMINI_API_KEY | API key |
| Anthropic | anthropic: | https://api.anthropic.com | ANTHROPIC_API_KEY | API key |
| Ollama | ollama: | http://127.0.0.1:11434 | — | None |

Environment variables

Environment variables provide a way to configure API keys without writing them into sloppy.json. When both an environment variable and a config key are set, the config key takes precedence.
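This precedence rule can be expressed as a tiny sketch (effective_key is a hypothetical helper, not part of Sloppy; OPENAI_API_KEY is just the example variable):

```shell
#!/bin/sh
# Sketch of the precedence rule: a non-empty apiKey in sloppy.json wins;
# an empty one falls back to the provider's environment variable.
effective_key() {
  config_key="$1"
  env_key="$2"
  if [ -n "$config_key" ]; then
    echo "$config_key"
  else
    echo "$env_key"
  fi
}

OPENAI_API_KEY="sk-from-env"
effective_key "" "$OPENAI_API_KEY"                 # prints "sk-from-env"
effective_key "sk-from-config" "$OPENAI_API_KEY"   # prints "sk-from-config"
```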

| Variable | Provider | Description |
|---|---|---|
| OPENAI_API_KEY | OpenAI | API key for OpenAI models |
| GEMINI_API_KEY | Gemini | API key for Google Gemini models |
| ANTHROPIC_API_KEY | Anthropic | API key for Anthropic Claude models |
| BRAVE_API_KEY | Search | API key for the Brave web search tool |
| PERPLEXITY_API_KEY | Search | API key for the Perplexity web search tool |

Config file format

Each model entry in sloppy.json has four fields:

json
{
  "models": [
    {
      "title": "openai-api",
      "apiKey": "",
      "apiUrl": "https://api.openai.com/v1",
      "model": "gpt-4.1-mini"
    }
  ]
}

| Field | Description |
|---|---|
| title | Identifier used to infer the provider when the model string has no prefix. Must contain the provider name (e.g. openai-api, gemini, anthropic, ollama-local). |
| apiKey | API key for authenticated providers. Leave empty to use the environment variable. |
| apiUrl | Base URL for the provider API. Override for proxied or self-hosted endpoints. |
| model | Model identifier passed to the provider. Can include a prefix (openai:gpt-4.1-mini) or be plain (gpt-4.1-mini). |

Provider examples

OpenAI

json
{
  "title": "openai-api",
  "apiKey": "",
  "apiUrl": "https://api.openai.com/v1",
  "model": "gpt-4.1-mini"
}

With OPENAI_API_KEY set in the environment, apiKey can stay empty. The provider supports both the Chat Completions and Responses API variants, with automatic fallback between them.

Google Gemini

json
{
  "title": "gemini",
  "apiKey": "",
  "apiUrl": "https://generativelanguage.googleapis.com",
  "model": "gemini-2.5-flash"
}

Get an API key from Google AI Studio. The probe endpoint fetches the full model list from the Gemini API.

Anthropic

json
{
  "title": "anthropic",
  "apiKey": "",
  "apiUrl": "https://api.anthropic.com",
  "model": "claude-sonnet-4-20250514"
}

Get an API key from Anthropic Console. Available models include Claude Sonnet 4, Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 3.5 Haiku, and Claude 3 Opus.

Ollama

json
{
  "title": "ollama-local",
  "apiKey": "",
  "apiUrl": "http://127.0.0.1:11434",
  "model": "qwen3"
}

No API key needed. Point apiUrl at any running Ollama instance. The probe endpoint queries /api/tags to list locally available models.
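If an Ollama instance is running locally, you can inspect that list yourself by querying Ollama's API directly (this talks to Ollama, not to Sloppy, and requires Ollama to be running):

```shell
curl http://127.0.0.1:11434/api/tags
```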

Multiple providers

The models array supports multiple entries. Core builds a composite model provider that routes requests based on the model prefix:

json
{
  "models": [
    {
      "title": "openai-api",
      "apiKey": "",
      "apiUrl": "https://api.openai.com/v1",
      "model": "gpt-4.1-mini"
    },
    {
      "title": "gemini",
      "apiKey": "",
      "apiUrl": "https://generativelanguage.googleapis.com",
      "model": "gemini-2.5-flash"
    },
    {
      "title": "anthropic",
      "apiKey": "",
      "apiUrl": "https://api.anthropic.com",
      "model": "claude-sonnet-4-20250514"
    }
  ]
}

Model selection for agents

Each agent has a selectedModel field in its config that determines which model it uses. The value includes the provider prefix:

| Provider | Example selectedModel |
|---|---|
| OpenAI | openai:gpt-4.1-mini |
| Gemini | gemini:gemini-2.5-flash |
| Anthropic | anthropic:claude-sonnet-4-20250514 |
| Ollama | ollama:qwen3 |

Set this via:

  • Dashboard — Agent settings page, model dropdown
  • API — PUT /v1/agents/:id/config with { "selectedModel": "gemini:gemini-2.5-flash" }
  • Onboarding — model selection step during first-run setup
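As an example, the API route can be invoked with curl, assuming Core is listening on localhost:25101 and using my-agent as a placeholder agent id:

```shell
curl -X PUT http://localhost:25101/v1/agents/my-agent/config \
  -H "Content-Type: application/json" \
  -d '{"selectedModel": "gemini:gemini-2.5-flash"}'
```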

Model resolution flow

  1. Core reads the models array from config at startup.
  2. Each entry is resolved to a prefixed identifier (e.g. openai:gpt-4.1-mini) using either an explicit prefix in the model field or by inferring the provider from title and apiUrl.
  3. Factory classes build provider instances for each recognized prefix.
  4. A CompositeModelProvider combines all active providers.
  5. When an agent runs, its selectedModel is matched against supported models and routed to the correct provider.

Adding providers via Dashboard

Onboarding

The first-run onboarding wizard (step 2) shows all providers as cards. Select a provider, enter the API key, click Test connection to probe, then select a model from the returned list.

Settings

Open Settings → Providers in the Dashboard. Click a provider card to open its configuration modal. Enter the API key and API URL, select a model, and click Save Provider. The config is saved to sloppy.json immediately.

Provider probe API

The /v1/providers/probe endpoint tests connectivity for any provider:

bash
curl -X POST http://localhost:25101/v1/providers/probe \
  -H "Content-Type: application/json" \
  -d '{"providerId": "gemini", "apiKey": "YOUR_KEY"}'

Supported providerId values: openai-api, openai-oauth, gemini, anthropic, ollama.

The response includes ok, message, and a models array with available model options.
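For illustration, a successful probe response might look like the following (the field values are invented examples; only the field names ok, message, and models are documented here):

```json
{
  "ok": true,
  "message": "Connected",
  "models": ["gemini-2.5-flash", "gemini-2.5-pro"]
}
```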
