Configure custom models by adding providers to settings.json. The same file works globally (~/.zencoder/settings.json) or per project (.zencoder/settings.json), letting you expose local, VPC, or third-party endpoints right inside the Zencoder model selector.

[Screenshot: model selector showing BYOM models]

Model Definition

Declare each provider and its models in the JSON structure shown below. This example registers a local Ollama runtime and Ollama's hosted API side by side:
{
  "providers": {
    "ollama-local": {
      "mode": "direct",
      "type": "openai-compatible",
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "not-needed",
      "models": {
        "gpt-oss-20b": {
          "name": "gpt-oss:20b",
          "displayName": "GPT OSS 20B (Local)",
          "capabilities": [],
          "options": {
            "temperature": 0.7,
            "maxOutputTokens": 4096
          }
        },
        "gpt-oss-120b": {
          "name": "gpt-oss:120b",
          "displayName": "GPT OSS 120B (Local)",
          "capabilities": [],
          "options": {
            "temperature": 0.7,
            "maxOutputTokens": 4096
          }
        },
        "qwen3-coder-30b": {
          "name": "qwen3-coder:30b",
          "displayName": "QWEN3 Coder 30B (Local)",
          "capabilities": [],
          "options": {
            "temperature": 0.7,
            "maxOutputTokens": 4096
          }
        },
        "deepseek-r1-70b": {
          "name": "deepseek-r1:70b",
          "displayName": "DeepSeek R1 70b (Local)",
          "capabilities": [],
          "options": {
            "temperature": 0.7,
            "maxOutputTokens": 4096
          }
        }
      }
    },
    "ollama-cloud": {
      "mode": "direct",
      "type": "openai-compatible",
      "baseUrl": "https://ollama.com/v1",
      "apiKey": "KEY",
      "models": {
        "gpt-oss-20b-cloud": {
          "name": "gpt-oss:20b",
          "displayName": "GPT OSS 20B (Cloud)",
          "capabilities": [],
          "options": {
            "temperature": 0,
            "maxOutputTokens": 4096
          }
        },
        "gpt-oss-120b-cloud": {
          "name": "gpt-oss:120b",
          "displayName": "GPT OSS 120B (Cloud)",
          "capabilities": [],
          "options": {
            "temperature": 0,
            "maxOutputTokens": 4096
          }
        },
        "kimi-k2-cloud": {
          "name": "kimi-k2-thinking:cloud",
          "displayName": "Kimi K2 Thinking (Cloud)",
          "capabilities": [],
          "options": {
            "temperature": 0,
            "maxOutputTokens": 4096
          }
        }
      }
    }
  }
}

Adding More Providers

Add a new key under "providers" for each vendor or environment. Mix local runtimes, VPC gateways, and SaaS APIs; Zencoder lists every declared model in the selector.
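
For example, a second provider entry for an Anthropic-style endpoint could sit alongside the Ollama entries. The anthropic-direct key, model identifier, and option values below are illustrative placeholders, not required values:

{
  "providers": {
    "ollama-local": { },
    "anthropic-direct": {
      "mode": "direct",
      "type": "anthropic",
      "baseUrl": "https://api.anthropic.com",
      "apiKey": "KEY",
      "models": {
        "claude-sonnet": {
          "name": "claude-sonnet-4-20250514",
          "displayName": "Claude Sonnet (Direct)",
          "capabilities": [],
          "options": {
            "temperature": 0,
            "maxOutputTokens": 4096
          }
        }
      }
    }
  }
}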

Reference

  • mode – leave set to direct for BYOM endpoints.
  • type – openai-compatible, openai, gemini, or anthropic, depending on the API surface.
  • baseUrl – the root URL for the inference API (can be localhost, VPC, or SaaS).
  • apiKey – inline credential; omit it to rely on the ZENCODER_<PROVIDER>_API_KEY environment variable instead.
  • name – the identifier the provider expects in requests.
  • displayName – friendly label in the model selector.
  • capabilities – optional list (for example imagesInput) describing special inputs; see the sketch after this list.
  • options – temperature, max tokens, or other knobs the API supports.
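
Combining the last two properties, a model entry for a vision-capable model might look like the following sketch. The llava:13b name and option values are placeholders; imagesInput is the capability named above:

"llava-13b": {
  "name": "llava:13b",
  "displayName": "LLaVA 13B (Vision, Local)",
  "capabilities": ["imagesInput"],
  "options": {
    "temperature": 0.2,
    "maxOutputTokens": 4096
  }
}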

Set useDefaultProviders to false when you want Zencoder to hide the built-in catalog, so the selector shows only the providers you declare. Leave it out (or set it to true) to keep the Zencoder defaults side by side with your private endpoints.

{
  "useDefaultProviders": false,
  "providers": {
    "ollama-local": { }
  }
}
If the chat UI returns “Something went wrong” after wiring up a provider, email support@zencoder.ai so we can help troubleshoot the configuration.

Where to Configure

Use the same JSON schema at either scope, depending on how broadly you want to share the providers.

Global (~/.zencoder/settings.json):
  1. Create or edit ~/.zencoder/settings.json.
  2. Paste your providers block (plus useDefaultProviders if you need to hide Zencoder-managed models).
  3. Run zen settings reload so the CLI and UI pick up the changes.

To avoid storing secrets in the file, export ZENCODER_<PROVIDER>_API_KEY in your shell and omit apiKey from the JSON.
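
For example, the ollama-cloud provider above would likely read its credential from the line below; the exact mapping from provider key to variable name (upper-case, hyphens to underscores) is an assumption:

export ZENCODER_OLLAMA_CLOUD_API_KEY="<your key>"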

Project (.zencoder/settings.json):
  • Create .zencoder/settings.json inside the repository.
  • Commit it so teammates and CI inherit the same provider list.

Project files override the machine-wide file, so a single repo can target private endpoints even if your global default stays on Managed Cloud.
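
As a sketch, a project-scoped .zencoder/settings.json that targets a private endpoint could look like this. The vpc-gateway key, URL, and model name are illustrative, and apiKey is omitted so the credential can come from an environment variable, per the tip above:

{
  "providers": {
    "vpc-gateway": {
      "mode": "direct",
      "type": "openai-compatible",
      "baseUrl": "https://inference.internal.example.com/v1",
      "models": {
        "internal-coder": {
          "name": "internal-coder",
          "displayName": "Internal Coder (VPC)",
          "capabilities": [],
          "options": {
            "temperature": 0,
            "maxOutputTokens": 4096
          }
        }
      }
    }
  }
}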