

Available for Pro plan and above
Custom model configuration applies to all Teable AI features, including AI Chat, AI Fields, App Builder, and Automations.
Starting April 9, 2026, Teable's agent engine has been upgraded to enhance AI capabilities across AI Chat Agent mode and App Builder. As part of this upgrade, these features currently support only Anthropic-compatible API endpoints; using an incompatible endpoint may result in errors.
  • Cloud users with BYOK: If your custom model provider does not support the Anthropic Messages API format, AI Chat Agent and App Builder will not function with your BYOK configuration. Please switch to an Anthropic-compatible provider (e.g. Anthropic API, OpenRouter) or use the default Teable model.
  • Self-hosted users: Please ensure your configured LLM endpoint is Anthropic-compatible. Alternatively, you can wait for our upcoming OpenAI-compatible endpoint support and pull the latest image once it’s available.
  • OpenAI-compatible endpoint support is on our roadmap and will be added in a future release.
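For BYOK or self-hosted setups, one way to sanity-check that an endpoint is Anthropic-compatible is to send it a minimal Anthropic Messages API request. The sketch below only builds the request URL, headers, and payload; it does not send anything, and the base URL, API key, and model name are placeholders to replace with your provider's values.

```python
import json

def build_anthropic_messages_request(base_url: str, api_key: str, model: str):
    """Build a minimal Anthropic Messages API request (URL, headers, body).

    An Anthropic-compatible endpoint must accept POST {base_url}/messages
    with this shape. All three arguments are placeholders.
    """
    url = base_url.rstrip("/") + "/messages"
    headers = {
        "x-api-key": api_key,               # Anthropic-style auth header
        "anthropic-version": "2023-06-01",  # required API version header
        "content-type": "application/json",
    }
    payload = {
        "model": model,
        "max_tokens": 16,
        "messages": [{"role": "user", "content": "ping"}],
    }
    return url, headers, json.dumps(payload)

# Placeholder values -- substitute your own endpoint, key, and model.
url, headers, body = build_anthropic_messages_request(
    "https://api.anthropic.com/v1", "sk-...", "claude-sonnet-4-6"
)
```

If POSTing this request to your endpoint returns an HTTP error about an unknown route or malformed body, the endpoint is likely not Anthropic-compatible.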
We recommend using Teable Credits. Teable Credits are currently offered at cost — no middleman, no markup — giving you access to top-tier AI models at the most competitive pricing with the smoothest experience.
After setup, the models you add can be used by AI Fields, Automations, AI Chat, and App Builder in the current space.

Where to Configure It

  1. Open the target space.
  2. Click Settings in the top right corner.
  3. Go to AI settings.

Setup Steps

Under AI Capabilities, turn on what you need:
  • AI Field
  • AI Chat

Add LLM Provider

Click Add LLM provider and fill in the following:
  • Name: Used to distinguish different providers.
  • Provider type: Select the provider type.
  • Base URL: Enter the provider’s API endpoint.
  • API Key: Enter the API key from the provider.
  • Models: Enter the model names you want to connect. Separate multiple models with English commas.
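The fields above can be thought of as a single provider record. The sketch below is illustrative only, not Teable's internal schema; the field names and example values are assumptions. Note how the Models field is split on English commas.

```python
from dataclasses import dataclass, field

@dataclass
class LLMProviderConfig:
    """Illustrative provider record -- not Teable's internal schema."""
    name: str            # label used to distinguish providers
    provider_type: str   # e.g. "anthropic", "openai"
    base_url: str        # the API endpoint, not the website URL
    api_key: str
    models: list = field(default_factory=list)

def parse_models(models_field: str) -> list:
    """Split the Models field on English commas and drop stray whitespace."""
    return [m.strip() for m in models_field.split(",") if m.strip()]

# Example values are placeholders.
cfg = LLMProviderConfig(
    name="My Anthropic",
    provider_type="anthropic",
    base_url="https://api.anthropic.com/v1",
    api_key="sk-...",
    models=parse_models("claude-opus-4-6, claude-sonnet-4-6"),
)
```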

Test Model Capabilities

There are currently three ways to test:
  • Click Test on the LLM provider row.
  • Click Test on an individual model row.
  • Click Test Model Capabilities in the top-right corner of the list to batch-test all configured models.
If a model is meant for image generation, check Image Generation Model before running the test. Once checked, Teable tests it as an image model for text-to-image and image-to-image capabilities. If it is not checked, the model is tested as a regular text model.

Configuration Tips

Base URL

Use the API endpoint URL, not the provider’s website URL. Most OpenAI-compatible endpoints need to end with /v1. Unless the provider’s documentation explicitly says otherwise, do not add an extra trailing /.
Teable does not support Coding Plan API keys. Please use a standard API key created in your provider dashboard. Coding Plan API keys usually work only inside specific coding tools and cannot be used as a general API key in Teable. If you use this kind of key, model testing is expected to fail.
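The Base URL rules above can be expressed as a few mechanical checks. This is a hypothetical helper based on the heuristics on this page, not an exhaustive validator; some providers (for example Google and Azure in the table below) legitimately deviate from the /v1 convention.

```python
def check_base_url(url: str, openai_compatible: bool = True) -> list:
    """Heuristic Base URL checks mirroring the tips above. Returns warnings."""
    problems = []
    if not url.startswith(("http://", "https://")):
        problems.append("must be an absolute http(s) URL")
    if url.endswith("/"):
        problems.append("drop the trailing '/' unless the provider docs require it")
    if openai_compatible and not url.rstrip("/").endswith("/v1"):
        problems.append("most OpenAI-compatible endpoints end with /v1")
    if "www." in url:
        problems.append("this looks like a website URL, not an API endpoint")
    return problems
```

For example, `check_base_url("https://api.openai.com/v1")` passes cleanly, while a trailing slash or a homepage URL produces a warning.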
Common Base URL examples:
  • Anthropic: https://api.anthropic.com/v1
  • OpenAI: https://api.openai.com/v1
  • Google: https://generativelanguage.googleapis.com/v1beta
  • DeepSeek: https://api.deepseek.com/v1
  • Azure: https://{your-resource-name}.openai.azure.com
  • Mistral: https://api.mistral.ai/v1
  • Qwen: https://dashscope.aliyuncs.com/compatible-mode/v1
  • Zhipu AI: https://open.bigmodel.cn/api/paas/v4
  • XAI (Grok): https://api.x.ai/v1
  • OpenRouter: https://openrouter.ai/api/v1
  • TogetherAI: https://api.together.xyz/v1
  • Ollama (local): http://localhost:11434

Models

Model names must match the provider documentation exactly and are case-sensitive. Separate multiple models with English commas.
Common model examples:
  • OpenAI: gpt-5.4, gpt-5.4-mini, gpt-5
  • Anthropic: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5
  • Google: gemini-3.1-pro-preview, gemini-3-flash, gemini-2.5-flash
  • Azure: gpt-5.4, gpt-5, gpt-5-mini, gpt-4o
  • DeepSeek: deepseek-chat, deepseek-reasoner
  • XAI: grok-4, grok-4.1-fast
  • Qwen: qwen3.5-plus, qwen3-max
  • OpenRouter: anthropic/claude-opus-4-6, google/gemini-3.1-pro-preview
  • TogetherAI: deepseek-ai/DeepSeek-R1, meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
  • Mistral: mistral-large-latest, mistral-medium-latest, codestral-latest
  • Ollama: qwen3.5:9b, gemma3:12b, llama3.2:8b
Notes:
  • Model names are case-sensitive. Use the exact names from the provider documentation.
  • Some providers require no spaces after commas, for example gpt-4o,gpt-4-turbo.
  • OpenRouter uses the format provider/model-name, for example openai/gpt-4o.
  • For Azure, use the deployment name you created in Azure AI Foundry or Azure OpenAI Studio, not the base model name.
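The notes above translate into a few mechanical checks. The helper below is a hypothetical linting function, not part of Teable; the provider labels and warning strings are illustrative.

```python
def lint_model_entry(provider: str, model: str) -> list:
    """Return warnings for one model-name entry, mirroring the notes above."""
    warnings = []
    stripped = model.strip()
    if model != stripped:
        warnings.append("strip surrounding spaces; some providers reject them")
    if provider == "openrouter" and "/" not in stripped:
        warnings.append("OpenRouter models use provider/model-name, e.g. openai/gpt-4o")
    if " " in stripped:
        warnings.append("model names never contain spaces")
    return warnings
```

For example, `lint_model_entry("openrouter", "gpt-4o")` flags the missing `provider/` prefix, while a correctly formatted entry returns no warnings. Case sensitivity cannot be checked mechanically without the provider's own model list, so verify capitalization against the provider documentation.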

FAQ

  • Connection or "not found" errors: Check whether the Base URL is correct, whether it was mistakenly filled with the provider homepage URL, or whether it has an extra trailing /. If you are using an OpenAI-compatible endpoint, also confirm that /v1 is not missing.
  • Authentication or quota errors: Check whether the API Key is valid and whether the account still has credits or permission to call the model.
  • The endpoint cannot be reached: Check whether the Base URL is correct and whether your current environment can reach that address.
  • The model is not found: Make sure the Models value matches the provider documentation exactly, including case and separator format.
  • Testing fails with a Coding Plan API key: Teable does not support this kind of key. Create a standard API Key in your provider dashboard and test again.
  • An image-generation model fails the test: Check Image Generation Model first and then test again. Once checked, Teable tests text-to-image and image-to-image capabilities instead.
Last modified on April 23, 2026