Available for Pro plan and above
Custom model configuration applies to all Teable AI features, including AI Chat, AI Fields, App Builder, and Automations. Starting April 9, 2026, Teable’s agent engine has been upgraded to enhance AI capabilities across AI Chat Agent mode and App Builder. As part of this upgrade, these features currently support only Anthropic-compatible API endpoints; using incompatible endpoints may result in errors.
  • Cloud users with BYOK: If your custom model provider does not support the Anthropic Messages API format, AI Chat Agent and App Builder will not function with your BYOK configuration. Please switch to an Anthropic-compatible provider (e.g. Anthropic API, OpenRouter) or use the default Teable model.
  • Self-hosted users: Please ensure your configured LLM endpoint is Anthropic-compatible. Alternatively, you can wait for our upcoming OpenAI-compatible endpoint support and pull the latest image once it’s available.
  • OpenAI-compatible endpoint support is on our roadmap and will be added in a future release.
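To make the compatibility requirement concrete, the sketch below contrasts the two request shapes. The endpoint paths and headers follow the public Anthropic and OpenAI API documentation; the helper functions themselves are illustrative, not part of Teable.

```python
# Illustrative sketch: the two API formats differ in both the endpoint path and
# the auth headers, which is why an OpenAI-only endpoint cannot serve the
# Anthropic-format requests that the upgraded agent engine sends.

def anthropic_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Build an Anthropic Messages API request."""
    return {
        "url": f"{base_url}/messages",            # e.g. https://api.anthropic.com/v1/messages
        "headers": {
            "x-api-key": api_key,                 # Anthropic auth header
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        "body": {
            "model": model,
            "max_tokens": 1024,                   # required by the Messages API
            "messages": [{"role": "user", "content": prompt}],
        },
    }

def openai_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Build an OpenAI Chat Completions request, for comparison."""
    return {
        "url": f"{base_url}/chat/completions",    # e.g. https://api.openai.com/v1/chat/completions
        "headers": {
            "Authorization": f"Bearer {api_key}", # OpenAI auth header
            "content-type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

A provider such as OpenRouter works for Agent mode because it exposes the Anthropic-style `/messages` shape in addition to its OpenAI-compatible one.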
We recommend using Teable Credits. Teable Credits are currently offered at cost — no middleman, no markup — giving you access to top-tier AI models at the most competitive pricing with the smoothest experience.

Integrate AI Models in Teable

Steps:

  1. Open a Space: First, navigate to and enter the specific space in Teable where you want to integrate the AI model.
  2. Access Settings: In the current space’s interface, locate and click the “Settings” option in the top right corner.
  3. Go to AI Settings: In the settings menu, select and click “AI Settings.”
  4. Add LLM Provider: Click the “+ Add LLM provider” button.
  5. Configure Provider Information:
    • Provide a name for your model provider (e.g., “OpenAI”, “Claude”).
    • Select the provider type (OpenAI, Anthropic, Google, etc.).
    • Set your model provider’s Base URL.
    • Enter the corresponding API Key.
    • Enter the model names (comma-separated, e.g., gpt-4o, gpt-4-turbo).
  6. Complete Addition: Once configured, click the “Add” button.
  7. Configure Model Preferences: After adding the provider, configure which model to use:
    • Chat model - For AI Chat, planning, coding, and other complex reasoning tasks. Recommended: claude-opus-4-6. A Chat model must be configured before you can enable the custom model.
  8. Enable Custom Model: Toggle the switch to enable your custom model configuration.
After completing these steps, you’ll see a distinction between “Space Models” and “System Models” within Teable’s AI features. The AI model you just configured will appear under the “Space” section.
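Taken together, steps 4 through 6 collect a small set of fields that must be consistent before the provider can be saved. Here is a minimal validation sketch; the field names are illustrative, not Teable’s internal schema.

```python
# Minimal sketch of validating the fields collected when adding an LLM provider.
# Field names ("name", "type", "base_url", "api_key", "models") are hypothetical.

def validate_provider_config(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks usable."""
    problems = []
    for field in ("name", "type", "base_url", "api_key", "models"):
        if not config.get(field):
            problems.append(f"missing required field: {field}")
    base_url = config.get("base_url", "")
    if base_url.endswith("/"):
        problems.append("base_url must not end with a trailing slash")
    if base_url and not base_url.startswith(("http://", "https://")):
        problems.append("base_url must be a full API endpoint URL")
    return problems

config = {
    "name": "Claude",
    "type": "anthropic",
    "base_url": "https://api.anthropic.com/v1",
    "api_key": "sk-...",
    "models": ["claude-opus-4-6", "claude-sonnet-4-6"],
}
assert validate_provider_config(config) == []
```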

Configuration Tips

Base URL Guidelines

The Base URL must be the API endpoint URL, not the provider’s website URL. Make sure to include the complete path (usually ending with /v1 or similar). Do not include a trailing slash (/) at the end of the URL — for example, use https://generativelanguage.googleapis.com/v1beta instead of https://generativelanguage.googleapis.com/v1beta/.
Teable does not support coding plan keys. Please use a standard API key created in your provider dashboard. Coding plan keys usually work only in specific coding tools and cannot be used as a general API key in Teable; if you use one, model testing will fail.
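A trailing slash is the most common Base URL mistake, and it is trivial to normalize away. A one-line sketch:

```python
def normalize_base_url(url: str) -> str:
    """Strip the trailing slash that commonly breaks model tests."""
    return url.rstrip("/")

assert normalize_base_url("https://generativelanguage.googleapis.com/v1beta/") == \
       "https://generativelanguage.googleapis.com/v1beta"
```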
Provider | Base URL Format
Anthropic | https://api.anthropic.com/v1
OpenAI | https://api.openai.com/v1
Google | https://generativelanguage.googleapis.com/v1beta
DeepSeek | https://api.deepseek.com/v1
Azure | https://{your-resource-name}.openai.azure.com
Mistral | https://api.mistral.ai/v1
Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1
Zhipu | https://open.bigmodel.cn/api/paas/v4
XAI (Grok) | https://api.x.ai/v1
OpenRouter | https://openrouter.ai/api/v1
TogetherAI | https://api.together.xyz/v1
Ollama (Local) | http://localhost:11434

Models Field Guidelines

Enter model names exactly as specified by your provider. Multiple models should be separated by commas.
Common examples by provider:
Provider | Example Models
OpenAI | gpt-5.4, gpt-5.4-mini, gpt-5
Anthropic | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5
Google | gemini-3.1-pro-preview, gemini-3-flash, gemini-2.5-flash
Azure | gpt-5.4, gpt-5, gpt-5-mini, gpt-4o
DeepSeek | deepseek-chat, deepseek-reasoner
XAI | grok-4, grok-4.1-fast
Qwen | qwen3.5-plus, qwen3-max
OpenRouter | anthropic/claude-opus-4-6, google/gemini-3.1-pro-preview
TogetherAI | deepseek-ai/DeepSeek-R1, meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
Mistral | mistral-large-latest, mistral-medium-latest, codestral-latest
Ollama | qwen3.5:9b, gemma3:12b, llama3.2:8b
Important notes:
  • Model names are case-sensitive - use the exact names from your provider’s documentation
  • Some providers do not accept spaces after commas; enter names like gpt-4o,gpt-4-turbo
  • For OpenRouter, use the format: provider/model-name (e.g., openai/gpt-4o)
  • For Azure, use the deployment name you created in Azure AI Foundry (or Azure OpenAI Studio), not the base model name. For example, if you deployed gpt-5.4 with deployment name my-gpt54, enter my-gpt54
  • You can add models later by updating the provider configuration
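The comma-separation rules above (exact case, no stray whitespace, no empty entries) can be captured in a small parsing sketch; `parse_models` is an illustrative helper, not a Teable API.

```python
def parse_models(raw: str) -> list[str]:
    """Split a comma-separated models string, trimming whitespace and dropping
    empty entries. Case is preserved because model names are case-sensitive."""
    return [name.strip() for name in raw.split(",") if name.strip()]

assert parse_models("gpt-4o, gpt-4-turbo") == ["gpt-4o", "gpt-4-turbo"]
assert parse_models("gpt-4o,gpt-4-turbo,") == ["gpt-4o", "gpt-4-turbo"]
```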

Troubleshooting

Issue | Solution
All model tests failed | Check if your Base URL ends with a trailing slash (/) and remove it. Also verify your API key has the required API enabled (e.g., Generative Language API for Google)
“Test failed” error | Verify your API key is valid and has sufficient credits
Connection timeout | Check if your Base URL is correct and accessible
Model not found | Ensure the model name matches exactly with the provider’s documentation
Test fails with a coding plan key | Teable does not support this kind of key. Create a standard API key in your provider dashboard and test again
Cannot enable custom model | Make sure you’ve configured the Chat model first
App Builder or AI Chat Agent not working with custom model | Since April 9, 2026, these features require an Anthropic-compatible endpoint. Please switch to an Anthropic-compatible provider (e.g. Anthropic API, OpenRouter), use the default Teable model, or wait for upcoming OpenAI-compatible endpoint support
In addition to setting up space-level AI models, administrators can also configure instance-level AI settings in the Admin Panel (available in SaaS and self-hosted versions).
Last modified on April 7, 2026