Available for Pro plan and above
Custom model configuration applies to AI Chat, AI Fields, and Automations. App Builder currently does not support BYOK — it uses the system default model and cannot be switched to a custom model.

Integrate AI Models in Teable

Steps:

  1. Open a Space: First, navigate to and enter the specific space in Teable where you want to integrate the AI model.
  2. Access Settings: In the current space’s interface, locate and click the “Settings” option in the top right corner.
  3. Go to AI Settings: In the settings menu, select and click “AI Settings.”
  4. Add LLM Provider: Click the “+ Add LLM provider” button.
  5. Configure Provider Information:
    • Provide a name for your model provider (e.g., “OpenAI”, “Claude”).
    • Select the provider type (OpenAI, Anthropic, Google, etc.).
    • Set your model provider’s Base URL.
    • Enter the corresponding API Key.
    • Enter the model names (comma-separated, e.g., gpt-4o, gpt-4-turbo).
  6. Complete Addition: Once configured, click the “Add” button.
  7. Configure Model Preferences: After adding the provider, configure which model to use:
    • Chat model - For AI Chat, planning, coding, and other complex reasoning tasks. Recommended: claude-opus-4-6. A Chat model must be configured before the custom model can be enabled.
  8. Enable Custom Model: Toggle the switch to enable your custom model configuration.
After completing these steps, you’ll see a distinction between “Space Models” and “System Models” within Teable’s AI features. The AI model you just configured will appear under the “Space” section.
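Before pasting values into the dialog, it can help to sanity-check them. The sketch below is illustrative only: the field names mirror the settings form described above, not Teable's actual API.

```python
# Illustrative only: field names mirror the settings form, not Teable's API.

def validate_provider(config: dict) -> list[str]:
    """Return a list of problems found in an LLM provider config."""
    problems = []
    for field in ("name", "type", "base_url", "api_key", "models"):
        if not config.get(field):
            problems.append(f"missing field: {field}")
    # The Base URL must not end with a trailing slash (see Configuration Tips).
    if config.get("base_url", "").endswith("/"):
        problems.append("base_url must not end with a trailing slash")
    return problems

example = {
    "name": "OpenAI",
    "type": "openai",
    "base_url": "https://api.openai.com/v1",
    "api_key": "sk-...",          # placeholder, not a real key
    "models": "gpt-4o, gpt-4-turbo",
}
print(validate_provider(example))  # []
```

An empty list means the config passes these basic checks; anything else names the field to fix before clicking “Add”.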

Configuration Tips

Base URL Guidelines

The Base URL must be the API endpoint URL, not the provider’s website URL. Make sure to include the complete path (usually ending with /v1 or similar). Do not include a trailing slash (/) at the end of the URL — for example, use https://generativelanguage.googleapis.com/v1beta instead of https://generativelanguage.googleapis.com/v1beta/.
  • OpenAI: https://api.openai.com/v1
  • Anthropic: https://api.anthropic.com/v1
  • Google: https://generativelanguage.googleapis.com/v1beta
  • DeepSeek: https://api.deepseek.com/v1
  • Azure: https://{your-resource-name}.openai.azure.com
  • Mistral: https://api.mistral.ai/v1
  • Qwen: https://dashscope.aliyuncs.com/compatible-mode/v1
  • Zhipu: https://open.bigmodel.cn/api/paas/v4
  • XAI (Grok): https://api.x.ai/v1
  • OpenRouter: https://openrouter.ai/api/v1
  • TogetherAI: https://api.together.xyz/v1
  • Ollama (Local): http://localhost:11434
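The trailing-slash rule above is easy to apply programmatically. A minimal sketch (the endpoint string is just the Google example from this section):

```python
def normalize_base_url(url: str) -> str:
    """Strip trailing slashes so the URL matches the expected format."""
    return url.rstrip("/")

print(normalize_base_url("https://generativelanguage.googleapis.com/v1beta/"))
# https://generativelanguage.googleapis.com/v1beta
```

URLs that already lack a trailing slash pass through unchanged.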

Models Field Guidelines

Enter model names exactly as specified by your provider. Multiple models should be separated by commas.
Common examples by provider:
  • OpenAI: gpt-5.2, o3, gpt-4o-mini
  • Anthropic: claude-sonnet-4-6, claude-opus-4-6, claude-sonnet-4-5
  • Google: gemini-2.5-flash, gemini-3.1-pro-preview, gemini-3-flash-preview
  • DeepSeek: deepseek-chat, deepseek-reasoner
  • XAI: grok-4, grok-4-fast
  • Qwen: qwen3.5-plus, qwen3-max
  • OpenRouter: anthropic/claude-sonnet-4-6, google/gemini-2.5-flash
  • TogetherAI: deepseek-ai/DeepSeek-R1, meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
  • Mistral: mistral-large-latest, codestral-latest
  • Ollama: qwen3:8b, gemma3:12b, llama3.2:8b
Important notes:
  • Model names are case-sensitive - use the exact names from your provider’s documentation
  • Some providers require the list without spaces after the commas (e.g., gpt-4o,gpt-4-turbo)
  • For OpenRouter, use the format: provider/model-name (e.g., openai/gpt-4o)
  • You can add models later by updating the provider configuration
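The comma-separated models field can be split as sketched below. The whitespace handling here is lenient; as noted above, some providers expect no spaces after commas at all, and case is always preserved because model names are case-sensitive.

```python
def parse_models(field: str) -> list[str]:
    """Split a comma-separated models field, preserving exact case."""
    return [name.strip() for name in field.split(",") if name.strip()]

print(parse_models("gpt-4o, gpt-4-turbo"))
# ['gpt-4o', 'gpt-4-turbo']

# OpenRouter-style provider/model names pass through unchanged:
print(parse_models("anthropic/claude-sonnet-4-6"))
# ['anthropic/claude-sonnet-4-6']
```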

Troubleshooting

  • All model tests failed: Check whether your Base URL ends with a trailing slash (/) and remove it. Also verify that your API key has the required API enabled (e.g., the Generative Language API for Google).
  • “Test failed” error: Verify that your API key is valid and has sufficient credits.
  • Connection timeout: Check that your Base URL is correct and accessible.
  • Model not found: Ensure the model name exactly matches the provider’s documentation.
  • Cannot enable custom model: Make sure you have configured the Chat model first.
In addition to setting up space-level AI models, administrators can also configure instance-level AI settings in the Admin Panel (available in SaaS and self-hosted versions).
Last modified on March 5, 2026