Available for Pro plan and above

Integrate AI Models in Teable

Steps:

  1. Open a Space: First, navigate to and enter the specific space in Teable where you want to integrate the AI model.
  2. Access Settings: In the current space’s interface, locate and click the “Settings” option in the top right corner.
  3. Go to AI Settings: In the settings menu, select and click “AI Settings.”
  4. Add LLM Provider: Click the “+ Add LLM provider” button.
  5. Configure Provider Information:
    • Provide a name for your model provider (e.g., “OpenAI”, “Claude”).
    • Select the provider type (OpenAI, Anthropic, Google, etc.).
    • Set your model provider’s Base URL.
    • Enter the corresponding API Key.
    • Enter the model names (comma-separated, e.g., gpt-4o, gpt-4-turbo).
  6. Complete Addition: Once configured, click the “Add” button.
  7. Configure Model Preferences: After adding the provider, configure which models to use for different tasks:
    • Advanced chat model - For planning, coding, and other complex reasoning tasks (required to enable custom model)
    • Medium chat model - For AI formula generation
    • Small chat model - For auto-generating chat titles and base names
  8. Enable Custom Model: Toggle the switch to enable your custom model configuration.
After completing these steps, you’ll see a distinction between “Space Models” and “System Models” within Teable’s AI features. The AI model you just configured will appear under the “Space” section.
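
For orientation, the provider entry from steps 4–6 boils down to a handful of fields. The snippet below is only a hypothetical sketch of that record; the field names are illustrative, not Teable’s actual schema:

```python
# Hypothetical shape of an LLM provider entry; field names are illustrative,
# not Teable's real storage format.
provider_config = {
    "name": "OpenAI",                         # display name (step 5)
    "type": "openai",                         # provider type: openai, anthropic, google, ...
    "base_url": "https://api.openai.com/v1",  # API endpoint, not the website URL
    "api_key": "sk-...",                      # keep this secret
    "models": "gpt-4o, gpt-4-turbo",          # comma-separated model names
}
```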

Configuration Tips

Base URL Guidelines

The Base URL must be the API endpoint URL, not the provider’s website URL. Make sure to include the complete path (usually ending with /v1 or similar).
| Provider | Base URL Format |
| --- | --- |
| OpenAI | https://api.openai.com/v1 |
| Anthropic | https://api.anthropic.com/v1 |
| Google | https://generativelanguage.googleapis.com/v1beta |
| DeepSeek | https://api.deepseek.com/v1 |
| Azure | https://{your-resource-name}.openai.azure.com |
| Mistral | https://api.mistral.ai/v1 |
| Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Zhipu | https://open.bigmodel.cn/api/paas/v4 |
| XAI (Grok) | https://api.x.ai/v1 |
| OpenRouter | https://openrouter.ai/api/v1 |
| TogetherAI | https://api.together.xyz/v1 |
| Ollama (Local) | http://localhost:11434 |
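
Before saving a provider, it can help to confirm the Base URL responds at all. The sketch below assumes an OpenAI-compatible provider (Anthropic and Google use different authentication schemes) and Python’s requests library; the URL and key are placeholders for your own values:

```python
import requests

BASE_URL = "https://api.openai.com/v1"  # your provider's Base URL
API_KEY = "sk-..."                      # your API key

# Most OpenAI-compatible providers expose GET {base}/models,
# which makes a cheap connectivity and authentication check.
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()  # raises on 401 (bad key), 404 (bad path), etc.
print([m["id"] for m in resp.json()["data"]])
```

If this request times out or returns 404, the Base URL is likely missing its complete path (such as the trailing /v1).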

Models Field Guidelines

Enter model names exactly as specified by your provider. Multiple models should be separated by commas.
Common examples by provider:
| Provider | Example Models |
| --- | --- |
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| Anthropic | claude-sonnet-4-20250514, claude-3-5-haiku-20241022 |
| Google | gemini-2.0-flash, gemini-1.5-pro |
| DeepSeek | deepseek-chat, deepseek-reasoner |
| XAI | grok-2, grok-3 |
| Qwen | qwen-plus, qwen-max |
| OpenRouter | openai/gpt-4o, anthropic/claude-3.5-sonnet |
| TogetherAI | deepseek-ai/DeepSeek-V3, mistralai/Mistral-7B-Instruct-v0.3 |
| Mistral | mistral-large-latest, open-mistral-nemo |
| Ollama | llama3.1:8b, llama3.1:70b |
Important notes:
  • Model names are case-sensitive - use the exact names from your provider’s documentation
  • Some providers do not accept spaces after commas; if validation fails, try writing the list without them (e.g., gpt-4o,gpt-4-turbo) — see the parsing sketch after this list
  • For OpenRouter, use the format: provider/model-name (e.g., openai/gpt-4o)
  • You can add models later by updating the provider configuration
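
As a rough mental model, a comma-separated field like this is typically split on commas and trimmed; the spacing rule above exists because not every parser trims whitespace. The sketch below shows the lenient version and is only an illustration, not Teable’s actual parser:

```python
def parse_models(field: str) -> list[str]:
    # Split on commas and trim surrounding whitespace; the names themselves
    # are preserved exactly because providers treat them case-sensitively.
    return [name.strip() for name in field.split(",") if name.strip()]

print(parse_models("gpt-4o, gpt-4o-mini,gpt-4-turbo"))
# ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo']
```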

Troubleshooting

| Issue | Solution |
| --- | --- |
| “Test failed” error | Verify your API key is valid and has sufficient credits |
| Connection timeout | Check that your Base URL is correct and accessible |
| Model not found | Ensure the model name matches the provider’s documentation exactly |
| Cannot enable custom model | Configure the Advanced chat model (lg) first |
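
When a model test fails inside Teable, reproducing the call outside it usually pinpoints which row above applies. This sketch assumes an OpenAI-compatible endpoint; BASE_URL, API_KEY, and MODEL are placeholders for your own values:

```python
import requests

BASE_URL = "https://api.openai.com/v1"  # the Base URL you configured
API_KEY = "sk-..."                      # the API key you configured
MODEL = "gpt-4o"                        # the exact model name you entered

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 5,
    },
    timeout=30,
)
# 401 -> invalid API key; 404/400 mentioning the model -> wrong model name;
# a connection error or timeout -> wrong or unreachable Base URL.
print(resp.status_code, resp.text[:200])
```
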
In addition to setting up space-level AI models, administrators can also configure instance-level AI settings in the Admin Panel (available in SaaS and self-hosted versions).