Available on the Pro plan and above
Integrate AI Models in Teable
Steps:
- Open a Space: First, navigate to and enter the specific space in Teable where you want to integrate the AI model.
- Access Settings: In the current space’s interface, locate and click the “Settings” option in the top right corner.
- Go to AI Settings: In the settings menu, select and click “AI Settings.”
- Add LLM Provider: Click the “+ Add LLM provider” button.
- Configure Provider Information:
- Provide a name for your model provider (e.g., “OpenAI”, “Claude”).
- Select the provider type (OpenAI, Anthropic, Google, etc.).
- Set your model provider’s Base URL.
- Enter the corresponding API Key.
- Enter the model names, comma-separated (e.g., gpt-4o, gpt-4-turbo).
- Complete Addition: Once configured, click the “Add” button.
- Configure Model Preferences: After adding the provider, configure which models to use for different tasks:
- Advanced chat model - For planning, coding, and other complex reasoning tasks (required to enable custom model)
- Medium chat model - For AI formula generation
- Small chat model - For auto-generating chat titles and base names
- Enable Custom Model: Toggle the switch to enable your custom model configuration.
After completing these steps, you’ll see a distinction between “Space Models” and “System Models” within Teable’s AI features. The AI model you just configured will appear under the “Space” section.
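Before (or right after) clicking “Add”, it can help to confirm that the Base URL and API Key work outside of Teable. The sketch below assumes an OpenAI-compatible endpoint; the base URL, key, and model name are placeholder values, so substitute your own. Providers with native, non-OpenAI-compatible APIs (e.g., Anthropic or Google direct endpoints) use different SDKs and headers.

```python
# Sanity-check a provider's Base URL and API key with the OpenAI Python SDK.
# All three values below are placeholders -- use what you entered in Teable.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.openai.com/v1",  # same value as the Base URL field
    api_key="sk-...",                      # same value as the API Key field
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",                   # one of the model names you listed
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(resp.choices[0].message.content)
```

If this call succeeds, the same values should pass Teable’s connection test when you add the provider.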
Configuration Tips
Base URL Guidelines
The Base URL must be the API endpoint URL, not the provider’s website URL. Make sure to include the complete path (usually ending with /v1 or similar).
| Provider | Base URL Format |
|---|---|
| OpenAI | https://api.openai.com/v1 |
| Anthropic | https://api.anthropic.com/v1 |
| Google | https://generativelanguage.googleapis.com/v1beta |
| DeepSeek | https://api.deepseek.com/v1 |
| Azure | https://{your-resource-name}.openai.azure.com |
| Mistral | https://api.mistral.ai/v1 |
| Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Zhipu | https://open.bigmodel.cn/api/paas/v4 |
| XAI (Grok) | https://api.x.ai/v1 |
| OpenRouter | https://openrouter.ai/api/v1 |
| TogetherAI | https://api.together.xyz/v1 |
| Ollama (Local) | http://localhost:11434 |
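A Base URL that points at a website instead of an API endpoint is the most common misconfiguration. For OpenAI-compatible providers, a quick sanity check is to request the model list from {base_url}/models. The URL and key below are placeholders; Ollama needs no key, and Anthropic/Google native endpoints use different paths and headers.

```python
# Quick check that a Base URL points at a real API endpoint, not a website.
# Most OpenAI-compatible providers expose GET {base_url}/models.
import requests

base_url = "https://api.openai.com/v1"   # your provider's Base URL (placeholder)
api_key = "sk-..."                       # your API key (placeholder)

resp = requests.get(
    f"{base_url}/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=10,
)
resp.raise_for_status()   # a 404 or an HTML page here usually means a wrong Base URL
print([m["id"] for m in resp.json()["data"]])
```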
Models Field Guidelines
Enter model names exactly as specified by your provider. Multiple models should be separated by commas.
Common examples by provider:
| Provider | Example Models |
|---|---|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| Anthropic | claude-sonnet-4-20250514, claude-3-5-haiku-20241022 |
| Google | gemini-2.0-flash, gemini-1.5-pro |
| DeepSeek | deepseek-chat, deepseek-reasoner |
| XAI | grok-2, grok-3 |
| Qwen | qwen-plus, qwen-max |
| OpenRouter | openai/gpt-4o, anthropic/claude-3.5-sonnet |
| TogetherAI | deepseek-ai/DeepSeek-V3, mistralai/Mistral-7B-Instruct-v0.3 |
| Mistral | mistral-large-latest, open-mistral-nemo |
| Ollama | llama3.1:8b, llama3.1:70b |
Important notes:
- Model names are case-sensitive: use the exact names from your provider’s documentation (a verification sketch follows these notes)
- Some providers do not accept spaces after commas, so omit them if in doubt (e.g., gpt-4o,gpt-4-turbo)
- For OpenRouter, use the provider/model-name format (e.g., openai/gpt-4o)
- You can add models later by updating the provider configuration
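To catch typos before pasting the list into Teable, you can verify each comma-separated name against the provider’s model list. This sketch again assumes an OpenAI-compatible /models endpoint; the base URL, key, and models_field values are placeholders standing in for your own configuration.

```python
# Verify each comma-separated model name before entering the list in Teable.
# Names are case-sensitive, so the comparison below is exact.
import requests

base_url = "https://api.openai.com/v1"              # placeholder
api_key = "sk-..."                                  # placeholder
models_field = "gpt-4o, gpt-4o-mini, gpt-4-turbo"   # what you plan to enter

available = {
    m["id"]
    for m in requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    ).json()["data"]
}

for name in (n.strip() for n in models_field.split(",")):
    print(name, "OK" if name in available else "NOT FOUND -- check spelling and case")
```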
Troubleshooting
| Issue | Solution |
|---|---|
| “Test failed” error | Verify your API key is valid and has sufficient credits |
| Connection timeout | Check if your Base URL is correct and accessible |
| Model not found | Ensure the model name matches exactly with the provider’s documentation |
| Cannot enable custom model | Make sure you’ve configured the Advanced chat model (lg) first |
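If you are pointing Teable at a local Ollama instance and see connection timeouts, first confirm the server is reachable and the models are pulled. This check uses Ollama’s native /api/tags model-listing endpoint; the localhost URL assumes a default install.

```python
# Confirm a local Ollama server is running and list the models it has pulled.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])
```

Note that a cloud-hosted Teable instance cannot reach http://localhost:11434 on your machine; the Ollama server must be reachable from wherever Teable itself runs.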
In addition to setting up space-level AI models, administrators can also configure instance-level AI settings in the Admin Panel (available in SaaS and self-hosted versions).