Available for Pro plan and above
Custom model configuration applies to all Teable AI features, including AI Chat, AI Fields, App Builder, and Automations.
Starting April 9, 2026, Teable’s agent engine was upgraded to enhance AI capabilities across AI Chat Agent mode and App Builder. As part of this upgrade, these features currently support only Anthropic-compatible API endpoints; using incompatible endpoints may result in errors.
- Cloud users with BYOK: If your custom model provider does not support the Anthropic Messages API format, AI Chat Agent and App Builder will not function with your BYOK configuration. Please switch to an Anthropic-compatible provider (e.g. Anthropic API, OpenRouter) or use the default Teable model.
- Self-hosted users: Please ensure your configured LLM endpoint is Anthropic-compatible. Alternatively, you can wait for our upcoming OpenAI-compatible endpoint support and pull the latest image once it’s available.
- OpenAI-compatible endpoint support is on our roadmap and will be added in a future release.
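If you are unsure whether a provider is Anthropic-compatible, you can probe it with a minimal Anthropic Messages API request. The Python sketch below only builds the request (the key and model are placeholders); send it with any HTTP client and expect an HTTP 200 from an Anthropic-compatible endpoint:

```python
import json

def build_anthropic_probe(base_url: str, api_key: str, model: str) -> dict:
    """Build a minimal Anthropic Messages API request.

    If a provider cannot serve this request shape at {base_url}/messages,
    it is not Anthropic-compatible, and AI Chat Agent / App Builder
    will not work with it.
    """
    return {
        "url": base_url.rstrip("/") + "/messages",
        "headers": {
            "x-api-key": api_key,               # Anthropic-style auth header
            "anthropic-version": "2023-06-01",  # required version header
            "content-type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "max_tokens": 16,
            "messages": [{"role": "user", "content": "ping"}],
        }),
    }

# Placeholder key and model for illustration:
probe = build_anthropic_probe("https://api.anthropic.com/v1", "sk-...", "claude-opus-4-6")
print(probe["url"])  # https://api.anthropic.com/v1/messages
```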
We recommend using Teable Credits. Teable Credits are currently offered at cost — no middleman, no markup — giving you access to top-tier AI models at the most competitive pricing with the smoothest experience.
Integrate AI Models in Teable
Steps:
- Open a Space: First, navigate to and enter the specific space in Teable where you want to integrate the AI model.
- Access Settings: In the current space’s interface, locate and click the “Settings” option in the top right corner.
- Go to AI Settings: In the settings menu, select and click “AI Settings.”
- Add LLM Provider: Click the “+ Add LLM provider” button.
- Configure Provider Information:
- Provide a name for your model provider (e.g., “OpenAI”, “Claude”).
- Select the provider type (OpenAI, Anthropic, Google, etc.).
- Set your model provider’s Base URL.
- Enter the corresponding API Key.
- Enter the model names (comma-separated, e.g., gpt-4o, gpt-4-turbo).
- Complete Addition: Once configured, click the “Add” button.
- Configure Model Preferences: After adding the provider, configure which model to use:
  - Chat model - For AI Chat, planning, coding, and other complex reasoning tasks. Recommended: claude-opus-4-6 (required to enable custom model)
- Enable Custom Model: Toggle the switch to enable your custom model configuration.
After completing these steps, you’ll see a distinction between “Space Models” and “System Models” within Teable’s AI features. The AI model you just configured will appear under the “Space” section.
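Conceptually, the steps above collect a provider record like the following. The field names here are illustrative, not Teable’s internal schema:

```python
# Illustrative provider record; field names are assumptions, not Teable's schema.
provider = {
    "name": "Anthropic",
    "type": "anthropic",
    "base_url": "https://api.anthropic.com/v1",   # no trailing slash
    "api_key": "sk-ant-...",                      # placeholder
    "models": ["claude-opus-4-6", "claude-sonnet-4-6"],
    "chat_model": "claude-opus-4-6",              # must be set before enabling
    "enabled": True,
}

# Minimal completeness check mirroring the steps above:
required = ("name", "type", "base_url", "api_key", "models", "chat_model")
missing = [key for key in required if not provider.get(key)]
assert not missing, f"Incomplete configuration: {missing}"
```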
Configuration Tips
Base URL Guidelines
The Base URL must be the API endpoint URL, not the provider’s website URL. Make sure to include the complete path (usually ending with /v1 or similar). Do not include a trailing slash (/) at the end of the URL — for example, use https://generativelanguage.googleapis.com/v1beta instead of https://generativelanguage.googleapis.com/v1beta/.
Teable does not support coding plan keys. Please use a standard API key created in your provider dashboard. Coding plan keys usually only work in specific coding tools and cannot be used as a general API key in Teable. If you use this kind of key, it is normal for model testing to fail.
| Provider | Base URL Format |
|---|---|
| Anthropic | https://api.anthropic.com/v1 |
| OpenAI | https://api.openai.com/v1 |
| Google | https://generativelanguage.googleapis.com/v1beta |
| DeepSeek | https://api.deepseek.com/v1 |
| Azure | https://{your-resource-name}.openai.azure.com |
| Mistral | https://api.mistral.ai/v1 |
| Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Zhipu | https://open.bigmodel.cn/api/paas/v4 |
| XAI (Grok) | https://api.x.ai/v1 |
| OpenRouter | https://openrouter.ai/api/v1 |
| TogetherAI | https://api.together.xyz/v1 |
| Ollama (Local) | http://localhost:11434 |
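The trailing-slash rule above is easy to enforce before pasting a URL into the form. This is a standalone sketch, not part of Teable:

```python
def normalize_base_url(url: str) -> str:
    """Trim whitespace, require a scheme, and drop any trailing slash,
    per the Base URL guidelines above."""
    url = url.strip()
    if not url.startswith(("http://", "https://")):
        raise ValueError("Base URL must start with http:// or https://")
    return url.rstrip("/")

print(normalize_base_url("https://generativelanguage.googleapis.com/v1beta/"))
# -> https://generativelanguage.googleapis.com/v1beta
```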
Models Field Guidelines
Enter model names exactly as specified by your provider. Multiple models should be separated by commas.
Common examples by provider:
| Provider | Example Models |
|---|---|
| OpenAI | gpt-5.4, gpt-5.4-mini, gpt-5 |
| Anthropic | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 |
| Google | gemini-3.1-pro-preview, gemini-3-flash, gemini-2.5-flash |
| Azure | gpt-5.4, gpt-5, gpt-5-mini, gpt-4o |
| DeepSeek | deepseek-chat, deepseek-reasoner |
| XAI | grok-4, grok-4.1-fast |
| Qwen | qwen3.5-plus, qwen3-max |
| OpenRouter | anthropic/claude-opus-4-6, google/gemini-3.1-pro-preview |
| TogetherAI | deepseek-ai/DeepSeek-R1, meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 |
| Mistral | mistral-large-latest, mistral-medium-latest, codestral-latest |
| Ollama | qwen3.5:9b, gemma3:12b, llama3.2:8b |
Important notes:
- Model names are case-sensitive - use the exact names from your provider’s documentation
- Do not add spaces after commas for some providers (e.g., gpt-4o,gpt-4-turbo)
- For OpenRouter, use the format: provider/model-name (e.g., openai/gpt-4o)
- For Azure, use the deployment name you created in Azure AI Foundry (or Azure OpenAI Studio), not the base model name. For example, if you deployed gpt-5.4 with deployment name my-gpt54, enter my-gpt54
- You can add models later by updating the provider configuration
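If you want to pre-check a models field before pasting it, a checker like the following catches the two most common mistakes flagged above: stray commas and whitespace around names. This is a sketch, not Teable’s actual parser:

```python
def check_models_field(raw: str) -> list[str]:
    """Split a comma-separated models field and flag entries carrying
    surrounding whitespace, which some providers reject (see notes above).
    Model names are case-sensitive, so no case folding is applied."""
    names = raw.split(",")
    for name in names:
        if not name:
            raise ValueError("empty model name (stray comma?)")
        if name != name.strip():
            raise ValueError(f"whitespace around model name: {name!r}")
    return names

print(check_models_field("gpt-4o,gpt-4-turbo"))  # ['gpt-4o', 'gpt-4-turbo']
```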
Troubleshooting
| Issue | Solution |
|---|---|
| All model tests failed | Check if your Base URL ends with a trailing slash (/) and remove it. Also verify your API key has the required API enabled (e.g., Generative Language API for Google) |
| “Test failed” error | Verify your API key is valid and has sufficient credits |
| Connection timeout | Check if your Base URL is correct and accessible |
| Model not found | Ensure the model name matches exactly with the provider’s documentation |
| Test fails with a coding plan key | Teable does not support this kind of key. Create a standard API key in your provider dashboard and test again |
| Cannot enable custom model | Make sure you’ve configured the Chat model first |
| App Builder or AI Chat Agent not working with custom model | Since April 9, 2026, these features require an Anthropic-compatible endpoint. Please switch to an Anthropic-compatible provider (e.g. Anthropic API, OpenRouter), use the default Teable model, or wait for upcoming OpenAI-compatible endpoint support |
In addition to setting up space-level AI models, administrators can also configure instance-level AI settings in the Admin Panel (available in SaaS and self-hosted versions).