Documentation Index
Fetch the complete documentation index at: https://help.teable.ai/llms.txt
Use this file to discover all available pages before exploring further.
Custom model configuration applies to all Teable AI features, including AI Chat, AI Fields, App Builder, and Automations.

Starting April 9, 2026, Teable’s agent engine has been upgraded to enhance AI capabilities across AI Chat Agent mode and App Builder. As part of this upgrade, these features currently only support Anthropic-compatible API endpoints. Using incompatible endpoints may result in errors.
- Cloud users with BYOK: If your custom model provider does not support the Anthropic Messages API format, AI Chat Agent and App Builder will not function with your BYOK configuration. Please switch to an Anthropic-compatible provider (e.g. Anthropic API, OpenRouter) or use the default Teable model.
- Self-hosted users: Please ensure your configured LLM endpoint is Anthropic-compatible. Alternatively, you can wait for our upcoming OpenAI-compatible endpoint support and pull the latest image once it’s available.
- OpenAI-compatible endpoint support is on our roadmap and will be added in a future release.
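For reference, an "Anthropic-compatible" endpoint accepts POST requests shaped like the Anthropic Messages API. The sketch below builds such a request; the API key and prompt are placeholders, and the helper name is our own, not part of Teable:

```python
# Sketch: build a minimal Anthropic Messages API request.
# The API key and prompt below are placeholders.

def build_messages_request(base_url: str, api_key: str, model: str, prompt: str):
    """Return (url, headers, body) for a minimal Messages API call."""
    url = base_url.rstrip("/") + "/messages"  # base URL already ends in /v1
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 64,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body

url, headers, body = build_messages_request(
    "https://api.anthropic.com/v1", "sk-placeholder", "claude-sonnet-4-6", "ping"
)
print(url)  # https://api.anthropic.com/v1/messages
```

A provider is Anthropic-compatible if it accepts this request shape at `{base_url}/messages` and returns a Messages-style response.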
Where to Configure It
- Open the target space.
- Click Settings in the top right corner.
- Go to AI settings.
Setup Steps
Under AI Capabilities, turn on what you need:
- AI Field
- AI Chat
Add LLM Provider
Click Add LLM provider and fill in the following:
- Name: Used to distinguish different providers.
- Provider type: Select the provider type.
- Base URL: Enter the provider’s API endpoint.
- API Key: Enter the API key from the provider.
- Models: Enter the model names you want to connect. Separate multiple models with English commas.
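The Models field is a comma-separated list. The sketch below illustrates how such a field might be split into individual names; it is an illustration only, not Teable's actual parser:

```python
# Illustration only: splitting a comma-separated Models field.
# This is not Teable's actual parser.

def split_models(field: str) -> list[str]:
    """Split on English commas, dropping surrounding whitespace and empties."""
    return [name.strip() for name in field.split(",") if name.strip()]

print(split_models("gpt-5.4, gpt-5.4-mini,gpt-5"))
# ['gpt-5.4', 'gpt-5.4-mini', 'gpt-5']
```

Note that full-width commas (，) would not be split here, which is why the instructions call for English commas.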
Test Model Capabilities
There are currently three ways to test:
- Click Test on the LLM provider row.
- Click Test on an individual model row.
- Click Test Model Capabilities in the top-right corner of the list to batch-test all configured models.
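The batch option can be pictured as looping a per-model check over every configured model. In this sketch, `check` is a hypothetical stand-in for the real API call Teable makes:

```python
# Sketch: batch-testing configured models with a pluggable per-model check.
# `check` is a hypothetical stand-in for a real API call.
from typing import Callable

def batch_test(models: list[str], check: Callable[[str], bool]) -> dict[str, bool]:
    """Run `check` on each model name and collect pass/fail results."""
    return {model: check(model) for model in models}

# Stubbed check: pretend only all-lowercase names succeed.
results = batch_test(
    ["claude-sonnet-4-6", "Bad-Name"],
    check=lambda m: m == m.lower(),
)
print(results)  # {'claude-sonnet-4-6': True, 'Bad-Name': False}
```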
Configuration Tips
Base URL
Common Base URL examples:

| Provider | Base URL Format |
|---|---|
| Anthropic | https://api.anthropic.com/v1 |
| OpenAI | https://api.openai.com/v1 |
| Google Gemini | https://generativelanguage.googleapis.com/v1beta |
| DeepSeek | https://api.deepseek.com/v1 |
| Azure | https://{your-resource-name}.openai.azure.com |
| Mistral | https://api.mistral.ai/v1 |
| Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Zhipu AI | https://open.bigmodel.cn/api/paas/v4 |
| XAI (Grok) | https://api.x.ai/v1 |
| OpenRouter | https://openrouter.ai/api/v1 |
| TogetherAI | https://api.together.xyz/v1 |
| Ollama (Local) | http://localhost:11434 |
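The common Base URL mistakes called out in the FAQ (homepage URL pasted in, extra trailing slash, missing /v1) can be caught with a quick sanity check. A minimal sketch, assuming an OpenAI-compatible provider; the function name is our own:

```python
# Sketch: sanity-check a Base URL before saving it.
# The /v1 rule applies to OpenAI-compatible endpoints; adjust per provider.

def check_base_url(url: str, expect_v1: bool = True) -> list[str]:
    """Return a list of likely problems with a configured Base URL."""
    problems = []
    if not url.startswith(("http://", "https://")):
        problems.append("missing http(s) scheme")
    if url.endswith("/"):
        problems.append("extra trailing slash")
    if expect_v1 and not url.rstrip("/").endswith("/v1"):
        problems.append("missing /v1 path")
    return problems

print(check_base_url("https://api.openai.com/v1"))  # []
print(check_base_url("https://openai.com/"))
# ['extra trailing slash', 'missing /v1 path']
```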
Models
Model names must match the provider documentation exactly and are case-sensitive. Separate multiple models with English commas.
| Provider | Example Models |
|---|---|
| OpenAI | gpt-5.4, gpt-5.4-mini, gpt-5 |
| Anthropic | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 |
| Google Gemini | gemini-3.1-pro-preview, gemini-3-flash, gemini-2.5-flash |
| Azure | gpt-5.4, gpt-5, gpt-5-mini, gpt-4o |
| DeepSeek | deepseek-chat, deepseek-reasoner |
| XAI | grok-4, grok-4.1-fast |
| Qwen | qwen3.5-plus, qwen3-max |
| OpenRouter | anthropic/claude-opus-4-6, google/gemini-3.1-pro-preview |
| TogetherAI | deepseek-ai/DeepSeek-R1, meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 |
| Mistral | mistral-large-latest, mistral-medium-latest, codestral-latest |
| Ollama | qwen3.5:9b, gemma3:12b, llama3.2:8b |
- Model names are case-sensitive. Use the exact names from the provider documentation.
- Some providers require no spaces after commas, for example gpt-4o,gpt-4-turbo.
- OpenRouter uses the format provider/model-name, for example openai/gpt-4o.
- For Azure, use the deployment name you created in Azure AI Foundry or Azure OpenAI Studio, not the base model name.
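The case-sensitivity rule can be checked mechanically: compare a configured name against the provider's published model list and flag near-misses that differ only in case. A sketch with an illustrative model list, not a real provider response:

```python
# Sketch: diagnose "model not found" errors caused by wrong casing.
# The available-model list is illustrative, not a real provider response.

def diagnose(configured: str, available: list[str]) -> str:
    """Classify a configured model name against the provider's list."""
    if configured in available:
        return "ok"
    lowered = {m.lower(): m for m in available}
    if configured.lower() in lowered:
        return f"case mismatch: did you mean {lowered[configured.lower()]}?"
    return "not found"

print(diagnose("GPT-5", ["gpt-5", "gpt-5-mini"]))
# case mismatch: did you mean gpt-5?
```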
FAQ
All model tests failed
Check whether the Base URL is correct, whether it was mistakenly filled with the provider homepage URL, or whether it has an extra trailing /. If you are using an OpenAI-compatible endpoint, also confirm that /v1 is not missing.
I get a “Test failed” error
Check whether the API Key is valid and whether the account still has credits or permission to call the model.
Connection timeout
Check whether the Base URL is correct and whether your current environment can reach that address.
Model not found
Make sure the Models value matches the provider documentation exactly, including case and separator format.
Testing fails when I use a Coding Plan key
Teable does not support this kind of key. Create a standard API Key in your provider dashboard and test again.
The image generation model test result looks wrong
If the model is meant for image generation, check Image Generation Model first and then test again. Once checked, Teable tests text-to-image and image-to-image capabilities instead.

