Before you start
Before you open the AI Settings page, make sure the following items are ready. Both AI Chat Runtime and App Builder rely on Vercel.

| Requirement | Required? | Notes |
|---|---|---|
| Vercel account | Required | You need a working Vercel account |
| Vercel API key | Required | Used for AI Chat Runtime |
| Team / Project | Required | You need a usable Team and Project. If needed, you can create them on this page |
| AI Gateway API key | When using AI Gateway | Only needed if you choose AI Gateway as your model connection method |
| Public access | Required | Your Teable instance and object storage (MinIO / S3) must be publicly accessible |
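The public access requirement is easy to verify from the command line before you start. The sketch below is a minimal check, assuming placeholder hostnames (replace them with your own Teable and MinIO/S3 URLs; the paths shown are illustrative, not required endpoints):

```shell
# Reachability sketch: both URLs must be reachable from the public
# internet, since Vercel-hosted runtime components will call them.
check_public() {
  # -s silent, -f fail on HTTP errors, --max-time avoids hanging
  if curl -sf --max-time 10 -o /dev/null "$1"; then
    echo "OK: $1"
  else
    echo "UNREACHABLE: $1"
  fi
}

check_public "https://teable.example.com/"
check_public "https://minio.example.com/minio/health/live"
```

Run this from a machine outside your network (not from the Teable host itself) to confirm the URLs are truly public.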
Also plan for the following costs:

| Item | Notes |
|---|---|
| Vercel base cost | About $20/month for App deployment and AI runtime infrastructure |
| AI Gateway model usage | Charged separately based on actual usage |
Setup steps
Follow the Pending configuration panel in order. Turn the required items green first, then test the features. If a button is disabled or a feature still does not work, it usually means an earlier requirement has not been completed yet. Follow these four steps:

1. Configure LLM API
Start by choosing how you want to connect your models in LLM API:

- AI Gateway (Recommended)
- Custom provider
Enter your AI Gateway API key, then click Test. After the test succeeds, continue to the next step.
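If the in-page test fails and you want to rule out a bad key, you can probe the gateway from a terminal. This is a sketch with one assumption: that your gateway exposes an OpenAI-compatible `/v1/models` endpoint at the URL shown (confirm the actual base URL in your Vercel AI Gateway dashboard):

```shell
# Sanity-check an AI Gateway key before pasting it into the UI.
# The endpoint URL below is an assumption -- verify it against your
# Vercel AI Gateway dashboard.
check_gateway_key() {
  if [ -z "$1" ]; then
    echo "error: no API key given"
    return 1
  fi
  if curl -sf --max-time 15 "https://ai-gateway.vercel.sh/v1/models" \
       -H "Authorization: Bearer $1" -o /dev/null; then
    echo "key accepted"
  else
    echo "key rejected or endpoint unreachable"
  fi
}

# Prints an error if AI_GATEWAY_API_KEY is not set in your environment.
check_gateway_key "${AI_GATEWAY_API_KEY:-}" || true
```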
2. Configure Recommended Models
This step decides which models users can choose from. Start with a clear primary set so users can choose by task complexity. A common setup is:

| Model | Best for |
|---|---|
| Opus | Complex tasks and high-quality output |
| Sonnet | Everyday tasks with a good balance of quality and cost |
| Haiku | Lightweight tasks and fast responses |
3. Set chat model
This step sets the default model for the sidebar AI Chat. The model must come from the recommended model list, so if this step is disabled, the previous step is usually incomplete. Choose a stable model that supports tool calling and has already been tested; otherwise, Agent capability will be noticeably limited.

4. Configure the runtime and App Builder
Complete AI Chat Runtime first, then complete App Builder.

Finish AI Chat Runtime
In AI Chat Runtime, enter and verify the Vercel API key, then select Team and Project, and finally click Test Connection. After the test succeeds, the matching item in the Pending configuration panel will turn green.
Two optional settings are also available here:

| Optional item | When you need it |
|---|---|
| Custom domain | When you want to publish apps under your own domain |
| Vercel API proxy URL | When your server cannot reach the Vercel API directly, or when you use a proxy or gateway |
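When deciding whether you need the proxy URL, it helps to test whether your server can reach the Vercel API at all. The sketch below checks a token against Vercel's public REST API (`/v2/user` returns the authenticated user for a valid token); `VERCEL_API_PROXY_URL` is a placeholder for whatever base URL you configure above:

```shell
# Verify the server can reach the Vercel API, optionally through a
# proxy/gateway. Run this on the machine hosting Teable.
VERCEL_API_BASE="${VERCEL_API_PROXY_URL:-https://api.vercel.com}"

check_vercel_token() {
  if [ -z "$1" ]; then
    echo "error: VERCEL_TOKEN is empty"
    return 1
  fi
  # /v2/user returns the authenticated user when the token is valid
  if curl -sf --max-time 15 "$VERCEL_API_BASE/v2/user" \
       -H "Authorization: Bearer $1" -o /dev/null; then
    echo "Vercel API reachable, token accepted"
  else
    echo "Vercel API unreachable or token rejected"
  fi
}

check_vercel_token "${VERCEL_TOKEN:-}" || true
```

If the direct URL fails but the same check succeeds through your proxy, set the Vercel API proxy URL in the UI.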
FAQ
The chat model option is disabled
Go back and finish Configure recommended models first. The chat model can only be selected from the recommended model list.
I entered the API key, but the right-side status is still not green
Check the Pending configuration panel first to see which item is still missing. In many cases, the current step is saved correctly, but an earlier requirement is still incomplete.
The page says Vercel Sandbox configuration is still missing
This usually means AI Chat Runtime is not fully configured yet. Make sure all three items below are completed:
- Vercel API key
- Team
- Project
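The checklist above can be mirrored in a small script if you manage these values as environment variables (the variable names here are hypothetical, not something Teable reads):

```shell
# Checklist sketch: all three AI Chat Runtime items must be non-empty
# before the Sandbox warning will clear. Argument order:
#   1) Vercel API key  2) Team  3) Project
check_runtime_config() {
  missing=""
  [ -n "$1" ] || missing="$missing VercelAPIKey"
  [ -n "$2" ] || missing="$missing Team"
  [ -n "$3" ] || missing="$missing Project"
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all three items set"
}

check_runtime_config "${VERCEL_TOKEN:-}" "${VERCEL_TEAM:-}" "${VERCEL_PROJECT:-}" || true
```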
I get a credit exceeded error
Check your Vercel account credit or balance first. Top up if needed, and make sure auto top-up is enabled.

If I use Vercel AI Gateway, will AI models process my data?
If you use Vercel AI Gateway, request data will still be sent to the model provider you selected for processing, so third-party providers may be involved. If you have data security or compliance requirements, enable this option in Vercel AI Gateway:
- Only allow providers with Zero Data Retention (ZDR)
Even with ZDR enabled, requests are still sent to the selected model provider for processing. The difference is that usage is limited to providers that support zero data retention.

