Building Tips

1. Define data first

Teable App Builder works on top of your existing Teable tables — your tables and fields are your schema, and the AI reads them directly when generating UI and logic. So before you start building, define your data model clearly. The more precise your field types, links, and read/write paths are, the higher the quality of what the AI produces.

2. Plan before you build

Once your data model is in place, still resist the urge to jump straight into building the UI. Run a planning pass with the AI first. Say something like: “Let’s not write code yet — let’s make a plan.” Describe the problem you’re solving, the target users, and the rough feature set. Let the AI draft a structured proposal. Review it, adjust it, and only start building once you agree the plan makes sense.
A few extra minutes aligning on direction up front saves hours of rework later.

3. Start small

Don’t try to cram every feature into a single prompt. Describe the core functionality first, get a minimal working version running, then add one thing at a time: one interaction, one style tweak, one piece of logic. Verify each change before moving on. When something breaks, you only need to roll back one small change instead of starting over.

4. Be specific, not abstract

Descriptions like “make it prettier” or “make the interactions feel more natural” carry almost no information for the AI. Effective prompts are concrete: which page, which area, what behavior you want, what you don’t want. Attaching screenshots or reference UIs helps a lot.
Treat your prompt like a brief for a smart person who knows nothing about your project. The more precise the instructions, the closer the output gets to what you imagined.

5. Diagnose before fixing

When the app behaves unexpectedly, resist the urge to tell the AI to “just fix it.” Vague fix instructions push the AI into blindly patching things, often introducing new bugs along the way. A better approach is two steps:
1. Ask the AI to analyze first. Describe the symptoms and ask the AI to list likely causes and possible approaches — without touching the code yet.

2. Pick a direction, then implement. Decide which explanation is most plausible, and tell the AI to proceed along that path.
If several fix attempts in a row fail, roll back to the last known-good version and start fresh. It’s usually faster than patching on top of patches.

6. Lean on version rollbacks

Every conversation with the AI produces changes. The recommended rhythm: finish one feature module, confirm it works, then move on. Don’t juggle multiple unfinished features at once. If a later change breaks something, roll back to the last stable version and try again with a clearer prompt.

FAQ

Handling 429 errors

The Teable API is currently limited to 10 QPS (10 requests per second). Apps generated by App Builder can hit 429 errors during normal use if request handling isn’t optimized. Our engineering team is actively working on optimizing API performance, and we may adjust this limit in the future.
There are three broad strategies to address this:

Caching — reduce duplicate requests
Pagination & batching — shrink per-request payloads
Debounce & throttle — lower request frequency

Each section below lists common scenarios, the fix, and a reference prompt you can reuse.

Caching — reduce duplicate requests
Scenario: A dashboard page has multiple charts, stat cards, and lists. Each component independently queries a different table, so the moment the page opens, concurrent requests exceed the limit.
Fix: Cache data in app memory after loading (1–3 minute TTL is a good start), and serve cached data on subsequent views. Lazy-load below-the-fold components to stagger request timing.
Reference prompt: “Cache data locally after the page loads, with a 1-minute TTL. Don’t re-request the API within the TTL window. Delay loading for non-critical components by 500 ms.”
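The TTL caching described above can be sketched in a few lines of TypeScript. This is a minimal in-memory version; `TtlCache`, `loadWithCache`, and the fetcher callback are illustrative names, not App Builder or Teable APIs:

```typescript
// Minimal in-memory cache with a time-to-live per entry.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired: drop the entry and miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Serve cached data within the TTL; hit the API only on a miss.
// (The fetcher callback stands in for your actual data-loading call.)
async function loadWithCache<T>(
  cache: TtlCache<T>,
  key: string,
  fetcher: () => Promise<T>,
): Promise<T> {
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  const fresh = await fetcher();
  cache.set(key, fresh);
  return fresh;
}
```

Repeat views within the TTL window then never reach the API, which directly cuts the request count that triggers 429s.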
Scenario: Three components on the same page each need data from the same table and each fires its own request — when one would have been enough.
Fix: Centralize data fetching so the same dataset is loaded once and shared across components.
Reference prompt: “If multiple components need data from the same table, fetch it once and share it across all of them. Do not fire duplicate requests.”
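One way to enforce "fetch once, share everywhere" is to deduplicate in-flight requests by key, so concurrent callers for the same table receive the same promise. A sketch (the fetcher callback is a stand-in for whatever call loads your Teable data):

```typescript
// Share one in-flight request per key: concurrent callers for the same
// table get the same promise instead of firing duplicate API calls.
const inFlight = new Map<string, Promise<unknown>>();

function fetchShared<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;
  const request = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, request);
  return request;
}
```

Combined with the TTL cache above, this covers both concurrent duplicates (same instant) and sequential duplicates (same TTL window).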
Scenario: Users move back and forth between pages. Each return triggers a fresh fetch, even when nothing has changed.
Fix: Within the cache TTL, reuse the previously loaded data instead of re-requesting.
Reference prompt: “When a user returns to a page, if it’s been less than 1 minute since the last load, use the cached data. Do not re-request the API.”
Scenario: Choosing one field triggers a load for the next level’s options. Multi-level cascades rack up several requests per interaction.
Fix: Preload the related data once and filter locally, or cache cascade data after the first load.
Reference prompt: “Cache cascading selector option data locally after loading. When the user changes a parent option, filter from the cache instead of re-requesting.”
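Filtering cascade options locally, assuming the full option list was preloaded once, can be as simple as this sketch (the `Option` shape is illustrative, not a Teable schema):

```typescript
// Preload the full option list once, then derive child options locally
// on every parent change, with no further API calls.
type Option = { id: string; parentId: string | null; label: string };

function childOptions(all: Option[], parentId: string): Option[] {
  return all.filter((o) => o.parentId === parentId);
}
```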
Scenario: Poor state management causes components to re-fetch data on every render.
Fix: Trigger data fetching on specific events (initial mount, explicit user action), not on every render. Use caching as a safety net.
Reference prompt: “Only fetch data on first page load or explicit user actions. Do not re-fetch on re-renders — use cached data instead.”
Pagination & batching — shrink per-request payloads
Scenario: Loading all records at once produces a flood of API calls when the dataset grows.
Fix: Paginate. Only fetch the current page’s data.
Reference prompt: “Show 20 rows per page. Only load the next page when the user navigates to it. Do not load everything at once.”
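Page-scoped requests boil down to translating a page number into offset parameters. A sketch (`take` and `skip` are assumed parameter names for the record-listing endpoint; verify them against the API reference):

```typescript
// Translate a 1-based page number into offset parameters so each request
// carries only one page of records. ("take"/"skip" are assumed parameter
// names, not confirmed Teable API fields.)
function pageParams(page: number, pageSize = 20): { take: number; skip: number } {
  if (page < 1) throw new Error("page is 1-based");
  return { take: pageSize, skip: (page - 1) * pageSize };
}
```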
Scenario: After loading a list, you fetch linked-table details for each record one at a time. Loading 50 projects and then 50 owner lookups = 50 extra requests in an instant.
Fix: Fetch all linked data in one batch, not row by row.
Reference prompt: “When loading a list, batch-fetch all linked data in a single request. Do not loop through records to fetch their linked information individually.”
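The difference between N+1 fetching and batch fetching can be sketched like this (`fetchOwnersByIds` is a hypothetical batch lookup; in practice it would be one filtered record-list request):

```typescript
// Instead of one lookup per record (the N+1 pattern), collect the linked
// IDs and resolve them in a single batch call.
type Project = { id: string; ownerId: string };

async function loadProjectOwners(
  projects: Project[],
  fetchOwnersByIds: (ids: string[]) => Promise<Map<string, string>>, // hypothetical
): Promise<Map<string, string>> {
  const ids = [...new Set(projects.map((p) => p.ownerId))]; // dedupe first
  return fetchOwnersByIds(ids); // one request, not projects.length
}
```

Deduplicating the IDs before the batch call also keeps the payload small when many records link to the same owner.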
Scenario: Bulk-updating multiple records by sending one update request per record instead of a single batch request.
Fix: Use the batch update API to submit all changes in one call.
Reference prompt: “For bulk operations, merge multiple record changes into a single batch request. Do not send one update per record.”
Scenario: A for loop processes records one at a time, calling the API each iteration.
Fix: Collect all the IDs first, then issue a single batch request.
Reference prompt: “Do not call the API inside a loop. Collect all required IDs first, then issue a single batch request.”
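Turning a per-record loop into batch calls usually means collecting everything first, then chunking. A sketch (the chunk size of 100 is an assumed per-call cap; check the actual limit in the API docs):

```typescript
// Collect all changes first, then submit them in a few chunked batch
// requests instead of one request per record. Many batch endpoints cap
// the number of records per call; 100 here is an assumed cap.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

With this, 250 collected record IDs become three batch calls rather than 250 individual ones.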
Debounce & throttle — lower request frequency
Scenario: Every keystroke in a search box fires a request. Typing a 4-character query produces 4 requests.
Fix: Debounce the input — wait 300–500 ms after the user stops typing before firing the request.
Reference prompt: “Debounce the search input. Only send a request 300 ms after the user stops typing. Do not send requests mid-typing.”
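A trailing-edge debounce is a few lines of TypeScript; this is a generic sketch rather than anything App Builder emits:

```typescript
// Trailing-edge debounce: the wrapped function runs once, waitMs after
// the last call, so a burst of keystrokes produces a single request.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // reset on every call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Wiring it up would look like `input.addEventListener("input", debounce(runSearch, 300))`, where `runSearch` is your search request.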
Scenario: Rapid-fire submit clicks, quick filter toggling, fast pagination — each action fires a request immediately.
Fix: Apply debounce or throttle. Disable submit buttons until the request completes to prevent double-submits.
Reference prompt: “Disable the submit button after click and re-enable it once the request resolves. Debounce filter changes so rapid changes within 300 ms fire only one request.”
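A minimal double-submit guard, sketched without any UI framework (the submit callback stands in for your actual save request; in a real UI you would also disable the button while `busy` is true):

```typescript
// Ignore calls while a request is in flight, then allow submits again
// once it resolves. Prevents rapid clicks from firing duplicate requests.
function singleFlight(submit: () => Promise<void>): () => Promise<void> {
  let busy = false;
  return async () => {
    if (busy) return; // drop the duplicate click
    busy = true;
    try {
      await submit();
    } finally {
      busy = false; // re-enable once the request resolves or fails
    }
  };
}
```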
Scenario: Every field change saves immediately. Filling out a form can trigger a dozen writes.
Fix: Switch to explicit save on button click, or debounce auto-save so it fires once after a pause in editing.
Reference prompt: “Do not save on every field change. Save on explicit button click, or auto-save once after the user has paused editing for 2 seconds.”
Scenario: Data refreshes every few seconds, producing sustained high-frequency traffic.
Fix: Lengthen the poll interval to something reasonable (30 seconds or more), or switch to manual refresh.
Reference prompt: “Set the auto-refresh interval to 60 seconds. Add a manual refresh button so users can pull the latest data on demand.”
Scenario: Several components on a page each set up their own poll timer. The combined load easily exceeds the limit.
Fix: Centralize polling. Run one periodic fetch, then fan the result out to every component that needs it.
Reference prompt: “Don’t let each component set up its own poll timer. Use a single refresh mechanism that fetches everything on a schedule and distributes the data to the components.”
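A single shared poller that fans results out to subscribers could look like this sketch (the fetcher callback is a stand-in for your actual refresh call):

```typescript
// One timer for the whole page: the poller fetches on a schedule and
// pushes the result to every subscriber. It starts on the first
// subscription and stops when the last subscriber leaves.
class Poller<T> {
  private subscribers = new Set<(data: T) => void>();
  private timer: ReturnType<typeof setInterval> | undefined;

  constructor(
    private fetcher: () => Promise<T>,
    private intervalMs: number,
  ) {}

  subscribe(onData: (data: T) => void): () => void {
    this.subscribers.add(onData);
    if (this.timer === undefined) this.start();
    return () => {
      this.subscribers.delete(onData);
      if (this.subscribers.size === 0) this.stop(); // no listeners, no traffic
    };
  }

  private start(): void {
    this.timer = setInterval(async () => {
      const data = await this.fetcher(); // one request per tick, total
      this.subscribers.forEach((fn) => fn(data));
    }, this.intervalMs);
  }

  private stop(): void {
    if (this.timer !== undefined) clearInterval(this.timer);
    this.timer = undefined;
  }
}
```

However many components subscribe, the API sees one request per interval instead of one per component.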
Last modified on April 10, 2026