Building Tips
1. Define data first
Teable App Builder works on top of your existing Teable tables — your tables and fields are your schema, and the AI reads them directly when generating UI and logic. So before you start building, define your data model clearly. The more precise your field types, links, and read/write paths are, the higher the quality of what the AI produces.
2. Plan before you build
Once your data model is in place, still resist the urge to jump straight into building the UI. Run a planning pass with the AI first. Say something like: “Let’s not write code yet — let’s make a plan.” Describe the problem you’re solving, the target users, and the rough feature set. Let the AI draft a structured proposal. Review it, adjust it, and only start building once you agree the plan makes sense.
3. Start small
Don’t try to cram every feature into a single prompt. Describe the core functionality first, get a minimal working version running, then add one thing at a time: one interaction, one style tweak, one piece of logic. Verify each change before moving on. When something breaks, you only need to roll back one small change instead of starting over.
4. Be specific, not abstract
Descriptions like “make it prettier” or “make the interactions feel more natural” carry almost no information for the AI. Effective prompts are concrete: which page, which area, what behavior you want, what you don’t want. Attaching screenshots or reference UIs helps a lot.
5. Diagnose before fixing
When the app behaves unexpectedly, resist the urge to tell the AI to “just fix it.” Vague fix instructions push the AI into blindly patching things, often introducing new bugs along the way. A better approach is two steps:
Ask the AI to analyze first
Describe the symptoms and ask the AI to list likely causes and possible approaches — without touching the code yet.
Then fix one cause at a time
Once you agree on the most likely cause, ask the AI to apply that specific fix, and verify the result before trying anything else.
6. Lean on version rollbacks
Every conversation with the AI produces changes. The recommended rhythm: finish one feature module, confirm it works, then move on. Don’t juggle multiple unfinished features at once. If a later change breaks something, roll back to the last stable version and try again with a clearer prompt.
FAQ
Handling 429 errors
A 429 error means the app has exceeded the API rate limit: too many requests in too short a window. There are three broad strategies to address it:
Caching: reduce duplicate requests.
Pagination & batching: shrink per-request payloads and request counts.
Debounce & throttle: lower request frequency.
Too many requests fired on page load
Scenario: A dashboard page has multiple charts, stat cards, and lists. Each component independently queries a different table, so the moment the page opens, concurrent requests exceed the limit.
Fix: Cache data in app memory after loading (1–3 minute TTL is a good start), and serve cached data on subsequent views. Lazy-load below-the-fold components to stagger request timing.
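The caching fix above can be sketched as a small in-memory cache with a TTL. This is a minimal illustration, not a built-in App Builder API — `TtlCache` and `cachedFetch` are hypothetical names wrapping whatever fetch call your app already makes:

```typescript
// Minimal in-memory cache with a time-to-live (TTL), so repeat views
// of the same dashboard reuse data instead of re-querying every table.
class TtlCache<T> {
  private store = new Map<string, { value: T; expires: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // stale entry: drop it and force a fresh fetch
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

// Wrap any fetcher: hit the cache first, call the API only on a miss.
async function cachedFetch<T>(
  cache: TtlCache<T>,
  key: string,
  fetcher: () => Promise<T>,
): Promise<T> {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = await fetcher();
  cache.set(key, value);
  return value;
}
```

With a 1–3 minute TTL, reopening the dashboard within that window costs zero API calls.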
Multiple components querying the same table
Scenario: Three components on the same page each need data from the same table and each fires its own request — when one would have been enough.
Fix: Centralize data fetching so the same dataset is loaded once and shared across components.
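One way to centralize the fetch is to share a single in-flight promise per dataset: the first component to ask triggers the request, and later components await the same promise. A minimal sketch (the `fetchOnce` helper is hypothetical, not part of any library):

```typescript
// Share one in-flight request per dataset key. Components that ask for
// the same key while a request is pending get the same promise back.
const inflight = new Map<string, Promise<unknown>>();

function fetchOnce<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const pending = inflight.get(key);
  if (pending) return pending as Promise<T>; // join the existing request
  const p = fetcher().finally(() => inflight.delete(key)); // clear when settled
  inflight.set(key, p);
  return p;
}
```

Three components calling `fetchOnce("tasks", loadTasks)` at mount produce one API call, not three.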
Re-fetching on page navigation
Scenario: Navigating between app pages re-fetches the same data from scratch every time a page is opened.
Fix: Serve cached data when the user returns to a page, and only re-fetch once the cache has expired.
Dropdowns loading huge option lists
Scenario: A dropdown shows every record from a table as an option. With many records, that single request alone is heavy.
Fix: Convert it into a search-style picker — only fetch matching records after the user types. Alternatively, cache the option list.
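A search-style picker only queries once the user has typed something, and caps the result size. In this sketch, `searchRecords` is a stand-in for whatever filtered-list call your app exposes — the name and signature are assumptions, not a real API:

```typescript
// Search-style picker: instead of loading every record up front, fetch
// only records matching the typed text, capped to a small page.
type Option = { id: string; label: string };

async function loadOptions(
  query: string,
  searchRecords: (q: string, limit: number) => Promise<Option[]>, // hypothetical
): Promise<Option[]> {
  const q = query.trim();
  if (q.length < 2) return []; // don't fire on empty or one-character input
  return searchRecords(q, 20); // cap the payload per request
}
```

Combine this with the debounce pattern described later so typing a query fires one request, not one per keystroke.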
Cascading selectors chaining requests
Scenario: Choosing one field triggers a load for the next level’s options. Multi-level cascades rack up several requests per interaction.
Fix: Preload the related data once and filter locally, or cache cascade data after the first load.
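Filtering locally means one upfront load, then zero requests per selection. A minimal sketch assuming a region → city cascade (the field names are illustrative):

```typescript
// Preload the full child dataset once, then resolve each cascade level
// with an in-memory filter instead of a new request per selection.
type City = { id: string; name: string; regionId: string };

function childOptions(allCities: City[], regionId: string): City[] {
  return allCities.filter((c) => c.regionId === regionId); // no API call
}
```

If the child table is too large to preload, cache each level's results after the first fetch instead.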
Re-renders triggering duplicate requests
Scenario: Poor state management causes components to re-fetch data on every render.
Fix: Trigger data fetching on specific events (initial mount, explicit user action), not on every render. Use caching as a safety net.
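The core idea is to key the fetch on a dependency value and skip when that value has not changed since the last fetch — in React this is what a `useEffect` with a dependency array does. A framework-agnostic sketch (the helper name is made up for illustration):

```typescript
// Guard against re-render loops: remember the last dependency key and
// skip the fetch when the key is unchanged.
function makeMountedFetcher<T>(fetcher: (key: string) => Promise<T>) {
  let lastKey: string | undefined;
  return async (key: string): Promise<T | undefined> => {
    if (key === lastKey) return undefined; // same deps: nothing to do
    lastKey = key;
    return fetcher(key); // deps changed (or first call): fetch once
  };
}
```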
Lists or tables without pagination
Scenario: Loading all records at once produces a flood of API calls when the dataset grows.
Fix: Paginate. Only fetch the current page’s data.
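Pagination reduces to translating a page number into a skip/take window. In this sketch `listRecords` is a placeholder for a paginated list endpoint — the parameter names are assumptions:

```typescript
// Fetch only the records for the current page.
async function fetchPage<T>(
  listRecords: (skip: number, take: number) => Promise<T[]>, // hypothetical
  page: number, // 1-based page number
  pageSize = 50,
): Promise<T[]> {
  const skip = (page - 1) * pageSize; // records to skip before this page
  return listRecords(skip, pageSize);
}
```

Page 2 with a page size of 50 fetches records 50–99 in a single request, regardless of total table size.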
Per-row fetching for linked data (N+1)
Scenario: After loading a list, you fetch linked-table details for each record one at a time. Loading 50 projects and then 50 owner lookups = 50 extra requests in an instant.
Fix: Fetch all linked data in one batch, not row by row.
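The batch fix is: collect the linked IDs, fetch them in one filtered query, then join in memory. A sketch with the projects-and-owners example — `fetchByIds` stands in for an "id IN (...)"-style query and is a hypothetical name:

```typescript
// Avoid N+1: one batch request for all linked owners, joined locally.
type Project = { id: string; ownerId: string };
type Owner = { id: string; name: string };

async function attachOwners(
  projects: Project[],
  fetchByIds: (ids: string[]) => Promise<Owner[]>, // hypothetical batch query
): Promise<Array<Project & { owner?: Owner }>> {
  const ids = [...new Set(projects.map((p) => p.ownerId))]; // dedupe IDs
  const owners = await fetchByIds(ids); // one request instead of N
  const byId = new Map(owners.map((o) => [o.id, o]));
  return projects.map((p) => ({ ...p, owner: byId.get(p.ownerId) }));
}
```

Fifty projects become two requests total (the list plus one batch) instead of fifty-one.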
Per-row writes during bulk updates
Scenario: Bulk-updating multiple records by sending one update request per record instead of a single batch request.
Fix: Use the batch update API to submit all changes in one call.
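A batched write collapses many per-row calls into a few chunked ones. In this sketch, `batchUpdate` is a placeholder for your batch update endpoint, and the 100-record chunk size is an illustrative guess — check the actual per-request limit:

```typescript
// Submit record changes in chunks instead of one request per row.
type RecordUpdate = { id: string; fields: Record<string, unknown> };

async function saveAll(
  updates: RecordUpdate[],
  batchUpdate: (batch: RecordUpdate[]) => Promise<void>, // hypothetical
  chunkSize = 100, // assumed limit; adjust to the real per-request cap
): Promise<number> {
  let requests = 0;
  for (let i = 0; i < updates.length; i += chunkSize) {
    await batchUpdate(updates.slice(i, i + chunkSize));
    requests++;
  }
  return requests; // far fewer calls than updates.length
}
```

Updating 250 records this way costs 3 requests instead of 250.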
API calls inside loops
Scenario: A for loop processes records one at a time, calling the API each iteration.
Fix: Collect all the IDs first, then issue a single batch request.
Search or filters without debounce
Scenario: Every keystroke in a search box fires a request. Typing a 4-character query produces 4 requests.
Fix: Debounce the input — wait 300–500 ms after the user stops typing before firing the request.
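A minimal debounce helper, restarting its timer on every keystroke so only the final value triggers a request:

```typescript
// Debounce: postpone the call until `waitMs` has passed with no new
// invocations. Each keystroke resets the timer.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // cancel the pending call
    timer = setTimeout(() => fn(...args), waitMs); // schedule with latest args
  };
}
```

Wrapping the search handler in `debounce(runSearch, 400)` turns four rapid keystrokes into one request for the final query.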
Rapid repeated user actions
Scenario: Rapid-fire submit clicks, quick filter toggling, fast pagination — each action fires a request immediately.
Fix: Apply debounce or throttle. Disable submit buttons until the request completes to prevent double-submits.
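The "disable until complete" part can be expressed as a single-flight guard: while one request is pending, further clicks are ignored. A sketch (the helper name is made up):

```typescript
// Prevent double-submits: ignore invocations while a request is in flight.
function singleFlight<T>(submit: () => Promise<T>) {
  let busy = false;
  return async (): Promise<T | undefined> => {
    if (busy) return undefined; // the button is effectively disabled
    busy = true;
    try {
      return await submit();
    } finally {
      busy = false; // re-enable once the request settles, success or failure
    }
  };
}
```

In a real UI you would also toggle the button's disabled state from the same `busy` flag so the user gets visual feedback.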
Overly aggressive form auto-save
Scenario: Every field change saves immediately. Filling out a form can trigger a dozen writes.
Fix: Switch to explicit save on button click, or debounce auto-save so it fires once after a pause in editing.
Polling intervals that are too short
Scenario: Data refreshes every few seconds, producing sustained high-frequency traffic.
Fix: Lengthen the poll interval to something reasonable (30 seconds or more), or switch to manual refresh.
Multiple components polling independently
Scenario: Several components on a page each set up their own poll timer. The combined load easily exceeds the limit.
Fix: Centralize polling. Run one periodic fetch, then fan the result out to every component that needs it.
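Centralized polling can be sketched as one shared timer plus a subscriber list: a single fetch per interval, with the result pushed to every component that registered interest. The `createPoller` helper below is illustrative, not a library API:

```typescript
// One shared poller: a single timer fetches, then fans the result out
// to every subscribed component.
function createPoller<T>(fetcher: () => Promise<T>, intervalMs: number) {
  const listeners = new Set<(data: T) => void>();
  const tick = async () => {
    const data = await fetcher(); // exactly one request per interval
    listeners.forEach((listener) => listener(data)); // fan out the result
  };
  const timer = setInterval(tick, intervalMs);
  return {
    subscribe(listener: (data: T) => void) {
      listeners.add(listener);
      return () => listeners.delete(listener); // call to unsubscribe
    },
    stop: () => clearInterval(timer),
  };
}
```

Five components subscribing to one poller cost one request per interval instead of five.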

