Recommended for: Production deployments with 100+ users

Architecture overview

Important: Teable requires S3-compatible storage. You can use:
  • AWS S3 (cross-cloud setup)
  • MinIO (self-hosted on GCE/GKE)
  • Any other S3-compatible service
Google Cloud Storage (GCS) is not directly supported because it uses a different API.

Prerequisites

  • gcloud CLI installed and authenticated
  • GCP project with billing enabled
  • Access to S3-compatible storage (e.g., AWS S3 account or MinIO deployment)
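Before provisioning, it can help to confirm that gcloud is authenticated and that the APIs used in the steps below are enabled. A minimal check, assuming the standard API names for Cloud SQL, Memorystore, Cloud Run, and Serverless VPC Access:

```shell
# Confirm the active account and project
gcloud auth list
gcloud config get-value project

# Enable the APIs used in the following steps
gcloud services enable \
  sqladmin.googleapis.com \
  redis.googleapis.com \
  run.googleapis.com \
  vpcaccess.googleapis.com
```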

Step 1: Create GCP resources

1.1 Set project and region

export PROJECT_ID=your-project-id
export REGION=us-central1

gcloud config set project $PROJECT_ID

1.2 Create Cloud SQL (PostgreSQL)

gcloud sql instances create teable-db \
  --database-version=POSTGRES_16 \
  --tier=db-custom-2-7680 \
  --region=$REGION \
  --root-password=<YourStrongPassword> \
  --storage-type=SSD \
  --storage-size=100GB
Get connection name:
gcloud sql instances describe teable-db \
  --format='value(connectionName)'
Example output: your-project:region:teable-db
Create database:
gcloud sql databases create teable --instance=teable-db
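The connection string later in this guide authenticates as the postgres superuser; in practice you may prefer a dedicated application user. A sketch (the teable username and password here are placeholders of our choosing, not part of the guide above):

```shell
# Optional: create a dedicated database user for the app
gcloud sql users create teable \
  --instance=teable-db \
  --password=<AppUserPassword>
```

If you create this user, substitute it for postgres in PRISMA_DATABASE_URL in Step 3.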

1.3 Create Memorystore (Redis)

gcloud redis instances create teable-cache \
  --size=1 \
  --region=$REGION \
  --redis-version=redis_7_0
Get Redis host:
gcloud redis instances describe teable-cache \
  --region=$REGION \
  --format='value(host)'

Step 2: Set up S3-compatible storage

Since Google Cloud Storage is not S3-compatible, choose one of the following options:

Option A: Use AWS S3 (cross-cloud)

  1. Create S3 buckets on AWS (see AWS deployment guide steps 1.3-1.4)
  2. Configure public bucket (see Object Storage guide)
Environment variables:
BACKEND_STORAGE_PROVIDER=s3
BACKEND_STORAGE_S3_REGION=us-west-2
BACKEND_STORAGE_S3_ENDPOINT=https://s3.us-west-2.amazonaws.com
BACKEND_STORAGE_S3_ACCESS_KEY=<aws-access-key>
BACKEND_STORAGE_S3_SECRET_KEY=<aws-secret-key>
BACKEND_STORAGE_PUBLIC_BUCKET=teable-public-<suffix>
BACKEND_STORAGE_PRIVATE_BUCKET=teable-private-<suffix>
STORAGE_PREFIX=https://teable-public-<suffix>.s3.us-west-2.amazonaws.com
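If the buckets from the AWS guide do not exist yet, they can be created with the AWS CLI. A sketch assuming the us-west-2 region and the bucket names used above (keep your own unique suffix):

```shell
# Create the public and private buckets (names must be globally unique)
aws s3api create-bucket \
  --bucket teable-public-<suffix> \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

aws s3api create-bucket \
  --bucket teable-private-<suffix> \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
```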

Option B: Deploy MinIO on GCP

Deploy MinIO on a GCE instance or GKE cluster to provide S3-compatible storage. Quick setup (GCE VM):
# Create VM
gcloud compute instances create minio-server \
  --machine-type=e2-medium \
  --zone=us-central1-a \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=100GB

# SSH and install MinIO
gcloud compute ssh minio-server --zone=us-central1-a

# On the VM:
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin/

export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=<strong-password>
minio server /data --console-address ":9001"
Create buckets and configure access (see Azure guide Option B for details).
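The bucket setup can also be done directly on the VM with the MinIO client (mc). A sketch assuming MinIO is listening on localhost:9000 with the root credentials set above; the bucket names are placeholders matching the naming convention used elsewhere in this guide:

```shell
# Install the MinIO client
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc && sudo mv mc /usr/local/bin/

# Point mc at the local server and create the buckets
mc alias set local http://localhost:9000 admin <strong-password>
mc mb local/teable-public local/teable-private

# Allow anonymous downloads from the public bucket
mc anonymous set download local/teable-public
```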

Step 3: Prepare environment variables

Create .env file:
# Core
PUBLIC_ORIGIN=https://teable.yourcompany.com
SECRET_KEY=<generate-32-char-random-string>

# Database (Cloud SQL)
PRISMA_DATABASE_URL=postgresql://postgres:<password>@/teable?host=/cloudsql/<connection-name>

# Redis (Memorystore)
BACKEND_CACHE_PROVIDER=redis
BACKEND_CACHE_REDIS_URI=redis://<memorystore-ip>:6379/0
BACKEND_PERFORMANCE_CACHE=redis://<memorystore-ip>:6379/0

# Storage (S3-compatible)
# For Option A (AWS S3):
BACKEND_STORAGE_PROVIDER=s3
BACKEND_STORAGE_S3_REGION=us-west-2
BACKEND_STORAGE_S3_ENDPOINT=https://s3.us-west-2.amazonaws.com
BACKEND_STORAGE_S3_ACCESS_KEY=<aws-key>
BACKEND_STORAGE_S3_SECRET_KEY=<aws-secret>
BACKEND_STORAGE_PUBLIC_BUCKET=teable-public-<suffix>
BACKEND_STORAGE_PRIVATE_BUCKET=teable-private-<suffix>
STORAGE_PREFIX=https://teable-public-<suffix>.s3.us-west-2.amazonaws.com
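SECRET_KEY must be a 32-character random string; one way to generate it, assuming openssl is available:

```shell
# 16 random bytes, hex-encoded, yields a 32-character string
SECRET_KEY=$(openssl rand -hex 16)
echo "SECRET_KEY=$SECRET_KEY"
```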

Step 4: Deploy to Cloud Run

4.1 Build and push container (or use pre-built image)

# Option 1: Use pre-built image
export IMAGE=ghcr.io/teableio/teable:latest

# Option 2: Build and push to GCR
# gcloud builds submit --tag gcr.io/$PROJECT_ID/teable
# export IMAGE=gcr.io/$PROJECT_ID/teable

4.2 Deploy to Cloud Run

gcloud run deploy teable \
  --image=$IMAGE \
  --platform=managed \
  --region=$REGION \
  --allow-unauthenticated \
  --port=3000 \
  --set-env-vars="PUBLIC_ORIGIN=https://teable.yourcompany.com" \
  --set-env-vars="SECRET_KEY=<your-secret>" \
  --set-env-vars="PRISMA_DATABASE_URL=postgresql://..." \
  --set-env-vars="BACKEND_CACHE_PROVIDER=redis" \
  --set-env-vars="BACKEND_CACHE_REDIS_URI=redis://..." \
  --set-env-vars="BACKEND_STORAGE_PROVIDER=s3" \
  --set-env-vars="BACKEND_STORAGE_S3_REGION=us-west-2" \
  --set-env-vars="BACKEND_STORAGE_S3_ENDPOINT=https://s3.us-west-2.amazonaws.com" \
  --set-env-vars="BACKEND_STORAGE_S3_ACCESS_KEY=***" \
  --set-env-vars="BACKEND_STORAGE_S3_SECRET_KEY=***" \
  --set-env-vars="BACKEND_STORAGE_PUBLIC_BUCKET=teable-public-xxx" \
  --set-env-vars="BACKEND_STORAGE_PRIVATE_BUCKET=teable-private-xxx" \
  --set-env-vars="STORAGE_PREFIX=https://teable-public-xxx.s3.us-west-2.amazonaws.com" \
  --add-cloudsql-instances=<connection-name> \
  --vpc-connector=<your-vpc-connector-for-redis>
For Cloud SQL Unix socket access, pass --add-cloudsql-instances so the socket is mounted at /cloudsql/<connection-name>, matching the host parameter in PRISMA_DATABASE_URL.
For Memorystore access, create a Serverless VPC Access connector first and reference it with --vpc-connector.
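The connector referenced by --vpc-connector can be created as follows; a sketch assuming the default network, an unused /28 range, and a connector name (teable-connector) of our choosing:

```shell
# Create a Serverless VPC Access connector so Cloud Run can reach Memorystore
gcloud compute networks vpc-access connectors create teable-connector \
  --region=$REGION \
  --network=default \
  --range=10.8.0.0/28
```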

Step 5: Verify deployment

  1. Get service URL:
gcloud run services describe teable \
  --region=$REGION \
  --format='value(status.url)'
  2. Test health check:
curl https://<service-url>/health
Expected: {"status":"ok"}
  3. View logs:
gcloud run services logs read teable --region=$REGION

Production recommendations

  1. High availability: Enable Cloud Run minimum instances (2+)
  2. Security: Use Secret Manager for sensitive values
  3. Monitoring: Enable Cloud Monitoring and set up alerts
  4. Backup: Enable Cloud SQL automated backups
  5. Custom domain: Map a custom domain to Cloud Run service
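Recommendations 1 and 2 can be applied with gcloud. A sketch, assuming a secret name of our choosing (teable-secret-key) and that the Cloud Run service account has been granted roles/secretmanager.secretAccessor:

```shell
# Store SECRET_KEY in Secret Manager instead of a plain env var
printf '%s' '<your-secret>' | \
  gcloud secrets create teable-secret-key --data-file=-

# Keep 2 warm instances and mount the secret as an env var
gcloud run services update teable \
  --region=$REGION \
  --min-instances=2 \
  --set-secrets=SECRET_KEY=teable-secret-key:latest
```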

Last modified on December 23, 2025