Recommended for: Production deployments with 100+ users
Architecture overview
Teable runs as ECS Fargate tasks behind an Application Load Balancer, backed by an RDS PostgreSQL instance, an ElastiCache Redis cluster (queues and caching), and two S3 buckets (public and private) for file storage.
Prerequisites
- AWS CLI installed and configured (see the quick check below)
- AWS account with permissions for ECS, RDS, ElastiCache, S3, IAM
- Access to a container image registry (the task definition below pulls ghcr.io/teableio/teable; use ECR if you build and host your own images)
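A quick way to confirm the CLI is pointed at the intended account before starting (a convenience check, not required by the guide):
# Prints the account ID and caller ARN the CLI is currently using
aws sts get-caller-identity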
Step 1: Create AWS resources
1.1 Create RDS PostgreSQL
aws rds create-db-instance \
--db-instance-identifier teable-db \
--db-instance-class db.t3.medium \
--engine postgres \
--engine-version 16 \
--db-name teable \
--master-username teableadmin \
--master-user-password '<YourStrongPassword>' \
--allocated-storage 100 \
--vpc-security-group-ids sg-xxxxxx \
--db-subnet-group-name your-subnet-group \
--publicly-accessible false
Wait until the instance status is available, then get the endpoint:
aws rds describe-db-instances \
--db-instance-identifier teable-db \
--query 'DBInstances[0].Endpoint.Address' --output text
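If you prefer to block in the terminal rather than poll the console, the CLI's built-in waiter can be run first:
# Blocks until the teable-db instance reaches the "available" state
aws rds wait db-instance-available --db-instance-identifier teable-db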
1.2 Create ElastiCache Redis (for queues)
aws elasticache create-cache-cluster \
--cache-cluster-id teable-cache \
--engine redis \
--cache-node-type cache.t3.small \
--num-cache-nodes 1 \
--engine-version 7.0 \
--security-group-ids sg-xxxxxx \
--cache-subnet-group-name your-subnet-group
Get the endpoint:
aws elasticache describe-cache-clusters \
--cache-cluster-id teable-cache \
--show-cache-node-info \
--query 'CacheClusters[0].CacheNodes[0].Endpoint.Address' --output text
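As with RDS, a waiter is available if you want to block until the cluster is ready:
# Blocks until the teable-cache cluster reaches the "available" state
aws elasticache wait cache-cluster-available --cache-cluster-id teable-cache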
1.3 Create S3 buckets (public + private)
# Create public bucket
aws s3api create-bucket \
--bucket teable-public-<your-unique-suffix> \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2
# Create private bucket
aws s3api create-bucket \
--bucket teable-private-<your-unique-suffix> \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2
Must-do: Configure public bucket
- Disable “Block Public Access” for the public bucket (do this first; while Block Public Access is on, the put-bucket-policy call below is rejected):
aws s3api put-public-access-block \
--bucket teable-public-<your-unique-suffix> \
--public-access-block-configuration \
BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
- Public read policy:
cat > public-bucket-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::teable-public-<your-unique-suffix>/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy \
--bucket teable-public-<your-unique-suffix> \
--policy file://public-bucket-policy.json
- CORS configuration (allow any origin):
cat > cors.json <<EOF
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "HEAD", "PUT"],
      "AllowedOrigins": ["*"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
aws s3api put-bucket-cors \
--bucket teable-public-<your-unique-suffix> \
--cors-configuration file://cors.json
Security reminder: The public bucket must allow public read. Ensure this complies with your organization’s security policy. The private bucket should remain non-public.
1.4 Create IAM user/role for S3 access
# Create IAM user
aws iam create-user --user-name teable-s3-user
# Attach S3 full access policy (or create a custom policy)
aws iam attach-user-policy \
--user-name teable-s3-user \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Create access key
aws iam create-access-key --user-name teable-s3-user
Save the AccessKeyId and SecretAccessKey from the output.
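If AmazonS3FullAccess is broader than your security policy allows, a sketch of a scoped inline policy instead (bucket names must match the ones created in 1.3; including s3:ListBucket is an assumption here, since object listing is commonly needed):
cat > teable-s3-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::teable-public-<your-unique-suffix>/*",
        "arn:aws:s3:::teable-private-<your-unique-suffix>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::teable-public-<your-unique-suffix>",
        "arn:aws:s3:::teable-private-<your-unique-suffix>"
      ]
    }
  ]
}
EOF
aws iam put-user-policy \
--user-name teable-s3-user \
--policy-name teable-s3-access \
--policy-document file://teable-s3-policy.json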
Step 2: Prepare environment variables
Create a .env file (or ECS task definition environment):
# Core
PUBLIC_ORIGIN=https://teable.yourcompany.com
SECRET_KEY=<generate-32-char-random-string>
# Database
PRISMA_DATABASE_URL=postgresql://teableadmin:<password>@<rds-endpoint>:5432/teable
# Redis (required for queues)
BACKEND_CACHE_PROVIDER=redis
BACKEND_CACHE_REDIS_URI=redis://<elasticache-endpoint>:6379/0
# Performance cache (optional; can point to the same Redis or a separate cluster)
BACKEND_PERFORMANCE_CACHE=redis://<elasticache-endpoint>:6379/0
# Storage (S3 dual buckets)
BACKEND_STORAGE_PROVIDER=s3
BACKEND_STORAGE_S3_REGION=us-west-2
BACKEND_STORAGE_S3_ENDPOINT=https://s3.us-west-2.amazonaws.com
BACKEND_STORAGE_S3_ACCESS_KEY=<access-key-from-step-1.4>
BACKEND_STORAGE_S3_SECRET_KEY=<secret-key-from-step-1.4>
BACKEND_STORAGE_PUBLIC_BUCKET=teable-public-<your-unique-suffix>
BACKEND_STORAGE_PRIVATE_BUCKET=teable-private-<your-unique-suffix>
STORAGE_PREFIX=https://teable-public-<your-unique-suffix>.s3.us-west-2.amazonaws.com
# Optional: Sandbox service (if deploying separately)
# SANDBOX_URL=http://teable-sandbox:7070
Generate a strong SECRET_KEY:
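One way to generate a 32-character random value (assuming OpenSSL is available):
# 16 random bytes, hex-encoded = 32 characters
openssl rand -hex 16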
Step 3: Deploy to ECS
Option A: Using ECS Fargate (recommended for simplicity)
- Create ECS cluster:
aws ecs create-cluster --cluster-name teable-cluster
- Create task definition (teable-task.json):
{
  "family": "teable",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "2048",
  "memory": "4096",
  "executionRoleArn": "arn:aws:iam::<account-id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "teable",
      "image": "ghcr.io/teableio/teable:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {"name": "PUBLIC_ORIGIN", "value": "https://teable.yourcompany.com"},
        {"name": "SECRET_KEY", "value": "***REDACTED***"},
        {"name": "PRISMA_DATABASE_URL", "value": "postgresql://..."},
        {"name": "BACKEND_CACHE_PROVIDER", "value": "redis"},
        {"name": "BACKEND_CACHE_REDIS_URI", "value": "redis://..."},
        {"name": "BACKEND_STORAGE_PROVIDER", "value": "s3"},
        {"name": "BACKEND_STORAGE_S3_REGION", "value": "us-west-2"},
        {"name": "BACKEND_STORAGE_S3_ENDPOINT", "value": "https://s3.us-west-2.amazonaws.com"},
        {"name": "BACKEND_STORAGE_S3_ACCESS_KEY", "value": "***"},
        {"name": "BACKEND_STORAGE_S3_SECRET_KEY", "value": "***"},
        {"name": "BACKEND_STORAGE_PUBLIC_BUCKET", "value": "teable-public-xxx"},
        {"name": "BACKEND_STORAGE_PRIVATE_BUCKET", "value": "teable-private-xxx"},
        {"name": "STORAGE_PREFIX", "value": "https://teable-public-xxx.s3.us-west-2.amazonaws.com"}
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/teable",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "teable"
        }
      }
    }
  ]
}
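The logConfiguration above assumes the /ecs/teable log group already exists, and executionRoleArn assumes an execution role the ECS agent can use to pull images and write logs (the AWS-managed ecsTaskExecutionRole is the usual choice). A minimal sketch:
# Create the log group referenced by the task definition
aws logs create-log-group --log-group-name /ecs/teable --region us-west-2
# Confirm the execution role exists (create it per the ECS documentation if it does not)
aws iam get-role --role-name ecsTaskExecutionRole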
- Register task definition:
aws ecs register-task-definition --cli-input-json file://teable-task.json
- Create ECS service:
aws ecs create-service \
--cluster teable-cluster \
--service-name teable-service \
--task-definition teable \
--desired-count 2 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[subnet-xxx,subnet-yyy],securityGroups=[sg-xxx],assignPublicIp=ENABLED}" \
--load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:...,containerName=teable,containerPort=3000"
You need to create an Application Load Balancer (ALB) and target group separately, then reference it in --load-balancers.
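A minimal sketch of those pieces (the VPC ID, subnets, security group, and ACM certificate ARN are placeholders; the /health path matches the health check used in Step 4):
# Target group for Fargate tasks (awsvpc networking requires target-type ip)
aws elbv2 create-target-group \
--name teable-tg \
--protocol HTTP \
--port 3000 \
--vpc-id vpc-xxxxxx \
--target-type ip \
--health-check-path /health
# Internet-facing ALB in the public subnets
aws elbv2 create-load-balancer \
--name teable-alb \
--subnets subnet-xxx subnet-yyy \
--security-groups sg-xxxxxx
# HTTPS listener forwarding to the target group
aws elbv2 create-listener \
--load-balancer-arn <alb-arn> \
--protocol HTTPS \
--port 443 \
--certificates CertificateArn=<acm-certificate-arn> \
--default-actions Type=forward,TargetGroupArn=<target-group-arn>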
Step 4: Verify deployment
- Check ECS service status:
aws ecs describe-services \
--cluster teable-cluster \
--services teable-service
- Check logs:
aws logs tail /ecs/teable --follow
- Access the health check:
curl https://teable.yourcompany.com/health
A successful (HTTP 200) response indicates the service is up.
- Open Teable in browser:
https://teable.yourcompany.com
Troubleshooting
Database connection errors
- Verify the RDS security group allows traffic from the ECS tasks (see the sketch below)
- Check the PRISMA_DATABASE_URL format: postgresql://user:pass@endpoint:5432/dbname
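A common fix is allowing inbound traffic from the ECS tasks' security group (the security group IDs below are placeholders; the same pattern applies to the ElastiCache security group on port 6379):
# Allow the ECS task security group to reach PostgreSQL on the RDS security group
aws ec2 authorize-security-group-ingress \
--group-id sg-rds-xxxxxx \
--protocol tcp \
--port 5432 \
--source-group sg-ecs-xxxxxx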
Redis connection errors
- Verify ElastiCache security group allows traffic from ECS
- Use the primary endpoint (not reader endpoint)
S3 access errors
- Verify the IAM user has s3:GetObject, s3:PutObject, s3:DeleteObject permissions
- Check that bucket names match exactly in BACKEND_STORAGE_PUBLIC_BUCKET / BACKEND_STORAGE_PRIVATE_BUCKET
- Ensure public bucket has public read policy and CORS configured
Container fails to start
- Check CloudWatch Logs: /ecs/teable (and the ECS stop reason, see the commands below)
- Verify all required environment variables are set
- Ensure SECRET_KEY is set (not empty)
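When a task exits immediately, the ECS stop reason is often more useful than the application logs; a quick way to pull it (the task ARN comes from the first command's output):
# List recently stopped tasks, then print why they stopped
aws ecs list-tasks --cluster teable-cluster --desired-status STOPPED
aws ecs describe-tasks \
--cluster teable-cluster \
--tasks <task-arn-from-previous-output> \
--query 'tasks[].{stopped:stoppedReason,containers:containers[].reason}'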
Production recommendations
- High availability:
  - Run at least 2 ECS tasks across multiple AZs
  - Use RDS Multi-AZ deployment
  - Use ElastiCache with replicas
- Scaling:
  - Enable ECS auto-scaling based on CPU/memory
  - Use Application Load Balancer for traffic distribution
- Security:
  - Use AWS Secrets Manager for sensitive values instead of plaintext environment variables (see the sketch after this list)
  - Enable RDS encryption at rest
  - Use VPC endpoints for S3/ECR to avoid internet traffic
- Monitoring:
  - Enable CloudWatch Container Insights for ECS
  - Set up alarms for task failures and high CPU/memory
- Backup:
  - Enable RDS automated backups (retention: 7-30 days)
  - Enable S3 versioning for the private bucket
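For the Secrets Manager recommendation, a minimal sketch: store the value once, then reference it from the container definition's secrets list (instead of environment) via valueFrom. The secret name and ARN are placeholders, and the task execution role needs secretsmanager:GetSecretValue on the secret:
# Store the application secret
aws secretsmanager create-secret \
--name teable/secret-key \
--secret-string "$(openssl rand -hex 16)"
# In teable-task.json, replace the plaintext SECRET_KEY environment entry with:
#   "secrets": [
#     {"name": "SECRET_KEY", "valueFrom": "arn:aws:secretsmanager:us-west-2:<account-id>:secret:teable/secret-key"}
#   ]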