The Vercel Attraction: Why We Started There
Let's be honest — Vercel is an exceptional platform. When our team first adopted it, the appeal was immediate and undeniable:
- Zero-config deployments — Push to Git, and it's live
- Automatic preview environments — Every PR gets its own URL for QA
- Edge Functions — Run code at the edge without managing infrastructure
- Next.js optimization — First-class support for the framework we heavily used
- Developer experience — The CLI, dashboard, and integrations are polished
For a team moving fast, Vercel was magical. Frontend developers could deploy without waiting for DevOps. Preview URLs made stakeholder reviews seamless. The DX was unmatched.
What started as a $20/month hobby plan grew to $4,500/month as we scaled to 12 applications with multiple environments, concurrent builds, and bandwidth demands.
The Breaking Point: $54K/Year for Frontend Hosting
As our organization scaled, so did the Vercel bill. Here's what drove the costs:
Team Pro Plan: $400/month
Additional Team Members (8): $160/month
Concurrent Builds: $500/month
Bandwidth (800GB): $1,200/month
Edge Function Invocations: $800/month
Preview Deployments: $600/month
Enterprise Features: $840/month
─────────────────────────────────────────────
Total: ~$4,500/month
Annual: ~$54,000/year
For a DevOps team obsessed with cost optimization, this was unacceptable — especially when we knew AWS could deliver the same (or better) performance at a fraction of the cost.
The Strategic Decision: Why an Experienced DevOps Team Matters
This is where having battle-tested DevOps engineers becomes invaluable. A junior team might have:
- Accepted the costs as "the price of good DX"
- Over-engineered a Kubernetes solution
- Lost months trying to replicate every Vercel feature
An experienced team asks different questions:
Key Questions We Asked
- What do we actually need vs. what's nice-to-have?
- Which workloads are CSR (static) vs. SSR (dynamic)?
- Can we replicate the critical DX features (preview envs) cheaply?
- What's the total cost of ownership including maintenance?
The Architecture: Two Solutions for Two Problems
We analyzed our 12 applications and categorized them:
| Type | Apps | Vercel Solution | AWS Solution |
|---|---|---|---|
| CSR (Static) | 8 apps | Vercel Edge Network | CloudFront + S3 |
| SSR (Dynamic) | 4 apps | Vercel Functions | App Runner |
Solution 1: CloudFront + S3 for CSR Applications
For Client-Side Rendered applications (React SPAs, static Next.js exports), the architecture is straightforward:
```yaml
name: Deploy CSR to CloudFront

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  AWS_REGION: eu-west-1
  S3_BUCKET: ${{ secrets.S3_BUCKET }}
  CLOUDFRONT_DIST_ID: ${{ secrets.CLOUDFRONT_DIST_ID }}

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build
        env:
          NEXT_PUBLIC_API_URL: ${{ vars.API_URL }}

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      # Production deployment
      - name: Deploy to S3
        if: github.ref == 'refs/heads/main'
        run: |
          aws s3 sync ./out s3://$S3_BUCKET --delete

      - name: Invalidate CloudFront
        if: github.ref == 'refs/heads/main'
        run: |
          aws cloudfront create-invalidation \
            --distribution-id $CLOUDFRONT_DIST_ID \
            --paths "/*"

      # Preview deployment for PRs
      - name: Deploy Preview
        if: github.event_name == 'pull_request'
        run: |
          PR_NUMBER=${{ github.event.pull_request.number }}
          aws s3 sync ./out s3://$S3_BUCKET-previews/pr-$PR_NUMBER --delete
          echo "Preview URL: https://preview-$PR_NUMBER.example.com"
```
Solution 2: App Runner for SSR Applications
For Server-Side Rendered applications (Next.js with API routes, dynamic pages), AWS App Runner was the perfect fit:
- No infrastructure management — Similar to Vercel's serverless model
- Auto-scaling — Scales to zero when idle, scales up under load
- Container-based — Full control over the runtime environment
- VPC integration — Can connect to private RDS, ElastiCache, etc.
```yaml
name: Deploy SSR to App Runner

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  AWS_REGION: eu-west-1
  ECR_REPOSITORY: my-ssr-app

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push Docker image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG

      # Production deployment
      - name: Update App Runner Service
        if: github.ref == 'refs/heads/main'
        env:
          # Re-declare here: step-level env from the build step isn't visible
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          aws apprunner update-service \
            --service-arn ${{ secrets.APPRUNNER_SERVICE_ARN }} \
            --source-configuration '{
              "ImageRepository": {
                "ImageIdentifier": "'$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG'",
                "ImageRepositoryType": "ECR"
              }
            }'

      # Preview environment for PRs
      - name: Create Preview Environment
        if: github.event_name == 'pull_request'
        run: |
          PR_NUMBER=${{ github.event.pull_request.number }}
          # Create a temporary App Runner service for the preview
          # (or use a shared preview service with path-based routing)
```
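The preview step is deliberately left as a stub. One way to flesh it out is a pair of small shell helpers, sketched below. The `preview-pr-<n>` naming scheme is an assumption, and the `create-service` call is simplified: a real invocation also needs an instance configuration and an ECR access role, which are omitted here. `aws apprunner create-service` and `delete-service` are real CLI operations.

```shell
# Sketch: throwaway App Runner services per PR (names are assumptions).

# Pure helper: derive the preview service name from a PR number.
preview_service_name() {
  echo "preview-pr-$1"
}

# Create a temporary service for a PR, pointed at the image the
# build step already pushed to ECR.
create_preview() {
  pr="$1"; image="$2"
  aws apprunner create-service \
    --service-name "$(preview_service_name "$pr")" \
    --source-configuration '{
      "ImageRepository": {
        "ImageIdentifier": "'"$image"'",
        "ImageRepositoryType": "ECR"
      }
    }'
}

# Tear the service down from a job that runs on pull_request "closed".
delete_preview() {
  aws apprunner delete-service --service-arn "$1"
}
```

Temporary services bill per PR while they run, so for high-PR-volume repos the shared preview service with routing is usually the cheaper option.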
The Dockerfile for SSR Apps
```dockerfile
FROM node:20-alpine AS base

# Install dependencies only when needed
FROM base AS deps
WORKDIR /app
COPY package*.json ./
# Full install: the build stage below needs devDependencies too
RUN npm ci

# Build the application
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Requires `output: 'standalone'` in next.config.js
RUN npm run build

# Production image
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=8080

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs
EXPOSE 8080
CMD ["node", "server.js"]
```
Replicating Preview Environments
The biggest concern from developers: "But what about preview URLs for every PR?"
We solved this with a lightweight preview system:
Preview Environment Strategy
For CSR apps: Deploy to s3://bucket-previews/pr-{number}, served by a wildcard CloudFront distribution on *.preview.example.com
For SSR apps: Use a shared "preview" App Runner service with environment-variable-based routing, or spin up temporary services for larger PRs.
```hcl
resource "aws_cloudfront_distribution" "preview" {
  enabled             = true
  default_root_object = "index.html"
  aliases             = ["*.preview.example.com"]

  origin {
    domain_name = aws_s3_bucket.previews.bucket_regional_domain_name
    origin_id   = "S3-previews"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.preview.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-previews"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    # Use Lambda@Edge to route based on subdomain
    lambda_function_association {
      event_type   = "origin-request"
      lambda_arn   = aws_lambda_function.preview_router.qualified_arn
      include_body = false
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Required block on this resource; no geo restrictions for previews
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.preview.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }
}
```
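Preview prefixes in the bucket should also be deleted when a PR closes, or the previews bucket quietly accumulates stale builds. A minimal cleanup sketch, assuming the same `pr-{number}` layout as above (`aws s3 rm --recursive` is the real call; the bucket naming is ours):

```shell
# Sketch: remove a PR's preview build when the PR is closed.

# Pure helper: the S3 prefix where a PR's preview lives.
preview_prefix() {
  bucket="$1"; pr="$2"
  echo "s3://$bucket-previews/pr-$pr"
}

# Delete everything under the PR's prefix. Wire this to a GitHub Actions
# job triggered on pull_request "closed" events.
cleanup_preview() {
  aws s3 rm "$(preview_prefix "$1" "$2")" --recursive
}
```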
The Results: Cost Breakdown
CloudFront (800GB bandwidth): $68/month
S3 Storage (50GB): $1.15/month
S3 Requests: $5/month
App Runner (4 services): $180/month
ECR Storage: $10/month
Route 53: $2/month
ACM Certificates: $0 (free)
GitHub Actions (included): $0
─────────────────────────────────────────────
Total: ~$270/month
Annual: ~$3,240/year
Savings: $54,000 - $3,240 = $50,760/year
Even accounting for engineering time (2 weeks at ~$5K), the migration paid for itself in roughly five weeks. Conservative estimate: $45K+ annual savings.
What We Kept, What We Lost
| Feature | Vercel | AWS Solution | Status |
|---|---|---|---|
| Zero-config deploys | ✅ Built-in | ✅ GitHub Actions | Kept |
| Preview environments | ✅ Automatic | ✅ Custom solution | Kept |
| Global CDN | ✅ Vercel Edge | ✅ CloudFront | Kept |
| Auto-scaling | ✅ Serverless | ✅ App Runner | Kept |
| Instant rollbacks | ✅ Dashboard | ✅ ECR image tags | Kept |
| Analytics | ✅ Built-in | ⚡ CloudWatch + custom | Modified |
| Edge Functions | ✅ Native | ⚡ Lambda@Edge | Modified |
| Pretty dashboard | ✅ Excellent | ❌ AWS Console | Lost |
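The "instant rollbacks" row deserves a concrete shape: because every deploy pushes an immutable image tagged with the commit SHA, rolling back is just pointing App Runner at an older tag. A sketch, with the service ARN and registry values as placeholders:

```shell
# Sketch: roll an App Runner service back to a previous image tag.

# Pure helper: full ECR image identifier for a given tag.
image_uri() {
  registry="$1"; repo="$2"; tag="$3"
  echo "$registry/$repo:$tag"
}

# Re-point the service at an older image; App Runner redeploys it.
rollback() {
  arn="$1"; uri="$2"
  aws apprunner update-service \
    --service-arn "$arn" \
    --source-configuration '{
      "ImageRepository": {
        "ImageIdentifier": "'"$uri"'",
        "ImageRepositoryType": "ECR"
      }
    }'
}
```

In practice this is the same `update-service` call the deploy pipeline makes, just with an older `IMAGE_TAG`, so a rollback takes as long as an App Runner deployment rather than requiring a rebuild.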
Key Takeaways: Why DevOps Experience Matters
This migration succeeded because of experienced engineering decisions:
Know When to Build vs. Buy
Vercel is worth it for small teams. At scale, the math changes. We recognized the inflection point.
Right-Size the Solution
We didn't build a Kubernetes cluster. CloudFront + S3 and App Runner were exactly enough.
Protect Developer Experience
Preview environments weren't negotiable. We built a lightweight solution that preserved the workflow.
Measure Total Cost of Ownership
The "free" features of managed platforms have hidden costs. Calculate the full picture.
Should You Migrate?
Stay on Vercel if:
- You're a small team (< 5 developers)
- You have < 5 applications
- Developer time is more expensive than hosting costs
- You heavily use Vercel-specific features (ISR, Edge Config)
Consider migrating if:
- Monthly Vercel bill exceeds $1,000
- You have DevOps capacity to maintain pipelines
- You need VPC integration for backend services
- You're already using AWS for other workloads
Vercel is a premium product with premium pricing. For organizations with DevOps maturity, AWS provides the same capabilities at 90%+ lower cost. The key is having the expertise to execute the migration without sacrificing developer experience.
Questions about this migration or similar cost optimization strategies? Get in touch — I'm happy to discuss your specific use case.