The one-paragraph answer
For an AI-powered solo iOS app in 2026: start on Railway for a backend proxy + Postgres. Use Firebase if you want to ship even faster with minimal backend code. Pick Cloudflare Workers if your traffic is bursty and global low latency is worth the extra architectural complexity. Move to AWS / GCP only when you've hit a specific limitation, and pick Azure when you're integrating with Microsoft enterprise systems.
Cost at typical small-app scale (a few thousand MAU)
| Platform | Typical /mo | Predictability | Free tier |
|---|---|---|---|
| Railway | $5–$30 | High | Limited |
| Vercel | $0–$20 | High | Generous (hobby) |
| Firebase | $0–$50 | Medium | Generous |
| Cloudflare Workers | $0–$10 | High | Very generous |
| AWS (App Runner + Aurora Serverless v2) | $30–$150 | Low | 12-month limited free |
| Azure (App Service + Postgres) | $30–$150 | Low | $200 / 30 days credit |
| GCP (Cloud Run + Cloud SQL) | $15–$80 | Medium | $300 / 90 days credit |
| Self-host (Hetzner) | $5–$20 | High | None |
Critical context: at this scale, AI API costs (Claude / OpenAI / Gemini) typically dwarf the backend hosting bill. If your monthly Anthropic spend is $300, fretting about $20 of Railway vs $40 of GCP is the wrong optimization.
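To see why, a back-of-envelope sketch. All prices and usage numbers below are illustrative assumptions for the sake of the arithmetic, not current list prices:

```python
def monthly_ai_cost(requests_per_user, mau, avg_input_tokens, avg_output_tokens,
                    price_in_per_mtok, price_out_per_mtok):
    """Rough monthly AI API spend in dollars, priced per million tokens."""
    total_in = requests_per_user * mau * avg_input_tokens      # input tokens/month
    total_out = requests_per_user * mau * avg_output_tokens    # output tokens/month
    return (total_in / 1e6 * price_in_per_mtok
            + total_out / 1e6 * price_out_per_mtok)

# Assumed: 2,000 MAU, 30 requests/user/month, 500 input + 300 output tokens
# per request, at $3 / $15 per million input / output tokens.
cost = monthly_ai_cost(30, 2000, 500, 300, 3.0, 15.0)
print(f"${cost:.0f}/month")  # → $360/month
```

Under those assumptions the AI bill is $360/month, which makes a $20 vs $40 hosting delta noise.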
Deploy speed and developer experience
Time from "I have working code on my Mac" to "it's running with a public URL," for a clean first deploy:
- Railway: 5–10 minutes including signup.
- Vercel: 3–5 minutes.
- Firebase Cloud Functions: 10–15 minutes (CLI setup + project init).
- Cloudflare Workers: 5–10 minutes via Wrangler CLI.
- AWS App Runner: 30–60 minutes if it's your first AWS deploy (IAM, ECR, networking).
- Azure App Service: 30–60 minutes.
- GCP Cloud Run: 20–40 minutes.
- Self-host VPS: 1–3 hours (provisioning, Nginx, TLS, deployment scripts).
Per-deploy times (after the initial setup) are roughly similar across all of them — under 2 minutes once the pipeline is in place.
Scaling behavior
- Cloudflare Workers, Vercel, Lambda, Cloud Run, Cloud Functions: serverless — scale to zero, scale up automatically. Pay only for actual usage. Cold starts apply.
- Railway, AWS App Runner, Azure App Service, GCP App Engine: always-on containers, auto-scaling within configured limits. No cold starts.
- EC2, Compute Engine, Azure VMs: fixed-size servers you manage. Scale by adding more or sizing up.
- Firebase Firestore, DynamoDB: serverless data stores. Scale automatically. Pay per operation.
For AI-powered apps where latency matters, always-on containers (Railway, App Runner) deliver a more consistent UX than serverless platforms that pay cold-start penalties.
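The cold-start tradeoff can be pictured with a toy model of scale-to-zero behavior. The timeout and latency numbers are illustrative assumptions, not any platform's actual figures:

```python
IDLE_TIMEOUT_S = 300    # assumed: instance evicted after 5 min without traffic
COLD_START_MS = 1200    # assumed: cost of booting a fresh instance
WARM_LATENCY_MS = 80    # assumed: normal warm-path response time

def request_latency_ms(seconds_since_last_request: float) -> float:
    """Latency a request sees under scale-to-zero, per the toy model above."""
    if seconds_since_last_request > IDLE_TIMEOUT_S:
        # The instance was evicted; this request pays the boot cost first.
        return COLD_START_MS + WARM_LATENCY_MS
    return WARM_LATENCY_MS

print(request_latency_ms(10))   # steady traffic → 80
print(request_latency_ms(600))  # after a quiet spell → 1280
```

The point of the model: with bursty low-volume traffic, a meaningful fraction of requests arrive after the idle window and eat the full penalty, which is exactly the traffic shape many early-stage apps have.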
Feature breadth
- AWS: ~200 services. Anything you might need exists.
- Azure: ~150 services. Strong on Microsoft ecosystem, AI (OpenAI Service), data.
- GCP: ~100 services. Best-in-class in certain areas (Cloud Run, BigQuery, Vertex AI).
- Railway / Vercel / Cloudflare / Firebase: focused product surfaces. Less breadth, less complexity.
"More features" isn't always better. For most projects, the focused platforms get you to production faster.
Lock-in risk
- Low lock-in: Railway, Vercel, plain VPS — you're running standard containers or VMs. Migrating is mostly DNS + config.
- Medium lock-in: AWS / Azure / GCP — if you avoid using too many proprietary services, you can migrate. If you go all-in on managed services (DynamoDB, BigQuery, Cosmos DB), migration is rework.
- High lock-in: Firebase — Firestore, Auth, and Functions become tightly tied to your data model. Migrating away is a real project.
Don't fear lock-in to the point of refusing useful tools. Do think twice before letting a vendor's proprietary database define your whole data model.
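One practical middle ground: keep the vendor datastore behind a thin interface, so a later migration means swapping one adapter rather than touching every call site. A minimal sketch — all names are illustrative, and `InMemoryUserStore` stands in for a Firestore or Postgres adapter you'd write against the same interface:

```python
from __future__ import annotations
from abc import ABC, abstractmethod

class UserStore(ABC):
    """The only surface the rest of the app is allowed to see."""
    @abstractmethod
    def get(self, user_id: str) -> dict | None: ...
    @abstractmethod
    def put(self, user_id: str, data: dict) -> None: ...

class InMemoryUserStore(UserStore):
    """Test double; a FirestoreUserStore or PostgresUserStore would
    implement the same two methods against the real backend."""
    def __init__(self) -> None:
        self._rows: dict[str, dict] = {}

    def get(self, user_id: str) -> dict | None:
        return self._rows.get(user_id)

    def put(self, user_id: str, data: dict) -> None:
        self._rows[user_id] = data
```

The discipline costs little up front; the payoff is that "high lock-in" degrades to "medium lock-in" because your data access is already abstracted.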
By use case
"I'm a solo iOS dev launching an AI app with a Claude proxy"
Pick: Railway. Predictable, simple, ships fast. Move on with your life.
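The core of that proxy is small: the iOS app never holds the Anthropic key; the backend injects it server-side. A minimal sketch of the request builder — the endpoint and header names follow the public Anthropic Messages API, while the model string and env-var name are assumptions you'd replace:

```python
import json
import os

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"
MODEL = "claude-sonnet-4-5"  # assumed: whatever model your app targets

def build_upstream_request(client_messages: list) -> tuple:
    """Return (url, headers, body) for the server-to-Anthropic call.

    client_messages is the (validated) message list forwarded from the app;
    the API key is read from the server environment and never leaves it.
    """
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = json.dumps({
        "model": MODEL,
        "max_tokens": 1024,
        "messages": client_messages,
    }).encode()
    return ANTHROPIC_URL, headers, body
```

Wrap this in whatever web framework you like and deploy the container to Railway; the app talks to your URL, your server talks to Anthropic.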
"I want to ship a real-time chat / social app with minimum backend code"
Pick: Firebase. Firestore + Auth + Cloud Functions covers 90% of the work.
"I'm building for a Fortune 500 customer who mandates a cloud"
Pick: whichever cloud the customer specifies. Usually Azure or AWS.
"I need global low-latency for a content / API product"
Pick: Cloudflare Workers + R2 + D1. Edge-first architecture wins on global latency.
"My workload includes serious analytics over many GB / TB of data"
Pick: GCP (BigQuery is the differentiator) or self-host ClickHouse.
"I need GPU instances for training / heavy inference"
Pick: AWS, GCP, or specialized GPU providers (Lambda Labs, RunPod, CoreWeave).
"I'm a long-time Microsoft / .NET shop"
Pick: Azure. Path of least resistance.
By app maturity stage
- Stage 1 (0-100 users): Railway, Firebase, or Vercel. Move fast, validate the idea, don't optimize prematurely.
- Stage 2 (100-10k users): Same platform usually still works. Add monitoring, set budgets, optimize Firestore queries / cache patterns / backend cold starts.
- Stage 3 (10k-1M users): Stress-test the platform. Most apps stay on Railway / Firebase here. Some migrate to AWS/GCP for cost reasons or specific feature needs.
- Stage 4 (1M+ users / regulated industry): Likely on a hyperscaler. Multi-region, dedicated database tier, formal SRE practice.
The mistake is jumping stages prematurely. Stage-4 architecture in a stage-1 app is wasted time and money.
See the individual deep-dives for the platforms you're seriously considering: Railway, AWS, Azure, GCP, Firebase.
- Railway, AWS, Azure, GCP, Firebase, Cloudflare — official pricing pages