Deploying Your MCP Server to Production: Free and Paid Platforms with GitHub Actions
From Docker image to live HTTPS endpoint: deploy on Koyeb and Railway for free, or Render and Fly.io for a few dollars a month, all wired to a GitHub Actions CD pipeline
Hi, I'm Tushar Patil. I currently work as a Frontend Developer (Angular) and also have experience with .NET Core and .NET Framework.
This is Part 9 of the AI Engineering with TypeScript series.
Prerequisites: Part 5 – Production MCP Servers · Part 7 – Observability · Part 8 – Client SDK
Stack: Docker · GitHub Actions · Koyeb · Railway · Render · Fly.io
What we'll cover
In Parts 5–8 we built a hardened, observable, multi-tenant MCP server and packaged a client SDK to go with it. All of it has been running locally. In Part 9 we ship it.
The 2026 hosting landscape has shifted significantly: Heroku's free tier is gone, Fly.io no longer has a free tier for new accounts, and Koyeb recently removed its free compute tier for new signups. The honest picture is covered here, so you can pick the right platform without surprises.
We will cover four platforms in two tiers:
Low-cost / free-to-start:
- Koyeb – 1 free nano service, no credit card, global edge network
- Railway – $5/month hobby tier with excellent developer experience and Docker Compose support

Paid with generous features:
- Render – $7/month, predictable flat pricing, polished dashboard
- Fly.io – usage-based from ~$5/month, true global edge, best for latency-sensitive deployments
Every section includes the exact deployment steps and the GitHub Actions workflow to automate it. Pick your platform and skip to that section; the CD pipeline pattern is the same across all four.
Prerequisites: What We Are Deploying
The server we built in Part 5 is a Docker image with:
- POST /mcp and GET /mcp – Streamable HTTP MCP endpoints
- GET /health – unauthenticated health check
- GET /metrics – Prometheus scrape endpoint
- Bearer token auth via the VALID_TOKENS env var
- Redis session store via the REDIS_URL env var
The Dockerfile from Part 5 produces a minimal Alpine image. Make sure it is pushed to a container registry before deploying. We will use GitHub Container Registry (GHCR); it is free for public repos and tightly integrated with GitHub Actions:
# Build and push to GHCR manually (just for initial setup)
docker build -t ghcr.io/YOUR_GITHUB_USERNAME/weather-mcp-server:latest .
docker push ghcr.io/YOUR_GITHUB_USERNAME/weather-mcp-server:latest
The GitHub Actions workflow below automates this push on every merge to main.
Part 1: The Shared CD Foundation
No matter which platform you deploy to, the first half of the GitHub Actions pipeline is identical: build the Docker image, tag it, push it to GHCR. Only the last step, the actual deploy, differs per platform.
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}/weather-mcp-server

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
      image-digest: ${{ steps.build.outputs.digest }}
    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract image metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=sha-
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
This job runs on every push to main. The GHA cache means subsequent builds only rebuild changed layers; typical rebuild time drops from about 3 minutes to under 30 seconds.
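The cache only pays off if the Dockerfile is ordered so that dependency installation happens before the source copy. The Part 5 Dockerfile already follows this shape; as a reminder, it looks roughly like the sketch below (stage names and build commands here are illustrative, not the exact Part 5 file):

```dockerfile
# Sketch only – the real Dockerfile is in Part 5; names and commands are assumptions.
FROM node:20-alpine AS build
WORKDIR /app
# Copy manifests first so the npm install layer stays cached until deps change
COPY package.json package-lock.json ./
RUN npm ci
# Source changes only invalidate layers from here down
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

If the `COPY . .` came before `npm ci`, every source edit would invalidate the dependency layer and you would be back to 3-minute builds.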
Option A: Koyeb (Free Tier, No Credit Card)
As of May 2026, Koyeb's free tier situation is nuanced; verify the current free tier offering at koyeb.com before you commit, as this changes frequently.
Koyeb runs your container across a global edge network (25+ regions) with automatic HTTPS, scale-to-zero, and built-in CI/CD. The free nano instance gets 512 MB RAM and 0.1 vCPU, which is enough for an MCP server handling moderate load.
Deploy steps:
1. Sign up at koyeb.com (no credit card required for the free tier)
2. Create a new Service and choose Docker as the deployment method
3. Enter your GHCR image: ghcr.io/YOUR_USERNAME/weather-mcp-server:latest
4. Set the port to 3000
5. Add environment variables under Settings → Environment:
   VALID_TOKENS=your-secret-token-here
   WEATHER_API_KEY=your-openweathermap-key
   REDIS_URL=redis://your-upstash-redis-url
6. Set the health check path to /health
7. Click Deploy
Koyeb assigns a public HTTPS URL like https://your-service-name.koyeb.app. That is your MCP server endpoint.
For Redis on the free tier: use Upstash; it offers a free Redis instance with 10,000 commands/day, plenty for development and low-traffic production.
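Upstash serves Redis over TLS, so its connection strings use the rediss:// scheme rather than redis://. Whichever Node Redis client you use, the connection options can be derived from the single REDIS_URL env var. A sketch (the helper name is ours, not part of the Part 6 code):

```typescript
// Hypothetical helper: derive client connection options from REDIS_URL.
// Handles both redis:// (plain) and rediss:// (TLS, as Upstash uses).
function redisOptionsFromEnv(url: string | undefined) {
  const parsed = new URL(url ?? "redis://localhost:6379");
  return {
    host: parsed.hostname,
    port: Number(parsed.port || 6379),
    username: parsed.username || undefined,
    password: parsed.password || undefined,
    // Most Node Redis clients accept a `tls` option for encrypted connections
    tls: parsed.protocol === "rediss:" ? {} : undefined,
  };
}
```

Feed the result into your client of choice; if the scheme is rediss:, the tls field is set and the client negotiates TLS.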
Automate deploys with GitHub Actions:
Add this job to your deploy.yml, after the build job completes:
deploy-koyeb:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - name: Deploy to Koyeb
      uses: koyeb/action-git-deploy@v1
      with:
        api-token: ${{ secrets.KOYEB_API_TOKEN }}
        app-name: weather-mcp-server
        service-name: web
        docker-image: ghcr.io/${{ github.repository }}/weather-mcp-server:sha-${{ github.sha }}
Get your API token from Koyeb's dashboard under Account → API. Store it in GitHub as KOYEB_API_TOKEN under Settings → Secrets and variables → Actions.
Koyeb free tier caveats:
- Single nano instance; no horizontal scaling on free
- Scale-to-zero means a cold start (~1–2s) on the first request after idle
- 100 GB egress/month included, which is generous for an API
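Because of scale-to-zero, the first request after an idle period may hit a machine that is still booting. Clients can absorb this with a small retry-with-backoff wrapper; a sketch (the function name and backoff values are our own, not part of the Part 8 SDK):

```typescript
// Sketch of a client-side guard for scale-to-zero cold starts.
async function fetchWithColdStartRetry(
  url: string,
  init: RequestInit = {},
  attempts = 3,
): Promise<Response> {
  let lastError: unknown = new Error(`service at ${url} still cold`);
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      const res = await fetch(url, init);
      // Platforms typically answer 502/503 while the machine boots
      if (res.status !== 502 && res.status !== 503) return res;
      lastError = new Error(`got HTTP ${res.status} from ${url}`);
    } catch (err) {
      lastError = err; // network error: the instance may still be starting
    }
    if (attempt < attempts - 1) {
      // Exponential backoff: 500 ms, 1 s, 2 s, ...
      await new Promise((r) => setTimeout(r, 500 * 2 ** attempt));
    }
  }
  throw lastError;
}
```

With three attempts the wrapper waits up to ~1.5 s in total, comfortably covering the ~1–2 s cold start quoted above.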
Option B: Railway ($5/month Hobby Tier)
Railway has the best developer experience of the four platforms here. Docker Compose support means you can deploy both your MCP server and a Redis instance in one project, with private networking between them; no external Redis service needed.
The Hobby plan costs $5/month, which includes $5 of resource credits. A minimal MCP server + Redis typically runs well within that allowance.
Deploy steps:
1. Sign up at railway.app and create a new Project
2. Click Deploy from Docker image and enter your GHCR image URL
3. Add a Redis plugin to the same project (one click; Railway provisions it instantly)
4. Railway automatically sets REDIS_URL in your service's environment from the plugin
5. Add your remaining env vars under Variables:
   VALID_TOKENS=your-secret-token-here
   WEATHER_API_KEY=your-openweathermap-key
   PORT=3000
6. Under Settings → Networking, generate a public domain
7. Set the Health Check Path to /health
Railway detects the PORT env var and routes public traffic there automatically. Your MCP server is live at https://your-service.up.railway.app.
Deploy with Docker Compose (optional but powerful):
If you want the full stack (MCP server + Redis) defined in code and deployed atomically:
# railway.yml (place at repo root)
version: "2"
services:
  web:
    build: .
    variables:
      PORT: "3000"
      REDIS_URL: "${{Redis.REDIS_URL}}"
      VALID_TOKENS: "${{VALID_TOKENS}}"
      WEATHER_API_KEY: "${{WEATHER_API_KEY}}"
    healthcheckPath: /health
  redis:
    image: redis:7-alpine
Automate deploys with GitHub Actions:
deploy-railway:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4

    - name: Install Railway CLI
      run: npm install -g @railway/cli

    - name: Deploy to Railway
      run: railway up --service web --detach
      env:
        RAILWAY_TOKEN: ${{ secrets.RAILWAY_TOKEN }}
Get your token from Railway's dashboard under Account → Tokens. Store it as RAILWAY_TOKEN in GitHub Secrets.
Railway caveats:
- No true free tier; $5/month minimum after the trial
- Usage-based pricing means a traffic spike can exceed the $5 credit (set a spend limit in settings)
- Best Docker Compose support of any PaaS; genuinely useful for multi-service stacks
Option C: Render ($7/month, Predictable Flat Pricing)
Render charges a flat monthly rate per service with no per-request or per-bandwidth surprises. If you have been burned by unpredictable cloud bills, Render's pricing model is the most reassuring of the four.
The Starter web service is $7/month with 512 MB RAM, 0.5 CPU, and 100 GB egress, enough for a solid production MCP server.
Deploy steps:
1. Sign up at render.com and create a new Web Service
2. Choose Deploy an existing image and enter your GHCR image URL
3. Set Environment to Docker and the port to 3000
4. Add environment variables:
   VALID_TOKENS=your-secret-token-here
   WEATHER_API_KEY=your-openweathermap-key
   REDIS_URL=redis://your-upstash-redis-url
5. Under Health & Alerts, set the health check path to /health
6. Choose the Starter ($7/month) plan and deploy
For Redis on Render, use Upstash again; Render's native Redis service starts at $10/month, while an Upstash free instance covers you for development and low traffic.
Automate deploys with GitHub Actions:
Render uses a Deploy Hook URL, a webhook you call to trigger a redeploy with the latest image. Get it from Settings → Deploy Hook in your service dashboard:
deploy-render:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - name: Trigger Render deploy
      run: |
        curl --silent --fail \
          "${{ secrets.RENDER_DEPLOY_HOOK_URL }}"
Store the full hook URL as RENDER_DEPLOY_HOOK_URL in GitHub Secrets. Render will pull the latest tag from GHCR and redeploy: a rolling update with zero downtime.
Render caveats:
- Does not support deploying a specific image digest via the free deploy hook; it always uses latest
- No built-in Redis on affordable tiers (use Upstash)
- Rolling deploys are automatic; no manual zero-downtime config needed
- Best documentation of the four platforms; easy to debug when things go wrong
Option D: Fly.io (~$5/month, Global Edge, Best for Latency)
Fly.io runs your containers in Firecracker microVMs across 30+ cities worldwide. If you need your MCP server to respond fast for users in Mumbai, São Paulo, and Frankfurt simultaneously, Fly.io is the right choice. New accounts require a credit card but there is no minimum spend; you only pay for what you use, with a typical minimal MCP server running around $3–5/month.
Install the CLI:
curl -L https://fly.io/install.sh | sh
flyctl auth login
Initialise your app:
flyctl launch --image ghcr.io/YOUR_USERNAME/weather-mcp-server:latest \
--name weather-mcp-server \
--region bom \
--no-deploy
This creates a fly.toml at your repo root. Edit it:
# fly.toml
app = "weather-mcp-server"
primary_region = "bom"  # Mumbai – closest to Pune!

[build]
  image = "ghcr.io/YOUR_USERNAME/weather-mcp-server:latest"

[http_service]
  internal_port = 3000
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0

[[vm]]
  memory = "512mb"
  cpu_kind = "shared"
  cpus = 1

[checks]
  [checks.health]
    grace_period = "10s"
    interval = "30s"
    method = "GET"
    path = "/health"
    port = 3000
    timeout = "5s"
    type = "http"
Set secrets (never pass tokens as build args):
flyctl secrets set \
VALID_TOKENS="your-secret-token-here" \
WEATHER_API_KEY="your-openweathermap-key" \
REDIS_URL="redis://your-upstash-redis-url"
Deploy:
flyctl deploy
Fly.io provisions an HTTPS endpoint like https://weather-mcp-server.fly.dev.
Add a second region for redundancy (optional):
flyctl regions add sin # Singapore
flyctl scale count 2 # 1 machine per region
Automate deploys with GitHub Actions:
deploy-fly:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: superfly/flyctl-actions/setup-flyctl@master
    - name: Deploy to Fly.io
      run: flyctl deploy --image ghcr.io/${{ github.repository }}/weather-mcp-server:sha-${{ github.sha }}
      env:
        FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
Get your token with flyctl tokens create deploy -x 999999h and store it as FLY_API_TOKEN in GitHub Secrets.
Fly.io caveats:
- Credit card required even for low-usage deployments (no free tier for new accounts as of 2026)
- auto_stop_machines = true causes cold starts; set min_machines_running = 1 if you need instant response
- Best multi-region story of the four; one fly.toml deploys globally
Platform Comparison at a Glance
| Platform | Cost          | Free Tier   | Docker | Redis    | Multi-region | Best for          |
|----------|---------------|-------------|--------|----------|--------------|-------------------|
| Koyeb    | Free / $5.50+ | Yes (1 svc) | Yes    | Upstash  | Yes (25 loc) | Zero-cost start   |
| Railway  | $5/mo hobby   | No          | Yes    | Built-in | Limited      | DX + Compose      |
| Render   | $7/mo flat    | Static only | Yes    | Upstash  | No           | Predictable bills |
| Fly.io   | ~$5/mo usage  | No          | Yes    | Upstash  | Yes (30+)    | Global latency    |
Free tier details change frequently. Always verify current limits at each platform's pricing page before committing to a deployment target.
Part 2: Secrets Management, the Right Way
Never hardcode tokens in fly.toml, railway.yml, or any file that gets committed. Every platform has a secrets/env mechanism:
- Koyeb → Service Settings → Environment Variables → mark as Secret
- Railway → Project Variables → add and mark as Secret (hidden in logs)
- Render → Environment → Secret Files or Environment Groups
- Fly.io → flyctl secrets set KEY=value (never appears in fly.toml)
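However the platform injects them, the server should fail fast at boot if a required variable is missing, rather than crashing on the first request. A minimal sketch (the helper name is ours; the variable list matches this article):

```typescript
// Sketch: validate required env vars at startup and fail fast if any are missing.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  // All present: narrow to a plain string record for downstream config
  return Object.fromEntries(names.map((name) => [name, env[name] as string]));
}

// At server startup:
// const config = requireEnv(["VALID_TOKENS", "WEATHER_API_KEY", "REDIS_URL"]);
```

A deploy that crashes immediately with a clear "missing env var" message is far easier to diagnose in platform logs than one that boots and then 500s.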
And in GitHub Actions, all sensitive values go into Settings → Secrets and variables → Actions → New repository secret. Reference them as ${{ secrets.MY_SECRET }} and GitHub redacts them from logs automatically.
For production systems with multiple services sharing the same secrets (e.g. all services need WEATHER_API_KEY), consider:
- Railway → Shared Variables across services in a project
- Fly.io → flyctl secrets import from a .env file (which you keep outside the repo)
- Any platform → Doppler or Infisical for centralised secrets management that syncs to all platforms via their CI integrations
Part 3: Health Checks and Rolling Deploys
Every platform above uses your /health endpoint to decide when a deploy succeeded. Make sure it actually verifies the things that matter: not just that Express is running, but that Redis is reachable:
// src/server.ts – enhanced health check
app.get("/health", async (_req, res) => {
  try {
    // Ping Redis – if this fails, the server cannot serve sessions
    await redis.ping();
    res.json({
      status: "ok",
      uptime: process.uptime(),
      timestamp: new Date().toISOString(),
    });
  } catch (err) {
    // Return 503 – the platform will NOT route traffic here
    // and will roll back the deploy if this keeps failing
    res.status(503).json({
      status: "degraded",
      error: "Redis unreachable",
    });
  }
});
A health check that returns 200 even when Redis is down will happily pass the deploy check while your users hit session errors. Always verify your actual dependencies.
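One subtlety: a hung Redis connection can make the ping wait longer than the platform's own health-check timeout, which reads as a flapping service. Bounding the dependency check keeps the handler's worst case predictable; a sketch (`withTimeout` is our helper name, not part of the Part 5 code):

```typescript
// Sketch: bound an async dependency check so the health handler fails fast
// instead of hanging past the platform's health-check timeout.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  // Clear the timer whichever branch wins, so it cannot keep the process alive
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Inside the /health handler, instead of a bare ping:
//   await withTimeout(redis.ping(), 2_000);
```

With a 2-second bound the handler always answers well inside the 5-second `timeout` configured in the fly.toml health check above.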
Full CD Pipeline (All Four Platforms)
Here is the complete deploy.yml that builds once and deploys to whichever platforms you enable via GitHub Secrets. Remove the jobs for platforms you do not use:
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}/weather-mcp-server

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    outputs:
      sha-tag: sha-${{ github.sha }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/metadata-action@v5
        id: meta
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=sha-
            type=raw,value=latest
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # The secrets context is not available in job-level `if:` conditions, so each
  # deploy job exposes its secret as a job env var and guards at the step level.
  deploy-koyeb:
    needs: build
    runs-on: ubuntu-latest
    env:
      KOYEB_API_TOKEN: ${{ secrets.KOYEB_API_TOKEN }}
    steps:
      - uses: koyeb/action-git-deploy@v1
        if: ${{ env.KOYEB_API_TOKEN != '' }}
        with:
          api-token: ${{ secrets.KOYEB_API_TOKEN }}
          app-name: weather-mcp-server
          service-name: web
          docker-image: ghcr.io/${{ github.repository }}/weather-mcp-server:${{ needs.build.outputs.sha-tag }}

  deploy-railway:
    needs: build
    runs-on: ubuntu-latest
    env:
      RAILWAY_TOKEN: ${{ secrets.RAILWAY_TOKEN }}
    steps:
      - uses: actions/checkout@v4
        if: ${{ env.RAILWAY_TOKEN != '' }}
      - run: npm install -g @railway/cli
        if: ${{ env.RAILWAY_TOKEN != '' }}
      - run: railway up --service web --detach
        if: ${{ env.RAILWAY_TOKEN != '' }}

  deploy-render:
    needs: build
    runs-on: ubuntu-latest
    env:
      RENDER_DEPLOY_HOOK_URL: ${{ secrets.RENDER_DEPLOY_HOOK_URL }}
    steps:
      - run: curl --silent --fail "$RENDER_DEPLOY_HOOK_URL"
        if: ${{ env.RENDER_DEPLOY_HOOK_URL != '' }}

  deploy-fly:
    needs: build
    runs-on: ubuntu-latest
    env:
      FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
    steps:
      - uses: actions/checkout@v4
        if: ${{ env.FLY_API_TOKEN != '' }}
      - uses: superfly/flyctl-actions/setup-flyctl@master
        if: ${{ env.FLY_API_TOKEN != '' }}
      - run: flyctl deploy --image ghcr.io/${{ github.repository }}/weather-mcp-server:${{ needs.build.outputs.sha-tag }}
        if: ${{ env.FLY_API_TOKEN != '' }}
The guards that test whether a secret is set mean each deploy is skipped unless the corresponding secret is configured. You can target multiple platforms simultaneously or just one, controlled entirely by which secrets you add to your repo. One gotcha: GitHub Actions does not expose the secrets context inside job-level if: conditions (it silently evaluates empty there), so the guard must read the secret through an env var at the step level.
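One refinement worth considering after the deploy jobs: a smoke-test job that probes the live /health endpoint so a deploy that "succeeded" but serves a degraded service still fails the pipeline. This is a sketch, not part of the workflow above; DEPLOYED_BASE_URL is an assumed repository variable holding your platform URL (e.g. https://weather-mcp-server.fly.dev):

```yaml
  smoke-test:
    needs: [deploy-koyeb, deploy-railway, deploy-render, deploy-fly]
    # Run even when some deploy jobs were skipped (only their secrets were unset)
    if: ${{ !cancelled() }}
    runs-on: ubuntu-latest
    steps:
      - name: Probe /health on the live service
        run: |
          curl --silent --fail --retry 5 --retry-delay 5 \
            "${{ vars.DEPLOYED_BASE_URL }}/health"
```

The --retry flags give a scale-to-zero instance time to cold-start before the probe counts as a failure.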
Which Platform Should You Pick?
Just getting started / side project → start with the Koyeb free tier + Upstash Redis. Zero cost, no credit card, your server is live in 10 minutes.
Want the best developer experience and can spend $5/month → Railway. Docker Compose support, built-in Redis, real-time logs, and the fastest deploy experience of the four.
Running a real product and want predictable bills → Render at $7/month. Flat pricing means no surprise invoices, and the documentation is best in class.
Need global edge latency and are comfortable with usage-based pricing → Fly.io. Deploy to bom (Mumbai) and sin (Singapore) in one command and your Indian and SE Asian users both get sub-50ms responses.
Series Summary
Over Parts 1–9 you went from "what is MCP?" to a fully deployed, production-grade AI infrastructure stack:
- Parts 1–2 – MCP concepts, tools, resources, prompts, capability negotiation
- Parts 3–4 – AI agent loop, multi-step tool orchestration, streaming, interactive CLI
- Part 5 – Streamable HTTP transport, OAuth, Zod validation, Docker
- Part 6 – Multi-tenant sessions, state isolation, Redis, horizontal scaling
- Part 7 – pino logging, OpenTelemetry tracing, Prometheus metrics, Grafana
- Part 8 – Typed client SDK, retry, timeout, plugins, npm publish
- Part 9 – Production deployment on Koyeb, Railway, Render, Fly.io with GitHub Actions CD
The full stack is now live. What you build on top of it is up to you.