
🚀 Deploying Your MCP Server to Production: Free and Paid Platforms with GitHub Actions

From Docker image to live HTTPS endpoint — deploy free-to-start on Koyeb, or on Railway, Render, and Fly.io for a few dollars a month, all wired to a GitHub Actions CD pipeline

Published • 14 min read

Hi 👋, I'm Tushar Patil. I currently work as a frontend developer (Angular) and also have experience with .NET Core and .NET Framework.


This is Part 9 of the AI Engineering with TypeScript series.

Prerequisites: Part 5 — Production MCP Servers · Part 7 — Observability · Part 8 — Client SDK

Stack: Docker · GitHub Actions · Koyeb · Railway · Render · Fly.io


πŸ—ΊοΈ What we'll cover

In Parts 5–8 we built a hardened, observable, multi-tenant MCP server and packaged a client SDK to go with it. All of it has been running locally. In Part 9 we ship it.

The 2026 hosting landscape has shifted significantly — Heroku's free tier is gone, Fly.io no longer has a free tier for new accounts, and Koyeb recently removed its free compute tier for new signups. The honest picture is covered here, so you can pick the right platform without surprises. 💡

We will cover four platforms in two tiers:

🆓 Low-cost / free-to-start:

  • Koyeb — 1 free nano service, no credit card, global edge network
  • Railway — $5/month hobby tier with excellent developer experience and Docker Compose support

💳 Paid with generous features:

  • Render — $7/month, predictable flat pricing, polished dashboard
  • Fly.io — usage-based from ~$5/month, true global edge, best for latency-sensitive deployments

Every section includes the exact deployment steps and the GitHub Actions workflow to automate it. Pick your platform and skip to that section — the CD pipeline pattern is the same across all four. 🎯


📋 Prerequisites: What We Are Deploying

The server we built in Part 5 is a Docker image with:

  • POST /mcp and GET /mcp — Streamable HTTP MCP endpoints
  • GET /health — unauthenticated health check
  • GET /metrics — Prometheus scrape endpoint
  • Bearer token auth via VALID_TOKENS env var
  • Redis session store via REDIS_URL env var
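As a quick reference for the VALID_TOKENS contract, here is a minimal sketch of how a comma-separated token list might be parsed at startup (parseValidTokens is a hypothetical helper name; the real parsing lives in the Part 5 server):

```typescript
// Hypothetical sketch: parse a comma-separated VALID_TOKENS value into a Set,
// trimming whitespace and dropping empty entries. The actual implementation
// lives in the Part 5 server code.
function parseValidTokens(raw: string | undefined): Set<string> {
  if (!raw) return new Set();
  return new Set(
    raw
      .split(",")
      .map((token) => token.trim())
      .filter((token) => token.length > 0),
  );
}

// Usage: const tokens = parseValidTokens(process.env.VALID_TOKENS);
```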

The Dockerfile from Part 5 produces a minimal Alpine image. Make sure it is pushed to a container registry before deploying. We will use GitHub Container Registry (GHCR) — it is free for public repos and tightly integrated with GitHub Actions:

# Build and push to GHCR manually (just for initial setup)
docker build -t ghcr.io/YOUR_GITHUB_USERNAME/weather-mcp-server:latest .
docker push ghcr.io/YOUR_GITHUB_USERNAME/weather-mcp-server:latest

The GitHub Actions workflow below automates this push on every merge to main.


🔑 Part 1: The Shared CD Foundation

No matter which platform you deploy to, the first half of the GitHub Actions pipeline is identical: build the Docker image, tag it, push it to GHCR. Only the last step — the actual deploy — differs per platform.

# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}/weather-mcp-server

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
      image-digest: ${{ steps.build.outputs.digest }}

    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract image metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=sha-
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

This job runs on every push to main. The GHA cache means subsequent builds only rebuild changed layers — typical rebuild time drops from 3 minutes to under 30 seconds. ⚡


🌐 Option A: Koyeb (Free Tier — No Credit Card)

As of May 2026, Koyeb's free tier situation is nuanced — verify the current free tier offering at koyeb.com before you commit to it, as this changes frequently.

Koyeb runs your container across a global edge network (25+ regions) with automatic HTTPS, scale-to-zero, and built-in CI/CD. The free nano instance gets 512 MB RAM and 0.1 vCPU — enough for an MCP server handling moderate load.

Deploy steps:

  1. Sign up at koyeb.com — no credit card required for the free tier

  2. Create a new Service → choose Docker as the deployment method

  3. Enter your GHCR image: ghcr.io/YOUR_USERNAME/weather-mcp-server:latest

  4. Set the port to 3000

  5. Add environment variables under Settings → Environment:

     VALID_TOKENS=your-secret-token-here
     WEATHER_API_KEY=your-openweathermap-key
     REDIS_URL=redis://your-upstash-redis-url
    
  6. Set the health check path to /health

  7. Click Deploy

Koyeb assigns a public HTTPS URL like https://your-service-name.koyeb.app. That is your MCP server endpoint. 🎉

For Redis on free tier: Use Upstash — it offers a free Redis instance with 10,000 commands/day, plenty for development and low-traffic production.
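One gotcha with Upstash: its endpoints are TLS-only, so the connection string uses the rediss:// scheme rather than redis://. A small startup guard can catch a mis-pasted URL early (both helpers here are hypothetical names; most clients, including ioredis, accept rediss:// URLs directly):

```typescript
// Hypothetical startup guard: detect whether a Redis URL requests TLS,
// and fail fast if a TLS-only provider URL uses the plain scheme.
function redisUrlUsesTls(rawUrl: string): boolean {
  return new URL(rawUrl).protocol === "rediss:";
}

function checkRedisUrl(rawUrl: string): void {
  // Assumption: Upstash endpoints live under upstash.io and require TLS.
  if (rawUrl.includes("upstash.io") && !redisUrlUsesTls(rawUrl)) {
    throw new Error("Upstash requires TLS — use a rediss:// URL");
  }
}

// Usage at boot: checkRedisUrl(process.env.REDIS_URL ?? "");
```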

Automate deploys with GitHub Actions:

Add this job to your deploy.yml, after the build job completes:

deploy-koyeb:
  needs: build
  runs-on: ubuntu-latest

  steps:
    - name: Deploy to Koyeb
      uses: koyeb/action-git-deploy@v1
      with:
        api-token: ${{ secrets.KOYEB_API_TOKEN }}
        app-name: weather-mcp-server
        service-name: web
        docker-image: ghcr.io/${{ github.repository }}/weather-mcp-server:sha-${{ github.sha }}

Get your API token from Koyeb's dashboard under Account → API. Store it in GitHub as KOYEB_API_TOKEN under Settings → Secrets and variables → Actions.

Koyeb free tier caveats:

  • Single nano instance — no horizontal scaling on free
  • Scale-to-zero means a cold start (~1–2s) on the first request after idle
  • 100 GB egress/month included, which is generous for an API
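The cold-start caveat is easy to absorb on the client side with a small retry wrapper. A generic sketch (illustrative only; the Part 8 SDK ships its own retry logic):

```typescript
// Retry an async operation with linear backoff — useful when the first
// request after idle hits a cold-starting, scaled-to-zero instance.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Linear backoff between attempts
        await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
      }
    }
  }
  throw lastError;
}

// Usage: await withRetry(() => fetch("https://your-service-name.koyeb.app/health"));
```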

🚂 Option B: Railway ($5/month Hobby Tier)

Railway offers the best developer experience of the four platforms here. Docker Compose support means you can deploy both your MCP server and a Redis instance in one project, with private networking between them — zero external Redis service needed. 🎯

The Hobby plan costs $5/month, which includes $5 of resource credits. A minimal MCP server + Redis typically runs well within that allowance.

Deploy steps:

  1. Sign up at railway.app and create a new Project

  2. Click Deploy from Docker image → enter your GHCR image URL

  3. Add a Redis plugin to the same project (one click — Railway provisions it instantly)

  4. Railway automatically sets REDIS_URL in your service's environment from the plugin

  5. Add your remaining env vars under Variables:

     VALID_TOKENS=your-secret-token-here
     WEATHER_API_KEY=your-openweathermap-key
     PORT=3000
    
  6. Under Settings → Networking, generate a public domain

  7. Set the Health Check Path to /health

Railway detects the PORT env var and routes public traffic there automatically. Your MCP server is live at https://your-service.up.railway.app. ✅
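On the server side, honouring the injected PORT with a sane local fallback is worth being explicit about. A sketch (resolvePort is a hypothetical helper; the Part 5 server already binds a port, so adapt rather than duplicate):

```typescript
// Resolve the listen port from the environment, defaulting to 3000.
// Platforms like Railway inject PORT; locally it is usually unset.
function resolvePort(env: Record<string, string | undefined>): number {
  const parsed = Number(env.PORT);
  return Number.isInteger(parsed) && parsed > 0 && parsed < 65536
    ? parsed
    : 3000;
}

// Usage: app.listen(resolvePort(process.env));
```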

Deploy with Docker Compose (optional but powerful):

If you want the full stack — MCP server + Redis — defined in code and deployed atomically:

# railway.yml (place at repo root)
version: "2"
services:
  web:
    build: .
    variables:
      PORT: "3000"
      REDIS_URL: "${{Redis.REDIS_URL}}"
      VALID_TOKENS: "${{VALID_TOKENS}}"
      WEATHER_API_KEY: "${{WEATHER_API_KEY}}"
    healthcheckPath: /health

  redis:
    image: redis:7-alpine

Automate deploys with GitHub Actions:

deploy-railway:
  needs: build
  runs-on: ubuntu-latest

  steps:
    - uses: actions/checkout@v4

    - name: Install Railway CLI
      run: npm install -g @railway/cli

    - name: Deploy to Railway
      run: railway up --service web --detach
      env:
        RAILWAY_TOKEN: ${{ secrets.RAILWAY_TOKEN }}

Get your token from Railway's dashboard under Account → Tokens. Store as RAILWAY_TOKEN in GitHub Secrets.

Railway caveats:

  • No true free tier — $5/month minimum after trial
  • Usage-based pricing means a traffic spike can exceed the $5 credit (set a spend limit in settings)
  • Best Docker Compose support of any PaaS — genuinely useful for multi-service stacks

🔷 Option C: Render ($7/month — Predictable Flat Pricing)

Render charges a flat monthly rate per service with no per-request or per-bandwidth surprises. If you have been burned by unpredictable cloud bills, Render's pricing model is the most reassuring of the four.

The Starter web service is $7/month with 512 MB RAM, 0.5 CPU, and 100 GB egress — enough for a solid production MCP server.

Deploy steps:

  1. Sign up at render.com and create a new Web Service

  2. Choose Deploy an existing image and enter your GHCR image URL

  3. Set Environment to Docker and port to 3000

  4. Add environment variables:

     VALID_TOKENS=your-secret-token-here
     WEATHER_API_KEY=your-openweathermap-key
     REDIS_URL=redis://your-upstash-redis-url
    
  5. Under Health & Alerts, set the health check path to /health

  6. Choose the Starter ($7/month) plan and deploy

For Redis on Render, use Upstash again — Render's native Redis service starts at $10/month, while an Upstash free instance covers you for development and low traffic.

Automate deploys with GitHub Actions:

Render uses a Deploy Hook URL — a webhook you call to trigger a redeploy with the latest image. Get it from Settings → Deploy Hook in your service dashboard:

deploy-render:
  needs: build
  runs-on: ubuntu-latest

  steps:
    - name: Trigger Render deploy
      run: |
        curl --silent --fail \
          "${{ secrets.RENDER_DEPLOY_HOOK_URL }}"

Store the full hook URL as RENDER_DEPLOY_HOOK_URL in GitHub Secrets. Render will pull the latest tag from GHCR and redeploy — rolling update, zero downtime. ✅
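Deploy hooks can occasionally return a transient error, so a short retry loop in the workflow step is cheap insurance. A hedged sketch (the retry function is hypothetical, not part of Render's tooling):

```shell
# Hypothetical helper: retry a flaky command a few times before giving up.
# Transient 5xx responses from a webhook are absorbed without failing the job.
retry() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$attempts" ]; then
      sleep 1
    fi
  done
  return 1
}

# Usage in the workflow step:
# retry 3 curl --silent --fail "$RENDER_DEPLOY_HOOK_URL"
```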

Render caveats:

  • Does not support deploying a specific image digest via the free deploy hook — it always uses latest
  • No built-in Redis on affordable tiers (use Upstash)
  • Rolling deploys are automatic — no manual zero-downtime config needed
  • Best documentation of the four platforms — easy to debug when things go wrong

🪂 Option D: Fly.io (~$5/month — Global Edge, Best for Latency)

Fly.io runs your containers in Firecracker microVMs across 30+ cities worldwide. If you need your MCP server to respond fast for users in Mumbai, São Paulo, and Frankfurt simultaneously, Fly.io is the right choice. New accounts require a credit card but there is no minimum spend — you only pay for what you use, with a typical minimal MCP server running around $3–5/month. 💡

Install the CLI:

curl -L https://fly.io/install.sh | sh
flyctl auth login

Initialise your app:

flyctl launch --image ghcr.io/YOUR_USERNAME/weather-mcp-server:latest \
  --name weather-mcp-server \
  --region bom \
  --no-deploy

This creates a fly.toml at your repo root. Edit it:

# fly.toml
app = "weather-mcp-server"
primary_region = "bom"    # Mumbai — closest to Pune!

[build]
  image = "ghcr.io/YOUR_USERNAME/weather-mcp-server:latest"

[http_service]
  internal_port = 3000
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0

[[vm]]
  memory = "512mb"
  cpu_kind = "shared"
  cpus = 1

[checks]
  [checks.health]
    grace_period = "10s"
    interval = "30s"
    method = "GET"
    path = "/health"
    port = 3000
    timeout = "5s"
    type = "http"

Set secrets (never pass tokens as build args):

flyctl secrets set \
  VALID_TOKENS="your-secret-token-here" \
  WEATHER_API_KEY="your-openweathermap-key" \
  REDIS_URL="redis://your-upstash-redis-url"

Deploy:

flyctl deploy

Fly.io provisions an HTTPS endpoint like https://weather-mcp-server.fly.dev. 🌏

Add a second region for redundancy (optional):

flyctl regions add sin    # Singapore
flyctl scale count 2      # 1 machine per region

Automate deploys with GitHub Actions:

deploy-fly:
  needs: build
  runs-on: ubuntu-latest

  steps:
    - uses: actions/checkout@v4

    - uses: superfly/flyctl-actions/setup-flyctl@master

    - name: Deploy to Fly.io
      run: flyctl deploy --image ghcr.io/${{ github.repository }}/weather-mcp-server:sha-${{ github.sha }}
      env:
        FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}

Get your token with flyctl tokens create deploy -x 999999h and store it as FLY_API_TOKEN in GitHub Secrets.

Fly.io caveats:

  • Credit card required even for low-usage deployments (no free tier for new accounts as of 2026)
  • auto_stop_machines = true causes cold starts — set min_machines_running = 1 if you need instant response
  • Best multi-region story of the four — one fly.toml deploys globally

📊 Platform Comparison at a Glance

Platform   | Cost           | Free Tier   | Docker | Redis    | Multi-region  | Best for
-----------|----------------|-------------|--------|----------|---------------|-------------------
Koyeb      | Free / $5.50+  | Yes (1 svc) | Yes    | Upstash  | Yes (25+ loc) | Zero-cost start
Railway    | $5/mo hobby    | No          | Yes    | Built-in | Limited       | DX + Compose
Render     | $7/mo flat     | Static only | Yes    | Upstash  | No            | Predictable bills
Fly.io     | ~$5/mo usage   | No          | Yes    | Upstash  | Yes (30+)     | Global latency

⚠️ Free tier details change frequently. Always verify current limits at each platform's pricing page before committing to a deployment target.


πŸ” Part 2: Secrets Management β€” The Right Way

Never hardcode tokens in fly.toml, railway.yml, or any file that gets committed. Every platform has a secrets/env mechanism:

  • Koyeb → Service Settings → Environment Variables → mark as Secret
  • Railway → Project Variables → add and mark as Secret (hidden in logs)
  • Render → Environment → Secret Files or Environment Groups
  • Fly.io → flyctl secrets set KEY=value (never appears in fly.toml)

And in GitHub Actions, all sensitive values go into Settings → Secrets and variables → Actions → New repository secret. Reference them as ${{ secrets.MY_SECRET }} — GitHub redacts them from logs automatically. 🔒

For production systems with multiple services sharing the same secrets (e.g. all services need WEATHER_API_KEY), consider:

  • Railway → Shared Variables across services in a project
  • Fly.io → flyctl secrets import from a .env file (which you keep outside the repo)
  • Any platform → Doppler or Infisical for centralised secrets management that syncs to all platforms via their CI integrations
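Wherever the secrets live, the server should fail fast at boot if a required variable is missing, rather than returning 500s on the first request. A minimal sketch (assertRequiredEnv is a hypothetical helper name):

```typescript
// Fail fast at startup if required environment variables are missing.
// Better a crashed deploy (which the platform rolls back) than silent 500s.
function assertRequiredEnv(
  env: Record<string, string | undefined>,
  required: string[],
): void {
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Usage at the top of src/server.ts:
// assertRequiredEnv(process.env, ["VALID_TOKENS", "WEATHER_API_KEY", "REDIS_URL"]);
```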

🩺 Part 3: Health Checks and Rolling Deploys

Every platform above uses your /health endpoint to decide when a deploy succeeded. Make sure it actually verifies the things that matter — not just that Express is running, but that Redis is reachable:

// src/server.ts — enhanced health check
app.get("/health", async (_req, res) => {
  try {
    // Ping Redis — if this fails, the server cannot serve sessions
    await redis.ping();

    res.json({
      status: "ok",
      uptime: process.uptime(),
      timestamp: new Date().toISOString(),
    });
  } catch (err) {
    // Return 503 — the platform will NOT route traffic here
    // and will roll back the deploy if this keeps failing
    res.status(503).json({
      status: "degraded",
      error: "Redis unreachable",
    });
  }
});

A health check that returns 200 even when Redis is down will happily pass the deploy check while your users hit session errors. Always verify your actual dependencies. 🚨
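One refinement worth considering: redis.ping() can hang rather than reject when the network is in a bad state, which would make /health itself stall at the platform layer. Wrapping the ping in a timeout keeps the check's latency bounded (withTimeout is a hypothetical helper, not part of the Part 5 code):

```typescript
// Bound a promise with a timeout so a hanging dependency check fails fast
// instead of stalling the /health endpoint itself.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// In the health check: await withTimeout(redis.ping(), 2000);
```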


📋 Full CD Pipeline (All Four Platforms)

Here is the complete deploy.yml that builds once and deploys to whichever platforms you enable via GitHub Secrets. Remove the jobs for platforms you do not use:

name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}/weather-mcp-server

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    outputs:
      sha-tag: sha-${{ github.sha }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/metadata-action@v5
        id: meta
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=sha-
            type=raw,value=latest
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy-koyeb:
    needs: build
    runs-on: ubuntu-latest
    # The secrets context is not available in job-level `if`, so map the
    # token into env and guard the step instead.
    env:
      KOYEB_API_TOKEN: ${{ secrets.KOYEB_API_TOKEN }}
    steps:
      - name: Deploy to Koyeb
        if: env.KOYEB_API_TOKEN != ''
        uses: koyeb/action-git-deploy@v1
        with:
          api-token: ${{ secrets.KOYEB_API_TOKEN }}
          app-name: weather-mcp-server
          service-name: web
          docker-image: ghcr.io/${{ github.repository }}/weather-mcp-server:${{ needs.build.outputs.sha-tag }}

  deploy-railway:
    needs: build
    runs-on: ubuntu-latest
    env:
      RAILWAY_TOKEN: ${{ secrets.RAILWAY_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - if: env.RAILWAY_TOKEN != ''
        run: npm install -g @railway/cli
      - if: env.RAILWAY_TOKEN != ''
        run: railway up --service web --detach

  deploy-render:
    needs: build
    runs-on: ubuntu-latest
    env:
      RENDER_DEPLOY_HOOK_URL: ${{ secrets.RENDER_DEPLOY_HOOK_URL }}
    steps:
      - if: env.RENDER_DEPLOY_HOOK_URL != ''
        run: curl --silent --fail "$RENDER_DEPLOY_HOOK_URL"

  deploy-fly:
    needs: build
    runs-on: ubuntu-latest
    env:
      FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - if: env.FLY_API_TOKEN != ''
        run: flyctl deploy --image ghcr.io/${{ github.repository }}/weather-mcp-server:${{ needs.build.outputs.sha-tag }}

Each deploy job is guarded on the presence of its platform secret and skipped when the secret is unset. You can target multiple platforms simultaneously or just one — controlled entirely by which secrets you add to your repo. 🎛️


🎯 Which Platform Should You Pick?

Just getting started / side project → start with Koyeb free tier + Upstash Redis. Zero cost, no credit card, your server is live in 10 minutes.

Want the best developer experience and can spend $5/month → Railway. Docker Compose support, built-in Redis, real-time logs, and the fastest deploy experience of the four.

Running a real product and want predictable bills → Render at $7/month. Flat pricing means no surprise invoices, and the documentation is the best in class.

Need global edge latency and are comfortable with usage-based pricing → Fly.io. Deploy to bom (Mumbai) and sin (Singapore) in one command and your Indian and SE Asian users both get sub-50ms responses.


🎯 Series Summary

Over Parts 1–9 you went from "what is MCP?" to a fully deployed, production-grade AI infrastructure stack:

  • 📖 Part 1–2 — MCP concepts, tools, resources, prompts, capability negotiation
  • 🤖 Part 3–4 — AI agent loop, multi-step tool orchestration, streaming, interactive CLI
  • 🐳 Part 5 — Streamable HTTP transport, OAuth, Zod validation, Docker
  • 🏗️ Part 6 — Multi-tenant sessions, state isolation, Redis, horizontal scaling
  • 📊 Part 7 — pino logging, OpenTelemetry tracing, Prometheus metrics, Grafana
  • 📦 Part 8 — Typed client SDK, retry, timeout, plugins, npm publish
  • 🚀 Part 9 — Production deployment on Koyeb, Railway, Render, Fly.io with GitHub Actions CD

The full stack is now live. What you build on top of it is up to you. 🌏


📚 Further Reading