# MCP Gateway

Model Context Protocol gateway & proxy - unify REST, MCP, and A2A with federation, virtual servers, retries, security, and an optional admin UI.

ContextForge MCP Gateway is a feature-rich gateway, proxy, and MCP Registry that federates MCP and REST services - unifying discovery, auth, rate-limiting, observability, virtual servers, multi-transport protocols, and an optional Admin UI into one clean endpoint for your AI clients. It runs as a fully compliant MCP server, deployable via PyPI or Docker, and scales to multi-cluster environments on Kubernetes with Redis-backed federation and caching.

## πŸš€ Overview & Goals

ContextForge MCP Gateway is a gateway, registry, and proxy that sits in front of any Model Context Protocol (MCP) server or REST API - exposing a unified endpoint for all your AI clients.

> ⚠️ **Caution**: The current release (0.6.0) is considered alpha / early beta. It is not production-ready and should only be used for local development, testing, or experimentation. Features, APIs, and behaviors are subject to change without notice. Do not deploy in production environments without a thorough security review, validation, and additional security mechanisms. Many of the features required for secure, large-scale, or multi-tenant production deployments are still on the project roadmap - which is itself evolving.

It currently supports:

- **Federation** across multiple MCP and REST services
- **A2A (Agent-to-Agent) integration** for external AI agents (OpenAI, Anthropic, custom)
- **Virtualization** of legacy APIs as MCP-compliant tools and servers
- **Transport** over HTTP, JSON-RPC, WebSocket, SSE (with configurable keepalive), stdio, and streamable HTTP
- An **Admin UI** for real-time management, configuration, and log monitoring
- Built-in **auth, retries, and rate-limiting**
- **OpenTelemetry observability** with Phoenix, Jaeger, Zipkin, and other OTLP backends
- **Scalable deployments** via Docker or PyPI, Redis-backed caching, and multi-cluster federation

For a list of upcoming features, check out the ContextForge MCP Gateway Roadmap.

> ⚠️ **Important**: MCP Gateway is not a standalone product - it is an open source component with NO OFFICIAL SUPPORT from IBM or its affiliates that can be integrated into your own solution architecture. If you choose to use it, you are responsible for evaluating its fit, securing the deployment, and managing its lifecycle. See SECURITY.md for more details.
### πŸ”Œ Gateway Layer with Protocol Flexibility

- Sits in front of any MCP server or REST API
- Lets you choose your MCP protocol version (e.g., 2025-03-26)
- Exposes a single, unified interface for diverse backends

### 🌐 Federation of Peer Gateways (MCP Registry)

- Auto-discovers or configures peer gateways (via mDNS or manual)
- Performs health checks and merges remote registries transparently
- Supports Redis-backed syncing and fail-over

### 🧩 Virtualization of REST/gRPC Services

- Wraps non-MCP services as virtual MCP servers
- Registers tools, prompts, and resources with minimal configuration

### πŸ” REST-to-MCP Tool Adapter

Adapts REST APIs into tools with:

- Automatic JSON Schema extraction
- Support for headers, tokens, and custom auth
- Retry, timeout, and rate-limit policies

### 🧠 Unified Registries

- **Prompts**: Jinja2 templates, multimodal support, rollback/versioning
- **Resources**: URI-based access, MIME detection, caching, SSE updates
- **Tools**: Native or adapted, with input validation and concurrency controls

### πŸ“ˆ Admin UI, Observability & Dev Experience

- Admin UI built with HTMX + Alpine.js
- Real-time log viewer with filtering, search, and export capabilities
- Auth: Basic, JWT, or custom schemes
- Structured logs, health endpoints, metrics
- 400+ tests, Makefile targets, live reload, pre-commit hooks

### πŸ” OpenTelemetry Observability

- **Vendor-agnostic tracing** with OpenTelemetry (OTLP) protocol support
- **Multiple backend support**: Phoenix (LLM-focused), Jaeger, Zipkin, Tempo, DataDog, New Relic
- **Distributed tracing** across federated gateways and services
- **Automatic instrumentation** of tools, prompts, resources, and gateway operations
- **LLM-specific metrics**: Token usage, costs, model performance
- **Zero overhead when disabled**, with graceful degradation
- Easy configuration via environment variables

Quick start with Phoenix (LLM observability):

```bash
# Start Phoenix
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest

# Configure gateway
export OTEL_ENABLE_OBSERVABILITY=true
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

# Run gateway - traces automatically sent to Phoenix
mcpgateway
```

See the Observability Documentation for detailed setup with other backends.

## Quick Start - PyPI

MCP Gateway is published on PyPI as mcp-contextforge-gateway.
**TL;DR** (single command using uv):

```bash
BASIC_AUTH_PASSWORD=pass \
MCPGATEWAY_UI_ENABLED=true \
MCPGATEWAY_ADMIN_API_ENABLED=true \
uvx --from mcp-contextforge-gateway mcpgateway --host 0.0.0.0 --port 4444
```

### πŸ“‹ Prerequisites

- Python β‰₯ 3.10 (3.11 recommended)
- curl + jq - only for the last smoke-test step

### 1 - Install & run (copy-paste friendly)

```bash
# 1️⃣ Isolated env + install from PyPI
mkdir mcpgateway && cd mcpgateway
python3 -m venv .venv && source .venv/bin/activate
pip install --upgrade pip
pip install mcp-contextforge-gateway

# 2️⃣ Launch on all interfaces with custom creds & secret key
# Enable the Admin API endpoints (true/false) - disabled by default
export MCPGATEWAY_UI_ENABLED=true
export MCPGATEWAY_ADMIN_API_ENABLED=true
BASIC_AUTH_PASSWORD=pass JWT_SECRET_KEY=my-test-key \
  mcpgateway --host 0.0.0.0 --port 4444 &   # admin/pass

# 3️⃣ Generate a bearer token & smoke-test the API
export MCPGATEWAY_BEARER_TOKEN=$(python3 -m mcpgateway.utils.create_jwt_token \
  --username admin --exp 10080 --secret my-test-key)
curl -s -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
  http://127.0.0.1:4444/version | jq
```

### Windows (PowerShell) quick-start

```powershell
# 1️⃣ Isolated env + install from PyPI
mkdir mcpgateway; cd mcpgateway
python3 -m venv .venv; .\.venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install mcp-contextforge-gateway

# 2️⃣ Environment variables (session-only)
$Env:MCPGATEWAY_UI_ENABLED = "true"
$Env:MCPGATEWAY_ADMIN_API_ENABLED = "true"
$Env:BASIC_AUTH_PASSWORD = "changeme"   # admin/changeme
$Env:JWT_SECRET_KEY = "my-test-key"

# 3️⃣ Launch the gateway
mcpgateway.exe --host 0.0.0.0 --port 4444
# Optional: background it
# Start-Process -FilePath "mcpgateway.exe" -ArgumentList "--host 0.0.0.0 --port 4444"

# 4️⃣ Bearer token and smoke-test
$Env:MCPGATEWAY_BEARER_TOKEN = python3 -m mcpgateway.utils.create_jwt_token `
  --username admin --exp 10080 --secret my-test-key
curl -s -H "Authorization: Bearer $Env:MCPGATEWAY_BEARER_TOKEN" `
  http://127.0.0.1:4444/version | jq
```

### More configuration

Copy .env.example to .env and tweak any of the settings (or use them as env variables).

### πŸš€ End-to-end demo (register a local MCP server)

```bash
# 1️⃣ Spin up the sample Go MCP time server using mcpgateway.translate & docker
python3 -m mcpgateway.translate \
  --stdio "docker run --rm -i -p 8888:8080 ghcr.io/ibm/fast-time-server:latest -transport=stdio" \
  --expose-sse \
  --port 8003

# Or using the official mcp-server-git using uvx:
pip install uv   # to install uvx, if not already installed
python3 -m mcpgateway.translate --stdio "uvx mcp-server-git" --expose-sse --port 9000

# Alternative: running the local binary
# cd mcp-servers/go/fast-time-server; make build
# python3 -m mcpgateway.translate --stdio "./dist/fast-time-server -transport=stdio" --expose-sse --port 8002

# NEW: Expose via multiple protocols simultaneously!
python3 -m mcpgateway.translate \
  --stdio "uvx mcp-server-git" \
  --expose-sse \
  --expose-streamable-http \
  --port 9000
# Now accessible via both /sse (SSE) and /mcp (streamable HTTP) endpoints

# 2️⃣ Register it with the gateway
curl -s -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"fast_time","url":"http://localhost:9000/sse"}' \
  http://localhost:4444/gateways

# 3️⃣ Verify tool catalog
curl -s -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/tools | jq
```

4️⃣ Create a *virtual server* bundling those tools.
Use the ID of tools from the tool catalog (Step #3) and pass them in the associatedTools list. curl -s -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"name":"time_server","description":"Fast time tools","associatedTools":[]} ' \ http://localhost:4444/servers | jq # Example curl curl -s -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " -H " Content-Type: application/json " -d ' {"name":"time_server","description":"Fast time tools","associatedTools":["6018ca46d32a4ac6b4c054c13a1726a2"]} ' \ http://localhost:4444/servers | jq # 5️⃣ List servers (should now include the UUID of the newly created virtual server) curl -s -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/servers | jq # 6️⃣ Client SSE endpoint. Inspect it interactively with the MCP Inspector CLI (or use any MCP client) npx -y @modelcontextprotocol/inspector # Transport Type: SSE, URL: http://localhost:4444/servers/UUID_OF_SERVER_1/sse, Header Name: "Authorization", Bearer Token πŸ–§ Using the stdio wrapper (mcpgateway-wrapper) export MCP_AUTH= $MCPGATEWAY_BEARER_TOKEN export MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcp python3 -m mcpgateway.wrapper # Ctrl-C to exit You can also run it with uv or inside Docker/Podman - see the Containers section above. In MCP Inspector, define MCP_AUTH and MCP_SERVER_URL env variables, and select python3 as the Command, and -m mcpgateway.wrapper as Arguments. echo $PWD /.venv/bin/python3 # Using the Python3 full path ensures you have a working venv export MCP_SERVER_URL= ' http://localhost:4444/servers/UUID_OF_SERVER_1/mcp ' export MCP_AUTH= ${MCPGATEWAY_BEARER_TOKEN} npx -y @modelcontextprotocol/inspector or Pass the url and auth as arguments (no need to set environment variables) npx -y @modelcontextprotocol/inspector command as ` python ` Arguments as ` -m mcpgateway.wrapper --url " http://localhost:4444/servers/UUID_OF_SERVER_1/mcp " --auth " Bearer " ` When using a MCP Client such as Claude with stdio: { "mcpServers" : { "mcpgateway-wrapper" : { "command" : " python " , "args" : [ " -m " , " mcpgateway.wrapper " ], "env" : { "MCP_AUTH" : " your-token-here " , "MCP_SERVER_URL" : " http://localhost:4444/servers/UUID_OF_SERVER_1 " , "MCP_TOOL_CALL_TIMEOUT" : " 120 " } } } } Quick Start - Containers Use the official OCI image from GHCR with Docker or Podman. 🐳 Docker 1 - Minimum viable run docker run -d --name mcpgateway \ -p 4444:4444 \ -e MCPGATEWAY_UI_ENABLED=true \ -e MCPGATEWAY_ADMIN_API_ENABLED=true \ -e HOST=0.0.0.0 \ -e JWT_SECRET_KEY=my-test-key \ -e BASIC_AUTH_USER=admin \ -e BASIC_AUTH_PASSWORD=changeme \ -e AUTH_REQUIRED=true \ -e DATABASE_URL=sqlite:///./mcp.db \ ghcr.io/ibm/mcp-context-forge:0.6.0 # Tail logs (Ctrl+C to quit) docker logs -f mcpgateway # Generating an API key docker run --rm -it ghcr.io/ibm/mcp-context-forge:0.6.0 \ python3 -m mcpgateway.utils.create_jwt_token --username admin --exp 0 --secret my-test-key Browse to http://localhost:4444/admin (user admin / pass changeme ). 
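The same container can be started with the variables collected into an env file instead of a long list of -e flags (the Docker/Podman tips below mention this approach as well); a minimal sketch using the values from the run command above:

```bash
# Collect the settings in a local .env file (values copied from the run above)
cat > .env <<'EOF'
MCPGATEWAY_UI_ENABLED=true
MCPGATEWAY_ADMIN_API_ENABLED=true
HOST=0.0.0.0
JWT_SECRET_KEY=my-test-key
BASIC_AUTH_USER=admin
BASIC_AUTH_PASSWORD=changeme
AUTH_REQUIRED=true
DATABASE_URL=sqlite:///./mcp.db
EOF

# Start the gateway with --env-file instead of repeating -e flags
docker run -d --name mcpgateway -p 4444:4444 --env-file .env \
  ghcr.io/ibm/mcp-context-forge:0.6.0
```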
2 - Persist the SQLite database mkdir -p $( pwd ) /data touch $( pwd ) /data/mcp.db sudo chown -R :docker $( pwd ) /data chmod 777 $( pwd ) /data docker run -d --name mcpgateway \ --restart unless-stopped \ -p 4444:4444 \ -v $( pwd ) /data:/data \ -e MCPGATEWAY_UI_ENABLED=true \ -e MCPGATEWAY_ADMIN_API_ENABLED=true \ -e DATABASE_URL=sqlite:////data/mcp.db \ -e HOST=0.0.0.0 \ -e JWT_SECRET_KEY=my-test-key \ -e BASIC_AUTH_USER=admin \ -e BASIC_AUTH_PASSWORD=changeme \ ghcr.io/ibm/mcp-context-forge:0.6.0 SQLite now lives on the host at ./data/mcp.db . 3 - Local tool discovery (host network) mkdir -p $( pwd ) /data touch $( pwd ) /data/mcp.db sudo chown -R :docker $( pwd ) /data chmod 777 $( pwd ) /data docker run -d --name mcpgateway \ --network=host \ -e MCPGATEWAY_UI_ENABLED=true \ -e MCPGATEWAY_ADMIN_API_ENABLED=true \ -e HOST=0.0.0.0 \ -e PORT=4444 \ -e DATABASE_URL=sqlite:////data/mcp.db \ -v $( pwd ) /data:/data \ ghcr.io/ibm/mcp-context-forge:0.6.0 Using --network=host allows Docker to access the local network, allowing you to add MCP servers running on your host. See Docker Host network driver documentation for more details. 🦭 Podman (rootless-friendly) 1 - Basic run podman run -d --name mcpgateway \ -p 4444:4444 \ -e HOST=0.0.0.0 \ -e DATABASE_URL=sqlite:///./mcp.db \ ghcr.io/ibm/mcp-context-forge:0.6.0 2 - Persist SQLite mkdir -p $( pwd ) /data touch $( pwd ) /data/mcp.db sudo chown -R :docker $( pwd ) /data chmod 777 $( pwd ) /data podman run -d --name mcpgateway \ --restart=on-failure \ -p 4444:4444 \ -v $( pwd ) /data:/data \ -e DATABASE_URL=sqlite:////data/mcp.db \ ghcr.io/ibm/mcp-context-forge:0.6.0 3 - Host networking (rootless) mkdir -p $( pwd ) /data touch $( pwd ) /data/mcp.db sudo chown -R :docker $( pwd ) /data chmod 777 $( pwd ) /data podman run -d --name mcpgateway \ --network=host \ -v $( pwd ) /data:/data \ -e DATABASE_URL=sqlite:////data/mcp.db \ ghcr.io/ibm/mcp-context-forge:0.6.0 ✏️ Docker/Podman tips .env files - Put all the -e FOO= lines into a file and replace them with --env-file .env . See the provided .env.example for reference. Pinned tags - Use an explicit version (e.g. v0.6.0 ) instead of latest for reproducible builds. JWT tokens - Generate one in the running container: docker exec mcpgateway python3 -m mcpgateway.utils.create_jwt_token -u admin -e 10080 --secret my-test-key Upgrades - Stop, remove, and rerun with the same -v $(pwd)/data:/data mount; your DB and config stay intact. πŸš‘ Smoke-test the running container curl -s -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/health | jq curl -s -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/tools | jq curl -s -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/version | jq πŸ–§ Running the MCP Gateway stdio wrapper The mcpgateway.wrapper lets you connect to the gateway over stdio while keeping JWT authentication. You should run this from the MCP Client. The example below is just for testing. 
# Set environment variables export MCPGATEWAY_BEARER_TOKEN= $( python3 -m mcpgateway.utils.create_jwt_token --username admin --exp 10080 --secret my-test-key ) export MCP_AUTH= ${MCPGATEWAY_BEARER_TOKEN} export MCP_SERVER_URL= ' http://localhost:4444/servers/UUID_OF_SERVER_1/mcp ' export MCP_TOOL_CALL_TIMEOUT=120 export MCP_WRAPPER_LOG_LEVEL=DEBUG # or OFF to disable logging docker run --rm -i \ -e MCP_AUTH= $MCPGATEWAY_BEARER_TOKEN \ -e MCP_SERVER_URL=http://host.docker.internal:4444/servers/UUID_OF_SERVER_1/mcp \ -e MCP_TOOL_CALL_TIMEOUT=120 \ -e MCP_WRAPPER_LOG_LEVEL=DEBUG \ ghcr.io/ibm/mcp-context-forge:0.6.0 \ python3 -m mcpgateway.wrapper Testing mcpgateway.wrapper by hand: Because the wrapper speaks JSON-RPC over stdin/stdout, you can interact with it using nothing more than a terminal or pipes. # Start the MCP Gateway Wrapper export MCP_AUTH= ${MCPGATEWAY_BEARER_TOKEN} export MCP_SERVER_URL=http://localhost:4444/servers/YOUR_SERVER_UUID python3 -m mcpgateway.wrapper Initialize the protocol # Initialize the protocol { "jsonrpc" : " 2.0 " , "id" : 1 , "method" : " initialize " , "params" :{ "protocolVersion" : " 2025-03-26 " , "capabilities" :{}, "clientInfo" :{ "name" : " demo " , "version" : " 0.0.1 " }}} # Then after the reply: { "jsonrpc" : " 2.0 " , "method" : " notifications/initialized " , "params" :{}} # Get prompts { "jsonrpc" : " 2.0 " , "id" : 4 , "method" : " prompts/list " } { "jsonrpc" : " 2.0 " , "id" : 5 , "method" : " prompts/get " , "params" :{ "name" : " greeting " , "arguments" :{ "user" : " Bob " }}} # Get resources { "jsonrpc" : " 2.0 " , "id" : 6 , "method" : " resources/list " } { "jsonrpc" : " 2.0 " , "id" : 7 , "method" : " resources/read " , "params" :{ "uri" : " https://example.com/some.txt " }} # Get / call tools { "jsonrpc" : " 2.0 " , "id" : 2 , "method" : " tools/list " } { "jsonrpc" : " 2.0 " , "id" : 3 , "method" : " tools/call " , "params" :{ "name" : " get_system_time " , "arguments" :{ "timezone" : " Europe/Dublin " }}} Expected responses from mcpgateway.wrapper { "jsonrpc" : " 2.0 " , "id" : 1 , "result" :{ "protocolVersion" : " 2025-03-26 " , "capabilities" :{ "experimental" :{}, "prompts" :{ "listChanged" : false }, "resources" :{ "subscribe" : false , "listChanged" : false }, "tools" :{ "listChanged" : false }}, "serverInfo" :{ "name" : " mcpgateway-wrapper " , "version" : " 0.6.0 " }}} # When there's no tools { "jsonrpc" : " 2.0 " , "id" : 2 , "result" :{ "tools" :[]}} # After you add some tools and create a virtual server { "jsonrpc" : " 2.0 " , "id" : 2 , "result" :{ "tools" :[{ "annotations" :{ "readOnlyHint" : false , "destructiveHint" : true , "idempotentHint" : false , "openWorldHint" : true }, "description" : " Convert time between different timezones " , "inputSchema" :{ "properties" :{ "source_timezone" :{ "description" : " Source IANA timezone name " , "type" : " string " }, "target_timezone" :{ "description" : " Target IANA timezone name " , "type" : " string " }, "time" :{ "description" : " Time to convert in RFC3339 format or common formats like '2006-01-02 15:04:05' " , "type" : " string " }}, "required" :[ " time " , " source_timezone " , " target_timezone " ], "type" : " object " }, "name" : " convert_time " },{ "annotations" :{ "readOnlyHint" : false , "destructiveHint" : true , "idempotentHint" : false , "openWorldHint" : true }, "description" : " Get current system time in specified timezone " , "inputSchema" :{ "properties" :{ "timezone" :{ "description" : " IANA timezone name (e.g., 'America/New_York', 'Europe/London'). 
Defaults to UTC " , "type" : " string " }}, "type" : " object " }, "name" : " get_system_time " }]}} # Running the time tool: { "jsonrpc" : " 2.0 " , "id" : 3 , "result" :{ "content" :[{ "type" : " text " , "text" : " 2025-07-09T00:09:45+01:00 " }]}} 🧩 Running from an MCP Client ( mcpgateway.wrapper ) The mcpgateway.wrapper exposes everything your Gateway knows about over stdio, so any MCP client that can't (or shouldn't) open an authenticated SSE stream still gets full tool-calling power. Remember to substitute your real Gateway URL (and server ID) for http://localhost:4444/servers/UUID_OF_SERVER_1/mcp . When inside Docker/Podman, that often becomes http://host.docker.internal:4444/servers/UUID_OF_SERVER_1/mcp (macOS/Windows) or the gateway container's hostname (Linux). 🐳 Docker / Podman docker run -i --rm \ --network=host \ -e MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcp \ -e MCP_AUTH= ${MCPGATEWAY_BEARER_TOKEN} \ -e MCP_TOOL_CALL_TIMEOUT=120 \ ghcr.io/ibm/mcp-context-forge:0.6.0 \ python3 -m mcpgateway.wrapper πŸ“¦ pipx (one-liner install & run) # Install gateway package in its own isolated venv pipx install --include-deps mcp-contextforge-gateway # Run the stdio wrapper MCP_AUTH= ${MCPGATEWAY_BEARER_TOKEN} \ MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcp \ python3 -m mcpgateway.wrapper # Alternatively with uv uv run --directory . -m mcpgateway.wrapper Claude Desktop JSON (uses the host Python that pipx injected): { "mcpServers" : { "mcpgateway-wrapper" : { "command" : " python3 " , "args" : [ " -m " , " mcpgateway.wrapper " ], "env" : { "MCP_AUTH" : " " , "MCP_SERVER_URL" : " http://localhost:4444/servers/UUID_OF_SERVER_1/mcp " , "MCP_TOOL_CALL_TIMEOUT" : " 120 " } } } } ⚑ uv / uvx (light-speed venvs) 1 - Install uv ( uvx is an alias it provides) # (a) official one-liner curl -Ls https://astral.sh/uv/install.sh | sh # (b) or via pipx pipx install uv 2 - Create an on-the-spot venv & run the wrapper # Create venv in ~/.venv/mcpgateway (or current dir if you prefer) uv venv ~ /.venv/mcpgateway source ~ /.venv/mcpgateway/bin/activate # Install the gateway package using uv uv pip install mcp-contextforge-gateway # Launch wrapper MCP_AUTH= ${MCPGATEWAY_BEARER_TOKEN} \ MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcp \ uv run --directory . -m mcpgateway.wrapper # Use this just for testing, as the Client will run the uv command Claude Desktop JSON (runs through uvx) { "mcpServers" : { "mcpgateway-wrapper" : { "command" : " uvx " , "args" : [ " run " , " -- " , " python " , " -m " , " mcpgateway.wrapper " ], "env" : { "MCP_AUTH" : " " , "MCP_SERVER_URL" : " http://localhost:4444/servers/UUID_OF_SERVER_1/mcp " } } } πŸš€ Using with Claude Desktop (or any GUI MCP client) Edit Config β†’ File β–Έ Settings β–Έ Developer β–Έ Edit Config Paste one of the JSON blocks above (Docker / pipx / uvx). Restart the app so the new stdio server is spawned. Open logs in the same menu to verify mcpgateway-wrapper started and listed your tools. Need help? See: MCP Debugging Guide - https://modelcontextprotocol.io/docs/tools/debugging πŸš€ Quick Start: VS Code Dev Container Spin up a fully-loaded dev environment (Python 3.11, Docker/Podman CLI, all project dependencies) in just two clicks. πŸ“‹ Prerequisites VS Code with the Dev Containers extension with the Dev Containers extension Docker or Podman installed and running locally 🧰 Setup Instructions 1 - Clone & Open git clone https://github.com/ibm/mcp-context-forge.git cd mcp-context-forge code . 
VS Code will detect the .devcontainer and prompt: "Reopen in Container", or manually run: Ctrl/Cmd ⇧ P β†’ Dev Containers: Reopen in Container

### 2 - First-Time Build (Automatic)

The container build will:

- Install system packages & Python 3.11
- Run make install-dev to pull all dependencies
- Execute tests to verify the toolchain

You'll land in /workspace ready to develop.

### πŸ› οΈ Daily Developer Workflow

Common tasks inside the container:

```bash
# Start dev server (hot reload)
make dev    # http://localhost:4444

# Run tests & linters
make test
make lint
```

Optional:

- make bash - drop into an interactive shell
- make clean - clear build artefacts & caches

Port forwarding is automatic (customize via .devcontainer/devcontainer.json).

### ☁️ GitHub Codespaces: 1-Click Cloud IDE

No local Docker? Use Codespaces:

1. Go to the repo β†’ Code β–Έ Codespaces β–Έ Create codespace on main
2. Wait for the container image to build in the cloud
3. Develop using the same workflow above

## Quick Start (manual install)

Prerequisites:

- Python β‰₯ 3.10
- GNU Make (optional, but all common workflows are available as Make targets)
- Optional: Docker / Podman for containerized runs

```bash
make venv install serve
```

What it does:

- Creates / activates a .venv in your home folder ~/.venv/mcpgateway
- Installs the gateway and necessary dependencies
- Launches Gunicorn (Uvicorn workers) on http://localhost:4444

For development, you can use:

```bash
make install-dev   # Install development dependencies, ex: linters and test harness
make lint          # optional: run style checks (ruff, mypy, etc.)
```

### Containerized (self-signed TLS)

Container Runtime Support: this project supports both Docker and Podman. The Makefile automatically detects which runtime is available and handles image naming differences.

```bash
make container-build      # Uses podman if available, otherwise docker

# You can use docker or podman explicitly, ex:
make podman               # build production image
make podman-run-ssl       # run at https://localhost:4444
# or listen on port 4444 on your host directly, adds --network=host to podman
make podman-run-ssl-host
```

Smoke-test the API:

```bash
curl -k -sX GET \
  -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
  https://localhost:4444/tools | jq
```

You should receive [] until you register a tool.

## Installation

Via Make:

```bash
make venv install   # create .venv + install deps
make serve          # gunicorn on :4444
```

UV (alternative):

```bash
uv venv && source .venv/bin/activate
uv pip install -e '.[dev]'   # IMPORTANT: in zsh, quote to disable glob expansion!
```

pip (alternative):

```bash
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
```

### Optional (PostgreSQL adapter)

You can configure the gateway with SQLite, PostgreSQL (or any other compatible database) in .env. When using PostgreSQL, you need to install the psycopg2 driver:

```bash
uv pip install psycopg2-binary   # dev convenience
# or
uv pip install psycopg2          # production build
```

Quick Postgres container:

```bash
docker run --name mcp-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_DB=mcp \
  -p 5432:5432 -d postgres
```

A make compose-up target is provided along with a docker-compose.yml file to make this process simpler.

## Configuration (.env or env vars)

⚠️ If any required .env variable is missing or invalid, the gateway will fail fast at startup with a validation error via Pydantic. You can get started by copying the provided .env.example to .env and making the necessary edits to fit your environment.
### πŸ”§ Environment Configuration Variables

#### Basic

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| APP_NAME | Gateway / OpenAPI title | MCP Gateway | string |
| HOST | Bind address for the app | 127.0.0.1 | IPv4/IPv6 |
| PORT | Port the server listens on | 4444 | 1-65535 |
| DATABASE_URL | SQLAlchemy connection URL | sqlite:///./mcp.db | any SQLAlchemy dialect |
| APP_ROOT_PATH | Subpath prefix for app (e.g. /gateway) | (empty) | string |
| TEMPLATES_DIR | Path to Jinja2 templates | mcpgateway/templates | path |
| STATIC_DIR | Path to static files | mcpgateway/static | path |

πŸ’‘ Use APP_ROOT_PATH=/foo if reverse-proxying under a subpath like https://host.com/foo/.

#### Authentication

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| BASIC_AUTH_USER | Username for Admin UI login and HTTP Basic authentication | admin | string |
| BASIC_AUTH_PASSWORD | Password for Admin UI login and HTTP Basic authentication | changeme | string |
| AUTH_REQUIRED | Require authentication for all API routes | true | bool |
| JWT_SECRET_KEY | Secret key used to sign JWT tokens for API access | my-test-key | string |
| JWT_ALGORITHM | Algorithm used to sign the JWTs (HS256 is default, HMAC-based) | HS256 | PyJWT algs |
| TOKEN_EXPIRY | Expiry of generated JWTs in minutes | 10080 | int > 0 |
| AUTH_ENCRYPTION_SECRET | Passphrase used to derive AES key for encrypting tool auth headers | my-test-salt | string |

πŸ” BASIC_AUTH_USER / PASSWORD are used for:

- Logging into the web-based Admin UI
- Accessing APIs via Basic Auth (e.g. curl -u admin:changeme)

πŸ”‘ JWT_SECRET_KEY is used to:

- Sign JSON Web Tokens (Authorization: Bearer)
- Generate tokens via:

```bash
export MCPGATEWAY_BEARER_TOKEN=$(python3 -m mcpgateway.utils.create_jwt_token --username admin --exp 0 --secret my-test-key)
echo $MCPGATEWAY_BEARER_TOKEN
```

Tokens allow non-interactive API clients to authenticate securely.

πŸ§ͺ Set AUTH_REQUIRED=false during development if you want to disable all authentication (e.g. for local testing or open APIs), or for clients that don't support SSE authentication. In production, you should use the SSE-to-stdio mcpgateway-wrapper for tools that don't support authenticated SSE, while still ensuring the gateway itself uses authentication.

πŸ” AUTH_ENCRYPTION_SECRET is used to encrypt and decrypt tool authentication credentials (auth_value). You must set the same value across environments to decode previously stored encrypted auth values. Recommended: use a long, random string.

#### UI Features

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| MCPGATEWAY_UI_ENABLED | Enable the interactive Admin dashboard | false | bool |
| MCPGATEWAY_ADMIN_API_ENABLED | Enable API endpoints for admin ops | false | bool |
| MCPGATEWAY_BULK_IMPORT_ENABLED | Enable bulk import endpoint for tools | true | bool |

πŸ–₯️ Set both UI and Admin API to false to disable the management UI and APIs in production.

πŸ“₯ The bulk import endpoint allows importing up to 200 tools in a single request via /admin/tools/import.
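The bundled create_jwt_token utility shown above is the supported way to mint tokens. If you need to generate compatible tokens from your own code, a rough sketch with PyJWT and the HS256 settings described above might look like the following; the claim names are assumptions for illustration, not taken from the gateway source:

```python
# Illustrative only - claim names ("sub") are an assumption; prefer
# `python3 -m mcpgateway.utils.create_jwt_token` for real tokens.
import datetime

import jwt  # PyJWT

SECRET = "my-test-key"   # must match JWT_SECRET_KEY
ALGORITHM = "HS256"      # must match JWT_ALGORITHM

payload = {
    "sub": "admin",  # assumed username/subject claim
    "exp": datetime.datetime.now(datetime.timezone.utc)
    + datetime.timedelta(minutes=10080),  # mirrors TOKEN_EXPIRY
}
print(jwt.encode(payload, SECRET, algorithm=ALGORITHM))
```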
A2A (Agent-to-Agent) Features Setting Description Default Options MCPGATEWAY_A2A_ENABLED Enable A2A agent features true bool MCPGATEWAY_A2A_MAX_AGENTS Maximum number of A2A agents allowed 100 int MCPGATEWAY_A2A_DEFAULT_TIMEOUT Default timeout for A2A HTTP requests (seconds) 30 int MCPGATEWAY_A2A_MAX_RETRIES Maximum retry attempts for A2A calls 3 int MCPGATEWAY_A2A_METRICS_ENABLED Enable A2A agent metrics collection true bool πŸ€– A2A Integration: Register external AI agents (OpenAI, Anthropic, custom) and expose them as MCP tools πŸ“Š Metrics: Track agent performance, success rates, and response times πŸ”’ Security: Encrypted credential storage and configurable authentication πŸŽ›οΈ Admin UI: Dedicated tab for agent management with test functionality A2A Configuration Effects: MCPGATEWAY_A2A_ENABLED=false : Completely disables A2A features (API endpoints return 404, admin tab hidden) : Completely disables A2A features (API endpoints return 404, admin tab hidden) MCPGATEWAY_A2A_METRICS_ENABLED=false : Disables metrics collection while keeping functionality Security Setting Description Default Options SKIP_SSL_VERIFY Skip upstream TLS verification false bool ENVIRONMENT Deployment environment (affects security defaults) development development / production APP_DOMAIN Domain for production CORS origins localhost string ALLOWED_ORIGINS CORS allow-list Auto-configured by environment JSON array CORS_ENABLED Enable CORS true bool CORS_ALLOW_CREDENTIALS Allow credentials in CORS true bool SECURE_COOKIES Force secure cookie flags true bool COOKIE_SAMESITE Cookie SameSite attribute lax strict / lax / none SECURITY_HEADERS_ENABLED Enable security headers middleware true bool X_FRAME_OPTIONS X-Frame-Options header value DENY DENY / SAMEORIGIN HSTS_ENABLED Enable HSTS header true bool HSTS_MAX_AGE HSTS max age in seconds 31536000 int REMOVE_SERVER_HEADERS Remove server identification true bool DOCS_ALLOW_BASIC_AUTH Allow Basic Auth for docs (in addition to JWT) false bool CORS Configuration: When ENVIRONMENT=development , CORS origins are automatically configured for common development ports (3000, 8080, gateway port). In production, origins are constructed from APP_DOMAIN (e.g., https://yourdomain.com , https://app.yourdomain.com ). You can override this by explicitly setting ALLOWED_ORIGINS . Security Headers: The gateway automatically adds configurable security headers to all responses including CSP, X-Frame-Options, X-Content-Type-Options, X-Download-Options, and HSTS (on HTTPS). All headers can be individually enabled/disabled. Sensitive server headers are removed. iframe Embedding: By default, X-Frame-Options: DENY prevents iframe embedding for security. To allow embedding, set X_FRAME_OPTIONS=SAMEORIGIN (same domain) or disable with X_FRAME_OPTIONS="" . Also update CSP frame-ancestors directive if needed. Cookie Security: Authentication cookies are automatically configured with HttpOnly, Secure (in production), and SameSite attributes for CSRF protection. Note: do not quote the ALLOWED_ORIGINS values, this needs to be valid JSON, such as: ALLOWED_ORIGINS=["http://localhost", "http://localhost:4444"] Documentation endpoints ( /docs , /redoc , /openapi.json ) are always protected by authentication. By default, they require Bearer token authentication. Setting DOCS_ALLOW_BASIC_AUTH=true enables HTTP Basic Authentication as an additional method using the same credentials as BASIC_AUTH_USER and BASIC_AUTH_PASSWORD . 
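For example, checking access to the protected docs endpoints with both methods (Basic is only accepted once DOCS_ALLOW_BASIC_AUTH=true is set):

```bash
# Bearer token (always accepted)
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/docs

# HTTP Basic (accepted only when DOCS_ALLOW_BASIC_AUTH=true)
curl -s -o /dev/null -w "%{http_code}\n" \
  -u admin:changeme http://localhost:4444/docs
```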
### Logging

MCP Gateway provides flexible logging with stdout/stderr output by default and optional file-based logging. When file logging is enabled, it provides JSON formatting for structured logs and text formatting for console output.

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| LOG_LEVEL | Minimum log level | INFO | DEBUG ... CRITICAL |
| LOG_FORMAT | Console log format | json | json, text |
| LOG_TO_FILE | Enable file logging | false | true, false |
| LOG_FILE | Log filename (when enabled) | null | mcpgateway.log |
| LOG_FOLDER | Directory for log files | null | logs, /var/log/gateway |
| LOG_FILEMODE | File write mode | a+ | a+ (append), w (overwrite) |
| LOG_ROTATION_ENABLED | Enable log file rotation | false | true, false |
| LOG_MAX_SIZE_MB | Max file size before rotation (MB) | 1 | Any positive integer |
| LOG_BACKUP_COUNT | Number of backup files to keep | 5 | Any non-negative integer |

Logging Behavior:

- Default: Logs only to stdout/stderr with human-readable text format
- File Logging: When LOG_TO_FILE=true, logs to both file (JSON format) and console (text format)
- Log Rotation: When LOG_ROTATION_ENABLED=true, files rotate at LOG_MAX_SIZE_MB with LOG_BACKUP_COUNT backup files (e.g., .log.1, .log.2)
- Directory Creation: Log folder is automatically created if it doesn't exist
- Centralized Service: All modules use the unified LoggingService for consistent formatting

Example Configurations:

```bash
# Default: stdout/stderr only (recommended for containers)
LOG_LEVEL=INFO
# No additional config needed - logs to stdout/stderr

# Optional: Enable file logging (no rotation)
LOG_TO_FILE=true
LOG_FOLDER=/var/log/mcpgateway
LOG_FILE=gateway.log
LOG_FILEMODE=a+

# Optional: Enable file logging with rotation
LOG_TO_FILE=true
LOG_ROTATION_ENABLED=true
LOG_MAX_SIZE_MB=10
LOG_BACKUP_COUNT=3
LOG_FOLDER=/var/log/mcpgateway
LOG_FILE=gateway.log
```

Default Behavior:

- Logs are written only to stdout/stderr in human-readable text format
- File logging is disabled by default (no files created)
- Set LOG_TO_FILE=true to enable optional file logging with JSON format

### Observability (OpenTelemetry)

MCP Gateway includes vendor-agnostic OpenTelemetry support for distributed tracing. Works with Phoenix, Jaeger, Zipkin, Tempo, DataDog, New Relic, and any OTLP-compatible backend.
Setting Description Default Options OTEL_ENABLE_OBSERVABILITY Master switch for observability true true , false OTEL_SERVICE_NAME Service identifier in traces mcp-gateway string OTEL_SERVICE_VERSION Service version in traces 0.6.0 string OTEL_DEPLOYMENT_ENVIRONMENT Environment tag (dev/staging/prod) development string OTEL_TRACES_EXPORTER Trace exporter backend otlp otlp , jaeger , zipkin , console , none OTEL_RESOURCE_ATTRIBUTES Custom resource attributes (empty) key=value,key2=value2 OTLP Configuration (for Phoenix, Tempo, DataDog, etc.): Setting Description Default Options OTEL_EXPORTER_OTLP_ENDPOINT OTLP collector endpoint (none) http://localhost:4317 OTEL_EXPORTER_OTLP_PROTOCOL OTLP protocol grpc grpc , http/protobuf OTEL_EXPORTER_OTLP_HEADERS Authentication headers (empty) api-key=secret,x-auth=token OTEL_EXPORTER_OTLP_INSECURE Skip TLS verification true true , false Alternative Backends (optional): Setting Description Default Options OTEL_EXPORTER_JAEGER_ENDPOINT Jaeger collector endpoint http://localhost:14268/api/traces URL OTEL_EXPORTER_ZIPKIN_ENDPOINT Zipkin collector endpoint http://localhost:9411/api/v2/spans URL Performance Tuning: Setting Description Default Options OTEL_TRACES_SAMPLER Sampling strategy parentbased_traceidratio always_on , always_off , traceidratio OTEL_TRACES_SAMPLER_ARG Sample rate (0.0-1.0) 0.1 float (0.1 = 10% sampling) OTEL_BSP_MAX_QUEUE_SIZE Max queued spans 2048 int > 0 OTEL_BSP_MAX_EXPORT_BATCH_SIZE Max batch size for export 512 int > 0 OTEL_BSP_SCHEDULE_DELAY Export interval (ms) 5000 int > 0 Quick Start with Phoenix: # Start Phoenix for LLM observability docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest # Configure gateway export OTEL_ENABLE_OBSERVABILITY=true export OTEL_TRACES_EXPORTER=otlp export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 # Run gateway - traces automatically sent to Phoenix mcpgateway πŸ” What Gets Traced: Tool invocations, prompt rendering, resource fetching, gateway federation, health checks, plugin execution (if enabled) πŸš€ Zero Overhead: When OTEL_ENABLE_OBSERVABILITY=false , all tracing is disabled with no performance impact πŸ“Š View Traces: Phoenix UI at http://localhost:6006 , Jaeger at http://localhost:16686 , or your configured backend Transport Setting Description Default Options TRANSPORT_TYPE Enabled transports all http , ws , sse , stdio , all WEBSOCKET_PING_INTERVAL WebSocket ping (secs) 30 int > 0 SSE_RETRY_TIMEOUT SSE retry timeout (ms) 5000 int > 0 SSE_KEEPALIVE_ENABLED Enable SSE keepalive events true bool SSE_KEEPALIVE_INTERVAL SSE keepalive interval (secs) 30 int > 0 USE_STATEFUL_SESSIONS streamable http config false bool JSON_RESPONSE_ENABLED json/sse streams (streamable http) true bool πŸ’‘ SSE Keepalive Events: The gateway sends periodic keepalive events to prevent connection timeouts with proxies and load balancers. Disable with SSE_KEEPALIVE_ENABLED=false if your client doesn't handle unknown event types. Common intervals: 30s (default), 60s (AWS ALB), 240s (Azure). 
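If you are writing your own SSE consumer and would rather skip keepalive events than disable them, a minimal sketch follows (using httpx; the /events endpoint and Bearer header mirror the examples later in this README, and event names other than "keepalive" are not assumed):

```python
# Minimal SSE reader that tolerates the gateway's periodic keepalive events.
import os

import httpx

URL = "http://localhost:4444/events"
HEADERS = {"Authorization": f"Bearer {os.environ['MCPGATEWAY_BEARER_TOKEN']}"}

with httpx.stream("GET", URL, headers=HEADERS, timeout=None) as response:
    event_type = None
    for line in response.iter_lines():
        if line.startswith("event:"):
            event_type = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            if event_type == "keepalive":
                continue  # ignore keepalives instead of logging warnings
            print(event_type, line.split(":", 1)[1].strip())
```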
### Federation

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| FEDERATION_ENABLED | Enable federation | true | bool |
| FEDERATION_DISCOVERY | Auto-discover peers | false | bool |
| FEDERATION_PEERS | Comma-sep peer URLs | [] | JSON array |
| FEDERATION_TIMEOUT | Gateway timeout (secs) | 30 | int > 0 |
| FEDERATION_SYNC_INTERVAL | Sync interval (secs) | 300 | int > 0 |

### Resources

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| RESOURCE_CACHE_SIZE | LRU cache size | 1000 | int > 0 |
| RESOURCE_CACHE_TTL | Cache TTL (seconds) | 3600 | int > 0 |
| MAX_RESOURCE_SIZE | Max resource bytes | 10485760 | int > 0 |
| ALLOWED_MIME_TYPES | Acceptable MIME types | see code | JSON array |

### Tools

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| TOOL_TIMEOUT | Tool invocation timeout (secs) | 60 | int > 0 |
| MAX_TOOL_RETRIES | Max retry attempts | 3 | int β‰₯ 0 |
| TOOL_RATE_LIMIT | Tool calls per minute | 100 | int > 0 |
| TOOL_CONCURRENT_LIMIT | Concurrent tool invocations | 10 | int > 0 |

### Prompts

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| PROMPT_CACHE_SIZE | Cached prompt templates | 100 | int > 0 |
| MAX_PROMPT_SIZE | Max prompt template size (bytes) | 102400 | int > 0 |
| PROMPT_RENDER_TIMEOUT | Jinja render timeout (secs) | 10 | int > 0 |

### Health Checks

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| HEALTH_CHECK_INTERVAL | Health poll interval (secs) | 60 | int > 0 |
| HEALTH_CHECK_TIMEOUT | Health request timeout (secs) | 10 | int > 0 |
| UNHEALTHY_THRESHOLD | Fail-count before peer deactivation | 3 | int > 0 |

Set UNHEALTHY_THRESHOLD to -1 if deactivation is not needed.

### Database

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| DB_POOL_SIZE | SQLAlchemy connection pool size | 200 | int > 0 |
| DB_MAX_OVERFLOW | Extra connections beyond pool | 10 | int β‰₯ 0 |
| DB_POOL_TIMEOUT | Wait for connection (secs) | 30 | int > 0 |
| DB_POOL_RECYCLE | Recycle connections (secs) | 3600 | int > 0 |
| DB_MAX_RETRIES | Max retry attempts | 3 | int > 0 |
| DB_RETRY_INTERVAL_MS | Retry interval (ms) | 2000 | int > 0 |

### Cache Backend

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| CACHE_TYPE | Backend (memory / redis) | memory | none, memory, redis |
| REDIS_URL | Redis connection URL | (none) | string or empty |
| CACHE_PREFIX | Key prefix | mcpgw: | string |
| REDIS_MAX_RETRIES | Max retry attempts | 3 | int > 0 |
| REDIS_RETRY_INTERVAL_MS | Retry interval (ms) | 2000 | int > 0 |

🧠 none disables caching entirely. Use memory for dev, database for persistence, or redis for distributed caching.

### Database Management

MCP Gateway uses Alembic for database migrations. Common commands:

- make db-current - Show current database version
- make db-upgrade - Apply pending migrations
- make db-migrate - Create new migration
- make db-history - Show migration history
- make db-status - Detailed migration status

### Troubleshooting

Common Issues:

- "No 'script_location' key found": Ensure you're running from the project root directory.
- "Unknown SSE event: keepalive" warnings: Some MCP clients don't recognize keepalive events. These warnings are harmless and don't affect functionality. To disable: SSE_KEEPALIVE_ENABLED=false
- Connection timeouts with proxies/load balancers: If experiencing timeouts, adjust the keepalive interval to match your infrastructure: SSE_KEEPALIVE_INTERVAL=60 (AWS ALB) or 240 (Azure).

### Development

| Setting | Description | Default | Options |
|---------|-------------|---------|---------|
| DEV_MODE | Enable dev mode | false | bool |
| RELOAD | Auto-reload on changes | false | bool |
| DEBUG | Debug logging | false | bool |

### Running

Makefile:

```bash
make serve       # Run production Gunicorn server on :4444
make serve-ssl   # Run Gunicorn behind HTTPS on :4444 (uses ./certs)
```

Script helper - to run the development (uvicorn) server:

```bash
make dev
# or
./run.sh --reload --log debug --workers 2
```

run.sh is a wrapper around uvicorn that loads .env, supports reload, and passes arguments to the server.
Key flags:

| Flag | Purpose | Example |
|------|---------|---------|
| -e, --env FILE | load env-file | --env prod.env |
| -H, --host | bind address | --host 127.0.0.1 |
| -p, --port | listen port | --port 8080 |
| -w, --workers | gunicorn workers | --workers 4 |
| -r, --reload | auto-reload | --reload |

Manual (Uvicorn):

```bash
uvicorn mcpgateway.main:app --host 0.0.0.0 --port 4444 --workers 4
```

Authentication examples:

```bash
# Generate a JWT token using JWT_SECRET_KEY and export it as MCPGATEWAY_BEARER_TOKEN
# Note that the module needs to be installed. If running locally use:
export MCPGATEWAY_BEARER_TOKEN=$(JWT_SECRET_KEY=my-test-key python3 -m mcpgateway.utils.create_jwt_token)

# Use the JWT token in an API call
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/tools
```

### ☁️ AWS / Azure / OpenShift

Deployment details can be found in the GitHub Pages.

### ☁️ IBM Cloud Code Engine Deployment

This project supports deployment to IBM Cloud Code Engine using the ibmcloud CLI and the IBM Container Registry.

πŸ”§ Prerequisites

- Podman or Docker installed locally
- IBM Cloud CLI (use make ibmcloud-cli-install to install)
- An IBM Cloud API key with access to Code Engine & Container Registry
- Code Engine and Container Registry services enabled in your IBM Cloud account

πŸ“¦ Environment Variables

Create a .env file (or export the variables in your shell). The first block is required; the second provides tunable defaults you can override:

```bash
# ── Required ─────────────────────────────────────────────
IBMCLOUD_REGION=us-south
IBMCLOUD_RESOURCE_GROUP=default
IBMCLOUD_PROJECT=my-codeengine-project
IBMCLOUD_CODE_ENGINE_APP=mcpgateway
IBMCLOUD_IMAGE_NAME=us.icr.io/myspace/mcpgateway:latest
IBMCLOUD_IMG_PROD=mcpgateway/mcpgateway
IBMCLOUD_API_KEY=your_api_key_here   # Optional - omit to use interactive `ibmcloud login --sso`

# ── Optional overrides (sensible defaults provided) ──────
IBMCLOUD_CPU=1                        # vCPUs for the app
IBMCLOUD_MEMORY=4G                    # Memory allocation
IBMCLOUD_REGISTRY_SECRET=my-regcred   # Name of the Container Registry secret
```

βœ… Quick check: make ibmcloud-check-env

πŸš€ Make Targets

| Target | Purpose |
|--------|---------|
| make ibmcloud-cli-install | Install IBM Cloud CLI and required plugins |
| make ibmcloud-login | Log in to IBM Cloud (API key or SSO) |
| make ibmcloud-ce-login | Select the Code Engine project & region |
| make ibmcloud-tag | Tag the local container image |
| make ibmcloud-push | Push the image to IBM Container Registry |
| make ibmcloud-deploy | Create or update the Code Engine application (uses CPU/memory/secret) |
| make ibmcloud-ce-status | Show current deployment status |
| make ibmcloud-ce-logs | Stream logs from the running app |
| make ibmcloud-ce-rm | Delete the Code Engine application |

πŸ“ Example Workflow

```bash
make ibmcloud-check-env
make ibmcloud-cli-install
make ibmcloud-login
make ibmcloud-ce-login
make ibmcloud-tag
make ibmcloud-push
make ibmcloud-deploy
make ibmcloud-ce-status
make ibmcloud-ce-logs
```

### API Endpoints

You can test the API endpoints through curl or Swagger UI, and check detailed documentation on ReDoc:

- Swagger UI β†’ http://localhost:4444/docs
- ReDoc β†’ http://localhost:4444/redoc

Generate an API Bearer token, and test the various API endpoints.
πŸ” Authentication & Health Checks # Generate a bearer token using the configured secret key (use the same as your .env) export MCPGATEWAY_BEARER_TOKEN= $( python3 -m mcpgateway.utils.create_jwt_token -u admin --secret my-test-key ) echo ${MCPGATEWAY_BEARER_TOKEN} # Quickly confirm that authentication works and the gateway is healthy curl -s -k -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " https://localhost:4444/health # {"status":"healthy"} # Quickly confirm the gateway version & DB connectivity curl -s -k -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " https://localhost:4444/version | jq 🧱 Protocol APIs (MCP) /protocol # Initialize MCP session curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "protocol_version":"2025-03-26", "capabilities":{}, "client_info":{"name":"MyClient","version":"1.0.0"} } ' \ http://localhost:4444/protocol/initialize # Ping (JSON-RPC style) curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"jsonrpc":"2.0","id":1,"method":"ping"} ' \ http://localhost:4444/protocol/ping # Completion for prompt/resource arguments (not implemented) curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "ref":{"type":"ref/prompt","name":"example_prompt"}, "argument":{"name":"topic","value":"py"} } ' \ http://localhost:4444/protocol/completion/complete # Sampling (streaming) (not implemented) curl -N -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "messages":[{"role":"user","content":{"type":"text","text":"Hello"}}], "maxTokens":16 } ' \ http://localhost:4444/protocol/sampling/createMessage 🧠 JSON-RPC Utility /rpc # Generic JSON-RPC calls (tools, gateways, roots, etc.) curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"jsonrpc":"2.0","id":1,"method":"list_tools"} ' \ http://localhost:4444/rpc Handles any method name: list_tools , list_gateways , prompts/get , or invokes a tool if method matches a registered tool name . πŸ”§ Tool Management /tools # Register a new tool curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "name":"clock_tool", "url":"http://localhost:9000/rpc", "description":"Returns current time", "input_schema":{ "type":"object", "properties":{"timezone":{"type":"string"}}, "required":[] } } ' \ http://localhost:4444/tools # List tools curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/tools # Get tool by ID curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/tools/1 # Update tool curl -X PUT -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "description":"Updated desc" } ' \ http://localhost:4444/tools/1 # Toggle active status curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/tools/1/toggle ? activate=false curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/tools/1/toggle ? 
activate=true # Delete tool curl -X DELETE -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/tools/1 πŸ€– A2A Agent Management /a2a # Register a new A2A agent curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "name":"hello_world_agent", "endpoint_url":"http://localhost:9999/", "agent_type":"jsonrpc", "description":"External AI agent for hello world functionality", "auth_type":"api_key", "auth_value":"your-api-key", "tags":["ai", "hello-world"] } ' \ http://localhost:4444/a2a # List A2A agents curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/a2a # Get agent by ID curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/a2a/agent-id # Update agent curl -X PUT -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "description":"Updated description" } ' \ http://localhost:4444/a2a/agent-id # Test agent (direct invocation) curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "parameters": { "method": "message/send", "params": { "message": { "messageId": "test-123", "role": "user", "parts": [{"type": "text", "text": "Hello!"}] } } }, "interaction_type": "test" } ' \ http://localhost:4444/a2a/agent-name/invoke # Toggle agent status curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/a2a/agent-id/toggle ? activate=false # Delete agent curl -X DELETE -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/a2a/agent-id # Associate agent with virtual server (agents become available as MCP tools) curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "name":"AI Assistant Server", "description":"Virtual server with AI agents", "associated_a2a_agents":["agent-id"] } ' \ http://localhost:4444/servers πŸ€– A2A Integration: A2A agents are external AI agents that can be registered and exposed as MCP tools πŸ”„ Protocol Detection: Gateway automatically detects JSONRPC vs custom A2A protocols πŸ“Š Testing: Built-in test functionality via Admin UI or /a2a/{agent_id}/test endpoint πŸŽ›οΈ Virtual Servers: Associate agents with servers to expose them as standard MCP tools 🌐 Gateway Management /gateways # Register an MCP server as a new gateway provider curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"name":"peer_gateway","url":"http://peer:4444"} ' \ http://localhost:4444/gateways # List gateways curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/gateways # Get gateway by ID curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/gateways/1 # Update gateway curl -X PUT -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"description":"New description"} ' \ http://localhost:4444/gateways/1 # Toggle active status curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/gateways/1/toggle ? 
activate=false # Delete gateway curl -X DELETE -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/gateways/1 πŸ“ Resource Management /resources # Register resource curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "uri":"config://app/settings", "name":"App Settings", "content":"key=value" } ' \ http://localhost:4444/resources # List resources curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/resources # Read a resource curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/resources/config://app/settings # Update resource curl -X PUT -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"content":"new=value"} ' \ http://localhost:4444/resources/config://app/settings # Delete resource curl -X DELETE -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/resources/config://app/settings # Subscribe to updates (SSE) curl -N -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/resources/subscribe/config://app/settings πŸ“ Prompt Management /prompts # Create prompt template curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' { "name":"greet", "template":"Hello, {{ user }}!", "argument_schema":{ "type":"object", "properties":{"user":{"type":"string"}}, "required":["user"] } } ' \ http://localhost:4444/prompts # List prompts curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/prompts # Get prompt (with args) curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"user":"Alice"} ' \ http://localhost:4444/prompts/greet # Get prompt (no args) curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/prompts/greet # Update prompt curl -X PUT -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"template":"Hi, {{ user }}!"} ' \ http://localhost:4444/prompts/greet # Toggle active curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/prompts/5/toggle ? 
activate=false # Delete prompt curl -X DELETE -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/prompts/greet 🌲 Root Management /roots # List roots curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/roots # Add root curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"uri":"/data","name":"Data Root"} ' \ http://localhost:4444/roots # Remove root curl -X DELETE -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/roots/%2Fdata # Subscribe to root changes (SSE) curl -N -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/roots/changes πŸ–₯️ Server Management /servers # List servers curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/servers # Get server curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/servers/UUID_OF_SERVER_1 # Create server curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"name":"db","description":"Database","associatedTools": ["1","2","3"]} ' \ http://localhost:4444/servers # Update server curl -X PUT -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ -H " Content-Type: application/json " \ -d ' {"description":"Updated"} ' \ http://localhost:4444/servers/UUID_OF_SERVER_1 # Toggle active curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " \ http://localhost:4444/servers/UUID_OF_SERVER_1/toggle ? activate=false πŸ“Š Metrics /metrics # Get aggregated metrics curl -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/metrics # Reset metrics (all or per-entity) curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/metrics/reset curl -X POST -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/metrics/reset ? entity=tool & id=1 πŸ“‘ Events & Health # SSE: all events curl -N -H " Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN " http://localhost:4444/events # WebSocket wscat -c ws://localhost:4444/ws \ -H " Authorization: Basic $( echo -n admin:changeme | base64 ) " # Health check curl http://localhost:4444/health Full Swagger UI at /docs . 
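To round out the curl examples above, the same JSON-RPC /rpc endpoint can also be driven from code. A sketch assuming the demo fast-time server's get_system_time tool is registered, and that tool arguments are passed directly as params (verify the exact payload shape against the JSON-RPC utility section for your deployment):

```python
# Sketch: call a registered tool through the gateway's JSON-RPC /rpc endpoint.
import os

import httpx

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "get_system_time",              # a registered tool name
    "params": {"timezone": "Europe/Dublin"},  # assumed to map to tool arguments
}

response = httpx.post(
    "http://localhost:4444/rpc",
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['MCPGATEWAY_BEARER_TOKEN']}"},
)
print(response.json())
```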
πŸ› οΈ Sample Tool uvicorn sample_tool.clock_tool:app --host 0.0.0.0 --port 9000 curl -X POST -H " Content-Type: application/json " \ -d ' {"jsonrpc":"2.0","id":1,"method":"get_time","params":{"timezone":"UTC"}} ' \ http://localhost:9000/rpc Testing make test # Run unit tests make lint # Run lint tools Doctest Coverage MCP Context Forge implements comprehensive doctest coverage to ensure all code examples in documentation are tested and verified: make doctest # Run all doctests make doctest-verbose # Run with detailed output make doctest-coverage # Generate coverage report make doctest-check # Check coverage percentage Coverage Status: βœ… Transport Modules : 100% (base, stdio, SSE, WebSocket, streamable HTTP) : 100% (base, stdio, SSE, WebSocket, streamable HTTP) βœ… Utility Functions : 100% (slug generation, JWT tokens, validation) : 100% (slug generation, JWT tokens, validation) βœ… Configuration : 100% (settings, environment variables) : 100% (settings, environment variables) πŸ”„ Service Classes : ~60% (in progress) : ~60% (in progress) πŸ”„ Complex Classes: ~40% (in progress) Benefits: All documented examples are automatically tested Documentation stays accurate and up-to-date Developers can run examples directly from docstrings Regression prevention through automated verification For detailed information, see the Doctest Coverage Guide. Project Structure πŸ“ Directory and file structure for mcpgateway # ────────── CI / Quality & Meta-files ────────── β”œβ”€β”€ .bumpversion.cfg # Automated semantic-version bumps β”œβ”€β”€ .coveragerc # Coverage.py settings β”œβ”€β”€ .darglint # Doc-string linter rules β”œβ”€β”€ .dockerignore # Context exclusions for image builds β”œβ”€β”€ .editorconfig # Consistent IDE / editor behaviour β”œβ”€β”€ .env # Local runtime variables (git-ignored) β”œβ”€β”€ .env.ce # IBM Code Engine runtime env (ignored) β”œβ”€β”€ .env.ce.example # Sample env for IBM Code Engine β”œβ”€β”€ .env.example # Generic sample env file β”œβ”€β”€ .env.gcr # Google Cloud Run runtime env (ignored) β”œβ”€β”€ .eslintrc.json # ESLint rules for JS / TS assets β”œβ”€β”€ .flake8 # Flake-8 configuration β”œβ”€β”€ .gitattributes # Git attributes (e.g. EOL normalisation) β”œβ”€β”€ .github # GitHub settings, CI/CD workflows & templates β”‚ β”œβ”€β”€ CODEOWNERS # Default reviewers β”‚ └── workflows/ # Bandit, Docker, CodeQL, Python Package, Container Deployment, etc. β”œβ”€β”€ .gitignore # Git exclusion rules β”œβ”€β”€ .hadolint.yaml # Hadolint rules for Dockerfiles β”œβ”€β”€ .htmlhintrc # HTMLHint rules β”œβ”€β”€ .markdownlint.json # Markdown-lint rules β”œβ”€β”€ .pre-commit-config.yaml # Pre-commit hooks (ruff, black, mypy, ...) 
β”œβ”€β”€ .pycodestyle # PEP-8 checker settings β”œβ”€β”€ .pylintrc # Pylint configuration β”œβ”€β”€ .pyspelling.yml # Spell-checker dictionary & filters β”œβ”€β”€ .ruff.toml # Ruff linter / formatter settings β”œβ”€β”€ .spellcheck-en.txt # Extra dictionary entries β”œβ”€β”€ .stylelintrc.json # Stylelint rules for CSS β”œβ”€β”€ .travis.yml # Legacy Travis CI config (reference) β”œβ”€β”€ .whitesource # WhiteSource security-scanning config β”œβ”€β”€ .yamllint # yamllint ruleset # ────────── Documentation & Guidance ────────── β”œβ”€β”€ CHANGELOG.md # Version-by-version change log β”œβ”€β”€ CODE_OF_CONDUCT.md # Community behaviour guidelines β”œβ”€β”€ CONTRIBUTING.md # How to file issues & send PRs β”œβ”€β”€ DEVELOPING.md # Contributor workflows & style guide β”œβ”€β”€ LICENSE # Apache License 2.0 β”œβ”€β”€ README.md # Project overview & quick-start β”œβ”€β”€ SECURITY.md # Security policy & CVE disclosure process β”œβ”€β”€ TESTING.md # Testing strategy, fixtures & guidelines # ────────── Containerisation & Runtime ────────── β”œβ”€β”€ Containerfile # OCI image build (Docker / Podman) β”œβ”€β”€ Containerfile.lite # FROM scratch UBI-Micro production build β”œβ”€β”€ docker-compose.yml # Local multi-service stack β”œβ”€β”€ podman-compose-sonarqube.yaml # One-liner SonarQube stack β”œβ”€β”€ run-gunicorn.sh # Opinionated Gunicorn startup script β”œβ”€β”€ run.sh # Uvicorn shortcut with arg parsing # ────────── Build / Packaging / Tooling ────────── β”œβ”€β”€ MANIFEST.in # sdist inclusion rules β”œβ”€β”€ Makefile # Dev & deployment targets β”œβ”€β”€ package-lock.json # Deterministic npm lock-file β”œβ”€β”€ package.json # Front-end / docs tooling deps β”œβ”€β”€ pyproject.toml # Poetry / PDM config & lint rules β”œβ”€β”€ sonar-code.properties # SonarQube analysis settings β”œβ”€β”€ uv.lock # UV resolver lock-file # ────────── Kubernetes & Helm Assets ────────── β”œβ”€β”€ charts # Helm chart(s) for K8s / OpenShift β”‚ β”œβ”€β”€ mcp-stack # Umbrella chart β”‚ β”‚ β”œβ”€β”€ Chart.yaml # Chart metadata β”‚ β”‚ β”œβ”€β”€ templates/... # Manifest templates β”‚ β”‚ └── values.yaml # Default values β”‚ └── README.md # Install / upgrade guide β”œβ”€β”€ k8s # Raw (non-Helm) K8s manifests β”‚ └── * .yaml # Deployment, Service, PVC resources # ────────── Documentation Source ────────── β”œβ”€β”€ docs # MkDocs site source β”‚ β”œβ”€β”€ base.yml # MkDocs "base" configuration snippet (do not modify) β”‚ β”œβ”€β”€ mkdocs.yml # Site configuration (requires base.yml) β”‚ β”œβ”€β”€ requirements.txt # Python dependencies for the MkDocs site β”‚ β”œβ”€β”€ Makefile # Make targets for building/serving the docs β”‚ └── theme # Custom MkDocs theme assets β”‚ └── logo.png # Logo for the documentation theme β”‚ └── docs # Markdown documentation β”‚ β”œβ”€β”€ architecture/ # ADRs for the project β”‚ β”œβ”€β”€ articles/ # Long-form writeups β”‚ β”œβ”€β”€ blog/ # Blog posts β”‚ β”œβ”€β”€ deployment/ # Deployment guides (AWS, Azure, etc.) β”‚ β”œβ”€β”€ development/ # Development workflows & CI docs β”‚ β”œβ”€β”€ images/ # Diagrams & screenshots β”‚ β”œβ”€β”€ index.md # Top-level docs landing page β”‚ β”œβ”€β”€ manage/ # Management topics (backup, logging, tuning, upgrade) β”‚ β”œβ”€β”€ overview/ # Feature overviews & UI documentation β”‚ β”œβ”€β”€ security/ # Security guidance & policies β”‚ β”œβ”€β”€ testing/ # Testing strategy & fixtures β”‚ └── using/ # User-facing usage guides (agents, clients, etc.) 
β”‚ β”œβ”€β”€ media/ # Social media, press coverage, videos & testimonials β”‚ β”‚ β”œβ”€β”€ press/ # Press articles and blog posts β”‚ β”‚ β”œβ”€β”€ social/ # Tweets, LinkedIn posts, YouTube embeds β”‚ β”‚ β”œβ”€β”€ testimonials/ # Customer quotes & community feedback β”‚ β”‚ └── kit/ # Media kit & logos for bloggers & press β”œβ”€β”€ dictionary.dic # Custom dictionary for spell-checker (make spellcheck) # ────────── Application & Libraries ────────── β”œβ”€β”€ agent_runtimes # Configurable agentic frameworks converted to MCP Servers β”œβ”€β”€ mcpgateway # ← main application package β”‚ β”œβ”€β”€ __init__.py # Package metadata & version constant β”‚ β”œβ”€β”€ admin.py # FastAPI routers for Admin UI β”‚ β”œβ”€β”€ cache β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ β”œβ”€β”€ resource_cache.py # LRU+TTL cache implementation β”‚ β”‚ └── session_registry.py # Session ↔ cache mapping β”‚ β”œβ”€β”€ config.py # Pydantic settings loader β”‚ β”œβ”€β”€ db.py # SQLAlchemy models & engine setup β”‚ β”œβ”€β”€ federation β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ β”œβ”€β”€ discovery.py # Peer-gateway discovery β”‚ β”‚ β”œβ”€β”€ forward.py # RPC forwarding β”‚ β”œβ”€β”€ handlers β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ └── sampling.py # Streaming sampling handler β”‚ β”œβ”€β”€ main.py # FastAPI app factory & startup events β”‚ β”œβ”€β”€ mcp.db # SQLite fixture for tests β”‚ β”œβ”€β”€ py.typed # PEP 561 marker (ships type hints) β”‚ β”œβ”€β”€ schemas.py # Shared Pydantic DTOs β”‚ β”œβ”€β”€ services β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ β”œβ”€β”€ completion_service.py # Prompt / argument completion β”‚ β”‚ β”œβ”€β”€ gateway_service.py # Peer-gateway registry β”‚ β”‚ β”œβ”€β”€ logging_service.py # Central logging helpers β”‚ β”‚ β”œβ”€β”€ prompt_service.py # Prompt CRUD & rendering β”‚ β”‚ β”œβ”€β”€ resource_service.py # Resource registration & retrieval β”‚ β”‚ β”œβ”€β”€ root_service.py # File-system root registry β”‚ β”‚ β”œβ”€β”€ server_service.py # Server registry & monitoring β”‚ β”‚ └── tool_service.py # Tool registry & invocation β”‚ β”œβ”€β”€ static β”‚ β”‚ β”œβ”€β”€ admin.css # Styles for Admin UI β”‚ β”‚ └── admin.js # Behaviour for Admin UI β”‚ β”œβ”€β”€ templates β”‚ β”‚ └── admin.html # HTMX/Alpine Admin UI template β”‚ β”œβ”€β”€ transports β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ β”œβ”€β”€ base.py # Abstract transport interface β”‚ β”‚ β”œβ”€β”€ sse_transport.py # Server-Sent Events transport β”‚ β”‚ β”œβ”€β”€ stdio_transport.py # stdio transport for embedding β”‚ β”‚ └── websocket_transport.py # WS transport with ping/pong β”‚ β”œβ”€β”€ models.py # Core enums / type aliases β”‚ β”œβ”€β”€ utils β”‚ β”‚ β”œβ”€β”€ create_jwt_token.py # CLI & library for JWT generation β”‚ β”‚ β”œβ”€β”€ services_auth.py # Service-to-service auth dependency β”‚ β”‚ └── verify_credentials.py # Basic / JWT auth helpers β”‚ β”œβ”€β”€ validation β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ └── jsonrpc.py # JSON-RPC 2.0 validation β”‚ └── version.py # Library version helper β”œβ”€β”€ mcpgateway-wrapper # Stdio client wrapper (PyPI) β”‚ β”œβ”€β”€ pyproject.toml β”‚ β”œβ”€β”€ README.md β”‚ └── src/mcpgateway_wrapper/ β”‚ β”œβ”€β”€ __init__.py β”‚ └── server.py # Wrapper entry-point β”œβ”€β”€ mcp-servers # Sample downstream MCP servers β”œβ”€β”€ mcp.db # Default SQLite DB (auto-created) β”œβ”€β”€ mcpgrid # Experimental grid client / PoC β”œβ”€β”€ os_deps.sh # Installs system-level deps for CI # ────────── Tests & QA Assets ────────── β”œβ”€β”€ test_readme.py # Guard: README stays in sync β”œβ”€β”€ tests β”‚ β”œβ”€β”€ conftest.py # Shared fixtures 
β”‚ β”œβ”€β”€ e2e/... # End-to-end scenarios β”‚ β”œβ”€β”€ hey/... # Load-test logs & helper script β”‚ β”œβ”€β”€ integration/... # API-level integration tests β”‚ └── unit/... # Pure unit tests for business logic API Documentation Swagger UI β†’ http://localhost:4444/docs ReDoc β†’ http://localhost:4444/redoc Admin Panel β†’ http://localhost:4444/admin Makefile targets This project offers the following Makefile targets. Type make in the project root to show all targets. πŸ”§ Available Makefile targets 🐍 MCP CONTEXT FORGE (An enterprise-ready Model Context Protocol Gateway) πŸ”§ SYSTEM-LEVEL DEPENDENCIES (DEV BUILD ONLY) os-deps - Install Graphviz, Pandoc, Trivy, SCC used for dev docs generation and security scan 🌱 VIRTUAL ENVIRONMENT & INSTALLATION venv - Create a fresh virtual environment with uv & friends activate - Activate the virtual environment in the current shell install - Install project into the venv install-dev - Install project (incl. dev deps) into the venv install-db - Install project (incl. postgres and redis) into venv update - Update all installed deps inside the venv check-env - Verify all required env vars in .env are present ▢️ SERVE & TESTING serve - Run production Gunicorn server on :4444 certs - Generate self-signed TLS cert & key in ./certs (won't overwrite) serve-ssl - Run Gunicorn behind HTTPS on :4444 (uses ./certs) dev - Run fast-reload dev server (uvicorn) run - Execute helper script ./run.sh test - Run unit tests with pytest test-curl - Smoke-test API endpoints with curl script pytest-examples - Run README / examples through pytest-examples clean - Remove caches, build artefacts, virtualenv, docs, certs, coverage, SBOM, etc. πŸ“Š COVERAGE & METRICS coverage - Run tests with coverage, emit md/HTML/XML + badge pip-licenses - Produce dependency license inventory (markdown) scc - Quick LoC/complexity snapshot with scc scc-report - Generate HTML LoC & per-file metrics with scc πŸ“š DOCUMENTATION & SBOM docs - Build docs (graphviz + handsdown + images + SBOM) images - Generate architecture & dependency diagrams πŸ” LINTING & STATIC ANALYSIS lint - Run the full linting suite (see targets below) black - Reformat code with black autoflake - Remove unused imports / variables with autoflake isort - Organise & sort imports with isort flake8 - PEP-8 style & logical errors pylint - Pylint static analysis markdownlint - Lint Markdown files with markdownlint (requires markdownlint-cli) mypy - Static type-checking with mypy bandit - Security scan with bandit pydocstyle - Docstring style checker pycodestyle - Simple PEP-8 checker pre-commit - Run all configured pre-commit hooks ruff - Ruff linter + formatter ty - Ty type checker from astral pyright - Static type-checking with Pyright radon - Code complexity & maintainability metrics pyroma - Validate packaging metadata importchecker - Detect orphaned imports spellcheck - Spell-check the codebase fawltydeps - Detect undeclared / unused deps wily - Maintainability report pyre - Static analysis with Facebook Pyre depend - List dependencies in β‰ˆrequirements format snakeviz - Profile & visualise with snakeviz pstats - Generate PNG call-graph from cProfile stats spellcheck-sort - Sort local spellcheck dictionary tox - Run tox across multi-Python versions sbom - Produce a CycloneDX SBOM and vulnerability scan pytype - Flow-sensitive type checker check-manifest - Verify sdist/wheel completeness yamllint - Lint YAML files (uses .yamllint) jsonlint - Validate every
*.json file with jq (--exit-status) tomllint - Validate *.toml files with tomlcheck πŸ•ΈοΈ WEBPAGE LINTERS & STATIC ANALYSIS (HTML/CSS/JS lint + security scans + formatting) install-web-linters - Install HTMLHint, Stylelint, ESLint, Retire.js & Prettier via npm lint-web - Run HTMLHint, Stylelint, ESLint, Retire.js and npm audit format-web - Format HTML, CSS & JS files with Prettier osv-install - Install/upgrade osv-scanner (Go) osv-scan-source - Scan source & lockfiles for CVEs osv-scan-image - Scan the built container image for CVEs osv-scan - Run all osv-scanner checks (source, image, licence) πŸ“‘ SONARQUBE ANALYSIS sonar-deps-podman - Install podman-compose + supporting tools sonar-deps-docker - Install docker-compose + supporting tools sonar-up-podman - Launch SonarQube with podman-compose sonar-up-docker - Launch SonarQube with docker-compose sonar-submit-docker - Run containerized Sonar Scanner CLI with Docker sonar-submit-podman - Run containerized Sonar Scanner CLI with Podman pysonar-scanner - Run scan with Python wrapper (pysonar-scanner) sonar-info - How to create a token & which env vars to export πŸ›‘οΈ SECURITY & PACKAGE SCANNING trivy - Scan container image for CVEs (HIGH/CRIT). Needs podman socket enabled grype-scan - Scan container for security audit and vulnerability scanning dockle - Lint the built container image via tarball (no daemon/socket needed) hadolint - Lint Containerfile/Dockerfile(s) with hadolint pip-audit - Audit Python dependencies for published CVEs πŸ“¦ DEPENDENCY MANAGEMENT deps-update - Run update-deps.py to update all dependencies in pyproject.toml and docs/requirements.txt containerfile-update - Update base image in Containerfile to latest tag πŸ“¦ PACKAGING & PUBLISHING dist - Clean-build wheel *and* sdist into ./dist wheel - Build wheel only sdist - Build source distribution only verify - Build + twine + check-manifest + pyroma (no upload) publish - Verify, then upload to PyPI (needs TWINE_* creds) 🦭 PODMAN CONTAINER BUILD & RUN podman-dev - Build development container image podman - Build container image podman-prod - Build production container image (using ubi-micro β†’ scratch). Not supported on macOS. podman-run - Run the container on HTTP (port 4444) podman-run-shell - Run the container on HTTP (port 4444) and start a shell podman-run-ssl - Run the container on HTTPS (port 4444, self-signed) podman-run-ssl-host - Run the container on HTTPS with --network=host (port 4444, self-signed) podman-stop - Stop & remove the container podman-test - Quick curl smoke-test against the container podman-logs - Follow container logs (βŒƒC to quit) podman-stats - Show container resource stats (if supported) podman-top - Show live top-level process info in container podman-shell - Open an interactive shell inside the Podman container πŸ‹ DOCKER BUILD & RUN docker-dev - Build development Docker image docker - Build production Docker image docker-prod - Build production container image (using ubi-micro β†’ scratch). Not supported on macOS. 
docker-run - Run the container on HTTP (port 4444) docker-run-ssl - Run the container on HTTPS (port 4444, self-signed) docker-stop - Stop & remove the container docker-test - Quick curl smoke-test against the container docker-logs - Follow container logs (βŒƒC to quit) docker-stats - Show container resource usage stats (non-streaming) docker-top - Show top-level process info in Docker container docker-shell - Open an interactive shell inside the Docker container πŸ› οΈ COMPOSE STACK - Build / start / stop the multi-service stack compose-up - Bring the whole stack up (detached) compose-restart - Recreate changed containers, pulling / building as needed compose-build - Build (or rebuild) images defined in the compose file compose-pull - Pull the latest images only compose-logs - Tail logs from all services (Ctrl-C to exit) compose-ps - Show container status table compose-shell - Open an interactive shell in the "gateway" container compose-stop - Gracefully stop the stack (keep containers) compose-down - Stop & remove containers (keep named volumes) compose-rm - Remove *stopped* containers compose-clean - ✨ Down **and** delete named volumes (data-loss ⚠) ☁️ IBM CLOUD CODE ENGINE ibmcloud-check-env - Verify all required IBM Cloud env vars are set ibmcloud-cli-install - Auto-install IBM Cloud CLI + required plugins (OS auto-detected) ibmcloud-login - Login to IBM Cloud CLI using IBMCLOUD_API_KEY (--sso) ibmcloud-ce-login - Set Code Engine target project and region ibmcloud-list-containers - List deployed Code Engine apps ibmcloud-tag - Tag container image for IBM Container Registry ibmcloud-push - Push image to IBM Container Registry ibmcloud-deploy - Deploy (or update) container image in Code Engine ibmcloud-ce-logs - Stream logs for the deployed application ibmcloud-ce-status - Get deployment status ibmcloud-ce-rm - Delete the Code Engine application πŸ§ͺ MINIKUBE LOCAL CLUSTER minikube-install - Install Minikube (macOS, Linux, or Windows via choco) helm-install - Install Helm CLI (macOS, Linux, or Windows) minikube-start - Start local Minikube cluster with Ingress + DNS + metrics-server minikube-stop - Stop the Minikube cluster minikube-delete - Delete the Minikube cluster minikube-image-load - Build and load ghcr.io/ibm/mcp-context-forge:latest into Minikube minikube-k8s-apply - Apply Kubernetes manifests from deployment/k8s/ minikube-status - Show status of Minikube and ingress pods πŸ› οΈ HELM CHART TASKS helm-lint - Lint the Helm chart (static analysis) helm-package - Package the chart into dist/ as mcp-stack-.tgz helm-deploy - Upgrade/Install chart into Minikube (profile mcpgw) helm-delete - Uninstall the chart release from Minikube 🏠 LOCAL PYPI SERVER local-pypi-install - Install pypiserver for local testing local-pypi-start - Start local PyPI server on :8084 (no auth) local-pypi-start-auth - Start local PyPI server with basic auth (admin/admin) local-pypi-stop - Stop local PyPI server local-pypi-upload - Upload existing package to local PyPI (no auth) local-pypi-upload-auth - Upload existing package to local PyPI (with auth) local-pypi-test - Install package from local PyPI local-pypi-clean - Full cycle: build β†’ upload β†’ install locally 🏠 LOCAL DEVPI SERVER devpi-install - Install devpi server and client devpi-init - Initialize devpi server (first time only) devpi-start - Start devpi server devpi-stop - Stop devpi server devpi-setup-user - Create user and dev index devpi-upload - Upload existing package to devpi devpi-test - Install package from devpi devpi-clean - Full cycle: 
build β†’ upload β†’ install locally devpi-status - Show devpi server status devpi-web - Open devpi web interface πŸ” Troubleshooting Port publishing on WSL2 (rootless Podman & Docker Desktop) Diagnose the listener # Inside your WSL distro ss -tlnp | grep 4444 # Use ss netstat -anp | grep 4444 # or netstat Seeing :::4444 LISTEN rootlessport is normal - the IPv6 wildcard socket ( :: ) also accepts IPv4 traffic when net.ipv6.bindv6only = 0 (default on Linux). Why localhost fails on Windows WSL 2's NAT layer rewrites only the IPv6 side of the dual-stack listener. From Windows, http://127.0.0.1:4444 (or Docker Desktop's "localhost") therefore times out. Fix (Podman rootless) # Inside the WSL distro echo " wsl " | sudo tee /etc/containers/podman-machine systemctl --user restart podman.socket ss should now show 0.0.0.0:4444 instead of :::4444, and the service becomes reachable from Windows and the LAN. Fix (Docker Desktop > 4.19) Docker Desktop adds a "WSL integration" switch per-distro. Turn it on for your distro, restart Docker Desktop, then restart the container: docker restart mcpgateway Gateway starts but immediately exits ("Failed to read DATABASE_URL") Copy .env.example to .env first: cp .env.example .env Then edit DATABASE_URL, JWT_SECRET_KEY, BASIC_AUTH_PASSWORD, etc. Missing or empty required vars cause a fast-fail at startup. Contributing Fork the repo, create a feature branch. Run make lint and fix any issues. Keep make test green and maintain 100% coverage. Open a PR - describe your changes clearly. See CONTRIBUTING.md for more details. Changelog A complete changelog can be found here: CHANGELOG.md License Licensed under the Apache License 2.0 - see LICENSE Core Authors and Maintainers Mihai Criveti - Distinguished Engineer, Agentic AI Special thanks to our contributors for helping us improve ContextForge MCP Gateway: