
Show HN: GoModel – an open-source AI gateway in Go; 44x lighter than LiteLLM

Why This Matters

GoModel is an open-source, high-performance AI gateway written in Go that exposes a unified, OpenAI-compatible API across multiple AI providers, making it easier for developers to integrate and switch between models. At a claimed 44x smaller footprint than LiteLLM, its lightweight design improves deployment efficiency and scalability. By reducing the complexity of multi-provider integration, it gives AI applications more flexibility in choosing and combining models.

Key Takeaways

GoModel - AI Gateway Written in Go

A high-performance AI gateway written in Go, providing a unified OpenAI-compatible API for OpenAI, Anthropic, Gemini, xAI, Groq, OpenRouter, Z.ai, Azure OpenAI, Oracle, Ollama, and more.
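The practical upshot of a unified API is that switching providers should come down to changing the "model" field in the request; the rest of the request body stays the same. A minimal sketch of that idea (the Anthropic model name is illustrative, not taken from GoModel's docs):

```shell
# Build an OpenAI-style chat payload for a given model; only the "model"
# field changes when targeting a different upstream provider.
payload() {
  printf '{"model": "%s", "messages": [{"role": "user", "content": "Hello!"}]}' "$1"
}

payload "gpt-5-chat-latest"        # routed to OpenAI (as in the quick start)
echo
payload "claude-sonnet-4-20250514" # routed to Anthropic (illustrative model name)
echo

# Either payload would be POSTed to the same gateway endpoint:
#   curl http://localhost:8080/v1/chat/completions \
#     -H "Content-Type: application/json" -d "$(payload "gpt-5-chat-latest")"
```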

Quick Start - Deploy the AI Gateway

Step 1: Start GoModel

docker run --rm -p 8080:8080 \
  -e LOGGING_ENABLED=true \
  -e LOGGING_LOG_BODIES=true \
  -e LOG_FORMAT=text \
  -e LOGGING_LOG_HEADERS=true \
  -e OPENAI_API_KEY="your-openai-key" \
  enterpilot/gomodel

Pass only the provider credentials or base URL you need (at least one required):

docker run --rm -p 8080:8080 \
  -e OPENAI_API_KEY="your-openai-key" \
  -e ANTHROPIC_API_KEY="your-anthropic-key" \
  -e GEMINI_API_KEY="your-gemini-key" \
  -e GROQ_API_KEY="your-groq-key" \
  -e OPENROUTER_API_KEY="your-openrouter-key" \
  -e ZAI_API_KEY="your-zai-key" \
  -e XAI_API_KEY="your-xai-key" \
  -e AZURE_API_KEY="your-azure-key" \
  -e AZURE_BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment" \
  -e AZURE_API_VERSION="2024-10-21" \
  -e ORACLE_API_KEY="your-oracle-key" \
  -e ORACLE_BASE_URL="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com/20231130/actions/v1" \
  -e ORACLE_MODELS="openai.gpt-oss-120b,xai.grok-3" \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434/v1" \
  enterpilot/gomodel

⚠️ Avoid passing secrets via -e on the command line - they can leak via shell history and process lists. For production, use docker run --env-file .env to load API keys from a file instead.
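A minimal sketch of that --env-file approach (the file name and keys shown are illustrative; list only the providers you actually use, with at least one required):

```shell
# Create a .env file: one KEY=value per line, no quotes, no "export".
cat > .env <<'EOF'
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
EOF

# Start the gateway without exposing secrets in shell history or `ps` output.
docker run --rm -p 8080:8080 --env-file .env enterpilot/gomodel
```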

Step 2: Make your first API call

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-chat-latest",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
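Assuming the response follows the standard OpenAI chat-completions schema (a reasonable assumption given the gateway's stated OpenAI compatibility), the assistant's reply sits at choices[0].message.content. A sketch of extracting it, using a canned response so it runs without a live gateway:

```shell
# Canned response shaped like an OpenAI-compatible /v1/chat/completions reply
# (the reply text here is made up for illustration).
response='{"choices":[{"message":{"role":"assistant","content":"Hello! How can I help?"}}]}'

# Pull out the assistant's message with python3 (avoids a jq dependency).
reply=$(printf '%s' "$response" | python3 -c \
  'import sys, json; print(json.load(sys.stdin)["choices"][0]["message"]["content"])')
echo "$reply"
```

In a live setup, the curl output above would be piped into the same python3 one-liner in place of the canned response.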
