Claude Code Router
中文版 (Chinese version)
A powerful tool to route Claude Code requests to different models and customize any request.
✨ Features
- **Model Routing**: Route requests to different models based on your needs (e.g., background tasks, thinking, long context).
- **Multi-Provider Support**: Supports various model providers like OpenRouter, DeepSeek, Ollama, Gemini, Volcengine, and SiliconFlow.
- **Request/Response Transformation**: Customize requests and responses for different providers using transformers.
- **Dynamic Model Switching**: Switch models on-the-fly within Claude Code using the `/model` command.
- **GitHub Actions Integration**: Trigger Claude Code tasks in your GitHub workflows.
- **Plugin System**: Extend functionality with custom transformers.
🚀 Getting Started
1. Installation
First, ensure you have Claude Code installed:
```bash
npm install -g @anthropic-ai/claude-code
```
Then, install Claude Code Router:
```bash
npm install -g @musistudio/claude-code-router
```
2. Configuration
Create and configure your `~/.claude-code-router/config.json` file. For more details, you can refer to `config.example.json`.
The config.json file has several key sections:
- `PROXY_URL` (optional): You can set a proxy for API requests, for example: `"PROXY_URL": "http://127.0.0.1:7890"`.
- `LOG` (optional): You can enable logging by setting it to `true`. The log file will be located at `$HOME/.claude-code-router.log`.
- `APIKEY` (optional): You can set a secret key to authenticate requests. When set, clients must provide this key in the `Authorization` header (e.g., `Bearer your-secret-key`) or the `x-api-key` header. Example: `"APIKEY": "your-secret-key"`.
- `HOST` (optional): You can set the host address for the server. If `APIKEY` is not set, the host will be forced to `127.0.0.1` for security reasons to prevent unauthorized access (see the fragment after this list). Example: `"HOST": "0.0.0.0"`.
- `Providers`: Used to configure different model providers.
- `Router`: Used to set up routing rules. `default` specifies the default model, which will be used for all requests if no other route is configured.
- `API_TIMEOUT_MS`: Specifies the timeout for API calls in milliseconds.
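For example, to expose the router beyond localhost you would pair `HOST` with `APIKEY`, since without `APIKEY` the host is forced back to `127.0.0.1` (this is just a fragment of `config.json`, values are placeholders):

```json
{
  "HOST": "0.0.0.0",
  "APIKEY": "your-secret-key"
}
```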
Here is a comprehensive example:
{ "APIKEY" : " your-secret-key " , "PROXY_URL" : " http://127.0.0.1:7890 " , "LOG" : true , "API_TIMEOUT_MS" : 600000 , "Providers" : [ { "name" : " openrouter " , "api_base_url" : " https://openrouter.ai/api/v1/chat/completions " , "api_key" : " sk-xxx " , "models" : [ " google/gemini-2.5-pro-preview " , " anthropic/claude-sonnet-4 " , " anthropic/claude-3.5-sonnet " , " anthropic/claude-3.7-sonnet:thinking " ], "transformer" : { "use" : [ " openrouter " ] } }, { "name" : " deepseek " , "api_base_url" : " https://api.deepseek.com/chat/completions " , "api_key" : " sk-xxx " , "models" : [ " deepseek-chat " , " deepseek-reasoner " ], "transformer" : { "use" : [ " deepseek " ], "deepseek-chat" : { "use" : [ " tooluse " ] } } }, { "name" : " ollama " , "api_base_url" : " http://localhost:11434/v1/chat/completions " , "api_key" : " ollama " , "models" : [ " qwen2.5-coder:latest " ] }, { "name" : " gemini " , "api_base_url" : " https://generativelanguage.googleapis.com/v1beta/models/ " , "api_key" : " sk-xxx " , "models" : [ " gemini-2.5-flash " , " gemini-2.5-pro " ], "transformer" : { "use" : [ " gemini " ] } }, { "name" : " volcengine " , "api_base_url" : " https://ark.cn-beijing.volces.com/api/v3/chat/completions " , "api_key" : " sk-xxx " , "models" : [ " deepseek-v3-250324 " , " deepseek-r1-250528 " ], "transformer" : { "use" : [ " deepseek " ] } }, { "name" : " modelscope " , "api_base_url" : " https://api-inference.modelscope.cn/v1/chat/completions " , "api_key" : " " , "models" : [ " Qwen/Qwen3-Coder-480B-A35B-Instruct " , " Qwen/Qwen3-235B-A22B-Thinking-2507 " ], "transformer" : { "use" : [ [ " maxtoken " , { "max_tokens" : 65536 } ], " enhancetool " ], "Qwen/Qwen3-235B-A22B-Thinking-2507" : { "use" : [ " reasoning " ] } } }, { "name" : " dashscope " , "api_base_url" : " https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions " , "api_key" : " " , "models" : [ " qwen3-coder-plus " ], "transformer" : { "use" : [ [ " maxtoken " , { "max_tokens" : 65536 } ], " enhancetool " ] } } ], "Router" : { "default" : " deepseek,deepseek-chat " , "background" : " ollama,qwen2.5-coder:latest " , "think" : " deepseek,deepseek-reasoner " , "longContext" : " openrouter,google/gemini-2.5-pro-preview " , "longContextThreshold" : 60000 , "webSearch" : " gemini,gemini-2.5-flash " } }
3. Running Claude Code with the Router
Start Claude Code using the router:
```bash
ccr code
```
Note: after modifying the configuration file, restart the service with `ccr restart` for the changes to take effect.
Providers
The `Providers` array is where you define the different model providers you want to use. Each provider object requires the following fields (a minimal entry is sketched after the list):

- `name`: A unique name for the provider.
- `api_base_url`: The full API endpoint for chat completions.
- `api_key`: Your API key for the provider.
- `models`: A list of model names available from this provider.
- `transformer` (optional): Specifies transformers to process requests and responses.
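For instance, a minimal provider entry with no transformer (taken from the Ollama provider in the comprehensive example above) looks like this:

```json
{
  "name": "ollama",
  "api_base_url": "http://localhost:11434/v1/chat/completions",
  "api_key": "ollama",
  "models": ["qwen2.5-coder:latest"]
}
```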
Transformers
Transformers allow you to modify the request and response payloads to ensure compatibility with different provider APIs.
**Global Transformer**: Apply a transformer to all models from a provider. In this example, the `openrouter` transformer is applied to all models under the `openrouter` provider.

```json
{
  "name": "openrouter",
  "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
  "api_key": "sk-xxx",
  "models": ["google/gemini-2.5-pro-preview", "anthropic/claude-sonnet-4", "anthropic/claude-3.5-sonnet"],
  "transformer": { "use": ["openrouter"] }
}
```

**Model-Specific Transformer**: Apply a transformer to a specific model. In this example, the `deepseek` transformer is applied to all models, and an additional `tooluse` transformer is applied only to the `deepseek-chat` model.

```json
{
  "name": "deepseek",
  "api_base_url": "https://api.deepseek.com/chat/completions",
  "api_key": "sk-xxx",
  "models": ["deepseek-chat", "deepseek-reasoner"],
  "transformer": {
    "use": ["deepseek"],
    "deepseek-chat": { "use": ["tooluse"] }
  }
}
```

**Passing Options to a Transformer**: Some transformers, like `maxtoken`, accept options. To pass options, use a nested array where the first element is the transformer name and the second is an options object.

```json
{
  "name": "siliconflow",
  "api_base_url": "https://api.siliconflow.cn/v1/chat/completions",
  "api_key": "sk-xxx",
  "models": ["moonshotai/Kimi-K2-Instruct"],
  "transformer": {
    "use": [
      ["maxtoken", { "max_tokens": 16384 }]
    ]
  }
}
```
Available Built-in Transformers:
- `deepseek`: Adapts requests/responses for the DeepSeek API.
- `gemini`: Adapts requests/responses for the Gemini API.
- `openrouter`: Adapts requests/responses for the OpenRouter API.
- `groq`: Adapts requests/responses for the Groq API.
- `maxtoken`: Sets a specific `max_tokens` value.
- `tooluse`: Optimizes tool usage for certain models via `tool_choice`.
- `gemini-cli` (experimental): Unofficial support for Gemini via the Gemini CLI (`gemini-cli.js`).
Custom Transformers:
You can also create your own transformers and load them via the `transformers` field in `config.json`:
{ "transformers" : [ { "path" : " $HOME/.claude-code-router/plugins/gemini-cli.js " , "options" : { "project" : " xxx " } } ] }
Router
The Router object defines which model to use for different scenarios:
- `default`: The default model for general tasks.
- `background`: A model for background tasks. This can be a smaller, local model to save costs.
- `think`: A model for reasoning-heavy tasks, like Plan Mode.
- `longContext`: A model for handling long contexts (e.g., > 60K tokens).
- `longContextThreshold` (optional): The token count threshold for triggering the long-context model. Defaults to 60000 if not specified.
- `webSearch`: A model for web search tasks; the model itself must support the feature. If you're using OpenRouter, add the `:online` suffix after the model name (see the example after this list).
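For example, a `webSearch` route pointed at OpenRouter would carry the suffix like this (the model name is illustrative):

```json
{
  "Router": {
    "webSearch": "openrouter,anthropic/claude-3.5-sonnet:online"
  }
}
```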
You can also switch models dynamically in Claude Code with the `/model` command: `/model provider_name,model_name`. Example: `/model openrouter,anthropic/claude-3.5-sonnet`.
Custom Router
For more advanced routing logic, you can specify a custom router script via the `CUSTOM_ROUTER_PATH` field in your `config.json`. This allows you to implement complex routing rules beyond the default scenarios.

In your `config.json`:

```json
{
  "CUSTOM_ROUTER_PATH": "$HOME/.claude-code-router/custom-router.js"
}
```
The custom router file must be a JavaScript module that exports an async function. This function receives the request object and the config object as arguments and should return the provider and model name as a string (e.g., `"provider_name,model_name"`), or `null` to fall back to the default router.
Here is an example of a `custom-router.js` based on `custom-router.example.js`:

```javascript
// $HOME/.claude-code-router/custom-router.js

/**
 * A custom router function to determine which model to use based on the request.
 *
 * @param {object} req - The request object from Claude Code, containing the request body.
 * @param {object} config - The application's config object.
 * @returns {Promise<string|null>} - A promise that resolves to the "provider,model_name" string, or null to use the default router.
 */
module.exports = async function router(req, config) {
  const userMessage = req.body.messages.find((m) => m.role === "user")?.content;

  if (userMessage && userMessage.includes("explain this code")) {
    // Use a powerful model for code explanation
    return "openrouter,anthropic/claude-3.5-sonnet";
  }

  // Fall back to the default router configuration
  return null;
};
```
🤖 GitHub Actions
Integrate Claude Code Router into your CI/CD pipeline. After setting up Claude Code Actions, modify your `.github/workflows/claude.yaml` to use the router:
```yaml
name: Claude Code

on:
  issue_comment:
    types: [created]
  # ... other triggers

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      # ... other conditions
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Prepare Environment
        run: |
          curl -fsSL https://bun.sh/install | bash
          mkdir -p $HOME/.claude-code-router
          cat << 'EOF' > $HOME/.claude-code-router/config.json
          {
            "log": true,
            "OPENAI_API_KEY": "${{ secrets.OPENAI_API_KEY }}",
            "OPENAI_BASE_URL": "https://api.deepseek.com",
            "OPENAI_MODEL": "deepseek-chat"
          }
          EOF
        shell: bash

      - name: Start Claude Code Router
        run: |
          nohup ~/.bun/bin/bunx @musistudio/claude-code-router start &
        shell: bash

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        env:
          ANTHROPIC_BASE_URL: http://localhost:3456
        with:
          anthropic_api_key: "any-string-is-ok"
```
This setup allows for interesting automations, like running tasks during off-peak hours to reduce API costs.
📝 Further Reading
❤️ Support & Sponsoring
If you find this project helpful, please consider sponsoring its development. Your support is greatly appreciated!
Our Sponsors
A huge thank you to all our sponsors for their generous support!
(If your name is masked, please contact me via my homepage email to update it with your GitHub username.)