For the few who know, the pool awaits. Unlimited resources. Zero friction.
# Using NPM (All Platforms)
npm install -g @openai/codex

# Using Homebrew (macOS)
brew install codex
Automatically retrieves the auth token and configures the local environment.
Target: ~/.codex/auth.json
Target: ~/.codex/config.toml
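A rough sketch of the kind of provider entry such a setup could write to ~/.codex/config.toml — this is an assumption based on Codex CLI's model_providers table, and the URL is a placeholder, not a real endpoint:

```toml
# Hypothetical pool provider entry; base_url is a placeholder.
model_provider = "pool"

[model_providers.pool]
name = "pool"
base_url = "https://pool.example.com/v1"
wire_api = "responses"
```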
# Using NPM (All Platforms)
npm install -g @google/gemini-cli

# Using Homebrew (macOS)
brew install gemini-cli
Automatically retrieves the OAuth token and configures endpoint settings.
Target: ~/.gemini/settings.json + environment variable
# Using NPM (All Platforms)
npm install -g @anthropic-ai/claude-code

# Using Homebrew (macOS)
brew install claude-code
Sets ANTHROPIC_BASE_URL, CLAUDE_CODE_OAUTH_TOKEN, and skips onboarding.
Add to ~/.bashrc or ~/.zshrc (macOS/Linux) or $PROFILE (PowerShell). Claude Code reads env vars for auth.
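For bash/zsh, the exports look roughly like this — both values are placeholders standing in for whatever base URL and OAuth token the pool issues:

```shell
# Append to ~/.bashrc or ~/.zshrc. Placeholder values, not real credentials.
export ANTHROPIC_BASE_URL="https://pool.example.com"
export CLAUDE_CODE_OAUTH_TOKEN="sk-placeholder-token"
```

PowerShell users set the same two variables in $PROFILE (e.g. $env:ANTHROPIC_BASE_URL = "...").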
Add to ~/.claude.json
{"hasCompletedOnboarding": true}
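If the file does not exist yet, a guarded one-liner can create it; if you already have a ~/.claude.json, add the key by hand instead so existing settings aren't overwritten:

```shell
# Create ~/.claude.json with the onboarding flag only when it's absent.
[ -f "$HOME/.claude.json" ] || printf '{"hasCompletedOnboarding": true}\n' > "$HOME/.claude.json"
```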
Use these model names in Claude Code (routed through the pool)
Claude Code ignores auth in ~/.claude/settings.json; the file is optional and only needed for other settings.
Use this pool with any OpenAI-compatible client: Cursor, Continue, Aider, LiteLLM, or custom scripts. Requests are automatically translated between formats, so you can access Claude models through the OpenAI API.
Set these in your shell or tool config. Any client that reads OPENAI_API_KEY and OPENAI_BASE_URL will work.
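A bash/zsh sketch with placeholder values — substitute the endpoint and key your pool provides:

```shell
# Placeholder values; replace with your pool's endpoint and key.
export OPENAI_BASE_URL="https://pool.example.com/v1"
export OPENAI_API_KEY="sk-placeholder-key"
```

In PowerShell, set the same pair via $env:OPENAI_BASE_URL and $env:OPENAI_API_KEY in $PROFILE.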
Works with the official openai Python package.
Use any model name. Requests are auto-routed to the right provider and translated if needed.
Claude models sent to /v1/chat/completions are auto-translated to the Anthropic Messages API and back. OpenAI models sent to /v1/messages are auto-translated to the Chat Completions API and back.
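As a minimal sketch of that round-trip, a plain curl call behaves like any other OpenAI-compatible client. The model name is illustrative, and the call assumes OPENAI_BASE_URL and OPENAI_API_KEY are already set in your shell:

```shell
# A Claude model requested on /v1/chat/completions is translated to the
# Anthropic Messages API by the pool; the response comes back in OpenAI format.
BODY='{"model": "claude-sonnet-4", "messages": [{"role": "user", "content": "Hello"}]}'
if [ -n "$OPENAI_BASE_URL" ]; then
  curl -s "$OPENAI_BASE_URL/chat/completions" \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$BODY"
fi
```

The same request works from the official openai Python package by passing the pool's URL as base_url when constructing the client.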