@SuperPauly
Last active October 7, 2025 23:16
Codex CLI global config options.

OpenAI Codex CLI Configuration Guide

Comprehensive reference for config.toml settings


Purpose
Configure Codex CLI safely and effectively: models, providers, security, sandbox, MCP integration, and profiles.

Location
Global config lives at ~/.codex/config.toml.


Overview

The Codex CLI runs locally and can read, modify, and execute code within boundaries.
Its configuration uses TOML for human-readable structure.

Settings are shared by the CLI and the IDE extension, so one file governs both.
You get global defaults and profile-specific overrides for different projects.

The design balances flexibility with security: defaults are conservative, and you can expand permissions as your confidence grows.

It supports environment variables, command-line overrides, and live tweaks via interactive commands.

You’ll work with: model providers, approval policies, sandbox modes, and MCP servers.
Advanced knobs include reasoning effort, env-var filtering, and custom auth.


Core Configuration Settings

These basics define Codex behaviour: which model, how to authenticate, and how it behaves in your workspace.

Model

The model key sets the default model for CLI and IDE.
Typical default is gpt-5; you can pick gpt-5-codex or any allowed model.

Provider

model_provider selects the backend service.
OpenAI is default; Azure OpenAI and others are supported.

Authentication

preferred_auth_method controls how Codex authenticates:

  • chatgpt — sign in via ChatGPT (often includes usage credits).
  • apikey — use a direct API key.

Choose what aligns with your organisation’s policy and quotas.

Reasoning Effort

model_reasoning_effort tunes depth vs speed: low, medium, high.
Higher effort helps on complex refactors, architecture, and deep debugging.
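Taken together, the core keys above might look like this in ~/.codex/config.toml (values are illustrative, not required defaults):

```toml
# ~/.codex/config.toml — core settings (illustrative values)
model = "gpt-5"                    # default model for CLI and IDE
model_provider = "openai"          # backend service
preferred_auth_method = "chatgpt"  # or "apikey"
model_reasoning_effort = "medium"  # "low" | "medium" | "high"
```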

Interaction Style

Configure verbosity, decision summaries, and ambiguity handling.
Strike a balance between automation and interactivity.


Model & Provider Configuration

You can define multiple providers, each with its own auth and parameters.
This helps when mixing services or meeting compliance constraints.

Azure example (fields you may see and why they matter):

  • name — human label.
  • base_url — Azure deployment endpoint, e.g. https://YOUR_PROJECT_NAME.openai.azure.com/openai.
  • env_key — name of the env var holding the API key.
  • query_params — set api-version and other required flags.
  • wire_api — request/response handling mode (e.g. responses).

Use environment variables rather than hard-coding secrets.
Advanced per-provider options include headers, timeouts, and retry policy.
Enterprise setups may enable content filtering, safety controls, and usage monitoring.
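A sketch of an Azure provider entry using the fields above. The project name, env var name, and api-version are placeholders; check your deployment's actual values:

```toml
[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"   # env var holding the key; never hard-code secrets
query_params = { api-version = "2025-04-01-preview" }  # use the version your deployment requires
wire_api = "responses"

# Then select it globally:
# model_provider = "azure"
```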

Tip
Apply model_reasoning_effort at global or provider level to tailor cost/latency per backend.


Security & Sandbox Configuration

Security controls define Codex’s operating envelope: files, network, and process access.

Approval Policy

When must Codex ask for permission?

  • on-request — prompt before any sensitive action.
  • untrusted — prompt only for commands you haven’t explicitly trusted.
  • never — allow automatic execution within sandbox limits.

Pick the tightest setting you can tolerate; relax as needed.
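For example, to prompt only for commands you have not explicitly trusted:

```toml
approval_policy = "untrusted"   # or "on-request" / "never"
```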

Filesystem & Network

sandbox_mode controls file access:

  • read-only — analysis and docs only.
  • workspace-write — read/write inside your project directory.
  • Broader modes exist (e.g. danger-full-access); use them sparingly.

Network access can be toggled independently (useful for dependency fetches while keeping FS tight).
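A workspace-write setup with the network toggle enabled might look like this (the `[sandbox_workspace_write]` table name follows the Codex docs; verify against your installed version):

```toml
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = true   # allow dependency fetches while keeping filesystem access scoped
```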

OS-Level Hardening

  • macOS: Seatbelt via sandbox-exec.
  • Linux: Landlock + seccomp.

These OS mechanisms work beneath application-level checks for defence in depth.

Environment Variable Filtering

shell_environment_policy can whitelist variables (e.g. PATH, HOME).
This reduces accidental secret leakage while keeping tools functional.
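One way to express such a whitelist, assuming the `include_only` key from the Codex docs:

```toml
[shell_environment_policy]
include_only = ["PATH", "HOME"]   # only these variables reach spawned commands
```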

Audit & Compliance
Log approvals and sandbox violations; pipe to external audit systems if required.


MCP Server Configuration

Model Context Protocol (MCP) lets Codex use external tools and services.
You can run local STDIO servers or remote HTTP servers.

STDIO Servers

You’ll configure:

  • command — executable to launch.
  • args — command arguments.
  • env — environment variables for the server (API keys, configs).

Example: run a documentation MCP via npx with arguments; keep secrets in env vars.
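A sketch of such an entry (the server name and npm package are hypothetical):

```toml
[mcp_servers.docs]
command = "npx"
args = ["-y", "example-docs-mcp"]           # hypothetical package name
env = { "DOCS_API_KEY" = "your-key-here" }  # prefer injecting this from a secrets manager
```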

HTTP Servers

Configure:

  • url — service endpoint.
  • bearer_token — auth (or OAuth when using experimental RMCP client).

Reliability

  • startup_timeout_sec — how long to wait for server initialisation.
  • tool_timeout_sec — max runtime for each tool call.
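Combining the HTTP and reliability keys above, a remote server entry might look like this (endpoint and token are placeholders):

```toml
[mcp_servers.remote_tools]
url = "https://mcp.example.com/mcp"   # hypothetical endpoint
bearer_token = "YOUR_TOKEN"
startup_timeout_sec = 20
tool_timeout_sec = 60
```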

Customisation & Debugging

Servers may expose filters, verbosity flags, or instance selectors via args/env.
Inspect current servers and tools with /mcp.
Enable detailed logs using DEBUG=true codex.
You can test servers independently (e.g. uv --directory ~/tooluniverse-env run tooluniverse-smcp-stdio).


Advanced Options

Beyond the basics: search, experimental features, logging, auth, performance.

Web Search

web_search = true allows Codex to query the internet for fresh docs and examples.
Also available via a CLI flag in TUI mode (--search).
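In config.toml this may sit under a `[tools]` table rather than at the top level, depending on your Codex version:

```toml
[tools]
web_search = true
```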

Experimental RMCP

experimental_use_rmcp_client = true enables an alternative MCP client with OAuth support and performance tweaks.
Use cautiously in production.

Logging & Tracing

Adjust logging granularity; capture traces for reasoning steps and tool invocations.
Essential for diagnosing tricky workflows and performance issues.

Enterprise Authentication

Support for custom and certificate-based auth, plus identity management integration.

Performance Tuning

Control memory use, concurrency limits, and response caching.
Helpful on resource-constrained machines or high-throughput agents.

VCS Integration

Tune how Codex interacts with Git (branching, commits, safeguards) to fit team workflow.


Profiles

Profiles let you switch entire configurations quickly.

  • Define as [profiles.<name>].
  • Override any global setting (model, approvals, sandbox, MCP set).
  • Compose profiles via inheritance where supported.

Activate with --profile <name>, or via environment variables in automated contexts.
Validate profiles to catch conflicts and missing credentials before use.
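A minimal sketch: a conservative default profile plus a looser development profile (profile names and values are illustrative):

```toml
profile = "safe"   # optional: default profile when none is passed

[profiles.safe]
approval_policy = "on-request"
sandbox_mode = "read-only"

[profiles.dev]
model = "gpt-5-codex"
approval_policy = "untrusted"
sandbox_mode = "workspace-write"
```

Then activate with `codex --profile dev`.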

Team Play
Export/import profiles, version them with your repo, and standardise across machines.


Environment & Shell

Granular OS/process control keeps things tidy and safe.

  • Whitelist env vars via shell_environment_policy.
  • Choose default shell, init scripts, and command history policy.
  • Apply process limits: memory caps, timeouts, priority.
  • Support dynamic env injection for secrets managers or runtime configuration.
  • Container-friendly options for Docker/Kubernetes (volumes, networks, limits).

Troubleshooting & Best Practices

Validate config (syntax, types, logic): run with DEBUG=true codex to see parse and validation output.

MCP issues:
Test servers directly; check /mcp inside the TUI; confirm network access and tokens.

Auth failures:
Recheck env vars, token validity, and that your auth method matches the provider.

Network quirks:
Probe connectivity, proxies, and firewalls. Remember: sandbox network toggle exists.

Performance:
Profile model latency, MCP response times, and system resources.
Tune reasoning effort, timeouts, and concurrency.

Configuration hygiene:

  • Version your config; back it up.
  • Use profiles for dev vs prod.
  • Keep secrets out of files; use environment variables.
  • Review security boundaries periodically.

Conclusion

The Codex CLI config is broad yet disciplined.
With one TOML file, you can govern model choice, provider behaviour, sandbox scope, MCP tooling, performance, and UX. Start tight, profile by profile. Expand capabilities deliberately as your workflow stabilises and your risk appetite allows.

Staying current with settings and best practices keeps Codex helpful, predictable, and safe—whether you’re nudging functions into shape or orchestrating ambitious, tool-rich pipelines.
