Overview

The Codex CLI was originally built in TypeScript using React/Ink for the terminal UI. This implementation shipped from launch through August 2025, when it was replaced by the current Rust rewrite (commit 408c7ca, August 8, 2025).[1] The TypeScript architecture is well-documented because the original source files were open and remained the basis for many derivative projects. Understanding it illuminates the design decisions in the Rust rewrite and shows how the project evolved from a Node.js prototype to a production Rust system.

Tech Stack (TypeScript Version)

Component         | Technology
------------------|--------------------------------------------
Runtime           | Node.js >= 22
Language          | TypeScript 5.x
UI Framework      | React 18 + Ink 5 (terminal React renderer)
API Client        | openai npm package (^4.95.1)
CLI Parser        | meow (^13.2.0)
Build             | esbuild
Testing           | vitest
Shell Parsing     | shell-quote
Config            | JSON or YAML (js-yaml)
Markdown          | marked + marked-terminal
Schema Validation | zod

CLI Entry Point (cli.tsx)

The entry point was a #!/usr/bin/env node script with this startup sequence:

  1. Load dotenv/config for environment variables
  2. Validate Node.js >= 22 (hard exit if older)
  3. Suppress deprecation warnings (process.noDeprecation = true)
  4. Parse CLI flags via meow
  5. Resolve authentication (OAuth token or API key)
  6. Render the React/Ink <App> component (interactive) or create an AgentLoop directly (quiet mode)

CLI Flags

Flag                                  | Description                        | Default
--------------------------------------|------------------------------------|------------------
--model / -m                          | Model selection                    | codex-mini-latest
--provider / -p                       | Provider selection                 | openai
--approval-mode / -a                  | suggest, auto-edit, full-auto      | suggest
--writable-root / -w                  | Extra sandbox-writable directories | (none)
--quiet / -q                          | Non-interactive mode               | false
--full-auto                           | Auto-approve everything in sandbox | false
--dangerously-auto-approve-everything | No sandbox, no prompts             | false
--reasoning                           | Effort level (low/medium/high)     | high
--full-context / -f                   | Single-pass full-repo editing mode | false
--flex-mode                           | OpenAI flex service tier           | false
--notify                              | Desktop notifications              | false

Authentication

  • Tokens stored in ~/.codex/auth.json (refresh token, access token, API key)
  • Tokens expired after 28 days
  • OAuth issuer: https://auth.openai.com, client ID: app_EMoamEEZ73f0CkXaXp7hrann
  • Fallback to OPENAI_API_KEY environment variable
  • Provider-specific env vars: GEMINI_API_KEY, OLLAMA_API_KEY, etc.
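The fallback order above can be sketched as a small pure function. This is a hypothetical illustration, not the real auth.json schema: the field names (accessToken, apiKey, expiresAt) and the helper name are assumptions.

```typescript
// Hypothetical sketch of the credential fallback order described above.
// Field names are illustrative; the real ~/.codex/auth.json schema may differ.
type AuthFile = { accessToken?: string; apiKey?: string; expiresAt?: number };

function resolveCredential(
  auth: AuthFile | null,
  env: Record<string, string | undefined>,
  now: number = Date.now(),
): string | null {
  // Prefer a non-expired OAuth access token from ~/.codex/auth.json ...
  if (auth?.accessToken && (auth.expiresAt === undefined || auth.expiresAt > now)) {
    return auth.accessToken;
  }
  // ... then a stored API key ...
  if (auth?.apiKey) return auth.apiKey;
  // ... then the OPENAI_API_KEY environment variable fallback.
  return env["OPENAI_API_KEY"] ?? null;
}
```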

React/Ink UI

Component Hierarchy

<App>
  ├── Git repo check + <ConfirmInput> warning
  └── <TerminalChat>
        ├── <TerminalChatInput>        # User input
        ├── <TerminalMessageHistory>   # Conversation display
        ├── <TerminalMessage>          # Individual messages
        └── Overlay system             # Slash command overlays

TerminalChat State

State            | Purpose
-----------------|-----------------------------------------------------------
model, provider  | Current model and API provider
lastResponseId   | For response chaining (server-side context)
items            | ResponseItem[] conversation history
loading          | Whether the agent is active
approvalPolicy   | Current approval mode
thinkingSeconds  | Timer for reasoning display
overlayMode      | Active overlay (none, history, model, approval, help, diff)

Overlay System

Slash commands surfaced overlays:

Command    | Overlay
-----------|------------------------------------------
/history   | Conversation history browser
/sessions  | Saved sessions browser
/model     | Model switcher (also changes provider)
/approval  | Approval mode switcher
/help      | Help screen
/diff      | git diff output
/compact   | Summarize conversation to reduce context

Desktop Notifications

On macOS, when config.notify was enabled and the agent finished a turn, the UI spawned osascript to show a native notification with the last assistant message (truncated to 100 characters).
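A minimal sketch of assembling that osascript invocation follows. The exact AppleScript wording, the notification title, and the helper name are assumptions; only the 100-character truncation comes from the source.

```typescript
// Hypothetical sketch: build the argv passed to osascript for a desktop
// notification. The AppleScript phrasing and "Codex" title are assumptions.
function buildNotificationArgs(lastMessage: string, maxLen = 100): string[] {
  const truncated =
    lastMessage.length > maxLen ? lastMessage.slice(0, maxLen) + "…" : lastMessage;
  // Escape double quotes so the message survives AppleScript string syntax.
  const safe = truncated.replace(/"/g, '\\"');
  return ["-e", `display notification "${safe}" with title "Codex"`];
}
```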

The Agent Loop (agent-loop.ts)

This was the core of the harness — a class managing the conversation with the OpenAI Responses API.

Constructor Parameters

type AgentLoopParams = {
  model: string;
  provider?: string;
  config?: AppConfig;
  instructions?: string;
  approvalPolicy: ApprovalPolicy;
  disableResponseStorage?: boolean;
  onItem: (item: ResponseItem) => void;
  onLoading: (loading: boolean) => void;
  additionalWritableRoots: ReadonlyArray<string>;
  getCommandConfirmation: (...) => Promise<CommandConfirmation>;
  onLastResponseId: (lastResponseId: string) => void;
};

Key Instance Fields

Field                 | Purpose
----------------------|-----------------------------------------------------------------
generation: number    | Incremented per run() call; used to ignore stale events
execAbortController   | For aborting in-progress tool calls
canceled / terminated | Lifecycle flags
hardAbort             | Master abort signal, fires on terminate()
transcript            | Local conversation history (when disableResponseStorage === true)
pendingAborts         | Tracks unresolved function call IDs from cancelled runs
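The generation counter deserves a closer look, since it is how stale streaming events get dropped. A minimal sketch of the pattern, with illustrative names:

```typescript
// Sketch of the generation-counter pattern: each run() captures the counter
// value at start, and events from a superseded run are discarded because
// their captured generation no longer matches the current one.
class GenerationGuard {
  private generation = 0;

  startRun(): number {
    return ++this.generation; // each run() bumps the counter
  }

  isStale(gen: number): boolean {
    return gen !== this.generation; // stale if a newer run has started since
  }
}
```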

Tools

Two tool types were defined:

// Standard function tool
const shellFunctionTool: FunctionTool = {
  type: "function",
  name: "shell",
  description: "Runs a shell command, and returns its output.",
  parameters: {
    type: "object",
    properties: {
      command: { type: "array", items: { type: "string" } },
      workdir: { type: "string" },
      timeout: { type: "number" },
    },
    required: ["command"],
  },
};
 
// Native tool (for codex-series models)
const localShellTool: Tool = { type: "local_shell" };

The run() Method — Main Loop

  1. Bump generation, reset canceled, create fresh execAbortController
  2. Build abort outputs for any pendingAborts from prior cancelled runs
  3. Build turnInput (full transcript or delta, depending on disableResponseStorage)
  4. Enter the main while (turnInput.length > 0) loop:
    • Stage input items to UI with 3ms delay
    • Build API request with model-specific reasoning config
    • Call the Responses API with streaming
    • Process streaming events (response.output_item.done, response.completed)
    • Handle function calls via handleFunctionCall() or handleLocalShellCall()
    • New turnInput built from function call outputs (loop continues if non-empty)

API Call Configuration

stream = await responseCall({
  model,
  instructions: mergedInstructions,
  input: turnInput,
  stream: true,
  parallel_tool_calls: false,
  reasoning: { effort: config.reasoningEffort, summary: "auto" },
  tools,
  tool_choice: "auto",
  ...(flexMode ? { service_tier: "flex" } : {}),
  ...(disableResponseStorage
    ? { store: false }
    : { store: true, previous_response_id: lastResponseId }),
});

Retry Logic

Condition                                | Strategy
-----------------------------------------|----------------------------------------------------------
Transient errors (5xx, timeout, network) | Up to 8 retries with backoff
Rate limit (429)                         | Exponential backoff from 500ms, parses retry-after header
Stream-level rate limits                 | Up to 5 retries during streaming
Context too long                         | Graceful error message
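The 429 delay computation can be sketched as follows; the doubling factor and the shape of the retry-after handling are assumptions beyond the 500ms base stated above.

```typescript
// Sketch of the rate-limit backoff: exponential from 500ms, but a parseable
// retry-after header (in seconds) takes precedence.
function backoffMs(attempt: number, retryAfterHeader?: string): number {
  const fromHeader = retryAfterHeader ? Number(retryAfterHeader) * 1000 : NaN;
  if (!Number.isNaN(fromHeader) && fromHeader > 0) return fromHeader; // server knows best
  return 500 * 2 ** attempt; // 500ms, 1s, 2s, 4s, ...
}
```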

System Prompt

The full system prompt (~3000 characters) included:

  • Identity: “You are operating as and within the Codex CLI”
  • Capabilities: receive prompts, stream responses, emit function calls, apply patches, run commands
  • Coding guidelines: fix root cause, avoid complexity, minimal changes, consistent style
  • apply_patch usage instructions
  • Dynamic context: username (os.userInfo().username), working directory, rg availability

Approval System (approvals.ts)

Approval Policies

type ApprovalPolicy = "suggest" | "auto-edit" | "full-auto";
Policy    | File Edits                       | Shell Commands | Sandbox
----------|----------------------------------|----------------|--------
suggest   | Ask user                         | Ask user       | Yes
auto-edit | Auto-approve (in writable roots) | Ask user       | Yes
full-auto | Auto-approve                     | Auto-approve   | Yes

Safety Assessment

canAutoApprove() returned a SafetyAssessment:

type SafetyAssessment =
  | { type: "auto-approve"; runInSandbox: boolean; reason: string }
  | { type: "ask-user" }
  | { type: "reject"; reason: string };

Known Safe Commands (isSafeCommand())

Auto-approved without sandbox:

Category        | Commands
----------------|-----------------------------------------------------------------------
Navigation      | cd, ls, pwd, true, echo
File viewing    | cat, nl, head, tail, wc
Search          | rg (except --pre, --hostname-bin), grep, find (except -exec, -delete), which
Git (read-only) | git status, git branch, git log, git diff, git show
Build (read-only) | cargo check
Sed (read-only) | sed -n <range>p [file]

Compound Expression Safety

isEntireShellExpressionSafe() validated that:

  • Every command segment passed isSafeCommand()
  • All operators were safe: &&, ||, |, ;
  • No parentheses, braces, or redirections

Review Decisions

enum ReviewDecision {
  YES = "yes",           // Approve this execution
  NO_CONTINUE = "no",    // Deny but keep going
  NO_EXIT = "exit",      // Deny and stop
  ALWAYS = "always",     // Approve + remember for session
  EXPLAIN = "explain",   // Request command explanation
}

The EXPLAIN option called oai.chat.completions.create() with a dedicated system prompt to explain what a command does, then re-prompted the user.

Configuration System (config.ts)

File Locations

Path                     | Purpose
-------------------------|--------------------------------
~/.codex/config.json     | User config (also .yaml/.yml)
~/.codex/instructions.md | User-wide instructions
~/.codex.env             | User-wide environment variables
~/.codex/auth.json       | Authentication tokens

Supported Providers

providers = {
  openai:     { baseURL: "api.openai.com/v1" },
  openrouter: { baseURL: "openrouter.ai/api/v1" },
  azure:      { baseURL: "YOUR_PROJECT_NAME.openai.azure.com/openai" },
  gemini:     { baseURL: "generativelanguage.googleapis.com/v1beta/openai" },
  ollama:     { baseURL: "localhost:11434/v1" },
  mistral:    { baseURL: "api.mistral.ai/v1" },
  deepseek:   { baseURL: "api.deepseek.com" },
  xai:        { baseURL: "api.x.ai/v1" },
  groq:       { baseURL: "api.groq.com/openai/v1" },
  arceeai:    { baseURL: "conductor.arcee.ai/v1" },
};

Project Documentation Discovery

Searched for (in priority order): AGENTS.md, codex.md, .codex.md, CODEX.md

  1. First checked CWD
  2. Then walked up to git root
  3. Max size: 32KB
  4. Combined with user instructions via \n\n--- project-doc ---\n\n separator
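The combination step is simple enough to show exactly; the separator string comes from the source, while the function name is illustrative.

```typescript
// Sketch: merge user-wide instructions with a discovered project doc using
// the literal separator described above.
function mergeInstructions(userInstructions: string, projectDoc: string | null): string {
  if (!projectDoc) return userInstructions; // no project doc found
  return `${userInstructions}\n\n--- project-doc ---\n\n${projectDoc}`;
}
```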

Command Execution (handle-exec-command.ts)

Flow

  1. Check alwaysApprovedCommands cache (session-level set of command keys)
  2. Call canAutoApprove() to assess safety
  3. Based on assessment:
    • auto-approve: Execute, optionally in sandbox
    • ask-user: Surface UI prompt via getCommandConfirmation()
    • reject: Return “aborted”
  4. If sandbox execution fails in full-auto with fullAutoErrorMode === ASK_USER: re-prompt user

Sandbox Routing (exec.ts)

switch (sandbox) {
  case SandboxType.NONE:
    return rawExec(cmd, opts, config, abortSignal);
  case SandboxType.MACOS_SEATBELT:
    return execWithSeatbelt(cmd, opts, writableRoots, config, abortSignal);
  case SandboxType.LINUX_LANDLOCK:
    return execWithLandlock(cmd, opts, writableRoots, config, abortSignal);
}

Process Spawning (raw-exec.ts)

Used child_process.spawn() with:

  • stdio: ["ignore", "pipe", "pipe"] — Prevent stdin reads, capture stdout/stderr
  • detached: true — Own process group for reliable kill propagation

Abort sequence: SIGTERM → 2000ms wait → SIGKILL (process group, then individual child as fallback).

Output truncation: Default 10KB / 256 lines, configurable via config.tools.shell.maxBytes/maxLines.
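The dual cap can be sketched as follows: output stops at whichever limit, bytes or lines, is hit first. Counting UTF-8 bytes per line (plus the newline) is an assumption about how the limit was applied.

```typescript
// Sketch of the output cap: keep at most maxLines lines and at most maxBytes
// UTF-8 bytes, whichever limit is reached first.
function truncateOutput(raw: string, maxBytes = 10 * 1024, maxLines = 256): string {
  const lines = raw.split("\n").slice(0, maxLines); // line cap first
  let used = 0;
  const kept: string[] = [];
  for (const line of lines) {
    const size = new TextEncoder().encode(line).length + 1; // +1 for the newline
    if (used + size > maxBytes) break; // byte cap
    used += size;
    kept.push(line);
  }
  return kept.join("\n");
}
```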

TypeScript Seatbelt Profile

The macOS sandbox used a complete Seatbelt profile inspired by Chrome’s sandbox:

(version 1)
(deny default)                ; closed-by-default
(allow file-read*)            ; read-only file operations
(allow process-exec)          ; child processes inherit policy
(allow process-fork)
(allow signal (target self))
(allow file-write-data
  (require-all (path "/dev/null")
               (vnode-type CHARACTER-DEVICE)))

With dynamic writable roots:

(allow file-write*
  (subpath (param "WRITABLE_ROOT_0"))
  (subpath (param "WRITABLE_ROOT_1"))
  ...)

Always-included writable roots: process.cwd(), os.tmpdir(), $HOME/.pyenv.

TypeScript Apply-Patch Format

The same custom V4A diff format used in the Rust version, but with three-pass matching:

  1. Exact match after Unicode canonicalization (NFC + punctuation equivalents)
  2. Trailing whitespace ignored
  3. All surrounding whitespace ignored

(The Rust version extended this to four passes, adding a full trim() pass.)
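The three passes can be sketched as a per-line comparison that relaxes progressively. Unicode punctuation equivalents from pass 1 are omitted here for brevity; only NFC normalization is shown.

```typescript
// Sketch of the three-pass matching: exact (after NFC canonicalization),
// then ignoring trailing whitespace, then ignoring leading and trailing
// whitespace. Punctuation-equivalence from pass 1 is not modeled.
function linesMatch(expected: string, actual: string): boolean {
  const e = expected.normalize("NFC");
  const a = actual.normalize("NFC");
  if (e === a) return true;                                           // pass 1
  if (e.replace(/\s+$/, "") === a.replace(/\s+$/, "")) return true;   // pass 2
  return e.trim() === a.trim();                                       // pass 3
}
```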

Migration to Rust

Timeline

  • Launch – August 2025: TypeScript implementation
  • August 8, 2025 (commit 408c7ca): TypeScript source removed
  • August 2025 onward: codex-cli npm package became a thin wrapper invoking platform-specific Rust binaries
  • April 2026: v0.118+ with 95+ Rust crates

Key Changes

Aspect              | TypeScript                   | Rust
--------------------|------------------------------|----------------------------------------------
UI                  | React/Ink                    | Ratatui
Config format       | JSON/YAML                    | TOML
Approval modes      | suggest/auto-edit/full-auto  | read-only/workspace-write/danger-full-access
Sandbox (Linux)     | Landlock binary              | Bubblewrap + Landlock + seccomp
Sandbox (Windows)   | Not supported                | Restricted tokens + ACL overlay
Architecture        | Single-process               | Client-server (in-process or WebSocket)
Process model       | Single AgentLoop class       | 95+ crate workspace
Session persistence | Response ID chaining         | SQLite + JSONL rollout
MCP support         | None                         | Client + experimental server
Provider support    | 10 hardcoded                 | Dynamic catalog
Output cap          | 10KB / 256 lines             | ~1 MiB / 10,000 deltas
Exec timeout        | 10s                          | 10s (same)

What Stayed the Same

  • Apache-2.0 license
  • @openai/codex npm package name
  • The custom V4A patch format (extended with a fourth matching pass)
  • The core ReAct loop pattern (model → tool → observe → iterate)
  • Sandbox-first security philosophy
  • AGENTS.md project documentation convention

Footnotes

  1. Codex CLI Commit History