<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai Developer Tools on ICE-ICE-BEAR-BLOG</title><link>https://ice-ice-bear.github.io/tags/ai-developer-tools/</link><description>Recent content in Ai Developer Tools on ICE-ICE-BEAR-BLOG</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Mon, 04 May 2026 00:00:00 +0900</lastBuildDate><atom:link href="https://ice-ice-bear.github.io/tags/ai-developer-tools/index.xml" rel="self" type="application/rss+xml"/><item><title>Codex in ChatGPT — Realigning to a Unified Coding Agent, and the Python SDK That Opens Headless Automation</title><link>https://ice-ice-bear.github.io/posts/2026-05-04-codex-in-chatgpt-rollout/</link><pubDate>Mon, 04 May 2026 00:00:00 +0900</pubDate><guid>https://ice-ice-bear.github.io/posts/2026-05-04-codex-in-chatgpt-rollout/</guid><description>&lt;img src="https://ice-ice-bear.github.io/" alt="Featured image of post Codex in ChatGPT — Realigning to a Unified Coding Agent, and the Python SDK That Opens Headless Automation" /&gt;&lt;h2 id="overview"&gt;Overview
&lt;/h2&gt;&lt;p&gt;OpenAI quietly updated the &lt;a class="link" href="https://help.openai.com/en/articles/11369540-codex-in-chatgpt" target="_blank" rel="noopener"
 &gt;official Codex help article&lt;/a&gt; to formally fold &lt;strong&gt;Codex into ChatGPT plans&lt;/strong&gt;. The headline: Codex is included with ChatGPT Plus, Pro, Business, and Enterprise/Edu, plus a limited-time inclusion in Free and Go, and all other plans get 2x rate limits. At nearly the same moment, the &lt;a class="link" href="https://github.com/openai/codex" target="_blank" rel="noopener"
 &gt;&lt;code&gt;openai/codex&lt;/code&gt; repo&lt;/a&gt; landed an experimental Python SDK under &lt;a class="link" href="https://github.com/openai/codex/tree/main/sdk/python" target="_blank" rel="noopener"
 &gt;&lt;code&gt;sdk/python&lt;/code&gt;&lt;/a&gt; — a thin wrapper around &lt;code&gt;codex app-server&lt;/code&gt; JSON-RPC v2. Read together, this is OpenAI realigning Codex from &amp;ldquo;a CLI tool&amp;rdquo; into a &lt;strong&gt;unified coding agent with five surfaces (app, CLI, IDE extension, web, Python SDK) all authenticated through one ChatGPT account&lt;/strong&gt;.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;graph TD
 Core["Codex core &amp;lt;br/&amp;gt; ChatGPT account auth"]
 Core --&gt; App["Codex App &amp;lt;br/&amp;gt; (desktop)"]
 Core --&gt; CLI["Codex CLI"]
 Core --&gt; IDE["Codex IDE extension &amp;lt;br/&amp;gt; (VS Code + forks)"]
 Core --&gt; Web["Codex Web &amp;lt;br/&amp;gt; chatgpt.com/codex"]
 Core --&gt; SDK["Python SDK &amp;lt;br/&amp;gt; app-server JSON-RPC v2"]
 Policy["ToU + Business Terms &amp;lt;br/&amp;gt; (constraint layer)"] -.-&gt; Core
 Policy -.-&gt; SDK&lt;/pre&gt;&lt;p&gt;This post weaves three threads — &lt;strong&gt;(1) Codex in ChatGPT as a product/GTM move&lt;/strong&gt;, &lt;strong&gt;(2) what the Python SDK unlocks for headless automation and sub-agents&lt;/strong&gt;, &lt;strong&gt;(3) what the terms of use and business terms allow vs leave ambiguous&lt;/strong&gt;. It ends with a recommendation matrix across &lt;a class="link" href="https://www.anthropic.com/claude-code" target="_blank" rel="noopener"
 &gt;Claude Code&lt;/a&gt;, &lt;a class="link" href="https://cursor.com/" target="_blank" rel="noopener"
 &gt;Cursor&lt;/a&gt;, Codex in ChatGPT, and &lt;a class="link" href="https://github.com/thedalbee/codex-r" target="_blank" rel="noopener"
 &gt;codex-r&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="1-codex-in-chatgpt--gtm-realignment"&gt;1. Codex in ChatGPT — GTM realignment
&lt;/h2&gt;&lt;p&gt;The help article pins down the new shape:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Included plans&lt;/strong&gt;: ChatGPT Plus, Pro, Business, Enterprise/Edu&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Limited-time inclusion&lt;/strong&gt;: Free and Go (while the other plans get 2x rate limits)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Three clients + web&lt;/strong&gt;: &lt;a class="link" href="https://developers.openai.com/codex/app" target="_blank" rel="noopener"
 &gt;Codex app&lt;/a&gt;, &lt;a class="link" href="https://developers.openai.com/codex/cli" target="_blank" rel="noopener"
 &gt;Codex CLI&lt;/a&gt;, &lt;a class="link" href="https://developers.openai.com/codex/ide" target="_blank" rel="noopener"
 &gt;Codex IDE extension&lt;/a&gt;, &lt;a class="link" href="https://chatgpt.com/codex" target="_blank" rel="noopener"
 &gt;Codex web&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Auth&lt;/strong&gt;: ChatGPT account SSO everywhere; web also requires a GitHub connection&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Terms&lt;/strong&gt;: the same &lt;a class="link" href="https://openai.com/policies/terms-of-use/" target="_blank" rel="noopener"
 &gt;ChatGPT Terms of Use&lt;/a&gt; + &lt;a class="link" href="https://openai.com/policies/privacy-policy/" target="_blank" rel="noopener"
 &gt;Privacy Policy&lt;/a&gt;; business users fall under the &lt;a class="link" href="https://openai.com/policies/services-agreement/" target="_blank" rel="noopener"
 &gt;Online Services Agreement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enterprise controls&lt;/strong&gt;: &lt;a class="link" href="https://help.openai.com/en/articles/11750701-rbac" target="_blank" rel="noopener"
 &gt;RBAC&lt;/a&gt;, workspace App controls, and a unified &lt;a class="link" href="https://chatgpt.com/admin/api-reference#tag/Codex-Tasks" target="_blank" rel="noopener"
 &gt;Compliance API&lt;/a&gt; that logs CLI, IDE, web, and cloud usage together&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="what-this-announcement-actually-means"&gt;What this announcement actually means
&lt;/h3&gt;&lt;p&gt;GitHub Copilot lives in the IDE; &lt;a class="link" href="https://cursor.com/" target="_blank" rel="noopener"
 &gt;Cursor&lt;/a&gt; is an IDE-as-product; Anthropic&amp;rsquo;s Claude Code recruits users through the terminal and a VS Code extension. OpenAI is doing the inverse: &lt;strong&gt;funneling its massive ChatGPT user base outward into IDEs and terminals&lt;/strong&gt;. A Plus subscriber already has a card on file, so installing Codex CLI creates no second billing relationship. The limited-time Free/Go inclusion accelerates that pipe.&lt;/p&gt;
&lt;p&gt;Where this collides with competitors: the &lt;a class="link" href="https://developers.openai.com/codex/ide" target="_blank" rel="noopener"
 &gt;Codex IDE extension&lt;/a&gt; targets Cursor; the IDE extension + web (&lt;code&gt;chatgpt.com/codex&lt;/code&gt;) target &lt;a class="link" href="https://github.com/features/copilot" target="_blank" rel="noopener"
 &gt;GitHub Copilot&lt;/a&gt;; the &lt;a class="link" href="https://developers.openai.com/codex/cli" target="_blank" rel="noopener"
 &gt;Codex CLI&lt;/a&gt; targets Claude Code. The real moat isn&amp;rsquo;t any single surface — it&amp;rsquo;s that &lt;strong&gt;billing and auth collapse to one ChatGPT account&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For enterprise, the &lt;a class="link" href="https://chatgpt.com/admin/api-reference#tag/Codex-Tasks" target="_blank" rel="noopener"
 &gt;Compliance API&lt;/a&gt; is the underrated lever: CLI, IDE, web, and cloud Codex usage all funnel into one log surface. SOC/SOX flows get a single source of truth. Cursor exposes its own enterprise log; Claude Code logs to the &lt;a class="link" href="https://console.anthropic.com/" target="_blank" rel="noopener"
 &gt;Anthropic Console&lt;/a&gt;. With Codex you only audit one place.&lt;/p&gt;
&lt;h2 id="2-python-sdk--the-door-to-headless-automation-just-opened"&gt;2. Python SDK — the door to headless automation just opened
&lt;/h2&gt;&lt;p&gt;The &lt;a class="link" href="https://github.com/openai/codex/tree/main/sdk/python" target="_blank" rel="noopener"
 &gt;&lt;code&gt;sdk/python&lt;/code&gt;&lt;/a&gt; directory is slated to ship as &lt;code&gt;openai-codex-app-server-sdk&lt;/code&gt;. The core entry point is &lt;code&gt;codex_app_server.Codex&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;codex_app_server&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Codex&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;Codex&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;codex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;thread&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;codex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;thread_start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;gpt-5.4&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;model_reasoning_effort&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;high&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;thread&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;Summarize Rust ownership in 2 bullets.&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;final_response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h3 id="shape"&gt;Shape
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Transport&lt;/strong&gt;: the SDK spawns the &lt;code&gt;codex app-server&lt;/code&gt; binary over stdio and talks &lt;strong&gt;JSON-RPC v2&lt;/strong&gt;, then exposes Pydantic models on top.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Runtime packaging&lt;/strong&gt;: each SDK build pins an exact &lt;code&gt;openai-codex-cli-bin&lt;/code&gt; runtime, shipped as platform wheels (macOS arm64/x86_64, musllinux aarch64/x86_64, Windows arm64/amd64).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API surface&lt;/strong&gt; — &lt;code&gt;Codex&lt;/code&gt; / &lt;code&gt;AsyncCodex&lt;/code&gt;, &lt;code&gt;thread_start&lt;/code&gt; / &lt;code&gt;thread_resume&lt;/code&gt; / &lt;code&gt;thread_fork&lt;/code&gt; / &lt;code&gt;thread_archive&lt;/code&gt;, &lt;code&gt;Thread.run(...)&lt;/code&gt; / &lt;code&gt;Thread.turn(...)&lt;/code&gt;, &lt;code&gt;TurnHandle.steer(...)&lt;/code&gt; / &lt;code&gt;interrupt()&lt;/code&gt; / &lt;code&gt;stream()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Async parity&lt;/strong&gt;: &lt;code&gt;async with AsyncCodex()&lt;/code&gt; mirrors the sync surface&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Concurrency&lt;/strong&gt;: a single &lt;code&gt;Codex&lt;/code&gt; instance can stream &lt;strong&gt;multiple active turns concurrently, routed by turn ID&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
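&lt;p&gt;Under the hood that transport is plain JSON-RPC 2.0 messages. A minimal stdlib sketch of the framing (the &lt;code&gt;thread/start&lt;/code&gt; method name and the response shape here are illustrative, not the SDK&amp;rsquo;s real RPC surface):&lt;/p&gt;

```python
import json

def make_request(req_id, method, params):
    """Build one JSON-RPC 2.0 request line, the wire format the SDK
    reportedly speaks to codex app-server over stdio.
    (Method name is illustrative, not the real RPC name.)"""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def parse_response(line):
    """Decode a JSON-RPC 2.0 response line, raising on protocol errors."""
    msg = json.loads(line)
    if msg.get("jsonrpc") != "2.0":
        raise ValueError("not a JSON-RPC 2.0 message")
    if "error" in msg:
        raise RuntimeError(msg["error"].get("message", "rpc error"))
    return msg["id"], msg.get("result")

req = make_request(1, "thread/start", {"model": "gpt-5.4"})
reply = '{"jsonrpc": "2.0", "id": 1, "result": {"thread_id": "t_123"}}'
req_id, result = parse_response(reply)
print(req_id, result["thread_id"])  # prints: 1 t_123
```

&lt;p&gt;In practice the SDK spawns the binary and does this framing for you; the sketch only shows the wire shape the Pydantic models sit on top of.&lt;/p&gt;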
&lt;h3 id="why-this-matters"&gt;Why this matters
&lt;/h3&gt;&lt;p&gt;&lt;code&gt;thread.run(&amp;quot;...&amp;quot;)&lt;/code&gt; is the one-shot convenience path. The interesting one is &lt;code&gt;thread.turn(...)&lt;/code&gt;, which returns a &lt;code&gt;TurnHandle&lt;/code&gt; exposing &lt;code&gt;steer()&lt;/code&gt;, &lt;code&gt;interrupt()&lt;/code&gt;, and &lt;code&gt;stream()&lt;/code&gt;. &lt;strong&gt;This is exactly the interface you need to build sub-agents and headless automations.&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sub-agent pattern: a parent Python process spawns child Codex threads via &lt;code&gt;thread_start(...)&lt;/code&gt;, isolated by &lt;code&gt;cwd&lt;/code&gt;, &lt;code&gt;sandbox&lt;/code&gt;, &lt;code&gt;model&lt;/code&gt;, and &lt;code&gt;approval_policy&lt;/code&gt;. Each child can carry its own &lt;a class="link" href="https://modelcontextprotocol.io/" target="_blank" rel="noopener"
 &gt;MCP&lt;/a&gt; servers and plug-in scopes.&lt;/li&gt;
&lt;li&gt;Headless automation: CI jobs, scheduled crons, &lt;a class="link" href="https://docs.github.com/en/actions" target="_blank" rel="noopener"
 &gt;GitHub Actions&lt;/a&gt; workers can launch Codex to review PR diffs, dry-run migrations, or triage error logs and route results back into Python.&lt;/li&gt;
&lt;li&gt;Multi-turn thread management: &lt;code&gt;thread_resume(thread_id)&lt;/code&gt; continues prior threads; &lt;code&gt;thread_fork(...)&lt;/code&gt; branches from a shared context. The same evolutionary line as the external session import RPC analyzed in &lt;a class="link" href="https://ice-ice-bear.github.io/posts/2026-05-07-codex-r-claude-code-bridge/" &gt;the codex-r post&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
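&lt;p&gt;The sub-agent fan-out above can be sketched with the stdlib alone. &lt;code&gt;run_codex_task&lt;/code&gt; is a stand-in for &lt;code&gt;codex.thread_start(...).run(...)&lt;/code&gt;, since the real call requires the SDK package and a ChatGPT login:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for codex.thread_start(...).run(...); the real SDK call needs
# openai-codex-app-server-sdk installed and an authenticated ChatGPT account.
def run_codex_task(name, prompt):
    return {"task": name, "final_response": f"[stub reply to: {prompt}]"}

TASKS = [
    ("lint sweep", "Run the linter and summarize violations."),
    ("test triage", "Re-run failing tests and group by root cause."),
    ("doc gen", "Draft docstrings for the public API."),
]

# The parent process fans out one isolated thread per task, mirroring the
# thread_start(model, cwd, sandbox) pattern described above.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_codex_task, name, prompt) for name, prompt in TASKS]
    results = [f.result() for f in futures]

for r in results:
    print(r["task"], "->", r["final_response"])
```

&lt;p&gt;With the real SDK, each task would also pick its own &lt;code&gt;cwd&lt;/code&gt;, &lt;code&gt;sandbox&lt;/code&gt;, and &lt;code&gt;approval_policy&lt;/code&gt; at &lt;code&gt;thread_start&lt;/code&gt; time, so children stay isolated from each other.&lt;/p&gt;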
&lt;p&gt;Anthropic is moving the same direction with its &lt;a class="link" href="https://docs.claude.com/en/api/agent-sdk-overview" target="_blank" rel="noopener"
 &gt;Agent SDK&lt;/a&gt;, but &lt;strong&gt;OpenAI&amp;rsquo;s pitch is &amp;ldquo;one ChatGPT account, one install, and you have headless agents&amp;rdquo;&lt;/strong&gt;. No separate API key, no separate billing, no separate rate-limit dashboard. Your ChatGPT plan is the automation quota.&lt;/p&gt;
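&lt;p&gt;The thread lifecycle itself (&lt;code&gt;thread_start&lt;/code&gt;, then resume, then fork) reduces to simple bookkeeping. &lt;code&gt;InMemoryThreads&lt;/code&gt; below is a stub that only models the state handling; the real methods live on &lt;code&gt;codex_app_server.Codex&lt;/code&gt; and carry full conversation state server-side:&lt;/p&gt;

```python
import itertools

# Minimal stand-in for the SDK's thread lifecycle (thread_start /
# thread_resume / thread_fork). Only the bookkeeping is modeled here,
# so the resume-vs-fork distinction is visible.
class InMemoryThreads:
    def __init__(self):
        self._ids = itertools.count(1)
        self._history = {}  # thread_id mapped to its list of turns

    def thread_start(self, model):
        tid = f"t_{next(self._ids)}"
        self._history[tid] = [f"(started on {model})"]
        return tid

    def thread_resume(self, tid):
        # Continue an existing thread with its prior context intact.
        return self._history[tid]

    def thread_fork(self, tid):
        # Branch: copy the shared context into a fresh thread id.
        new_tid = f"t_{next(self._ids)}"
        self._history[new_tid] = list(self._history[tid])
        return new_tid

threads = InMemoryThreads()
main = threads.thread_start(model="gpt-5.4")
threads.thread_resume(main).append("turn 1: explored ownership rules")
branch = threads.thread_fork(main)  # branch inherits turn 1
threads.thread_resume(branch).append("turn 2: branch-only experiment")
print(main, branch)  # prints: t_1 t_2
```

&lt;p&gt;Forking after a shared setup turn is the cheap way to run several experiments from one warmed-up context instead of paying for the setup turn repeatedly.&lt;/p&gt;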
&lt;pre class="mermaid" style="visibility:hidden"&gt;flowchart LR
 Parent["Python parent process"]
 Parent --&gt;|"thread_start(model, cwd, sandbox)"| Codex1["Codex thread #1 &amp;lt;br/&amp;gt; (lint sweep)"]
 Parent --&gt;|"thread_start(...)"| Codex2["Codex thread #2 &amp;lt;br/&amp;gt; (test triage)"]
 Parent --&gt;|"thread_start(...)"| Codex3["Codex thread #3 &amp;lt;br/&amp;gt; (doc gen)"]
 Codex1 --&gt;|"TurnHandle.stream()"| Parent
 Codex2 --&gt;|"TurnHandle.steer()"| Parent
 Codex3 --&gt;|"final_response"| Parent&lt;/pre&gt;&lt;h2 id="3-policy--whats-allowed-whats-gray"&gt;3. Policy — what&amp;rsquo;s allowed, what&amp;rsquo;s gray
&lt;/h2&gt;&lt;h3 id="individual-users-terms-of-use-effective-2026-01-01"&gt;Individual users (&lt;a class="link" href="https://openai.com/policies/terms-of-use/" target="_blank" rel="noopener"
 &gt;Terms of Use&lt;/a&gt;, effective 2026-01-01)
&lt;/h3&gt;&lt;p&gt;Explicitly &lt;strong&gt;prohibited&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;Automatically or programmatically extract data or Output.&amp;rdquo; — Bulk scripted extraction is a violation.&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Interfere with or disrupt our Services, including circumvent any rate limits or restrictions or bypass any protective measures or safety mitigations.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Use Output to develop models that compete with OpenAI.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Modify, copy, lease, sell or distribute any of our Services.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Explicitly &lt;strong&gt;permitted&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;you &amp;hellip; (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.&amp;rdquo; — &lt;strong&gt;You own Output.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Our software may include open source software that is governed by its own licenses.&amp;rdquo; — Codex SDK itself ships open source.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Gray area:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Sub-agents and scheduled automation&lt;/strong&gt;: ToU forbids &amp;ldquo;automatic extraction&amp;rdquo; but doesn&amp;rsquo;t address scheduled coding tasks. The help page lists &lt;a class="link" href="https://developers.openai.com/codex/app/automations" target="_blank" rel="noopener"
 &gt;Automations&lt;/a&gt; as a first-class feature, so automation through OpenAI-provided surfaces is intended use. Driving Codex from external queues (Celery, Airflow) sits closer to the rate-limit-circumvention line — sustained heavy use risks being read that way.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Output redistribution&lt;/strong&gt;: you own your Output, but the &amp;ldquo;Similarity of content&amp;rdquo; clause is explicit: similar outputs generated for other users aren&amp;rsquo;t yours.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="business-users-may-2025-business-terms"&gt;Business users (&lt;a class="link" href="https://openai.com/policies/may-2025-business-terms/" target="_blank" rel="noopener"
 &gt;May 2025 Business Terms&lt;/a&gt;)
&lt;/h3&gt;&lt;p&gt;Key differences:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;§4.1&lt;/strong&gt;: Customer retains Input ownership and owns Output; OpenAI assigns its right, title, and interest.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;§4.2&lt;/strong&gt;: &amp;ldquo;OpenAI will not use Customer Content to develop or improve the Services, unless Customer explicitly agrees to such use.&amp;rdquo; — &lt;strong&gt;No training by default for business users.&lt;/strong&gt; The help page reaffirms this.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;§3.3 Restrictions&lt;/strong&gt;: (d) no Reverse Engineering, (e) no using Output to train competing models (except Permitted Exception), (f) no extraction outside Services-permitted paths, (g) no API-key resale, (h) no rate-limit circumvention.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;§1.4 Affiliates&lt;/strong&gt;: affiliates may use the workspace; separate billing requires a separate Order Form.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;§9.3 Feedback&lt;/strong&gt;: feedback can be used by OpenAI without restriction.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Business terms are dramatically more automation-friendly. &lt;strong&gt;§2.2 explicitly grants the right to &amp;ldquo;integrate the Services into Customer Applications&amp;rdquo;&lt;/strong&gt; — embedding SDK-based headless agents in internal tooling is unambiguously allowed. But &lt;strong&gt;§3.3(i) &amp;ldquo;violate or circumvent Usage Limits or otherwise configure the Services to avoid Usage Limits&amp;rdquo;&lt;/strong&gt; is a hard stop — round-robinning across multiple accounts to dodge a workspace quota is a violation.&lt;/p&gt;
&lt;h3 id="one-liner-summary"&gt;One-liner summary
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Personal ChatGPT Plus + SDK automation&lt;/strong&gt; → fine within intended use. Bulk external data extraction / rate-limit circumvention / training competitors is forbidden.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Company workspace + Codex integrated into internal tools&lt;/strong&gt; → explicitly permitted by §2.2. No training on your content by default.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Redistributing Codex Output externally&lt;/strong&gt; → you own it, so yes; but no OpenAI branding misuse, no passing it off as human-written, no using it to train competing models.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="4-which-workflow-when"&gt;4. Which workflow when
&lt;/h2&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Scenario&lt;/th&gt;
 &lt;th&gt;Recommended tool&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Inline IDE completion + refactor + GitHub flow&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://cursor.com/" target="_blank" rel="noopener"
 &gt;Cursor&lt;/a&gt; or &lt;a class="link" href="https://developers.openai.com/codex/ide" target="_blank" rel="noopener"
 &gt;Codex IDE extension&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Terminal-centric agent flow, long multi-turn sessions&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://www.anthropic.com/claude-code" target="_blank" rel="noopener"
 &gt;Claude Code&lt;/a&gt; or &lt;a class="link" href="https://developers.openai.com/codex/cli" target="_blank" rel="noopener"
 &gt;Codex CLI&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Already on ChatGPT Plus/Pro, want single billing&lt;/td&gt;
 &lt;td&gt;Codex CLI + IDE — same ChatGPT account&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Embedded in Anthropic ecosystem (Claude Code sessions)&lt;/td&gt;
 &lt;td&gt;Claude Code primary + &lt;a class="link" href="https://github.com/thedalbee/codex-r" target="_blank" rel="noopener"
 &gt;codex-r&lt;/a&gt; for migration&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Python headless / CI / sub-agent orchestration&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://github.com/openai/codex/tree/main/sdk/python" target="_blank" rel="noopener"
 &gt;Codex Python SDK&lt;/a&gt; or &lt;a class="link" href="https://docs.claude.com/en/api/agent-sdk-overview" target="_blank" rel="noopener"
 &gt;Anthropic Agent SDK&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Enterprise compliance + unified usage logs&lt;/td&gt;
 &lt;td&gt;Codex (Compliance API + RBAC + workspace controls)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Free entry point&lt;/td&gt;
 &lt;td&gt;Codex Free/Go (limited time) or Claude Code free tier&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Stacking tools is fine.&lt;/strong&gt; Cursor for inline edits, Codex CLI in a separate terminal for multi-file work, Codex SDK in a background cron reviewing PR diffs headlessly. &lt;strong&gt;OpenAI&amp;rsquo;s whole point in unifying four surfaces under one ChatGPT account is exactly this composition&lt;/strong&gt; — one bill, IDE + terminal + headless.&lt;/p&gt;
&lt;h2 id="insight"&gt;Insight
&lt;/h2&gt;&lt;p&gt;The real story isn&amp;rsquo;t a pricing-page change. It&amp;rsquo;s that &lt;strong&gt;OpenAI collapsed billing, auth, logs, and automation for coding agents into a single ChatGPT plane&lt;/strong&gt;. The &amp;ldquo;pay for CLI separately, IDE separately, web separately&amp;rdquo; era is over. Anthropic is consolidating the same way (&lt;a class="link" href="https://www.anthropic.com/claude-code" target="_blank" rel="noopener"
 &gt;Claude.ai account = Claude Code account&lt;/a&gt;), but OpenAI gets there first against a much larger installed base.&lt;/p&gt;
&lt;p&gt;The Python SDK landing in the same week is not coincidental. The &lt;code&gt;thread_start&lt;/code&gt; / &lt;code&gt;thread_fork&lt;/code&gt; / &lt;code&gt;TurnHandle.steer&lt;/code&gt; triad is structurally the same abstraction you find in the &lt;a class="link" href="https://docs.claude.com/en/api/agent-sdk-overview" target="_blank" rel="noopener"
 &gt;Anthropic Agent SDK&lt;/a&gt; and &lt;a class="link" href="https://python.langchain.com/docs/concepts/multi_agent/" target="_blank" rel="noopener"
 &gt;LangChain&amp;rsquo;s multi-agent patterns&lt;/a&gt;, but layered on ChatGPT auth. &lt;strong&gt;&amp;ldquo;One ChatGPT plan, headless automation, sub-agent orchestration&amp;rdquo;&lt;/strong&gt; is a GTM weapon that routes around API-key issuance, separate billing, and separate rate-limit management.&lt;/p&gt;
&lt;p&gt;On policy: business terms openly authorize automation, SDK use, and embedded tooling while defaulting to no-training. Individual ToU&amp;rsquo;s &amp;ldquo;automatic extraction&amp;rdquo; clause creates ambiguity, but automation through OpenAI&amp;rsquo;s own Automations / SDK / app-server surfaces is the intended path. &lt;strong&gt;If you&amp;rsquo;re embedding it in company tools, a workspace plan is the correct answer on every axis&lt;/strong&gt; — policy, logging, and rate limits.&lt;/p&gt;
&lt;p&gt;The post-announcement axis of competition for coding agents shifts from &lt;strong&gt;&amp;ldquo;which tool is smarter&amp;rdquo;&lt;/strong&gt; to &lt;strong&gt;&amp;ldquo;which tool collapses my auth, billing, logs, and automation surfaces with the least friction&amp;rdquo;&lt;/strong&gt;. Codex&amp;rsquo;s four-surface unification plus the Python SDK is OpenAI staking that ground first.&lt;/p&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Official docs&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://help.openai.com/en/articles/11369540-codex-in-chatgpt" target="_blank" rel="noopener"
 &gt;Using Codex with your ChatGPT plan&lt;/a&gt; — the help article folding Codex into ChatGPT plans&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://developers.openai.com/codex/" target="_blank" rel="noopener"
 &gt;Codex developer portal&lt;/a&gt; — clients and models&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/openai/codex/tree/main/sdk/python" target="_blank" rel="noopener"
 &gt;Codex Python SDK&lt;/a&gt; — the experimental &lt;code&gt;openai-codex-app-server-sdk&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://developers.openai.com/codex/cli" target="_blank" rel="noopener"
 &gt;Codex CLI&lt;/a&gt; / &lt;a class="link" href="https://developers.openai.com/codex/app" target="_blank" rel="noopener"
 &gt;Codex App&lt;/a&gt; / &lt;a class="link" href="https://developers.openai.com/codex/ide" target="_blank" rel="noopener"
 &gt;Codex IDE&lt;/a&gt; / &lt;a class="link" href="https://chatgpt.com/codex" target="_blank" rel="noopener"
 &gt;Codex Web&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Policy pages&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://openai.com/policies/terms-of-use/" target="_blank" rel="noopener"
 &gt;OpenAI Terms of Use&lt;/a&gt; — effective 2026-01-01, individual ChatGPT users&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://openai.com/policies/may-2025-business-terms/" target="_blank" rel="noopener"
 &gt;May 2025 Business Terms&lt;/a&gt; — API, Enterprise, Business&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://openai.com/policies/usage-policies/" target="_blank" rel="noopener"
 &gt;Usage Policies&lt;/a&gt; — prohibited-use catalog&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://openai.com/policies/privacy-policy/" target="_blank" rel="noopener"
 &gt;Privacy Policy&lt;/a&gt; — data handling&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Related blog posts&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://ice-ice-bear.github.io/posts/2026-05-07-codex-r-claude-code-bridge/" &gt;CODEX-R analysis&lt;/a&gt; — micro-skill that imports Claude Code sessions into Codex&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://ice-ice-bear.github.io/posts/2026-05-07-openai-2026-05-07-announcement-digest/" &gt;OpenAI 2026-05-07 digest&lt;/a&gt; — five announcements landed the same week&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Competitors / related tools&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://www.anthropic.com/claude-code" target="_blank" rel="noopener"
 &gt;Anthropic Claude Code&lt;/a&gt; + &lt;a class="link" href="https://docs.claude.com/en/api/agent-sdk-overview" target="_blank" rel="noopener"
 &gt;Agent SDK&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://cursor.com/" target="_blank" rel="noopener"
 &gt;Cursor&lt;/a&gt; — IDE-as-product coding agent&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/features/copilot" target="_blank" rel="noopener"
 &gt;GitHub Copilot&lt;/a&gt; — inline IDE assistant&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://modelcontextprotocol.io/" target="_blank" rel="noopener"
 &gt;Model Context Protocol&lt;/a&gt; — agent standard layer&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>