
LLM API Comparison: Anthropic vs OpenAI

Observed through greyproxy MITM traffic from Claude Code and OpenCode (March 2026). This documents the wire format as seen by the proxy, not the full API specification.

Scope: Anthropic Messages API (/v1/messages) and OpenAI Responses API (/v1/responses). OpenAI Chat Completions (/v1/chat/completions) is not covered yet.

Endpoints

| | Anthropic | OpenAI |
|---|---|---|
| URL | POST https://api.anthropic.com/v1/messages | POST https://api.openai.com/v1/responses |
| Query params | ?beta=true (optional) | None observed |
| Auth header | x-api-key: sk-ant-... | Authorization: Bearer sk-... |
| Streaming | stream: true in body | stream: true in body |
| Response type | text/event-stream (SSE) | text/event-stream (SSE) |
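The auth scheme is the main integration difference at this layer. A minimal sketch of per-provider request headers; the `auth_headers` helper is hypothetical, and the anthropic-version value is the commonly documented one, not something observed in this capture:

```python
# Endpoint URLs and auth headers from the table above.
ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"
OPENAI_URL = "https://api.openai.com/v1/responses"

def auth_headers(provider: str, api_key: str) -> dict:
    """Build per-provider auth headers (illustrative sketch).

    Anthropic uses a custom x-api-key header plus an anthropic-version
    header; OpenAI uses standard Bearer auth.
    """
    if provider == "anthropic":
        return {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    return {"Authorization": f"Bearer {api_key}"}
```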

Request Body Structure

| Field | Anthropic | OpenAI |
|---|---|---|
| Model | model: "claude-opus-4-6" | model: "gpt-5.1" |
| System prompt | Separate top-level system array of {type, text} blocks | {role: "developer", content: "..."} item inside input[] |
| Messages | messages[] with uniform {role, content} items | input[] with heterogeneous items (see below) |
| Tools | tools[] with {name, description, input_schema} | tools[] with {type: "function", name, description, parameters, strict} |
| Max tokens | max_tokens: 16384 | max_output_tokens: 32000 |
| Thinking/reasoning | thinking: {type: "enabled", budget_tokens: N} | reasoning: {effort: "medium", summary: "auto"} |
| Streaming config | stream: true | stream: true |
| Caching | Implicit via cache_control on content blocks | prompt_cache_key: "ses_XXX" |
| Tool choice | tool_choice: {type: "auto"} | tool_choice: "auto" |
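Combined into whole request bodies, the rows above look roughly like this. A sketch only: the model names and token limits are just the observed example values, and optional fields (tools, caching, tool_choice) are omitted:

```python
# Equivalent minimal streaming requests for each API, assembled from
# the fields in the table above. Values are illustrative.
anthropic_body = {
    "model": "claude-opus-4-6",
    "max_tokens": 16384,
    "stream": True,
    "system": [{"type": "text", "text": "You are a coding agent."}],
    "messages": [{"role": "user", "content": "Hello"}],
}

openai_body = {
    "model": "gpt-5.1",
    "max_output_tokens": 32000,
    "stream": True,
    # No top-level system field: the system prompt rides inside input[]
    # as a developer item.
    "input": [
        {"role": "developer", "content": "You are a coding agent."},
        {"role": "user", "content": [{"type": "input_text", "text": "Hello"}]},
    ],
}
```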

Message/Input Format

This is the biggest structural difference between the two APIs.

Anthropic: messages[]

All items have {role, content}. Content is either a string or an array of typed blocks.

```json
{
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": [
      {"type": "thinking", "thinking": "..."},
      {"type": "text", "text": "Hi there!"},
      {"type": "tool_use", "id": "toolu_XXX", "name": "Bash", "input": {"command": "ls"}}
    ]},
    {"role": "user", "content": [
      {"type": "tool_result", "tool_use_id": "toolu_XXX", "content": "file1.txt\nfile2.txt"}
    ]}
  ]
}
```

OpenAI: input[]

Items are heterogeneous. Some have role, some have type, some have both.

```json
{
  "input": [
    {"role": "developer", "content": "You are a coding agent..."},
    {"role": "user", "content": [{"type": "input_text", "text": "Hello"}]},
    {"type": "reasoning", "encrypted_content": "..."},
    {"type": "function_call", "call_id": "call_XXX", "name": "bash", "arguments": "{\"command\":\"ls\"}"},
    {"type": "function_call_output", "call_id": "call_XXX", "output": "file1.txt\nfile2.txt"},
    {"type": "message", "role": "assistant", "content": [{"type": "output_text", "text": "Here are the files."}]}
  ]
}
```

Message Type Mapping

| Concept | Anthropic | OpenAI |
|---|---|---|
| System prompt | system: [{type: "text", text: "..."}] (top-level) | {role: "developer", content: "..."} (in input[]) |
| User message | {role: "user", content: "text"} or {role: "user", content: [{type: "text", text: "..."}]} | {role: "user", content: [{type: "input_text", text: "..."}]} |
| Assistant text | {role: "assistant", content: [{type: "text", text: "..."}]} | {type: "message", role: "assistant", content: [{type: "output_text", text: "..."}]} |
| Thinking | {type: "thinking", thinking: "..."} content block | {type: "reasoning", encrypted_content: "..."} top-level item |
| Tool call | {type: "tool_use", id: "toolu_XXX", name: "Read", input: {...}} content block inside an assistant message | {type: "function_call", call_id: "call_XXX", name: "read", arguments: "{...}"} top-level item |
| Tool result | {type: "tool_result", tool_use_id: "toolu_XXX", content: "..."} content block inside a user message | {type: "function_call_output", call_id: "call_XXX", output: "..."} top-level item |

Key differences:

  • Anthropic nests tool calls inside assistant messages and tool results inside user messages
  • OpenAI places them as top-level items in the input[] array
  • Anthropic tool arguments are a JSON object; OpenAI stringifies them
  • OpenAI reasoning is opaque (encrypted); Anthropic thinking is plaintext (when enabled)
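As a sketch of what those differences mean in practice, here is a hypothetical one-way translator from an Anthropic message to OpenAI input[] items. It covers only the block types in the mapping table above (no images, citations, or cache_control), and lowercases tool names per the Tool Names section:

```python
import json

def anthropic_to_openai_items(message: dict) -> list:
    """Convert one Anthropic message into OpenAI input[] items (sketch)."""
    items = []
    content = message["content"]
    if isinstance(content, str):
        content = [{"type": "text", "text": content}]
    for block in content:
        if block["type"] == "text":
            if message["role"] == "assistant":
                items.append({"type": "message", "role": "assistant",
                              "content": [{"type": "output_text",
                                           "text": block["text"]}]})
            else:
                items.append({"role": "user",
                              "content": [{"type": "input_text",
                                           "text": block["text"]}]})
        elif block["type"] == "tool_use":
            # Un-nest into a top-level item; OpenAI stringifies arguments.
            items.append({"type": "function_call",
                          "call_id": block["id"],
                          "name": block["name"].lower(),
                          "arguments": json.dumps(block["input"])})
        elif block["type"] == "tool_result":
            items.append({"type": "function_call_output",
                          "call_id": block["tool_use_id"],
                          "output": block["content"]})
    return items
```

Note the opposite direction is lossy: Anthropic thinking blocks have no plaintext counterpart in OpenAI's encrypted_content items.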

SSE Response Events

Anthropic

| Event | Description |
|---|---|
| message_start | Response metadata (model, usage) |
| content_block_start | New block: {type: "text"}, {type: "tool_use", name: "..."}, {type: "thinking"} |
| content_block_delta | Incremental content: text_delta, input_json_delta, thinking_delta |
| content_block_stop | Block finished |
| message_delta | Final usage stats, stop reason |
| message_stop | End of response |

OpenAI

| Event | Description |
|---|---|
| response.created | Response metadata (id, model) |
| response.in_progress | Processing started |
| response.output_item.added | New output item: {type: "reasoning"}, {type: "function_call", name: "..."}, {type: "message"} |
| response.output_text.delta | Streamed text content |
| response.function_call_arguments.delta | Streamed tool call arguments |
| response.function_call_arguments.done | Complete tool call arguments |
| response.reasoning_summary_text.delta | Streamed reasoning summary |
| response.output_item.done | Output item finished |
| response.completed | Final event with full response object and usage |

SSE Event Mapping

| Concept | Anthropic | OpenAI |
|---|---|---|
| Text streaming | content_block_delta with text_delta | response.output_text.delta |
| Tool call start | content_block_start with type: "tool_use" | response.output_item.added with type: "function_call" |
| Tool call args | content_block_delta with input_json_delta | response.function_call_arguments.delta |
| Tool call complete | content_block_stop | response.function_call_arguments.done |
| Thinking | content_block_delta with thinking_delta | response.reasoning_summary_text.delta |
| End of response | message_stop | response.completed |
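A proxy that wants one internal event vocabulary can reduce both streams to a small lookup, along the lines of this sketch (the unified names on the right are hypothetical, not greyproxy's actual ones):

```python
# Map each provider's SSE event names to a shared internal vocabulary,
# following the mapping table above.
ANTHROPIC_EVENTS = {
    "content_block_start": "item_start",
    "content_block_delta": "delta",   # text_delta / input_json_delta / thinking_delta
    "content_block_stop": "item_done",
    "message_stop": "done",
}

OPENAI_EVENTS = {
    "response.output_item.added": "item_start",
    "response.output_text.delta": "delta",
    "response.function_call_arguments.delta": "delta",
    "response.reasoning_summary_text.delta": "delta",
    "response.function_call_arguments.done": "item_done",
    "response.completed": "done",
}

def normalize_event(provider: str, event_name: str) -> str:
    table = ANTHROPIC_EVENTS if provider == "anthropic" else OPENAI_EVENTS
    return table.get(event_name, "other")
```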

Session and Identity

| | Anthropic | OpenAI |
|---|---|---|
| Session ID location | metadata.user_id field in body | prompt_cache_key field in body |
| Session ID format | user_HASH_account_UUID_session_UUID (36-char UUIDs) | ses_XXXX (alphanumeric, ~30 chars) |
| Also in headers | No | Session_id header (same value as prompt_cache_key) |
| Client identifier | anthropic-version header, User-Agent | Originator header (e.g. opencode), User-Agent |
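Extracting a comparable session ID from each body could look like this sketch; the `extract_session_id` helper is hypothetical, and the regex assumes the literal session_ prefix shown in the format column:

```python
import re

def extract_session_id(provider: str, body: dict):
    """Pull the session identifier out of a request body (sketch).

    Anthropic embeds it in metadata.user_id as
    user_HASH_account_UUID_session_UUID; OpenAI sends it directly
    as prompt_cache_key.
    """
    if provider == "anthropic":
        user_id = body.get("metadata", {}).get("user_id", "")
        m = re.search(r"session_([0-9a-f-]{36})", user_id)
        return m.group(1) if m else None
    return body.get("prompt_cache_key")
```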

Tool Names

Tool names differ between the two clients: Claude Code uses PascalCase names, OpenCode uses lowercase, and a few tools have entirely different names (e.g. Agent vs task).

| Function | Anthropic (Claude Code) | OpenAI (OpenCode) |
|---|---|---|
| Read file | Read | read |
| Edit file | Edit | apply_patch |
| Write file | Write | (via apply_patch) |
| Run command | Bash | bash |
| Search content | Grep | grep |
| Find files | Glob | glob |
| Spawn subagent | Agent | task |
| Ask user | AskUserQuestion | question |
| Web fetch | WebFetch | webfetch |
| Web search | WebSearch | (not observed) |
| Todo list | TodoWrite | todowrite |
| Skills/commands | Skill | skill |
| Tool discovery | ToolSearch | (not observed) |
| Notebook | NotebookEdit | (not observed) |
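For cross-referencing traffic from both clients, the observed rows reduce to a lookup table; "(not observed)" and "(via apply_patch)" rows are omitted:

```python
# Claude Code -> OpenCode tool name map, from the table above.
CLAUDE_TO_OPENCODE = {
    "Read": "read",
    "Edit": "apply_patch",
    "Bash": "bash",
    "Grep": "grep",
    "Glob": "glob",
    "Agent": "task",
    "AskUserQuestion": "question",
    "WebFetch": "webfetch",
    "TodoWrite": "todowrite",
    "Skill": "skill",
}
```

Note that simple lowercasing only works for about half the entries; Edit, Agent, and AskUserQuestion map to differently named tools.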

Subagent / Task Spawning

| | Anthropic | OpenAI |
|---|---|---|
| Tool name | Agent | task |
| How it works | Agent tool call with prompt and description fields | task tool call with prompt and description fields |
| Session sharing | Subagent shares the same session UUID as the parent | Subagent gets its own prompt_cache_key |
| Parent-child link | Same session ID; distinguished by system prompt length (main >10K chars, subagent ~4-5K) | function_call_output contains task_id: ses_XXX referencing the subagent's session |
| Classification | System prompt length threshold | Presence of management tools (task, question, todowrite) indicates main |

Thread Classification Heuristics

Used by greyproxy to distinguish main conversations from subagents and utilities.

Anthropic

Based on system prompt length (system[] blocks total character count):

| System prompt length | Tools | Classification |
|---|---|---|
| > 10,000 chars | Any | main (Claude Code primary conversation) |
| > 1,000 chars | Any | subagent |
| > 100 chars | <= 2 | mcp (MCP utility, discarded) |
| <= 100 chars | Any | utility (discarded) |
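The Anthropic thresholds fit in a short cascade. A sketch (the function name is mine, not greyproxy's; the one combination the table leaves uncovered, >100 chars with more than 2 tools, falls through to utility here):

```python
def classify_anthropic(system_chars: int, tool_count: int) -> str:
    """Classify an Anthropic request thread by system prompt length.

    Thresholds taken directly from the table above.
    """
    if system_chars > 10_000:
        return "main"
    if system_chars > 1_000:
        return "subagent"
    if system_chars > 100 and tool_count <= 2:
        return "mcp"
    return "utility"
```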

OpenAI

Based on tool list contents (system prompt length is identical for main and subagents):

| Condition | Classification |
|---|---|
| Tools include task, question, or todowrite | main (OpenCode primary conversation) |
| Has tools but no management tools | subagent |
| No tools | utility (e.g. title generator using gpt-5-nano) |
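The OpenAI side keys off the tool list instead. A sketch with a hypothetical function name:

```python
# Management tools that only the main OpenCode conversation carries.
MANAGEMENT_TOOLS = {"task", "question", "todowrite"}

def classify_openai(tool_names) -> str:
    """Classify an OpenAI request thread by its tool list contents."""
    names = set(tool_names)
    if names & MANAGEMENT_TOOLS:
        return "main"
    if names:
        return "subagent"
    return "utility"
```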

Usage / Token Reporting

| Field | Anthropic | OpenAI |
|---|---|---|
| Location | message_start and message_delta events | response.completed event -> response.usage |
| Input tokens | usage.input_tokens | usage.input_tokens |
| Output tokens | usage.output_tokens | usage.output_tokens |
| Cache tokens | usage.cache_read_input_tokens, usage.cache_creation_input_tokens | usage.input_tokens_details.cached_tokens |
| Thinking tokens | Not separately reported | usage.output_tokens_details.reasoning_tokens |
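Flattening the two usage shapes into one record could look like this sketch (helper and output field names are mine; field paths follow the table above):

```python
def normalize_usage(provider: str, usage: dict) -> dict:
    """Flatten provider-specific usage objects into one shape (sketch)."""
    if provider == "anthropic":
        return {
            "input": usage.get("input_tokens", 0),
            "output": usage.get("output_tokens", 0),
            "cache_read": usage.get("cache_read_input_tokens", 0),
            "cache_write": usage.get("cache_creation_input_tokens", 0),
            "reasoning": None,  # not separately reported
        }
    details_in = usage.get("input_tokens_details", {})
    details_out = usage.get("output_tokens_details", {})
    return {
        "input": usage.get("input_tokens", 0),
        "output": usage.get("output_tokens", 0),
        "cache_read": details_in.get("cached_tokens", 0),
        "cache_write": 0,  # no separate cache-write figure observed
        "reasoning": details_out.get("reasoning_tokens", 0),
    }
```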