Confirm this is an issue with the Python library and not an underlying OpenAI API
Describe the bug
Accessing Response.output_text raises a TypeError when one or more output_text content blocks carry text=None: the property aggregates the collected texts with "".join(texts), so a single None block poisons the join.
This occurs with the Responses API in openai-python and causes downstream frameworks (e.g. LangChain / LangGraph streaming agents) to crash during normal execution.
The failure breaks the implied contract that output_text is a safe, aggregated string representation of the model output.
Observed error:
```
TypeError: sequence item 0: expected str instance, NoneType found
```

Full traceback:

```
Traceback (most recent call last):
  File "agent/agent.py", line 255, in stream
    async for item in self.graph.astream(...)
  File "langgraph/pregel/main.py", line 2971, in astream
    async for _ in runner.atick(...)
  File "langgraph/pregel/_runner.py", line 304, in atick
    await arun_with_retry(...)
  File "langgraph/pregel/_retry.py", line 137, in arun_with_retry
    return await task.proc.ainvoke(task.input, config)
  File "langgraph/_internal/_runnable.py", line 705, in ainvoke
    input = await asyncio.create_task(...)
  File "langgraph/_internal/_runnable.py", line 473, in ainvoke
    ret = await self.afunc(*args, **kwargs)
  File "langchain/agents/factory.py", line 1188, in amodel_node
    response = await awrap_model_call_handler(...)
  File "langchain/agents/factory.py", line 277, in final_normalized
    final_result = await result(request, handler)
  File "langchain/agents/factory.py", line 261, in composed
    outer_result = await outer(request, inner_handler)
  File "langchain/agents/middleware/todo.py", line 228, in awrap_model_call
    return await handler(request.override(...))
  File "langchain/agents/factory.py", line 257, in inner_handler
    inner_result = await inner(req, handler)
  File "deepagents/middleware/filesystem.py", line 975, in awrap_model_call
    return await handler(request)
  File "deepagents/middleware/subagents.py", line 490, in awrap_model_call
    return await handler(request.override(...))
  File "langchain_anthropic/middleware/prompt_caching.py", line 140, in awrap_model_call
    return await handler(request)
  File "langchain/agents/factory.py", line 1156, in _execute_model_async
    output = await model_.ainvoke(messages)
  File "langchain_core/runnables/base.py", line 5570, in ainvoke
    return await self.bound.ainvoke(...)
  File "langchain_core/language_models/chat_models.py", line 425, in ainvoke
    llm_result = await self.agenerate_prompt(...)
  File "langchain_core/language_models/chat_models.py", line 1132, in agenerate_prompt
    return await self.agenerate(...)
  File "langchain_core/language_models/chat_models.py", line 1090, in agenerate
    raise exceptions[0]
  File "langchain_core/language_models/chat_models.py", line 1343, in _agenerate_with_cache
    result = await self._agenerate(...)
  File "langchain_openai/chat_models/base.py", line 1631, in _agenerate
    return _construct_lc_result_from_responses_api(...)
  File "langchain_openai/chat_models/base.py", line 4338, in _construct_lc_result_from_responses_api
    output_text = _get_output_text(response)
  File "langchain_openai/chat_models/base.py", line 4192, in _get_output_text
    if hasattr(response, "output_text"):
  File "openai/types/responses/response.py", line 316, in output_text
    return "".join(texts)
TypeError: sequence item 0: expected str instance, NoneType found
```
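The failing pattern can be illustrated without the real library types. The sketch below uses plain stand-in dataclasses (not the actual openai-python classes) to show why a single `text=None` block breaks the join, and what a None-filtering aggregation would return instead:

```python
# Stand-in illustration of the failing join in Response.output_text.
# Block is a hypothetical stand-in, NOT the real openai-python content type.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Block:
    type: str
    text: Optional[str]


def aggregate_naive(blocks: list[Block]) -> str:
    # Mirrors the reported pattern: collect texts, then join with no None check.
    texts = [b.text for b in blocks if b.type == "output_text"]
    return "".join(texts)  # raises TypeError if any text is None


def aggregate_defensive(blocks: list[Block]) -> str:
    # Proposed behavior: skip content blocks whose text is None.
    return "".join(
        b.text for b in blocks if b.type == "output_text" and b.text is not None
    )


blocks = [Block("output_text", None), Block("output_text", "hello")]

try:
    aggregate_naive(blocks)
except TypeError as exc:
    print(f"TypeError: {exc}")  # the same message as in the traceback above

print(aggregate_defensive(blocks))  # prints "hello"
```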
To Reproduce
- Run the provided minimal async reproduction script using:
  - langchain-openai with the Responses API
  - a self-hosted gpt-oss-120b OpenAI-compatible model via litellm
- Initialize a deep agent with:
  - tool support
  - in-memory checkpointing and store
  - reasoning enabled
- Stream a single agent turn via `graph.astream(..., stream_mode="updates")`
- During streaming, `response.output_text` is accessed internally
- Execution fails with:

```
TypeError: sequence item 0: expected str instance, NoneType found
```
Code snippets
"""Minimal reproduction script for initializing a deep agent with memory and reasoning."""
import asyncio
import os
import uuid
from deepagents import create_deep_agent
from dotenv import load_dotenv
from langchain.tools import tool
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.store.memory import InMemoryStore
from pydantic import SecretStr
load_dotenv()
@tool
def fun_fact_about_satellites() -> str:
"""Return a fun fact about satellites."""
return "Satellites can be used to monitor environmental changes on Earth in real-time."
async def main() -> None:
"""Create a deep agent, attach in-memory checkpointing/store, and run a single turn."""
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
raise ValueError("OPENAI_API_KEY environment variable is not set.")
base_url = os.getenv("OPENAI_API_BASE_URL")
model = ChatOpenAI(
model="gpt-oss-120b",
temperature=0,
api_key=SecretStr(api_key),
base_url=base_url,
reasoning={
"effort": "medium",
"summary": "auto",
},
)
graph = create_deep_agent(
tools=[fun_fact_about_satellites],
model=model,
system_prompt="You are a helpful assistant that provides concise and accurate information.",
checkpointer=MemorySaver(),
store=InMemoryStore(),
)
thread_id = str(uuid.uuid4())
prompt = "Give me a one-line fun fact about satellites."
inputs = {"messages": [HumanMessage(content=prompt)]}
async for chunk in graph.astream(
input=inputs,
config={"configurable": {"thread_id": thread_id}},
stream_mode="updates",
):
print(chunk)
if __name__ == "__main__":
asyncio.run(main())
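Until the library guards against None text blocks, a user-side wrapper can degrade gracefully instead of crashing. The helper below is a hypothetical sketch (`safe_output_text` and the fake response objects are illustrative, not part of any library): it tries the property first and, on TypeError, falls back to a manual None-filtering aggregation over the response's output items:

```python
# Hypothetical user-side guard: safe_output_text is illustrative and NOT a
# library API. It falls back to a None-filtering join when output_text raises.
from types import SimpleNamespace


def safe_output_text(response) -> str:
    try:
        return response.output_text
    except TypeError:
        parts = []
        for item in getattr(response, "output", []) or []:
            for block in getattr(item, "content", []) or []:
                if getattr(block, "type", None) == "output_text" and block.text is not None:
                    parts.append(block.text)
        return "".join(parts)


# Minimal stand-in response that reproduces the unguarded join, so the
# fallback path can be exercised without a live model call.
class FakeResponse:
    def __init__(self, output):
        self.output = output

    @property
    def output_text(self):
        texts = [
            b.text
            for item in self.output
            for b in item.content
            if b.type == "output_text"
        ]
        return "".join(texts)  # raises TypeError when a text is None


item = SimpleNamespace(
    content=[
        SimpleNamespace(type="output_text", text=None),
        SimpleNamespace(type="output_text", text="fun fact"),
    ]
)
print(safe_output_text(FakeResponse([item])))  # prints "fun fact"
```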
OS
Windows
Python version
Python 3.12.3
Library version
openai==2.14.0, langchain-openai==1.1.6, langchain==1.2.3, langgraph==1.0.5, pydantic==2.12.5, deepagents==0.3.4, python-dotenv==1.2.1