All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Hash options for `image_generation=`: Pass a `Hash` of tool options (e.g., `chat.image_generation = { size: "1536x1024", quality: "low", model: "gpt-image-2" }`) to configure the OpenAI Responses API image generation tool. Useful for selecting a specific GPT Image model, changing size/quality, forcing generate-vs-edit mode with `action:`, masked edits via `input_image_mask:`, and the rest of the options from OpenAI's image generation docs. `chat.image_generation = true` still works and continues to use OpenAI's defaults.
- `image_generation=` validates its argument: Only `true`, `false`, `nil`, or a `Hash` are now accepted. `nil` is normalized to `false`. Any other value raises `ArgumentError`. (Technically breaking: values like truthy strings or integers that silently worked before now raise.)
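As a sketch, the accepted-value rule above could be implemented like this (hypothetical `normalize_image_generation` helper; the gem's internal method name may differ):

```ruby
# Hypothetical sketch of the validation described above: only true, false,
# nil, or a Hash are accepted, and nil is normalized to false.
def normalize_image_generation(value)
  case value
  when true, false then value
  when nil then false
  when Hash then value
  else
    raise ArgumentError,
      "image_generation must be true, false, nil, or a Hash (got #{value.inspect})"
  end
end
```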
- Bumped `openai` runtime dependency from `~> 0.43` to `~> 0.59`: Picks up the typed `action` field on the image_generation tool, `gpt-5.4` mini/nano model slugs, and assorted upstream SDK transport fixes. No gem code changes were required for the bump — every method we call kept the same signature.
- Saved images now use the correct file extension: Generated images are sniffed with Marcel and saved as `.png`, `.jpg`, or `.webp` based on the decoded bytes. Previously every image was written to `001.png` regardless of `output_format`, which produced misnamed files when callers asked for JPEG or WebP.
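For illustration only (the gem uses Marcel for sniffing; this hand-rolled magic-byte check merely approximates that behavior):

```ruby
# Illustrative sketch of extension selection from decoded image bytes.
# The gem delegates this to Marcel; these magic-number checks are a rough stand-in.
def image_extension(bytes)
  if bytes.start_with?("\x89PNG".b)
    ".png"
  elsif bytes.start_with?("\xFF\xD8\xFF".b)
    ".jpg"
  elsif bytes[0, 4] == "RIFF" && bytes[8, 4] == "WEBP"
    ".webp"
  else
    ".png" # fall back rather than guess at unknown formats
  end
end
```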
- Proxy-aware API key resolution: The gem no longer falls back from `AICHAT_PROXY_KEY` to `OPENAI_API_KEY`. When proxy is off, it uses `OPENAI_API_KEY`. When proxy is on (`AICHAT_PROXY=true`), it uses `AICHAT_PROXY_KEY`. Missing the expected variable raises a `KeyError` with guidance on which variable to create.
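A minimal sketch of this resolution rule (hypothetical `resolve_api_key` helper; `ENV.fetch` raises the `KeyError` naturally):

```ruby
# Hypothetical sketch of the no-fallback rule: each mode reads exactly one
# environment variable, and a missing variable raises KeyError with guidance.
def resolve_api_key(proxy:)
  var = proxy ? "AICHAT_PROXY_KEY" : "OPENAI_API_KEY"
  ENV.fetch(var) do
    raise KeyError, "Set the #{var} environment variable (see the README for where to get a key)"
  end
end
```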
- `proxy:` keyword argument on `initialize`: Override the `AICHAT_PROXY` env default at construction time, e.g. `AI::Chat.new(proxy: true)`.
- Case-insensitive `AICHAT_PROXY`: `TRUE`, `True`, etc. are now accepted.
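A sketch of the case-insensitive parsing (hypothetical helper; the gem's implementation may differ):

```ruby
# Sketch: case-insensitive check of the AICHAT_PROXY environment variable.
def proxy_enabled_from_env?(env = ENV)
  env["AICHAT_PROXY"].to_s.downcase == "true"
end
```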
- Transactional `proxy=` setter: Toggling proxy re-resolves the API key and resets validation. If key resolution fails, the instance is rolled back to its previous state.
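The rollback behavior can be sketched like this (hypothetical `ProxyToggle` class; not the gem's actual code):

```ruby
# Hypothetical sketch of the transactional setter: if re-resolving the key
# fails, the previous proxy/api_key state is restored before re-raising.
class ProxyToggle
  attr_reader :proxy, :api_key

  def initialize(proxy:, api_key:)
    @proxy = proxy
    @api_key = api_key
  end

  def proxy=(value)
    previous = [@proxy, @api_key]
    @proxy = value
    @api_key = ENV.fetch(value ? "AICHAT_PROXY_KEY" : "OPENAI_API_KEY")
  rescue KeyError
    @proxy, @api_key = previous # roll back to the prior state
    raise
  end
end
```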
- Beginner-friendly error messages: Missing env var and auth failure errors now name the specific variable and where to get a key.
- Boolean coercion for proxy values: The `proxy:` kwarg and `proxy=` setter coerce values with `!!`. Only `nil` defers to the env var.
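A sketch of the coercion rule (hypothetical `coerce_proxy` helper):

```ruby
# Sketch: !! turns any value into true/false, and only nil defers to the
# AICHAT_PROXY environment default.
def coerce_proxy(value, env_default:)
  value.nil? ? env_default : !!value
end
```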
- Env var names extracted to constants: `PROXY_ENV`, `PROXY_KEY_ENV`, and `OPENAI_KEY_ENV` are defined once on the class.
- Integration API key test fallback: Updated integration test setup to treat an empty `AICHAT_PROXY_KEY` as missing before falling back to `OPENAI_API_KEY`, matching runtime behavior.
- Default API key lookup order: `AI::Chat.new` and `AI::Chat.generate_schema!` now look for `AICHAT_PROXY_KEY` first, then fall back to `OPENAI_API_KEY` when the first value is missing or empty.
- Explicit API key override behavior: Passing `api_key:` still takes highest precedence, and passing `api_key_env_var:` still uses that environment variable exactly.
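The precedence described in this entry can be sketched as (hypothetical `default_api_key` helper; names are illustrative):

```ruby
# Hypothetical sketch of the lookup order: explicit api_key: wins, then an
# explicit api_key_env_var:, then AICHAT_PROXY_KEY, then OPENAI_API_KEY
# (an empty AICHAT_PROXY_KEY counts as missing).
def default_api_key(api_key: nil, api_key_env_var: nil, env: ENV)
  return api_key if api_key
  return env[api_key_env_var] if api_key_env_var

  proxy_key = env["AICHAT_PROXY_KEY"]
  proxy_key && !proxy_key.empty? ? proxy_key : env["OPENAI_API_KEY"]
end
```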
- Unit coverage for API key precedence: Added tests covering default env order, empty `AICHAT_PROXY_KEY` fallback, explicit `api_key_env_var:`, and explicit `api_key:` behavior.
- Integration test setup: Integration tests now run when either `AICHAT_PROXY_KEY` or `OPENAI_API_KEY` is present.
- Documentation: README now documents the new default key lookup order and updates the direct `OpenAI::Client` example to match.
- Proxy default from env: `AI::Chat.new` now enables proxy mode by default when `AICHAT_PROXY` is exactly `"true"`.
- Schema generation proxy default: `AI::Chat.generate_schema!` now uses the same `AICHAT_PROXY` default when `proxy:` is omitted.
- Explicit override precedence: Explicit `chat.proxy = ...` and `generate_schema!(..., proxy: ...)` continue to override env defaults.
- Unit coverage for proxy defaults: Added tests for env parsing (`"true"` exact match), explicit override behavior, and `generate_schema!` precedence.
- Proxy env documentation: README now documents `AICHAT_PROXY` behavior for both chat generation and schema generation.
- IRB display for `get_items`: `chat.get_items` now shows the formatted `AI::Items` output in IRB/Rails console (without requiring `puts` or `.inspect`). This avoids `pp` unwrapping Delegator objects.
- Updated dependencies:
  - `openai` gem updated from `~> 0.34` to `~> 0.43` (fixes `CGI.parse` issue in Ruby 4)
  - `standard` (dev) updated from `1.50.0` to `~> 1.53`
- Removed unused dependencies:
  - `ostruct` runtime dependency (not used in library code)
  - `refinements` dev dependency (not used)
- Truncate base64 data URIs in output: Long base64-encoded images in messages are now truncated in `inspect` and `to_html` output for readability. Displays as `"data:image/png;base64,iVBORw0K... (12345 chars)"`.
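A sketch of the truncation (hypothetical helper; the kept prefix length is an assumption):

```ruby
# Sketch of the display truncation described above: long base64 data URIs
# are cut to a short prefix plus a character count.
def truncate_data_uri(text, keep: 8)
  text.gsub(%r{data:image/\w+;base64,[A-Za-z0-9+/=]+}) do |uri|
    header, data = uri.split(",", 2)
    "#{header},#{data[0, keep]}... (#{data.length} chars)"
  end
end
```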
- `to_html` compatibility with awesome_print: Use `AmazingPrint::Inspector` directly instead of the `ai()` method to avoid conflicts when the `awesome_print` gem is also present in a project.
- `to_html` formatting: Wrap output in `<pre>` tags with proper styling to preserve newlines and formatting in HTML views.
- Ruby version requirement: Now supports Ruby 4.0+ (changed from `~> 3.2` to `>= 3.2`).
- Updated dependencies: `amazing_print` updated from `~> 1.8` to `~> 2.0`.
- Renamed `items` to `get_items`: The method now clearly indicates it makes an API call. Returns an `AI::Items` wrapper that delegates to the underlying response while providing nice display formatting.
- Reasoning summaries: When `reasoning_effort` is set, the API now returns reasoning summaries in `get_items` (e.g., "Planning Ruby version search", "Confirming image tool usage").
- Improved console display: `AI::Chat`, `AI::Message`, and `AI::Items` now display nicely in IRB and Rails console with colorized, formatted output via AmazingPrint.
- HTML output for ERB templates: All display objects have a `to_html` method for rendering in views. Includes dark terminal-style background for readability.
- `AI::Message` class: Messages are now `AI::Message` instances (a Hash subclass) with custom display methods.
- `AI::Items` class: Wraps the conversation items API response with nice display methods while delegating all other methods (like `.data`, `.has_more`, etc.) to the underlying response.
- TTY-aware display: Console output automatically detects TTY and disables colors when output is piped or redirected.
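A sketch of the TTY check (hypothetical helper; the gem's implementation may differ):

```ruby
# Sketch: enable colors only when the output stream is an interactive terminal,
# so piped or redirected output stays free of ANSI escape codes.
def colorize_output?(io = $stdout)
  io.respond_to?(:tty?) && io.tty?
end
```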
- New example: `examples/16_get_items.rb` demonstrates inspecting conversation items including reasoning, web searches, and image generation.
- Default model: Changed from `gpt-5.1` to `gpt-5.2`.
- Removed `previous_response_id`: Use `conversation_id` instead for managing conversation state. The gem now exclusively uses OpenAI's Conversations API for continuity. Simply store the `conversation_id` and set it on a new `AI::Chat` instance to continue a conversation.
- Renamed `assistant!` to `generate!`: The method that sends messages to the API and generates a response is now called `generate!` to better reflect its purpose.
- Default model: Changed from `gpt-4.1-nano` to `gpt-5.1`.
- Default reasoning effort: Remains `nil`. For `gpt-5.1`, this is equivalent to `"none"` reasoning.
- Web search tool: Renamed from `web_search_preview` to `web_search` to match OpenAI's GA release.
- `last_response_id` reader: New public accessor to get the ID of the most recent response. Useful for background mode workflows where you need to track, retrieve, or cancel a specific response from another process.
- Automatic conversation management: The gem now automatically creates and manages conversations via OpenAI's Conversations API. The `conversation_id` is set after the first `generate!` call and maintained across subsequent calls.
- `items` method: Retrieve all conversation items (messages, tool calls, reasoning) from OpenAI's API with `chat.items`.
- Improved test coverage: Added integration tests for conversation continuity, file handling, and conversation items retrieval.
- Background mode limitation: There is currently no serialization-friendly hook to resume a background response from a different process. You can use `last_response_id` to track the response, but resuming requires the original `AI::Chat` instance or manual API calls.
- Manual message manipulation: If you manually add assistant messages to the `messages` array without a `:response` object, the `prepare_messages_for_api` method may not slice the history as expected on the next `generate!` call. This is an edge case for users who directly manipulate the messages array.
- Initial implementation.