4 changes: 2 additions & 2 deletions README.md
@@ -107,7 +107,7 @@ The library supports an optional parser option leveraging Large Language Models

When the appropriate environment variable(s) are set (see below), these LLM parsers are automatically appended after all existing processors for each defined Provider.

> These integrations may involve some costs for API usage. Use it carefully! As an order of magnitude, a parsing of an email with OpenAI GPT gpt-3.5-turbo model costs $0.004.
> These integrations may incur costs for API usage, so use them carefully! As an order of magnitude, parsing an email with the OpenAI gpt-4o-mini model costs less than $0.001.

These are the currently supported LLM integrations:

@@ -116,7 +116,7 @@ These are the currently supported LLM integrations:

- [OpenAI](https://openai.com/product), these are the supported ENVs:
- `PARSER_OPENAI_API_KEY` (Required): OpenAI API Key.
- `PARSER_OPENAI_MODEL` (Optional): The LLM model to use, defaults to "gpt-3.5-turbo".
- `PARSER_OPENAI_MODEL` (Optional): The LLM model to use, defaults to "gpt-4o-mini".

### Metadata

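The ENV resolution described above can be sketched as follows. This is a minimal illustration of the lookup pattern the parser uses (a required API key plus an optional model override with the new default); `resolve_openai_config` is a hypothetical helper for illustration, not part of the library's API.

```python
import os

# Hypothetical helper (not part of circuit_maintenance_parser) mirroring
# how the parser resolves its configuration: the API key is required,
# while the model falls back to "gpt-4o-mini" when the ENV is unset.
def resolve_openai_config(env=None):
    env = os.environ if env is None else env
    api_key = env.get("PARSER_OPENAI_API_KEY")
    if not api_key:
        raise ValueError("PARSER_OPENAI_API_KEY is required")
    model = env.get("PARSER_OPENAI_MODEL", "gpt-4o-mini")
    return api_key, model

# With only the key set, the new default model applies.
print(resolve_openai_config({"PARSER_OPENAI_API_KEY": "sk-test"}))  # ('sk-test', 'gpt-4o-mini')
```

Setting `PARSER_OPENAI_MODEL` pins an explicit model, which is how deployments can stay on a specific version instead of tracking the library default.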
1 change: 1 addition & 0 deletions changes/373.changed
@@ -0,0 +1 @@
Updated default OpenAI model from deprecated gpt-3.5-turbo to gpt-4o-mini and removed unreachable dead code in OpenAI parser.
4 changes: 1 addition & 3 deletions circuit_maintenance_parser/parsers/openai.py
@@ -26,7 +26,7 @@ def get_llm_response(self, content) -> Optional[List]:
            raise ImportError("openai extra is required to use OpenAIParser.")

        client = OpenAI(api_key=os.getenv("PARSER_OPENAI_API_KEY"))
        model = os.getenv("PARSER_OPENAI_MODEL", "gpt-3.5-turbo")
        model = os.getenv("PARSER_OPENAI_MODEL", "gpt-4o-mini")
        try:
            response = client.chat.completions.create(
                model=model,
@@ -60,5 +60,3 @@ def get_llm_response(self, content) -> Optional[List]:
        except ValueError as err:
            logger.error(err)
            return None

        return None
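The deleted trailing `return None` was unreachable: every path through the method already returns inside the `try`/`except`. A minimal sketch of the resulting control flow (simplified; `llm_call` and `get_llm_response_sketch` are stand-ins for illustration, not the library's real signatures):

```python
from typing import Callable, List, Optional

# Simplified sketch of the control flow after the cleanup. llm_call is a
# stand-in for the real chat.completions request; the real method logs
# via logger.error rather than print.
def get_llm_response_sketch(llm_call: Callable[[], List]) -> Optional[List]:
    try:
        return llm_call()  # success path returns the parsed result
    except ValueError as err:
        print(err)         # error is logged on the failure path
        return None        # failure path returns None
    # Nothing can execute past this point, which is why the old
    # trailing `return None` was dead code.

print(get_llm_response_sketch(lambda: [1, 2]))  # [1, 2]
```

Since both branches return, a function-level fallthrough return is impossible, and linters flag such trailing statements as unreachable.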