General cleanup and fixes, and the following notable improvements:
- Document `AZURE_OPENAI_ENDPOINT_EMBEDDING` (Gwyneth Peña-Siguenza).
- Add link to PyBay 2025 talk video.
- Simplify `load_dotenv()`.
- Split embedding requests that are too large (Raphael Wirth).
- Overhaul conversation metadata in storage providers.
- Add extra optional keyword parameters to `create_conversation()`.
- Add `Quantifier` to ingestion schema in addition to `Quantity`.
- Extract knowledge concurrently (max 4 by default) (Kevin Turcios).
- Fixes for `get_multiple()` implementations.
- Tweak defaults in `ConversationSettings`.
- Pass LLM requests to MCP client instead of calling the OpenAI API.
- Add `--database` option to MCP server.
- The `tools/query.py` tool now supports `@`-commands; try `@help`.
- Add `tools/ingest_podcast.py`, a tool that ingests podcasts.
- Fix coverage support for MCP server test.
- Use an updated "Adrian Tchaikovsky" podcast data dump (Rob Gruen).
- Fix Windows testing issues. Run CI on Windows too (Raphael Wirth).
- Run tests in CI using secrets stored in repo (Rob Gruen).
- Migrate package build from `setuptools` to `uv_build`.
- Add `install-libatomic` target to `Makefile` (Bernhard Merkle).
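The "split embedding requests that are too large" fix above can be illustrated with a short sketch: pack texts greedily into sub-batches so no single request exceeds a size limit. The function name and the character limit here are hypothetical, not the project's actual API.

```python
# Hypothetical sketch of splitting an oversized embedding batch.
# MAX_CHARS_PER_REQUEST is an assumed server-side limit, not a real
# constant from this project.

MAX_CHARS_PER_REQUEST = 8000


def split_batches(
    texts: list[str], max_chars: int = MAX_CHARS_PER_REQUEST
) -> list[list[str]]:
    """Greedily pack texts into sub-batches under the size limit."""
    batches: list[list[str]] = []
    current: list[str] = []
    size = 0
    for text in texts:
        # Start a new sub-batch when adding this text would overflow.
        if current and size + len(text) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(text)
        size += len(text)
    if current:
        batches.append(current)
    return batches
```

Each sub-batch can then be sent as its own embedding request; a text longer than the limit still gets a batch of its own rather than being dropped.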
Brown bag release!
- Put `black` back with the runtime dependencies (it's used for debug output).
- Limit dependencies to what's needed at runtime; dev dependencies can be installed separately with `uv sync --extra dev`.
- Add `endpoint_envvar` arg to `AsyncEmbeddingModel` to allow configuring a non-standard embedding service.
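As a rough illustration of the `endpoint_envvar` idea above: the model reads its endpoint URL from a configurable environment variable instead of a hard-coded one. The class and default variable name below are illustrative, not this project's actual implementation.

```python
import os

# Hypothetical sketch of an endpoint_envvar-style parameter. The
# default variable name mirrors the one documented above; the class
# itself is invented for illustration.

DEFAULT_ENDPOINT_ENVVAR = "AZURE_OPENAI_ENDPOINT_EMBEDDING"


class EmbeddingConfig:
    def __init__(self, endpoint_envvar: str = DEFAULT_ENDPOINT_ENVVAR):
        # Look up the endpoint in the chosen environment variable so a
        # non-standard embedding service can be configured per deployment.
        self.endpoint = os.environ.get(endpoint_envvar)
        if self.endpoint is None:
            raise RuntimeError(
                f"set {endpoint_envvar} to your embedding endpoint URL"
            )
```

Callers pointing at a non-standard service would pass their own variable name, e.g. `EmbeddingConfig("MY_EMBEDDING_ENDPOINT")`.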
- First public release, for PyBay '25 talk.