Pytector v0.2.2 - LangChain Integration Is Here
Pytector v0.2.2 - LangChain Integration Is Here
Pytector now plugs directly into LangChain LCEL as a first-class guardrail step, so teams can block unsafe prompts before prompt formatting, retrieval, or model execution.
What’s New
- Added
PytectorGuard(pytector.langchain) for drop-in LCEL usage:chain = guard | ...
- Added
PromptInjectionBlockedErrorfor clear blocked-request handling. - Added configurable guard behavior:
fallback_messagefor controlled user-facing responsesblock_on_api_errorto choose fail-closed vs pass-through on API failure
- Supports both detection paths:
- local model path (default
deberta) - Groq path (
use_groq=True,api_key, optionalgroq_model)
- local model path (default
Why This Matters
- Easier adoption: one guard component secures existing chains.
- Safer architecture: blocks malicious prompts before downstream cost and risk.
- Better DX: explicit errors and predictable fallback behavior.
Packaging
- New optional dependency extra:
pip install pytector[langchain]
Docs, Notebook, and Validation
- Added dedicated LangChain docs page.
- Added README LCEL example.
- Added LangChain section to
notebooks/pytector_demo.ipynb. - Added
tests/test_langchain_guard.pycovering core guardrail behavior.