Skip to content

Pytector v0.2.2 - LangChain Integration Is Here

Choose a tag to compare

@MaxMLang MaxMLang released this 14 Feb 02:29
· 6 commits to main since this release

Pytector v0.2.2 - LangChain Integration Is Here

Pytector now plugs directly into LangChain LCEL as a first-class guardrail step, so teams can block unsafe prompts before prompt formatting, retrieval, or model execution.

What’s New

  • Added PytectorGuard (pytector.langchain) for drop-in LCEL usage:
    • chain = guard | ...
  • Added PromptInjectionBlockedError for clear blocked-request handling.
  • Added configurable guard behavior:
    • fallback_message for controlled user-facing responses
    • block_on_api_error to choose fail-closed vs pass-through on API failure
  • Supports both detection paths:
    • local model path (default deberta)
    • Groq path (use_groq=True, api_key, optional groq_model)

Why This Matters

  • Easier adoption: one guard component secures existing chains.
  • Safer architecture: blocks malicious prompts before downstream cost and risk.
  • Better DX: explicit errors and predictable fallback behavior.

Packaging

  • New optional dependency extra:
    • pip install pytector[langchain]

Docs, Notebook, and Validation

  • Added dedicated LangChain docs page.
  • Added README LCEL example.
  • Added LangChain section to notebooks/pytector_demo.ipynb.
  • Added tests/test_langchain_guard.py covering core guardrail behavior.