Packmind seamlessly captures your engineering playbook and turns it into AI context, guardrails, and governance.
Updated May 2, 2026 - TypeScript
🔥🔥🔥 Enterprise AI middleware, an alternative to unifyapps, n8n, and lyzr
Open Source Reliability Harness: Make your agents follow rules. One line of code to enforce, trace, and improve.
FSPEC: The Spec-Driven, Multi-Agent Coding Factory. It is infrastructure for the "Dark Factory"—the emerging model of fully autonomous software development where AI agents handle all implementation while humans focus on defining what to build and why.
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII filtering, secret redaction, tool permissions, and async guardrails. Built on pydantic-ai's native capabilities API.
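The secret-redaction and PII-filtering capabilities this entry lists follow a common pattern that can be sketched in plain Python. This is an illustrative sketch only, not pydantic-ai's or this project's API; the patterns shown are deliberately minimal stand-ins for an audited rule set.

```python
import re

# Illustrative detection patterns only; a production guardrail would ship a
# much larger, audited rule set (credit cards, phone numbers, cloud keys, ...).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"sk-[A-Za-z0-9]{16,}"),
}

def redact(text: str) -> str:
    """Replace every matched PII/secret span with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Running redaction on both inbound prompts and outbound completions is the usual deployment: the model never sees the secret, and the secret never reaches logs.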
Open-source Claude Code skills — 6 cognitive firewalls block AI hallucination, bias & sloppy reasoning. npx skills add
Real-time AI safety guardrails for LLM apps. 10 scanners: prompt injection, PII, harmful content, code vulnerabilities, obfuscation detection. Sub-ms latency. Python + TypeScript SDKs. MCP proxy. Claude Code hooks.
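A prompt-injection scanner of the kind listed here typically runs cheap pattern heuristics before any model call, which is how sub-millisecond latency is achievable. A minimal sketch of that idea, with hypothetical signatures (real scanners combine many detectors, including ML classifiers):

```python
import re

# Hypothetical heuristic signatures; real scanners use far richer detectors.
INJECTION_SIGNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]

def scan_prompt(text: str) -> dict:
    """Return a verdict plus the rules that fired, for tracing."""
    hits = [p.pattern for p in INJECTION_SIGNS if p.search(text)]
    return {"flagged": bool(hits), "rules": hits}
```

Returning the fired rules (not just a boolean) makes the scanner's decisions traceable and tunable.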
Validate that supporting text quotes in your data actually appear in their cited references
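The core of quote validation as described here is a verbatim-containment check against the cited source. A minimal sketch, assuming quotes arrive as (text, reference-id) pairs; whitespace and case are normalized so line-wrapping differences don't cause false failures:

```python
def validate_quotes(quotes, references):
    """Check each quoted span appears verbatim in its cited reference.

    quotes: list of (quote_text, ref_id) pairs.
    references: dict mapping ref_id -> full reference text.
    Returns the ref_ids whose quote was NOT found.
    """
    def normalize(s):
        return " ".join(s.split()).lower()

    missing = []
    for quote, ref_id in quotes:
        source = references.get(ref_id, "")
        if normalize(quote) not in normalize(source):
            missing.append(ref_id)
    return missing
```

A real tool would add fuzzy matching for OCR noise and report character offsets, but the containment check is the essential guardrail.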
Give AI coding agents (Claude Code, Cursor, Aider, Codex) a structured autonomous loop with guardrails — boundaries, 5 verification gates, 3-layer self-reflection, and autonomous remediation. pip install ouro-loop. Zero dependencies.
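The verification-gate loop this entry describes (attempt, check against ordered gates, feed failures back for remediation, bounded iterations) can be sketched generically. The function and gate names below are illustrative, not ouro-loop's API:

```python
def run_with_gates(task, implement, gates, max_iterations=5):
    """Run an agent step, pass its output through ordered verification gates,
    and on failure feed the failing gate names back for another attempt."""
    feedback = None
    for _ in range(max_iterations):
        output = implement(task, feedback)
        failures = [name for name, check in gates if not check(output)]
        if not failures:
            return output                          # all gates passed
        feedback = f"failed gates: {failures}"     # remediation hint
    raise RuntimeError("exhausted iterations without passing all gates")
```

The boundary is the `max_iterations` cap: the agent can self-correct, but never loops forever.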
Mechanical enforcement tools to prevent AI agents from bypassing established project standards.
A Python implementation of the VETTING (Verification and Evaluation Tool for Targeting Invalid Narrative Generation) framework for LLM safety and educational applications.
Six in-process middlewares for OpenClaw: HITL approvals, prompt-injection guardrails, PII redaction, tool-call budgets, context compaction, and complexity-aware model routing. Zero telemetry, all state local.
AI coding rules that actually work. Enforce instruction files via hooks — CLAUDE.md, .cursorrules, copilot-instructions, and more.
LLM budget control and cost governance for AI agents. Python library for token budgets, usage limits and guardrails for OpenAI, Anthropic, LangChain, LangGraph and agentic systems.
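The token-budget guardrail named here reduces to a counter with a hard ceiling, checked before every provider call. A minimal sketch of the idea, not this library's API:

```python
class TokenBudget:
    """Track cumulative token spend and refuse calls past a hard limit."""

    def __init__(self, limit: int):
        self.limit = limit
        self.spent = 0

    def charge(self, tokens: int) -> None:
        """Record spend, raising before the budget would be exceeded."""
        if self.spent + tokens > self.limit:
            raise RuntimeError(
                f"budget exceeded: {self.spent}+{tokens} > {self.limit}")
        self.spent += tokens

    @property
    def remaining(self) -> int:
        return self.limit - self.spent
```

Checking *before* incrementing means a rejected call leaves the budget untouched, so a retry with a smaller request can still succeed.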
L0: The Missing Reliability Substrate for AI. Streaming-first. Reliable. Replayable. Deterministic. Multimodal. Retries. Continuation. Fallbacks (provider & model). Consensus. Parallelization. Guardrails. Atomic event logs. Byte-for-byte replays.
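Retries with provider and model fallbacks, two of the reliability primitives listed above, compose into a simple ordered-attempt loop. A hedged sketch of the general pattern (names illustrative, not L0's API):

```python
import time

def call_with_fallbacks(providers, prompt, retries=2, backoff=0.0):
    """Try each (name, call_fn) provider in order, retrying transient
    failures, and return (provider_name, result) from the first success."""
    errors = []
    for name, call in providers:
        for attempt in range(retries):
            try:
                return name, call(prompt)
            except Exception as exc:  # broad catch is fine for a sketch
                errors.append((name, attempt, str(exc)))
                time.sleep(backoff)
    raise RuntimeError(f"all providers failed: {errors}")
```

Logging every `(provider, attempt, error)` tuple is what makes a run like this replayable and debuggable after the fact.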
Forge rough ideas into drift-proof execution contracts for AI coding agents. 6-step pipeline, 16 agents, 8 skills, lifecycle hooks, 5 tech presets.
paceval is a high-performance mathematical runtime for deterministic AI and energy-efficient edge computing.
Simple implementation of AI Guardrails using Google ADK.
Runtime guardrails for AI agents that enforce token budgets, loop limits, and tool rate limits locally.
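A local tool rate limit of the kind described here is typically a sliding window over recent call timestamps. A minimal sketch, assuming the limiter runs in-process with no external state:

```python
import time
from collections import deque

class ToolRateLimiter:
    """Allow at most max_calls per window_seconds for a tool, locally."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self, now=None) -> bool:
        """Return True and record the call if it fits in the window."""
        now = time.monotonic() if now is None else now
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()  # evict calls older than the window
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

The optional `now` parameter makes the limiter deterministic under test, which matters when the guardrail itself must be verifiable.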