From 06c4a37830d052780e452b1e13a4ffdb984e63ed Mon Sep 17 00:00:00 2001
From: SentienceDEV
Date: Wed, 25 Feb 2026 21:49:44 -0800
Subject: [PATCH] blog updated

---
 demo/BLOG_POST.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/demo/BLOG_POST.md b/demo/BLOG_POST.md
index 1b6c38f..e4da46c 100644
--- a/demo/BLOG_POST.md
+++ b/demo/BLOG_POST.md
@@ -6,6 +6,8 @@
 
 AI agents are powerful, but how do you ensure they don't go rogue? Today we're releasing **Predicate Secure** - a drop-in security wrapper that adds enterprise-grade authorization and verification to browser automation agents. Think of it as a safety harness for your AI agents.
 
+> **📦 Open Source:** The complete demo is available on GitHub at [PredicateSystems/predicate-secure](https://github.com/PredicateSystems/predicate-secure) (see the `demo/` folder). Get started in 5 minutes with local LLM verification.
+
 **Predicate Secure integrates with your existing AI agent frameworks in just 3-5 lines of code** - including browser-use, LangChain, PydanticAI, raw Playwright, and OpenClaw. This frictionless adoption means you can add robust security without rewriting your agents.
 
 This post walks through our comprehensive demo that showcases the complete agent security loop: pre-execution authorization, browser automation, and post-execution verification using local LLMs.
@@ -17,6 +19,7 @@ When AI agents interact with browsers and web services, they need guardrails. A
 - Navigating to unauthorized domains
 - Clicking sensitive buttons or forms
 - Exposing credentials or API keys
+- Performing unauthorized actions (e.g., deleting all emails)
 - Executing actions outside policy boundaries
 
 Traditional approaches rely on prompt engineering or hope for the best. **Predicate Secure takes a different approach**: enforce policy before execution, verify outcomes after.