Skip to content

Security: OWASP Agent Memory Guard – memory poisoning protection for OpenAI agentsΒ #3329

@vgudur-dev

Description

@vgudur-dev

Security Resource for OpenAI Agents SDK Users

Hi OpenAI team! πŸ‘‹

As the Agents SDK gains adoption, I wanted to share a security tool that addresses a key attack vector for production agent deployments:

OWASP Agent Memory Guard (pip install agent-memory-guard)

It's an OWASP-backed Python middleware that detects and blocks memory poisoning attacks in LLM agents β€” where adversarial content stored in agent memory can later be recalled to hijack agent behavior.

This is particularly relevant for agents that:

  • Process external documents or web content
  • Use persistent memory across sessions
  • Operate in multi-tenant environments

GitHub: https://github.com/OWASP/www-project-agent-memory-guard
PyPI: https://pypi.org/project/agent-memory-guard/

Would love feedback on integration with the Agents SDK memory patterns!

Metadata

Metadata

Assignees

No one assigned

    Labels

    questionQuestion about using the SDK

    Type

    No type
    No fields configured for issues without a type.

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions