A standalone LangGraph experiment demonstrating portable learner memory with scoped access, version history, and interoperable export formats.
Created for: Global Portable Memory Workshop (Gates Foundation, March 2026)
This experiment implements the "minimal interoperable schema" question from the Pre-read document: What must an AI learning system remember about a learner to personalize effectively, and how should that be represented for portability across tools?
- Multi-agent memory sharing - Math Tutor and Writing Coach share memory with scoped access
- Source type distinction - `user_declared` vs `model_inferred` prevents identity hardening
- Version history - Corrections create an audit trail; previous versions can be restored
- Multi-dimensional scopes - Filter by subject, category, and purpose
- Interoperable exports - CLR and Caliper formats with round-trip support
```bash
# 1. Copy environment file and add your API key
cp .env.example .env
# Edit .env with your OPENAI_API_KEY

# 2. Install dependencies
make sync

# 3. Run the demos
make demo-alice       # Cross-agent memory sharing (Alice story)
make demo-correction  # Version history and corrections

# Or start LangGraph Studio with both agents
make dev
```

Uses OpenAI + in-memory storage:
```bash
# .env
MODEL_PROVIDER=openai
STORAGE_BACKEND=memory
OPENAI_API_KEY=sk-...
```

Uses OpenAI + DynamoDB Local for persistence testing:

```bash
make dynamodb-start          # Start DynamoDB Local
make dynamodb-create-tables  # Create tables
STORAGE_BACKEND=dynamodb DYNAMODB_ENDPOINT=http://localhost:8000 make dev
```

Uses Bedrock + DynamoDB for production:
```bash
# Deploy infrastructure
./scripts/deploy-infrastructure.sh dev

# Build and push Docker image
make docker-build
# Push to ECR (see CloudFormation outputs for URI)

# Environment in AWS
MODEL_PROVIDER=bedrock
STORAGE_BACKEND=dynamodb
# No DYNAMODB_ENDPOINT = uses real DynamoDB
# Credentials via IAM role
```

Implements the five-component architecture from Pre-read Section 2.3:
```
┌─────────────────────────────────────────────────────────────────┐
│                     PortableMemoryService                       │
├─────────────────────────────────────────────────────────────────┤
│ Memory Store      │ Policy/AuthZ    │ Context Router            │
│ - MemoryItem      │ - Grants        │ - Multi-dimensional       │
│ - Categories      │ - Scopes        │   scope filtering         │
│ - Source types    │ - Validation    │ - Purpose-based access    │
├─────────────────────────────────────────────────────────────────┤
│ Audit Log                      │ Interoperability               │
│ - All operations logged        │ - CLR export/import            │
│ - Evaluation metrics           │ - Caliper export/import        │
│ - Minimization tracking        │ - Scope-filtered exports       │
└─────────────────────────────────────────────────────────────────┘
```
Both agents share the same PortableMemoryService but see different memories based on their grants:
| Agent | Scope | Can See |
|---|---|---|
| math_tutor | Math subjects + general | Math mastery, general preferences |
| writing_coach | Writing subjects + general | Writing mastery, general preferences |
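A minimal, self-contained sketch of this scoped view; the names below are illustrative only, and the repository's actual `Scope`/`Grant` classes are richer:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    subject: str   # "math", "writing", or "general"

MEMORIES = [
    Memory("Understands basic fractions", "math"),
    Memory("Strong thesis statements", "writing"),
    Memory("Prefers step-by-step examples", "general"),
]

def visible_to(agent_subjects: set[str]) -> list[str]:
    """Each agent sees only memories whose subject falls inside its scope."""
    return [m.content for m in MEMORIES if m.subject in agent_subjects]

# math_tutor sees math + general; writing_coach sees writing + general
math_view = visible_to({"math", "general"})
writing_view = visible_to({"writing", "general"})
```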
Demonstrates Pre-read Section 3 - cross-agent memory sharing:
- Alice works with Math Tutor, building a learner profile
- She switches to Writing Coach with scoped access
- Writing Coach sees general preferences but NOT math-specific mastery
- Exports show different content based on scope/purpose
Demonstrates Pre-read Section 4.1/4.4 - correction pathways:
- Math Tutor infers "Struggles with word problems" (model_inferred, confidence: 0.48)
- Alice corrects: "I've been practicing, I'm better now" (user_declared, confidence: 0.65)
- Assessment confirms improvement (confidence: 0.78)
- Full version history preserved (v1 → v2 → v3)
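The correction flow above can be modeled as an append-only version list. This is an illustrative sketch, not the repository's `versioning.py`:

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    content: str
    source_type: str   # user_declared | model_inferred
    confidence: float

@dataclass
class VersionedMemory:
    versions: list[Version] = field(default_factory=list)

    def write(self, content: str, source_type: str, confidence: float) -> None:
        # Corrections never overwrite: each one appends a new version
        self.versions.append(Version(content, source_type, confidence))

    @property
    def current(self) -> Version:
        return self.versions[-1]

    def restore(self, index: int) -> None:
        # Restoring re-appends an old version, preserving the audit trail
        v = self.versions[index]
        self.write(v.content, v.source_type, v.confidence)

# Mirror the Alice flow: inference -> correction -> assessment
mem = VersionedMemory()
mem.write("Struggles with word problems", "model_inferred", 0.48)
mem.write("Improving on word problems", "user_declared", 0.65)
mem.write("Improving on word problems", "user_declared", 0.78)
```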
| Category | Description | Example |
|---|---|---|
| fact | Background about the learner | "Works as a nurse with rotating shifts" |
| goal | What they're trying to achieve | "Wants to pass the algebra final" |
| preference | How they like to learn | "Prefers step-by-step examples" |
| mastery | What they know/can do | "Understands basic fractions" |
| constraint | Limitations on learning | "Only 30 minutes per day" |
| relationship | Connections to others | "Studies with her sister" |
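The six categories form a closed vocabulary, which is what keeps exports interoperable across tools. A sketch of enforcing it (the actual schema lives in `memory/schema.py` and may differ):

```python
from enum import Enum

class Category(str, Enum):
    FACT = "fact"
    GOAL = "goal"
    PREFERENCE = "preference"
    MASTERY = "mastery"
    CONSTRAINT = "constraint"
    RELATIONSHIP = "relationship"

def validate_category(value: str) -> Category:
    """Reject memories whose category falls outside the shared vocabulary."""
    try:
        return Category(value)
    except ValueError:
        raise ValueError(
            f"Unknown category: {value!r}; expected one of {[c.value for c in Category]}"
        )
```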
Pre-read Section 2.4 - preventing identity hardening:
| Source Type | Meaning | UI Indicator |
|---|---|---|
| `user_declared` | Learner explicitly stated | [U] |
| `model_inferred` | Agent inferred from conversation | [I] |
User-declared memories have higher trust and are prioritized in exports (e.g., college applications exclude inferences).
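A sketch of how source type could gate exports; the function and field names here are illustrative, not the repository's API:

```python
INDICATORS = {"user_declared": "[U]", "model_inferred": "[I]"}

def export_view(memories: list[dict], include_inferred: bool) -> list[str]:
    """High-stakes exports (e.g. college applications) drop inferences
    entirely; tutoring contexts keep them, flagged with a UI indicator."""
    out = []
    for m in memories:
        if m["source_type"] == "model_inferred" and not include_inferred:
            continue
        out.append(f'{INDICATORS[m["source_type"]]} {m["content"]}')
    return out

profile = [
    {"content": "Wants to pass the algebra final", "source_type": "user_declared"},
    {"content": "Struggles with word problems", "source_type": "model_inferred"},
]
```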
Multi-dimensional filtering (Pre-read Section 2.5):
```python
math_scope = Scope(
    agent_id="math_tutor",
    allowed_subjects=["math", "algebra", "geometry"],
    allowed_categories=["mastery", "preference", "goal"],
    allowed_purposes=["tutoring", "assessment"],
    include_inferred=True,
    can_write=True,
)
```

Session-scoped access control:
```python
grant = Grant(
    grant_id="g-123",
    learner_id="alice",
    agent_id="math_tutor",
    scope=math_scope,
    created_at=datetime.now(UTC),
    expires_at=datetime.now(UTC) + timedelta(hours=1),
)
```

Both formats support scope-filtered exports:
```python
from memory.service import PortableMemoryService

service = PortableMemoryService()
clr = service.export_clr(learner_id="alice", grant=math_grant)
# Only includes memories visible to math_tutor scope

caliper = service.export_caliper(learner_id="alice", grant=math_grant)
# AssessedEvent format with scope-filtered memories
```

All changes are tracked (Pre-read Section 4.1):
```python
# Correct a memory
service.correct_memory(
    memory_id=mem_id,
    new_content="Improving on word problems",
    reason="Learner self-reported improvement",
    grant=grant,
    new_source_type="user_declared",
    new_confidence=0.65,
)

# View history
history = service.get_version_history(mem_id)
# Returns: [v1: created, v2: corrected, ...]
```

All operations are logged for learner inspection:
```python
entries = service.audit_log.query(
    learner_id="alice",
    operation="search",  # Optional filter
)
# Each entry includes:
# - timestamp, operation, agent_id
# - memories accessed/modified
# - total_available vs returned_after_scope (minimization tracking)
```

```bash
# Run all tests (130 tests)
make test

# Run specific test file
uv run pytest tests/unit/test_scopes.py -v

# Run demos (requires API key)
make demo-all
```

```
portable_memory/
├── graph/                     # LangGraph agents
│   ├── server.py              # Exports math_tutor, writing_coach
│   ├── models.py              # Model provider factory (OpenAI/Bedrock)
│   └── agents/
│       ├── base.py            # Shared agent builder
│       ├── math_tutor.py      # Math Tutor graph
│       └── writing_coach.py   # Writing Coach graph
│
├── memory/                    # Memory system
│   ├── schema.py              # MemoryItem, LearnerProfile
│   ├── service.py             # PortableMemoryService (central)
│   ├── scopes.py              # Multi-dimensional scopes
│   ├── grants.py              # Access grants
│   ├── versioning.py          # Version history
│   ├── audit.py               # Audit logging
│   ├── security.py            # Security validation (seam)
│   ├── conflicts.py           # Conflict detection (seam)
│   ├── persistence/           # Storage backends
│   │   ├── __init__.py        # StorageBackend interface
│   │   ├── memory.py          # In-memory storage (default)
│   │   └── dynamodb.py        # DynamoDB storage (AWS)
│   └── formats/               # Export/import
│       ├── clr.py             # CLR format
│       └── caliper.py         # Caliper format
│
├── api/                       # FastAPI demo endpoints
│   └── main.py                # Demo API server
│
├── infrastructure/            # AWS deployment
│   ├── cloudformation.yaml    # DynamoDB, S3, IAM resources
│   └── agentcore-agent.yaml   # AgentCore config
│
├── scripts/                   # Deployment scripts
│   ├── create_tables.py       # DynamoDB table creation
│   └── deploy-infrastructure.sh
│
├── tests/                     # Test suite
│   ├── unit/                  # Unit tests
│   └── integration/           # Cross-agent tests
│
├── Dockerfile                 # Container image
├── demo_alice.py              # Alice story demo
└── demo_correction.py         # Correction flow demo
```
The following are designed for future extension (placeholder implementations):
| Seam | Location | Future Work |
|---|---|---|
| OAuth integration | `memory/grants.py` | Real OAuth flow |
| Preview mode | `memory/service.py` | "What would agent see?" |
| Revocation propagation | `memory/grants.py` | Downstream notification |
| On-device models | `memory/service.py` | Local extraction |
| Evaluation metrics | `memory/audit.py` | Pre-read 4.2 metrics |
| Adversarial robustness | `memory/security.py` | Prompt injection defense |
This implementation addresses:
- Section 2.3: Five-component architecture
- Section 2.4: Source type distinction, identity hardening prevention
- Section 2.5: Two invariants (no unconstrained access, enforceable operations)
- Section 3: Alice story (Table 1 learner profile structure)
- Section 4.1: Memory lifecycle with versioning
- Section 4.4: Correction pathways, provenance metadata
This experiment supports the Global Portable Memory Workshop's exploration of:
- Minimal Schema (Section 4.1): What's the smallest interoperable format?
- Evaluation (Section 4.2): How do we measure if portable memory works?
- Governance (Section 4.4): Who controls memory, and how?
- 1EdTech (CLR, Caliper, Open Badges) - Tim Couper, Chief Architect
- Learning Economy Foundation (Verifiable Credentials) - Chris Purifoy, CEO
- Inrupt (Solid pods) - Max Leonard, Principal Technologist
- WGU Labs (Achievement Wallet) - Taylor Hansen
This experiment is designed to be extractable to a separate repository:
- Pluggable storage: In-memory (default) or DynamoDB via `STORAGE_BACKEND` env var
- Pluggable models: OpenAI (default) or Bedrock via `MODEL_PROVIDER` env var
- No platform dependencies: No le-exp-platform imports
- Container-ready: Dockerfile for AWS AgentCore deployment
- Infrastructure as code: CloudFormation template for AWS resources
To extract: copy the portable_memory/ folder, update imports if needed.
| Variable | Values | Default | Description |
|---|---|---|---|
| `MODEL_PROVIDER` | `openai`, `bedrock` | `openai` | LLM provider |
| `STORAGE_BACKEND` | `memory`, `dynamodb` | `memory` | Storage backend |
| `OPENAI_API_KEY` | - | - | Required for OpenAI |
| `AWS_REGION` | - | `us-east-1` | AWS region for Bedrock/DynamoDB |
| `DYNAMODB_ENDPOINT` | URL | - | For DynamoDB Local |
| `TUTOR_MODEL` | model name | `gpt-4o` | Main tutor model |
| `EXTRACTION_MODEL` | model name | `gpt-4o-mini` | Memory extraction model |