AI assistants that work for you — with your tools, your data, your permissions.
What if everyone in your organization could have a personal AI assistant that:
- Knows your systems — connected to the tools you actually use
- Has your permissions — no IT tickets, no admin approval needed
- Runs locally — your data stays yours
- Does real work — not just answers questions, but takes action
This is that platform.
```bash
# A developer asks about their work
agent "What are my open tasks and which one should I prioritize?"

# A product manager creates a report
agent "Summarize last week's completed issues and draft release notes"

# An analyst researches a topic
agent Research "Compare our API response times to industry benchmarks"
```

Same platform. Different agents. Every role.
This is the key idea: An agent is just a text file.
```markdown
---
name: MyAgent
description: What it does
model: openai:gpt-4o
tools: [read, write, web_search]
---
You are a helpful assistant that...
```

That's it. Save it as `MyAgent.agent.md` in the agents folder and run:
```bash
agent MyAgent "Do something useful"
```

- No code changes required — ever
- No deployments — just save and run
- No configuration beyond the file itself
- Share agents by sharing files — email, Git, Slack, whatever
The agents included in this repo are just examples. Delete them, modify them, create your own. The platform doesn't care — it just loads whatever `.agent.md` files it finds.
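Since an agent file is just frontmatter plus a markdown body, the load step is easy to picture. Below is a minimal, stdlib-only sketch of that split, as an illustration only: the real loader (`src/core/loader.py`) presumably uses a full YAML parser, and this naive version handles only flat `key: value` fields.

```python
# Illustrative sketch: split an .agent.md file into frontmatter and system prompt.
# The real loader presumably uses a proper YAML library; this handles only flat fields.
def split_agent_file(text: str) -> tuple[dict, str]:
    """Split YAML frontmatter from the markdown body (the system prompt)."""
    assert text.startswith("---\n"), "agent files must start with frontmatter"
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        # Naive "key: value" parsing; nested YAML is out of scope for this sketch.
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.strip()

example = """---
name: MyAgent
description: What it does
model: openai:gpt-4o
---
You are a helpful assistant that..."""

meta, prompt = split_agent_file(example)
print(meta["name"], meta["model"])  # MyAgent openai:gpt-4o
print(prompt)                       # You are a helpful assistant that...
```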
```mermaid
flowchart LR
    subgraph Agents["📄 Agent Files"]
        A1[Research.agent.md]
        A2[DevWork.agent.md]
        A3[YourAgent.agent.md]
    end
    subgraph Platform["⚙️ Platform"]
        P[Dynamic Agent Harness]
    end
    subgraph Tools["🔧 Tools"]
        T1[Built-in\nread, write, search]
        T2[MCP Servers\nJira, GitHub, etc]
        T3[Custom\nYour own tools]
    end
    subgraph Models["🤖 AI Models"]
        M1[OpenAI]
        M2[Anthropic]
        M3[Local/Other]
    end
    A1 & A2 & A3 --> P
    P <--> T1 & T2 & T3
    P <--> M1 & M2 & M3
```
Agents are simple text files that define what the assistant should do. Tools provide the capabilities. Models provide the intelligence. The platform connects them all.
Unlike cloud-hosted AI assistants, Dynamic Agent runs locally on your machine. When it accesses external services, it uses your credentials — the same permissions you already have. No admin setup. No shared API tokens. No data leaving your network unless you want it to.
Agents can use any combination of:
- Built-in tools — file operations, code execution, search
- MCP servers — connect to Atlassian, GitHub, databases, or any Model Context Protocol service
- LangChain tools — the entire LangChain ecosystem is available
- Custom tools — drop a Python file in `src/tools/` and it's instantly available
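The "drop a Python file in" workflow implies a discovery pass over `src/tools/` at startup. Here is a hedged, stdlib-only sketch of what such auto-discovery could look like; the `_is_tool` marker is invented for illustration, since the real registry works with LangChain `@tool` objects.

```python
# Hypothetical sketch of tool auto-discovery: scan a folder for modules and
# collect callables flagged as tools. The "_is_tool" marker is an invented
# stand-in for LangChain's @tool objects used by the real platform.
import importlib.util
import pathlib
import tempfile

def discover_tools(tools_dir: pathlib.Path) -> dict:
    registry = {}
    for path in sorted(tools_dir.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, obj in vars(module).items():
            if callable(obj) and getattr(obj, "_is_tool", False):
                registry[name] = obj
    return registry

# Demo: write a throwaway tool module and discover it.
with tempfile.TemporaryDirectory() as d:
    tools_dir = pathlib.Path(d)
    (tools_dir / "shout.py").write_text(
        "def shout(text: str) -> str:\n"
        "    return text.upper() + '!'\n"
        "shout._is_tool = True\n"
    )
    tools = discover_tools(tools_dir)
    print(tools["shout"]("hello"))  # HELLO!
```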
Today it's a CLI. Tomorrow it's a web UI. The architecture is designed for:
- CLI power users — what we have now
- Web API — expose agents as HTTP endpoints
- Browser UI — friendly interface for everyone
- Embedded — run agents inside other applications
The agent runs locally with your credentials, but the interface can be anywhere.
```bash
# Quick answers
agent "What's the capital of France?"

# Use specialized agents
agent Calc "sqrt(144) + 10"
agent Research "Latest trends in AI agents"
agent DevWork "Show my open Jira issues"

# Verbose mode — see the tools being called
agent DevWork -v "Find docs on LangChain ChatOpenAI"
# 🔧 Calling: resolve-library-id
# 🔧 Calling: get-library-docs
# ✓ Here's the documentation...
```

```bash
# 1. Clone and setup
git clone <repository-url>
cd dynamic-agent
cp .env.example .env
# Add your OPENAI_API_KEY to .env

# 2. Install globally
pipx install -e .
pipx ensurepath

# 3. Run!
agent "Hello, world!"
```

- ✅ CLI interface — power users, scripting, automation
- ✅ Tool plugin system — add custom tools with zero boilerplate
- ✅ MCP integration — connect to any MCP-compatible service
- ✅ OAuth support — automatic browser-based login for protected services
- HTTP API — expose agents as REST endpoints
- Web UI — browser-based interface for non-technical users
- Agent marketplace — share and discover agent definitions
- Multi-user — run as a service with per-user permissions
Everything below is for developers who want to understand or extend the system.
- 🚀 Global CLI: Use the `agent` command from anywhere after install
- 📄 YAML Frontmatter Definitions: Define agents in `.agent.md` files
- 🔧 Dynamic Tool Loading: Built-in tools, custom tools, and MCP server tools
- 🤖 Subagent Support: Define specialized subagents for task delegation
- 🌊 Streaming Output: Real-time response streaming
- 🔍 Web Search: Built-in OpenAI web search and URL fetching
- 💾 Agent Memory: Per-agent results and persistent memory
```bash
# 1. Clone and setup
git clone <repository-url>
cd dynamic-agent
cp .env.example .env
# Edit .env with your OPENAI_API_KEY (and optionally ANTHROPIC_API_KEY)

# 2. Install globally with pipx
pipx install -e .
pipx ensurepath

# 3. Use from anywhere!
agent "What is the capital of France?"
```

```bash
# Ask anything (uses default Answer agent)
agent "your question here"

# Interactive prompt (no quotes needed)
agent
? what is 7 times 11
✓ 77

# Use a specific agent
agent Calc "sqrt(144) + 10"
agent Research "find info about..."

# Options
agent -v "question"    # Verbose - show tool calls
agent -q "question"    # Quiet - only the answer
agent --list           # List all agents
agent Research --info  # Show agent details
agent -I               # Interactive chat mode
```

```bash
# Clone and setup virtual environment
git clone <repository-url>
cd dynamic-agent
python -m venv .venv
source .venv/bin/activate

# Install with dev dependencies
pip install -e ".[dev]"

# Setup environment
cp .env.example .env
# Edit .env with your API keys
```

Agent files use YAML frontmatter followed by markdown content, GitHub Copilot-style:
```markdown
---
name: AgentName
description: What the agent does
version: "1.0"
model: anthropic:claude-sonnet-4-20250514  # or openai:gpt-4o, etc.
base_url: https://api.example.com/v1       # Optional: custom API endpoint
tools:
  - read         # Built-in: read files
  - write        # Built-in: write files
  - edit         # Built-in: edit files
  - execute      # Built-in: execute code
  - search       # Built-in: search files
  - write_todos  # Built-in: manage todos
  - mcp:github   # MCP server tools
subagents:
  - name: researcher
    description: Deep research on topics
    prompt: You are a research specialist...
    tools:
      - search
      - read
    model: openai:gpt-4o  # Optional: different model
handoffs:
  - target: OtherAgent
    description: When to hand off
    send: false  # true = auto, false = confirm
interrupt_on:
  write:
    allowed_decisions:
      - approve
      - edit
      - reject
tags:
  - category1
  - category2
---

# System Prompt

The markdown content after the frontmatter becomes the agent's system prompt.

## Guidelines

- Be helpful
- Be concise
```

```text
agent [agent_name] [prompt] [options]

# If no agent_name given, uses "Answer" (quick answers)
# If no prompt given, shows interactive prompt

Examples:
  agent "what is 2+2"     # Quick answer
  agent                   # Interactive prompt
  agent Calc "sqrt(144)"  # Specific agent
  agent -v "question"     # Show tool calls
  agent -q "question"     # Just the answer

Options:
  --list, -l         List available agents
  --info, -i         Show agent information
  --verbose, -v      Show tool calls and details
  --quiet, -q        Only output the final answer
  --interactive, -I  Multi-turn chat mode
  --model, -m        Override agent's model
  --agents-dir, -d   Agent definitions directory
  --base-url, -b     Override API base URL
  --no-stream        Disable streaming output
```
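For readers extending the CLI, the option table above maps naturally onto argparse. A sketch under assumptions (the real `src/cli/main.py` may be organized differently, and must also disambiguate `agent "question"` from `agent Name`):

```python
# Hedged sketch: the documented CLI options expressed as an argparse parser.
# Not the actual implementation; a real CLI must also detect whether the first
# positional is an agent name or a bare prompt for the default "Answer" agent.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="agent")
    p.add_argument("agent_name", nargs="?", default="Answer")
    p.add_argument("prompt", nargs="?")
    p.add_argument("--list", "-l", action="store_true")
    p.add_argument("--info", "-i", action="store_true")
    p.add_argument("--verbose", "-v", action="store_true")
    p.add_argument("--quiet", "-q", action="store_true")
    p.add_argument("--interactive", "-I", action="store_true")
    p.add_argument("--model", "-m")
    p.add_argument("--no-stream", dest="no_stream", action="store_true")
    return p

args = build_parser().parse_args(["Calc", "sqrt(144)", "-v"])
print(args.agent_name, args.prompt, args.verbose)  # Calc sqrt(144) True
```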
```text
dynamic-agent/
├── agents/                      # Agent definition files
│   ├── Answer.agent.md          # Default agent - quick answers
│   ├── Calc.agent.md            # Calculator agent
│   ├── Plan.agent.md            # Planning agent
│   └── Research.agent.md        # Web research agent
├── agent_output/                # Per-agent output directories
│   └── {AgentName}/
│       ├── results/             # Saved research results
│       └── memory/              # Persistent key-value memory
├── config/
│   └── mcp_servers.yaml         # MCP server configuration
├── src/
│   ├── core/                    # Core components
│   │   ├── loader.py            # YAML frontmatter parser
│   │   ├── model_factory.py
│   │   ├── tool_registry.py
│   │   └── agent_factory.py
│   ├── tools/                   # Custom tools
│   │   ├── builtin.py           # calculate, current_time, list_directory
│   │   ├── web_search.py        # DuckDuckGo search + fetch_url
│   │   ├── openai_web_search.py # OpenAI's built-in web search
│   │   ├── fetch_url.py         # Fetch web page content
│   │   └── agent_output.py      # save_result, list_results, save_memory, recall_memory, list_memories
│   └── cli/                     # CLI interface
│       └── main.py
├── .env.example                 # Environment template
├── pyproject.toml
└── README.md
```
Create custom tools in `src/tools/`:

```python
# src/tools/my_tool.py
from langchain_core.tools import tool

@tool
def my_tool(arg1: str, arg2: int = 5) -> str:
    """Description of what the tool does.

    Args:
        arg1: Description of arg1
        arg2: Description of arg2

    Returns:
        The result
    """
    return f"Result: {arg1}, {arg2}"
```

Then reference it in your agent:

```yaml
tools:
  - my_tool
```

Configure MCP servers in `config/mcp_servers.yaml`:
```yaml
# HTTP transport (for remote servers without auth)
context7:
  transport: streamable_http
  url: "https://mcp.context7.com/mcp"

# Remote servers with OAuth (via mcp-remote proxy)
# mcp-remote handles OAuth 2.1 automatically - browser opens on first use
# Wrapped in sh to suppress noisy stderr output
atlassian:
  transport: stdio
  command: "sh"
  args: ["-c", "npx -y mcp-remote https://mcp.atlassian.com/v1/sse --transport sse-only 2>/dev/null"]

# Stdio transport (local subprocess)
github:
  transport: stdio
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-github"]
  env:
    GITHUB_PERSONAL_ACCESS_TOKEN: "${GITHUB_TOKEN}"
```

Transport types:

- `streamable_http` - HTTP-based MCP (most common for open APIs)
- `sse` - Server-Sent Events (for SSE endpoints without auth)
- `stdio` - Local subprocess (also use with `mcp-remote` for OAuth-protected servers)
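The `"${GITHUB_TOKEN}"` placeholder implies environment-variable expansion when the config is loaded. A minimal sketch of that substitution pass (an assumption about the loader, not its actual code):

```python
# Hypothetical sketch: expand ${VAR} placeholders in MCP config values from
# the environment. The real config loader may do this differently.
import os
import re

def expand_env(value: str) -> str:
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["GITHUB_TOKEN"] = "ghp_example"
print(expand_env("${GITHUB_TOKEN}"))  # ghp_example
print(expand_env("no placeholders"))  # no placeholders
```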
Note: For servers requiring OAuth (like Atlassian), use `mcp-remote` as a stdio proxy. It handles the OAuth flow automatically via the browser.

Use in agents:

```yaml
tools:
  - mcp:context7          # All tools from the context7 server
  - mcp:github:get_repo   # A specific tool from the github server
```

Required API keys (depending on the models used):
- `ANTHROPIC_API_KEY` - For Anthropic models
- `OPENAI_API_KEY` - For OpenAI models
- `GROQ_API_KEY` - For Groq models
- etc.
- `read` - Read files
- `write` - Write files
- `edit` - Edit files
- `execute` - Execute code/commands
- `search` - Search files
- `task` - Delegate to subagents
- `write_todos` - Manage todos
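Tool references like `read`, `mcp:context7`, and `mcp:github:get_repo` follow a simple naming scheme. A hypothetical sketch of how such a spec string might be resolved (the real logic in `src/core/tool_registry.py` may differ):

```python
# Hypothetical sketch of tool-spec resolution based on the naming scheme in
# this README; not the actual registry code.
def parse_tool_spec(spec: str):
    if not spec.startswith("mcp:"):
        return ("builtin", spec, None)      # e.g. "read"
    parts = spec.split(":", 2)
    if len(parts) == 2:
        return ("mcp", parts[1], None)      # "mcp:context7" -> all server tools
    return ("mcp", parts[1], parts[2])      # "mcp:github:get_repo" -> one tool

print(parse_tool_spec("read"))                 # ('builtin', 'read', None)
print(parse_tool_spec("mcp:context7"))         # ('mcp', 'context7', None)
print(parse_tool_spec("mcp:github:get_repo"))  # ('mcp', 'github', 'get_repo')
```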
- `calculate` - Safe math evaluation (supports sqrt, sin, cos, log, etc.)
- `current_time` - Get the current datetime
- `list_directory` - List directory contents
- `web_search` - DuckDuckGo search (free, no API key)
- `openai_web_search` - OpenAI's server-side web search (better quality, with citations)
- `fetch_url` - Fetch and parse web page content
- `save_result` / `list_results` - Save and list outputs in the agent's results directory
- `save_memory` / `recall_memory` / `list_memories` - Persistent key-value memory
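The memory tools amount to a small per-agent key-value store under `agent_output/{AgentName}/memory/`. A hedged, stdlib-only sketch; the file name and JSON layout are assumptions, not the actual implementation in `src/tools/agent_output.py`:

```python
# Illustrative sketch of per-agent persistent memory as a JSON file under
# agent_output/{AgentName}/memory/. File name and layout are assumptions.
import json
import pathlib
import tempfile

def memory_path(root: pathlib.Path, agent: str) -> pathlib.Path:
    path = root / agent / "memory" / "memory.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    return path

def save_memory(root: pathlib.Path, agent: str, key: str, value: str) -> None:
    path = memory_path(root, agent)
    data = json.loads(path.read_text()) if path.exists() else {}
    data[key] = value
    path.write_text(json.dumps(data))

def recall_memory(root: pathlib.Path, agent: str, key: str):
    path = memory_path(root, agent)
    return json.loads(path.read_text()).get(key) if path.exists() else None

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    save_memory(root, "Research", "favorite_topic", "AI agents")
    result = recall_memory(root, "Research", "favorite_topic")
    print(result)  # AI agents
```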
```bash
# Calculator - understands natural language math
agent Calc "what is 17 times 2?"
agent Calc "sqrt(144) + 10"

# Research - web search with citations and saved output
agent Research "What's new in LangChain 1.0?"

# Plan - create structured plans
agent Plan "Build a REST API for user management"
```

```bash
# Activate venv
source .venv/bin/activate

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src tests
ruff check src tests

# Type check
mypy src
```

License: MIT