This repository explores the evolution of LLM-based agent architectures through hands-on experiments, from ReAct-centered first-generation agents to workflow-orchestrated Post-ReAct systems.
Initializes the shared environment and tools used throughout all experiments.
- Verifies LLM connectivity and API key configuration
- Establishes a working baseline before running agent experiments
- Defines external tools such as Tavily Search
- Connects tools via the LangChain Tool interface
- Prepares the Action Layer used by agents
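The Action Layer can be pictured as a registry of named, described callables, which is essentially what LangChain's `Tool` interface provides. The sketch below is a minimal, dependency-free stand-in, not the repo's actual setup: `Tool`, `fake_search`, and `ACTION_LAYER` are illustrative names, and the stubbed function is where a real Tavily Search call (requiring a `TAVILY_API_KEY`) would go.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """Minimal stand-in for LangChain's Tool interface: a named, described callable."""
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, query: str) -> str:
        return self.func(query)

def fake_search(query: str) -> str:
    # Placeholder for a real Tavily Search request (needs TAVILY_API_KEY).
    return f"[stub results for: {query}]"

# The "Action Layer": the set of tools an agent may invoke by name.
ACTION_LAYER = {
    t.name: t
    for t in [Tool("tavily_search", "Web search via the Tavily API.", fake_search)]
}

print(ACTION_LAYER["tavily_search"].run("LLM agent architectures"))
```

Keeping tools behind a single registry keyed by name is what lets later experiments swap or restrict tools without touching agent logic.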
Experiments with first-generation agent patterns where the LLM independently performs the Reason → Act → Observe loop.
- The purest form of the ReAct pattern
- No planning phase; reasoning happens on-the-fly
- The LLM owns the entire workflow decision process
- ReAct augmented with conversation history (memory)
- Observes behavioral changes when context is accumulated
- Transition from stateless to context-aware reasoning
- Breaks problems into sub-questions (Self-Ask strategy)
- Integrates external search tools for information retrieval tasks
- Performs QA against a provided document store
- Demonstrates that even with retrieval, control still resides in the LLM
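The defining trait of these Gen1 experiments is that the loop itself lives in the LLM's text output: the system only parses `Action:` lines, executes them, and appends an `Observation:`. A minimal sketch of that loop, with a scripted stand-in for the LLM and a stubbed search tool (all names and the fixed trace are illustrative, not the repo's code):

```python
import re

def scripted_llm(prompt: str) -> str:
    """Stand-in for a real LLM: replays a fixed ReAct-style trace."""
    if "Observation:" not in prompt:
        return "Thought: I need to look this up.\nAction: search[capital of France]"
    return "Thought: The observation answers the question.\nFinal Answer: Paris"

def search(query: str) -> str:
    return "Paris is the capital of France."  # stubbed tool result

def react_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = scripted_llm(prompt)                 # Reason
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        match = re.search(r"Action: (\w+)\[(.*)\]", reply)
        if match:
            observation = search(match.group(2))     # Act
            prompt += f"\n{reply}\nObservation: {observation}"  # Observe
    return "No answer within step budget."

print(react_loop("What is the capital of France?"))  # → Paris
```

Note that control never leaves the model's output format: if the LLM emits a malformed `Action:` line, the system has no recovery path, which is exactly the fragility the Gen2 experiments address.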
Moves beyond LLM-driven control toward system-designed execution, where the workflow is defined externally and the LLM acts as a component.
- Separates planning and execution into distinct phases
- Introduces structural determinism absent in ReAct
- Defines explicit states and transitions
- Treats the agent as part of a controlled workflow rather than a free-form conversation
- Introduces DAG-based execution modeling
- Enables non-linear workflows and reusable nodes
- Combines FSM state control with DAG execution
- Approximates real-world orchestration architectures
- Splits responsibilities across Planner / Researcher / Builder / Critic / Supervisor roles
- Implements a collaborative multi-agent system
- Adds validation and guardrail mechanisms
- Ensures outputs are controlled at the system level rather than blindly trusting LLM responses
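The common thread of these Gen2 experiments is that states, transitions, and validation live in system code, with LLM calls confined to individual handlers. A minimal FSM-style sketch of that inversion, assuming illustrative state names and handlers (the real notebooks are more elaborate):

```python
# Minimal FSM orchestrator: the system, not the LLM, owns states and transitions.
# In a real workflow each handler would wrap an LLM call; here they are stubs.

def plan(ctx: dict) -> str:
    ctx["plan"] = ["research", "build"]
    return "EXECUTE"

def execute(ctx: dict) -> str:
    ctx["output"] = f"draft based on {ctx['plan']}"
    return "VALIDATE"

def validate(ctx: dict) -> str:
    # Guardrail: accept output only if it passes an explicit system-level check,
    # rather than trusting the generating step.
    return "DONE" if "draft" in ctx["output"] else "PLAN"

TRANSITIONS = {"PLAN": plan, "EXECUTE": execute, "VALIDATE": validate}

def run_workflow() -> str:
    ctx, state = {}, "PLAN"
    while state != "DONE":
        state = TRANSITIONS[state](ctx)  # each handler returns the next state
    return ctx["output"]

print(run_workflow())
```

Splitting handlers this way also maps naturally onto the Planner / Researcher / Builder / Critic / Supervisor roles: each role becomes a node, and the transition table (or a DAG of nodes) becomes the engineered workflow.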
Gen1: LLM-driven systems where the model decides what to do and when to do it.
Gen2: System-driven orchestration where the workflow is engineered and the LLM plays a bounded role.
This lab demonstrates that building LLM agents is no longer just prompt engineering: it is fundamentally a software architecture problem.