
Smart Learning Assistant for Online Education Platforms (LangChain / LangGraph-based)

🧠 Multi-Agent System with Long-Term User Memory & Self-Corrective RAG

A modular multi-agent system designed to integrate seamlessly into e-learning platforms, acting as a smart learning assistant. It consists of:

  • 🗂 Supervisor / Router Node – Directs incoming requests to the most suitable worker node for processing.
  • 🔁 Self-Corrective RAG Agent – Retrieves and generates answers by first searching internal documents. If nothing relevant is found, it rewrites the query; if relevant results still do not appear, it falls back to web search. Finally, it verifies its own output against the retrieved sources to combat hallucinations.
  • 🧠 Long-Term Memory Node – Extracts personal information from learner interactions, enabling valuable analytics for educators and platform owners.
  • 📊 Computations Node – Performs basic algebra calculations today, with planned upgrades to handle calculus operations such as integration and differentiation for STEM learners.
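The routing step above can be sketched as a plain function that inspects a request and returns the name of the worker node to hand it to. This is a minimal keyword-based sketch; the node names are illustrative, and the actual supervisor in this repo would typically ask an LLM to classify the request instead:

```python
def route_request(message: str) -> str:
    """Pick the worker node best suited to handle the incoming message.

    Keyword matching stands in for the LLM-based classification the
    real supervisor node would perform.
    """
    text = message.lower()
    if any(op in text for op in ("+", "-", "*", "/", "calculate", "solve")):
        return "computations"          # basic algebra
    if any(kw in text for kw in ("my name is", "i am", "i like")):
        return "memory"                # extract personal info
    return "rag"                       # default: answer from documents / web


print(route_request("Calculate 3 * 4"))        # computations
print(route_request("Hi, my name is Lance."))  # memory
print(route_request("What is LangGraph?"))     # rag
```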

Agent in action


🚀 Overview

This project is a highly capable intelligent agent system designed to reason, calculate, learn from interactions, and provide accurate answers through an advanced retrieval-augmented generation (RAG) pipeline.

It consists of several key components:

🧮 1. Calculation Tools

  • Supports arithmetic operations using dedicated subagents.
  • Validates results to ensure accuracy.
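One way such a tool can evaluate arithmetic safely is by walking the expression's AST instead of calling `eval()`. This is a hypothetical sketch; the repo's actual subagents may delegate to an LLM or a math library instead:

```python
import ast
import operator

# Supported operators; anything outside this table is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a basic algebra expression without calling eval()."""
    def _walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_walk(node.left), _walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_walk(node.operand))
        raise ValueError(f"Unsupported expression: {expr!r}")
    return _walk(ast.parse(expr, mode="eval").body)

print(safe_eval("2 + 3 * 4"))     # 14
print(safe_eval("(1 + 2) ** 2"))  # 9
```

Validation here means refusing any node type outside the whitelist, so malformed or malicious expressions raise an error rather than producing a wrong answer.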

🧠 2. Long-Term Memory

  • Captures, stores, and updates structured user profile data across sessions.
  • Example memory schema:
from typing import Optional

from pydantic import BaseModel, Field

class Profile(BaseModel):
    """Represents structured user profile information."""
    name:        Optional[str]  = Field(description="User's name",                                              default=None)
    bachelor:    Optional[str]  = Field(description="Bachelor's degree subject",                                default=None)
    master:      Optional[str]  = Field(description="Master's degree subject",                                  default=None)
    phd:         Optional[str]  = Field(description="PhD subject",                                              default=None)
    connections: list[str]      = Field(description="User's personal connections (friends, family, coworkers)", default_factory=list)
    interests:   list[str]      = Field(description="User's interests",                                         default_factory=list)

Stored using a combination of:

  • 🗃️ In-memory store (for session-based reasoning)
  • 🐘 PostgreSQL (for persistent cross-session memory)

📚 3. Self-Corrective RAG Agent

A subgraph agent responsible for:

  1. Retrieving relevant documents

  2. Generating an answer

  3. Checking for hallucinations

  4. If hallucinated:

    • Attempts regeneration or query reformulation (up to a configurable retry limit)
    • Falls back to web search if needed
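The four steps above reduce to a retry loop with a web-search fallback. A control-flow sketch, where the callables are stand-ins for the subgraph's actual nodes:

```python
def self_corrective_rag(question, retrieve, generate, is_grounded,
                        rewrite, web_search, max_retries=2):
    """Sketch of the self-corrective loop; each callable plays one node."""
    query = question
    for _ in range(max_retries + 1):
        docs = retrieve(query)
        answer = generate(question, docs)
        if is_grounded(answer, docs):
            return answer              # verified against sources
        query = rewrite(query)         # reformulate and retry
    # Retry limit reached: fall back to web search.
    docs = web_search(question)
    return generate(question, docs)


# Demo with trivial stubs: retrieval succeeds only after one rewrite.
answer = self_corrective_rag(
    "what is rag?",
    retrieve=lambda q: ["doc"] if "retrieval" in q else [],
    generate=lambda q, docs: f"answer from {docs}" if docs else "hallucination",
    is_grounded=lambda a, docs: bool(docs),
    rewrite=lambda q: q + " retrieval augmented generation",
    web_search=lambda q: ["web doc"],
)
print(answer)
```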

🛠️ Setup & Installation

1. Clone the repo

git clone https://github.com/your-org/multi-agent-system.git
cd multi-agent-system

2. Install dependencies

pip install -r requirements.txt

Make sure you have:

  • Python 3.11+
  • PostgreSQL running and accessible
  • Environment variables set (e.g. OpenAI API keys, etc.)

⚙️ Usage

🧪 Quick Start (Recommended)

After cloning the repository and installing dependencies, launch the agent using:

langgraph dev

This opens the agent in your browser via LangGraph Studio, providing a full interactive environment in which you can test the entire agent pipeline, from memory capture to calculations and RAG-based responses. Local code changes are reflected live in the Studio session, and all required dependencies are already included in requirements.txt.

Running Locally in a Jupyter Environment

The main entry point for this system is the compiled LangGraph agent. Here's a basic example of how to interact with it programmatically:

from langchain_core.messages import HumanMessage

# `graph` below refers to the compiled LangGraph agent built by this
# project (e.g. the object returned by the graph builder's compile()).

config = {"configurable": {"thread_id": "1", "user_id": "1"}}

# Example user message to initiate profile memory
input_messages = [HumanMessage(content="Hi my name is Lance.")]

# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()

Configurable parameters:

  • user_id: used to isolate memory and personalize responses
  • retries: max attempts for the RAG agent
  • web_fallback: toggle fallback behavior
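Assuming the key names above are read from the runnable config (a sketch, not confirmed against the code), they would sit alongside thread_id and user_id like this:

```python
# Hypothetical full config; retries and web_fallback values are examples.
config = {
    "configurable": {
        "thread_id": "1",
        "user_id": "1",        # isolates memory per learner
        "retries": 3,          # max RAG regeneration attempts
        "web_fallback": True,  # allow web search when retrieval fails
    }
}
```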

🧪 Features In Depth

🔁 Self-Corrective RAG Flow

  1. Retrieve documents
  2. Generate candidate answer
  3. Validate for hallucination
  4. Retry / reformulate
  5. Fallback to web search
  6. Return the final answer

📥 TrustCall-Based Profile Learning

Automatically extracts and stores:

  • Academic history
  • Interests
  • Social graph
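The update step that keeps these facts current across turns can be sketched as a merge: scalar fields are overwritten when a new value arrives, while list fields (interests, connections) are unioned so earlier facts are not lost. TrustCall itself patches the schema directly; this plain-dict version only illustrates the intended behaviour:

```python
def merge_profile(stored: dict, extracted: dict) -> dict:
    """Merge newly extracted fields into the stored profile.

    Scalars overwrite, lists union, None means "nothing extracted".
    """
    merged = dict(stored)
    for key, value in extracted.items():
        if value is None:
            continue  # field was not mentioned in this interaction
        if isinstance(value, list):
            merged[key] = sorted(set(merged.get(key, [])) | set(value))
        else:
            merged[key] = value
    return merged


profile = {"name": "Lance", "interests": ["biking"]}
update = {"master": "CS", "interests": ["bakeries"], "phd": None}
print(merge_profile(profile, update))
```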

🧪 Example Use Cases

  • Conversational tutoring agent with memory
  • Personal knowledge assistant
  • Domain-specific customer support
  • Adaptive recommendation systems

🧱 Tech Stack

  • 🐍 Python
  • 🧠 OpenAI GPT (via langchain_openai)
  • 🛠️ LangGraph & LangChain
  • 📦 Pydantic
  • 🐘 PostgreSQL
  • 🔎 TrustCall Extractor

📌 Roadmap

  • Web GUI built with Streamlit or FastAPI
  • Additional calculator tools (e.g., statistics, matrix math)
  • Real-time memory editing via API

🤝 Contributing

Pull requests are welcome! For major changes, please open an issue first to discuss what you’d like to change.


📄 License

MIT License


🧑‍💻 Maintainers

  • Ans Imran