This project demonstrates how to build intelligent agents using the atomic-agents framework, with a focus on an RSS feed-aware chatbot. It also includes simpler examples of a basic chatbot and an RSS feed parser.
It is highly recommended to use a virtual environment to manage project dependencies.
1. Create and Activate a Virtual Environment:

   ```bash
   # For Linux/macOS
   python3 -m venv venv
   source venv/bin/activate

   # For Windows
   python -m venv venv
   .\venv\Scripts\activate
   ```
2. Install Dependencies:

   Once your virtual environment is activated, install the required packages using the `requirements.txt` file:

   ```bash
   pip install -r requirements.txt
   ```
3. Set up OpenAI API Key:

   These agents use OpenAI's language models, so you need an OpenAI API key. Create a file named `.env` in the root directory of the project and add your API key like this:

   ```
   OPENAI_API_KEY="your_openai_api_key_here"
   ```
This project contains the following key Python scripts:
- `rss_chat_agent.py` - The main application: an intelligent agent that chats about content from predefined RSS feeds.
- `RSS_feedparser.py` - A basic script demonstrating how to parse RSS feeds using the `feedparser` library.
- `basic_chatbot.py` - A simple example of a conversational agent using the `atomic-agents` framework.
This script implements a sophisticated conversational agent capable of discussing topics found within a list of RSS feeds. It leverages the `atomic-agents` framework for its structure, `feedparser` for fetching RSS data, and an OpenAI model (e.g., `gpt-4o-mini`) for natural language understanding and generation.
Key Features:
- RSS Feed Integration: Fetches and processes articles from a user-defined list of RSS feeds.
- Contextual Conversations: Uses the content of the RSS feeds as the primary knowledge base for answering questions.
- Conversational Memory: Remembers previous parts of the conversation to provide coherent and relevant follow-up responses.
- Dynamic Content Refresh: Allows users to refresh the RSS feed data on command.
- Structured Input/Output: Uses Pydantic models for clear and validated agent inputs and outputs.
- Customizable System Prompt: Guides the LLM's behavior, ensuring it sticks to the RSS feed content and maintains a helpful persona.
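The structured input/output feature can be sketched with stdlib dataclasses. The real project uses Pydantic models, so treat this as a shape-only approximation; the `chat_message` field name and the `published` default are assumptions based on the schema descriptions below:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RSSArticle:
    """One fetched feed entry (the project defines this as a Pydantic model)."""
    title: str
    link: str
    summary: str
    published: Optional[str] = None

@dataclass
class RSSChatAgentInputSchema:
    chat_message: str  # the user's message

@dataclass
class RSSChatAgentOutputSchema:
    chat_message: str  # the agent's reply, grounded in feed content
```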
How it Works:
1. Initialization:
   - Loads environment variables (especially `OPENAI_API_KEY`).
   - Defines `PREDEFINED_RSS_FEEDS` (a list of RSS feed URLs).
   - Sets up Pydantic schemas: `RSSArticle` (for individual articles), `RSSChatAgentInputSchema` (for user messages), and `RSSChatAgentOutputSchema` (for agent responses).

2. `RSSFeedContentProvider`:
   - This class is responsible for fetching and formatting RSS feed data.
   - On initialization, it fetches all feeds using the `_fetch_all_feeds` method, which in turn calls `_fetch_feed` for each URL.
   - `_fetch_feed` uses the `feedparser` library to parse a feed, extracts relevant article details (title, summary, link, published date), and stores them as `RSSArticle` objects. Summaries are truncated to `max_summary_length`.
   - The `get_info()` method formats all fetched articles into a single string. This string is then injected into the agent's system prompt, providing the LLM with the necessary context.
   - `refresh_feeds()` allows re-fetching all feed data.
```python
# Snippet from RSSFeedContentProvider
class RSSFeedContentProvider(SystemPromptContextProviderBase):
    # ... (init and other methods) ...

    def _fetch_feed(self, url: str) -> List[RSSArticle]:
        print(f"Fetching feed: {url}")
        parsed_feed = feedparser.parse(url)
        articles = []
        if parsed_feed.bozo == 0:  # Indicates success
            for entry in parsed_feed.entries:
                summary = getattr(entry, 'summary', '')
                if len(summary) > self.max_summary_length:
                    summary = summary[:self.max_summary_length] + "..."
                articles.append(
                    RSSArticle(
                        title=getattr(entry, 'title', 'N/A'),
                        link=getattr(entry, 'link', 'N/A'),
                        summary=summary,
                        published=getattr(entry, 'published', None)
                    )
                )
        # ... (error handling) ...
        return articles

    def get_info(self) -> str:
        context_str_parts = ["CONTEXT FROM RSS FEEDS:\n"]
        # ... (formats feed_contents into a string) ...
        return "".join(context_str_parts)
```
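Because the snippet elides the formatting details, here is a runnable simplification of the truncation and context-formatting logic. The exact layout of the context string and the 150-character default are assumptions, not the project's precise output:

```python
def truncate_summary(summary: str, max_summary_length: int = 150) -> str:
    """Truncate a summary and append an ellipsis, as _fetch_feed does."""
    if len(summary) > max_summary_length:
        return summary[:max_summary_length] + "..."
    return summary

def format_context(articles: list) -> str:
    """Format (title, summary) pairs into a context string for the prompt.

    A simplified stand-in for get_info(); the real method works with
    RSSArticle objects and may lay the text out differently.
    """
    parts = ["CONTEXT FROM RSS FEEDS:\n"]
    for title, summary in articles:
        parts.append(f"- {title}: {summary}\n")
    return "".join(parts)
```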
3. Agent Configuration and Instantiation:
   - `AgentMemory` is set up to store conversation history.
   - A `SystemPromptGenerator` is configured with:
     - `background`: defines the agent's persona and primary goal (discussing RSS feeds).
     - `steps`: instructs the LLM on how to process user queries based on the RSS context.
     - `output_instructions`: specifies how the LLM should formulate its responses (e.g., stick to feed content, state if information is not found).
     - `context_providers`: this dictionary links the `RSSFeedContentProvider` instance (`rss_provider`) to the system prompt. The `atomic-agents` framework automatically calls `rss_provider.get_info()` and injects its output into the system prompt before each LLM call.
   - `BaseAgentConfig` brings together the OpenAI client, model name, system prompt generator, input/output schemas, and memory.
   - Finally, `rss_chat_agent = BaseAgent(config=rss_chat_agent_config)` creates the agent.
```python
# Snippet for SystemPromptGenerator
system_prompt_gen = SystemPromptGenerator(
    background=[
        "You are an AI assistant specialized in discussing content from a predefined list of RSS feeds.",
        "Your primary role is to answer user questions and engage in conversation based *exclusively* on the information available in the provided RSS feed summaries (under 'CONTEXT FROM RSS FEEDS:').",
        # ... more background ...
    ],
    # ... steps and output_instructions ...
    context_providers={"rss_feed_data": rss_provider}
)
```
4. Interaction Loop (`if __name__ == "__main__":`):
   - A command-line interface allows users to interact with the agent.
   - It handles user input, including special commands:
     - `/exit` or `/quit`: terminates the chat.
     - `/refresh`: calls `rss_provider.refresh_feeds()`, clears agent memory, and notifies the user.
   - For standard messages:
     - The user's message is added to `AgentMemory`.
     - The message is wrapped in `RSSChatAgentInputSchema`.
     - `rss_chat_agent.run(agent_input)` sends the input (along with memory and the RSS context from the system prompt) to the LLM.
     - The LLM's response, parsed into `RSSChatAgentOutputSchema`, is printed and also added to memory.
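The command routing described above can be sketched as a small dispatcher, decoupled from the real agent. The `handle_input` helper and its return strings are hypothetical; `agent_fn` and `refresh_fn` stand in for `rss_chat_agent.run` and `rss_provider.refresh_feeds`:

```python
def handle_input(user_input, agent_fn, refresh_fn):
    """Route one line of user input: special commands first, then the agent.

    agent_fn: callable taking the raw message and returning a reply.
    refresh_fn: callable that re-fetches the RSS feeds.
    """
    cmd = user_input.strip().lower()
    if cmd in ("/exit", "/quit"):
        return "exit"
    if cmd == "/refresh":
        refresh_fn()
        return "RSS feeds refreshed."
    return agent_fn(user_input)

# Usage with stand-in callables:
reply = handle_input("/refresh", agent_fn=str.upper, refresh_fn=lambda: None)
# reply == "RSS feeds refreshed."
```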
To Run:

```bash
python rss_chat_agent.py
```

You can then chat with the agent in your terminal. Try asking questions about topics you expect to be in the configured RSS feeds.
This is a simple utility script that demonstrates the basic usage of the `feedparser` library to fetch and parse an RSS feed. The core logic shown here is integrated into the `RSSFeedContentProvider` class within `rss_chat_agent.py`.
Purpose:
- To show how to parse an RSS feed URL.
- To extract common information from feed entries, such as title, link, publication date, and summary.
- To handle potential parsing errors (`feed.bozo`).
Code:
```python
import feedparser

url = "https://feeds.a.dj.com/rss/WSJcomUSBusiness.xml"  # Example URL
feed = feedparser.parse(url)

if feed.bozo == 0:
    print("Feed title:", feed.feed.title)
    for entry in feed.entries:
        print("Entry title:", entry.title)
        print("Entry link:", entry.link)
        print("Entry published date:", entry.published)
        print("Entry summary:", entry.summary)
        print("-" * 20)
else:
    print("Error parsing feed:", feed.bozo_exception)
```

How it's related to rss_chat_agent.py:
The `rss_chat_agent.py` script uses the same `feedparser.parse(url)` mechanism within its `RSSFeedContentProvider._fetch_feed` method to get data from each RSS feed URL. It then structures this data more formally using the `RSSArticle` Pydantic model.
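That entry-to-model mapping can be sketched without touching the network by faking a parsed entry with `SimpleNamespace`. The field access mirrors the `getattr` calls in `_fetch_feed`; the plain-dict output and the 150-character default are simplifications of the project's `RSSArticle` model:

```python
from types import SimpleNamespace

def entry_to_article(entry, max_summary_length=150):
    """Convert a feedparser-style entry into a plain dict, using the same
    getattr fallbacks and summary truncation as _fetch_feed."""
    summary = getattr(entry, "summary", "")
    if len(summary) > max_summary_length:
        summary = summary[:max_summary_length] + "..."
    return {
        "title": getattr(entry, "title", "N/A"),
        "link": getattr(entry, "link", "N/A"),
        "summary": summary,
        "published": getattr(entry, "published", None),
    }

# A fake entry standing in for one item of parsed_feed.entries;
# summary and published are deliberately missing to show the fallbacks:
fake = SimpleNamespace(title="Markets rally", link="https://example.com/a")
article = entry_to_article(fake)
```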
To Run:

```bash
python RSS_feedparser.py
```

This will print the content of the example RSS feed to your console.
This script provides a minimal example of a conversational agent built using the atomic-agents framework and OpenAI. It's a good starting point to understand the fundamental components of an agent without the added complexity of RSS feed integration.
Key Features:
- Simple Conversation: Engages in a basic chat with the user.
- Agent Memory: Remembers the conversation history.
- OpenAI Integration: Uses an OpenAI model for responses.
- Console Interaction: Provides a simple command-line interface.
How it Works:
1. Initialization:
   - Sets up an OpenAI client using `instructor`.
   - Initializes `AgentMemory` and adds an initial greeting message from the assistant.

2. Agent Configuration:
   - A `BaseAgent` is configured with:
     - The OpenAI client.
     - A model (e.g., "gpt-4o-mini").
     - The `AgentMemory` instance.
     - `BaseAgentInputSchema` and `BaseAgentOutputSchema`, used for generic chat message inputs and outputs.
   - The system prompt for this basic agent is implicitly the default one provided by `BaseAgent`. It can be customized if needed, though this minimal example does not do so (unlike `rss_chat_agent.py`).
3. Interaction Loop:
   - Prompts the user for input.
   - Allows the user to exit with `/exit` or `/quit`.
   - For each user message:
     - The message is wrapped in `BaseAgentInputSchema`.
     - `agent.run(input_schema)` sends the message (and conversation history) to the LLM.
     - The agent's response (as `BaseAgentOutputSchema`) is printed.
   - (Note: the original `basic_chatbot.py` does not explicitly add the agent's response back to its own memory in the loop. A more robust chatbot adds its own responses to memory as well, similar to `rss_chat_agent.py`; the listing below includes that step.)
Code:
```python
import os
import instructor
import openai
from rich.console import Console
from atomic_agents.lib.components.agent_memory import AgentMemory
from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig, BaseAgentInputSchema, BaseAgentOutputSchema

# Initialize console for pretty outputs
console = Console()

# Memory setup
memory = AgentMemory()

# Initialize memory with an initial message from the assistant
# (Using BaseAgentOutputSchema for consistency, though content is a simple string)
initial_message_content = "Hello! How can I assist you today?"
initial_message_schema = BaseAgentOutputSchema(chat_message=initial_message_content)
memory.add_message(role="assistant", content=initial_message_schema)

# OpenAI client setup using the Instructor library
client = instructor.from_openai(openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY")))

# Agent setup with specified configuration
agent = BaseAgent(
    config=BaseAgentConfig(
        client=client,
        model="gpt-4o-mini",
        memory=memory,
        input_schema=BaseAgentInputSchema,   # Not in the actual code; default input for simple chat
        output_schema=BaseAgentOutputSchema  # Not in the actual code; default output for simple chat
        # A custom system_prompt_generator could be added here for more specific behavior
    )
)

# Start a loop to handle user inputs and agent responses
while True:
    user_input = console.input("[bold blue]You:[/bold blue] ")
    if user_input.lower() in ["/exit", "/quit"]:
        console.print("Exiting chat...")
        break

    # Add user message to memory
    user_input_schema = BaseAgentInputSchema(chat_message=user_input)
    memory.add_message(role="user", content=user_input_schema)

    # Process the user's input through the agent and get the response
    response = agent.run(user_input_schema)

    # Display the agent's response
    console.print("Agent: ", response.chat_message)

    # Add agent's response to memory for context in the next turn
    memory.add_message(role="assistant", content=response)
```

To Run:

```bash
python basic_chatbot.py
```

This will start a simple chat session with the agent in your terminal.