Stock Picker

Production-style multi-agent stock research and selection system built with crewAI.

The project researches a user-selected sector, identifies trending companies, performs financial analysis, and chooses the best candidate for investment with a final decision report and optional push notification.

What This Project Does

Given a sector (for example: Technology, Healthcare, Finance), this crew executes a three-stage investment workflow:

  1. Finds 2-3 trending companies in the selected sector from recent news.
  2. Produces detailed research for each selected company.
  3. Picks one best company for investment, explains the decision, and optionally sends a push notification.

High-Level Architecture

This project is a crewAI crew that uses a manager-led hierarchical process.

Core components

  • Entry point: src/stock_picker/main.py
  • Crew orchestration and memory wiring: src/stock_picker/crew.py
  • Agent definitions (roles/goals/LLMs): src/stock_picker/config/agents.yaml
  • Task graph and outputs: src/stock_picker/config/tasks.yaml
  • Custom push tool: src/stock_picker/tools/push_tool.py
  • Generated artifacts: output/
  • Memory storage: memory/

Design pattern

  • Orchestration model: manager-driven, hierarchical process.
  • Task dependencies: context-based task chaining.
  • Output contracts: structured Pydantic outputs for intermediate stages.
  • Persistence: long-term and retrieval-backed short/entity memory.

Execution Flow (Command to Final Decision)

When you run the crew:

  1. main.py asks for a sector input in the terminal.
  2. main.py creates inputs = { "sector": ..., "current_date": ... }.
  3. StockPicker().crew().kickoff(inputs=inputs) starts execution.
  4. The manager agent coordinates agents under Process.hierarchical.
  5. Task 1 (find_trending_companies) runs and writes output/trending_companies.json.
  6. Task 2 (research_trending_companies) consumes Task 1 context and writes output/research_report.json.
  7. Task 3 (pick_best_company) consumes Task 2 context, sends push notification, and writes output/decision.md.
  8. The final result is printed to the terminal.
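
The inputs assembled in step 2 can be sketched as follows (`build_inputs` is a hypothetical helper name; the actual main.py may format the date differently):

```python
from datetime import date

# Hypothetical helper mirroring the inputs main.py assembles before kickoff;
# the real script may format current_date differently.
def build_inputs(sector: str) -> dict:
    return {"sector": sector, "current_date": date.today().isoformat()}

print(build_inputs("Technology"))
```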

Crew Terminology Explained

crewAI can feel abstract at first. Here is the terminology mapped directly to this repository.

Crew

A crew is the full multi-agent system that executes your workflow.

In this project, the crew is created in crew.py and includes:

  • Agents
  • Tasks
  • Process strategy (hierarchical)
  • Manager agent
  • Memory systems

Agent

An agent is an AI worker with a role, goal, backstory, and optional tools.

Defined in agents.yaml and instantiated in crew.py:

  • trending_company_finder
  • financial_researcher
  • stock_picker
  • manager (orchestration only)

Task

A task is a unit of work assigned to one agent.

Defined in tasks.yaml with:

  • Description
  • Expected output
  • Assigned agent
  • Context dependencies
  • Output file
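
An illustrative tasks.yaml entry following this schema (descriptions are paraphrased for illustration, not copied from the repository):

```yaml
find_trending_companies:
  description: >
    Find 2-3 companies trending in recent news for the {sector} sector.
  expected_output: >
    A list of trending companies with the news driving each one.
  agent: trending_company_finder
  output_file: output/trending_companies.json
```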

Process

Process determines orchestration style.

  • Process.hierarchical means a manager agent can plan/delegate dynamically.
  • This differs from pure linear execution, where tasks always fire in a fixed order without manager intervention.

Context

Context is prior task output passed to later tasks.

In this project:

  • Research task consumes company-finder output.
  • Decision task consumes research output.

This creates a directed task graph with dependency-aware execution.
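
A library-agnostic sketch of this dependency-aware chaining (function bodies and company names are placeholders, not from the repository):

```python
# Each stage receives the previous stage's output as context,
# mirroring the context links declared for the tasks.
def find_trending_companies(sector: str) -> dict:
    return {"companies": ["AcmeCorp", "ExampleSoft"]}  # placeholder names

def research_trending_companies(context: dict) -> dict:
    return {name: f"research notes for {name}" for name in context["companies"]}

def pick_best_company(report: dict) -> str:
    # Stand-in decision rule; the real crew reasons over the research content.
    return min(report)

trending = find_trending_companies("Technology")
report = research_trending_companies(trending)
decision = pick_best_company(report)
```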

Tools

Tools are external capabilities an agent can invoke.

Used here:

  • SerperDevTool for web/news search.
  • PushNotificationTool for Pushover notification delivery.

Memory Architecture Explained

This project uses three active memory systems in the crew, and one implied conceptual layer.

1) Long-Term Memory (persistent across runs)

  • Class: LongTermMemory
  • Storage: SQLite via LTMSQLiteStorage
  • Path: ./memory/long_term_memory_storage.db
  • Purpose: keep durable learnings or prior run knowledge across sessions.

Think of this as durable historical memory.

2) Short-Term Memory (working memory for current and recent context)

  • Class: ShortTermMemory
  • Storage: RAGStorage with OpenAI embeddings (text-embedding-3-small)
  • Path: ./memory/
  • Purpose: retrieval-backed temporary context used while reasoning and composing outputs.

Think of this as active working memory.

3) Entity Memory (facts about entities)

  • Class: EntityMemory
  • Storage: RAGStorage with OpenAI embeddings
  • Path: ./memory/
  • Purpose: track facts tied to named entities (for example companies/tickers).

Think of this as per-entity knowledge state.

4) Contextual Memory (conceptual layer)

Contextual memory is not a separate class instantiated here, but functionally emerges from:

  • Task context chaining in tasks.yaml
  • Short-term retrieval context
  • Entity-specific recall

In practice, contextual memory is the crew's ability to answer based on "what has happened so far in this run" and "what is relevant now".

Why this memory design matters for stock picking

  • Reduces repeated selection of the same companies over time.
  • Preserves sector-level and company-level learnings.
  • Improves coherence between discovery, research, and decision tasks.

Repository Structure

.
|-- pyproject.toml
|-- README.md
|-- .env.example
|-- src/
|   `-- stock_picker/
|       |-- main.py
|       |-- crew.py
|       |-- config/
|       |   |-- agents.yaml
|       |   `-- tasks.yaml
|       `-- tools/
|           `-- push_tool.py
|-- output/
`-- memory/

Prerequisites

  • Python 3.10 to 3.12
  • uv
  • OpenAI API key
  • Serper API key (for web/news search)
  • Optional: Pushover credentials for push notifications

Quick Start

1) Install uv (if not already installed)

pip install uv

2) Install dependencies

uv sync

Alternative:

crewai install

3) Set environment variables

Copy .env.example to .env, then fill in real values.

4) Run

crewai run

Environment Variables

Use the provided .env.example as a template.

Required for core workflow:

  • OPENAI_API_KEY: used for LLM and embeddings.
  • SERPER_API_KEY: used by SerperDevTool for internet/news search.

Optional for push alerts:

  • PUSHOVER_USER: Pushover user key.
  • PUSHOVER_TOKEN: Pushover app token.
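
A .env template consistent with the variables above (values are placeholders; start from the repo's .env.example rather than this sketch):

```
OPENAI_API_KEY=your-openai-key
SERPER_API_KEY=your-serper-key
PUSHOVER_USER=your-pushover-user-key
PUSHOVER_TOKEN=your-pushover-app-token
```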

Run the Project

From repository root:

crewai run

You will be prompted:

Enter the sector you want to research (e.g. Technology, Healthcare, Finance):

The crew then executes all tasks and prints the final decision output in the terminal.

Outputs

After a successful run, the following files are generated in output/:

  • trending_companies.json: discovered candidates.
  • research_report.json: detailed per-company research.
  • decision.md: final pick and rationale.

How Hierarchical and Sequential Behavior Works Here

This project uses both ideas together:

  • Hierarchical orchestration: enabled by process=Process.hierarchical and manager_agent.
  • Sequential dependencies: enforced by explicit context links in tasks.

In effect:

  • The manager can control and delegate how work gets done.
  • The dependency graph still ensures that downstream tasks receive required upstream outputs.

This yields controlled flexibility: dynamic orchestration with deterministic data flow.

Configuration Deep Dive

Agents (config/agents.yaml)

Each agent includes:

  • role: what the agent is.
  • goal: mission criteria.
  • backstory: behavior shaping context.
  • llm: model identifier.

Current model choices:

  • Worker agents: openai/gpt-4o-mini
  • Manager: openai/gpt-4o
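
An illustrative agents.yaml entry following this schema (the role/goal/backstory text here is paraphrased, not copied from the repository):

```yaml
trending_company_finder:
  role: >
    Trending Company Finder for the {sector} sector
  goal: >
    Identify 2-3 companies trending in recent {sector} news
  backstory: >
    A market-news analyst skilled at spotting momentum early.
  llm: openai/gpt-4o-mini
```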

Tasks (config/tasks.yaml)

Each task includes:

  • description
  • expected_output
  • agent
  • context (optional dependencies)
  • output_file

Structured outputs (crew.py)

Two stages use Pydantic output schemas:

  • Trending companies list
  • Detailed research list

This enforces stronger shape consistency for intermediate outputs.
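
The schemas themselves are not reproduced in this README; a hypothetical shape for the trending-companies stage might look like this (all field names are assumptions):

```python
from pydantic import BaseModel

# Hypothetical field names; the repo's actual schemas in crew.py may differ.
class TrendingCompany(BaseModel):
    name: str
    ticker: str
    reason: str

class TrendingCompanyList(BaseModel):
    companies: list[TrendingCompany]
```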

Tooling and Integrations

SerperDevTool

Used by:

  • Trending finder
  • Financial researcher

Purpose:

  • Current web/news retrieval for evidence-backed analysis.

PushNotificationTool

Used by:

  • Final stock picker

Implementation details:

  • Endpoint: https://api.pushover.net/1/messages.json
  • Method: POST
  • Payload: user, token, message
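
A stdlib sketch of that call (the repo's PushNotificationTool wraps equivalent logic as a crewAI tool; `send_push` and `build_payload` are hypothetical names):

```python
import os
import urllib.parse
import urllib.request

def build_payload(message: str, user: str, token: str) -> dict:
    # The three form fields the endpoint above expects.
    return {"user": user, "token": token, "message": message}

def send_push(message: str) -> None:
    payload = build_payload(
        message, os.environ["PUSHOVER_USER"], os.environ["PUSHOVER_TOKEN"]
    )
    data = urllib.parse.urlencode(payload).encode()
    req = urllib.request.Request(
        "https://api.pushover.net/1/messages.json", data=data, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()
```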

Customization Guide

Add or change agents

Edit src/stock_picker/config/agents.yaml and add corresponding @agent methods in src/stock_picker/crew.py.

Add or change tasks

Edit src/stock_picker/config/tasks.yaml and add corresponding @task methods in src/stock_picker/crew.py.

Change process style

In crew.py, switch:

  • Process.hierarchical to Process.sequential if you want strict linear execution without manager delegation.

Change storage paths

Update memory storage paths in crew.py if you want environment-specific persistence locations.

Improve decision quality

  • Add valuation metrics (P/E, growth, debt, FCF).
  • Add risk scoring rubric.
  • Add guardrails for diversification and concentration limits.

Troubleshooting

Missing API key errors

  • Verify .env exists and includes required keys.
  • Ensure shell session loads environment variables.

Empty or weak research output

  • Confirm SERPER_API_KEY is valid.
  • Retry with a broader sector keyword.

Push notifications not received

  • Check PUSHOVER_USER and PUSHOVER_TOKEN.
  • Confirm Pushover app/device settings.

Memory not persisting

  • Confirm memory/ is writable.
  • Check for memory/long_term_memory_storage.db creation after run.
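
To verify persistence after a run, you can inspect the SQLite file directly (a quick diagnostic sketch; table names depend on crewAI's internal schema):

```python
import sqlite3
from pathlib import Path

# Path as wired in crew.py; run this after a completed crew run.
db = Path("memory/long_term_memory_storage.db")
if db.exists():
    con = sqlite3.connect(db)
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    print("tables:", tables)
    con.close()
else:
    print("no long-term memory database yet")
```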

Security and Operational Notes

  • Do not commit real .env secrets.
  • Keep .env in .gitignore (already configured).
  • Consider rotating API keys periodically.
  • Validate generated investment output before real-world decisions.

Roadmap Ideas

  • Add backtesting and benchmark comparison.
  • Add portfolio construction instead of single-pick output.
  • Add compliance/risk policy constraints.
  • Add scheduled runs and notification digests.
  • Add unit and integration tests for tools and task contracts.

License

This project is licensed under the MIT License. See the LICENSE file for details.
