CrawlLama 🦙 is a local AI agent that answers questions via Ollama and integrates web- and RAG-based research.
Updated
Mar 28, 2026 - Python
Stateful AI Agent for Knowledge Extraction
Advanced multi-agent Medical AI Assistant powered by LangGraph that delivers empathetic, doctor-like responses using a hybrid pipeline of LLM reasoning, RAG from medical PDFs, and intelligent fallback tools. Features long-term memory with SQLite, dynamic tool routing, and state reasoning for reliable, context-aware consultation.
Repository to host example code for the ARK Agent.
Persistent memory for Claude, installable as an MCP plugin. Local-first, never forgets.
PromptWeaver: RAG Edition helps design effective prompts for Traditional, Hybrid, and Agentic RAG systems. It offers templates, system prompts, and best practices to improve accuracy, context use, and LLM reasoning.
A local Retrieval-Augmented Generation (RAG) system for answering questions about TouchDesigner using wiki pages, tutorials, and other structured or semi-structured content. Powered by FAISS and local LLMs via Ollama.
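The core retrieval step such a system performs can be sketched in a few lines. This is a minimal illustration, not code from the repository: plain NumPy stands in for FAISS, and the `embed` function is a toy stand-in for a real sentence-embedding model (deterministic only within a single process).

```python
import numpy as np

# Toy corpus in place of TouchDesigner wiki pages and tutorials.
DOCS = [
    "TOPs process texture data on the GPU.",
    "CHOPs handle channel data such as audio and control signals.",
    "SOPs operate on 3D surface geometry.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash-seeded random unit vector (stand-in for a real model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# The matrix a FAISS index would hold internally.
INDEX = np.stack([embed(d) for d in DOCS])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k most similar documents by cosine similarity."""
    scores = INDEX @ embed(query)  # dot product of unit vectors = cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [DOCS[i] for i in top]
```

The retrieved passages would then be prepended to the question in a prompt sent to a local LLM via Ollama.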
Code that gives any AI persistent, unlimited-context memory. The included example lets an AI read the Uniform Commercial Code of Michigan, a document of 220,000 tokens.
Sub-linear knowledge retrieval via quantum-inspired hyperdimensional folded space (0.88ms @ 100% accuracy)
Notebook examples for using OpenAI's Assistants API with the file search (knowledge retrieval) functionality.
Local-first, evidence-backed memory sidecar for AI agents.
OllamaMulti-RAG 🚀 is a multimodal AI chat app combining Whisper AI for audio, LLaVA for images, and Chroma DB for PDFs, enhanced with Ollama and OpenAI API. 📄 Built for AI enthusiasts, it welcomes contributions—features, bug fixes, or optimizations—to advance practical multimodal AI research and development collaboratively.
⚡️ Local RAG API using FastAPI + LangChain + Ollama | Upload PDFs, DOCX, CSVs, XLSX and ask questions using your own documents — fully offline!
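Before any uploaded PDF, DOCX, or spreadsheet can be embedded, a pipeline like this has to split its extracted text into overlapping chunks. A minimal sketch of that step (the function name and window sizes are illustrative, not the repository's actual API):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-window chunks for embedding.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk.
    """
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + size]
        if window:
            chunks.append(" ".join(window))
        if start + size >= len(words):
            break
    return chunks
```

Each chunk is then embedded and stored in the vector index; at query time the top-scoring chunks are stuffed into the LLM prompt.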
Local Retrieval-Augmented Generation (RAG) system built with FastAPI, integrating vector search, Elasticsearch, and optional web search to power LLM-based intelligent question answering using models like Mistral or GPT-4.
Self-healing AI research engine with grounded RAG, FinOps cost tracking, and resilient API fallback powered by Gemini.
AI-powered support copilot for ticket classification and query resolution. RAG, Chroma DB, Streamlit. Atlan AI Engineer Internship.
Scientific Agent: A Retrieval-Augmented Generation (RAG) System for Domain-Aware Literature Review Automation
QueryVault is a robust RAG system for structured Q&A data. It ingests JSON files, embeds content via ChromaDB, and serves context-aware answers using FastAPI and Google Gemini. With a modular design and CLI tools, it's built for scalable, secure AI-powered knowledge retrieval.
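The ingestion step of such a system flattens JSON Q&A records into documents with ids and metadata before handing them to the vector store. A hedged sketch under assumed field names (`question`/`answer`); the schema and helper are illustrative, not QueryVault's actual code:

```python
import json

# Illustrative input in place of a real Q&A JSON file.
RAW = json.dumps([
    {"question": "What is RAG?", "answer": "Retrieval-Augmented Generation."},
    {"question": "Which store is used?", "answer": "ChromaDB."},
])

def to_documents(raw_json: str) -> list[dict]:
    """Flatten Q&A records into documents ready for a vector store's add() call."""
    records = json.loads(raw_json)
    docs = []
    for i, rec in enumerate(records):
        docs.append({
            "id": f"qa-{i}",
            "text": f"Q: {rec['question']}\nA: {rec['answer']}",
            "metadata": {"source": "qa.json", "index": i},
        })
    return docs
```

The resulting `text` fields are what get embedded; the `metadata` travels alongside so answers can cite their source record.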
Local-first CLI for ingesting PDFs and retrieving provenance-grounded answers with predictable machine-readable output.
🚀 Revolutionize your data interaction with a cutting-edge chatbot built on Retrieval-Augmented Generation (RAG) and OpenAI’s GPT-4. Upload documents, create custom knowledge bases, and get precise, contextual answers. Ideal for research, business operations, customer support, and more!