CS undergrad (2027) obsessed with one question: how do AI systems actually think at scale?
I build at the intersection of retrieval, reasoning, and real-world impact: from RAG pipelines to multi-step agents. My work leans backend-heavy, systems-first, and always production-aware.
- Focus: AI Systems · RAG Architectures · Agent Frameworks · Backend Engineering
- Currently building: CodeRAG, an AI debugging agent
- Learning: Advanced agentic systems · System design at scale · Hybrid search architectures
- Open to: AI/ML collabs, open-source contributions, research discussions
- Punjab, India
- Strong understanding of AI system design (RAG + Agents)
- Ability to build end-to-end backend systems
- Focus on problem-solving, not just implementation
- Experience with real-world datasets and ML workflows
- Builder mindset: turning ideas into working systems
- AI / ML
- Backend
- Frontend
- Databases & Search
- Data Science
- Tools & Platforms
An AI-powered debugging system that finds root causes, not just symptoms.
Most debugging tools tell you where the error is. CodeRAG tells you why it happened, by reasoning across your entire codebase context.
User reports bug → CodeRAG indexes codebase + logs + docs + git history
→ Hybrid search retrieves relevant context
→ LangGraph agent reasons across evidence
→ Root cause + suggested fix returned
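The flow above can be sketched as a plain orchestration function. `ToyIndex` and `debug_report` are hypothetical stand-ins for illustration, not CodeRAG's actual code; the real system would use hybrid search and an agent instead of substring matching.

```python
class ToyIndex:
    """Stand-in for the real hybrid index: naive keyword matching only."""

    def __init__(self, docs):
        self.docs = docs  # code chunks, log lines, docs, commit messages

    def search(self, query, k=5):
        # Retrieve any document sharing a term with the bug report.
        terms = query.lower().split()
        hits = [d for d in self.docs if any(t in d.lower() for t in terms)]
        return hits[:k]


def debug_report(bug_report, index):
    """Run the pipeline stages: retrieve evidence, then report a cause."""
    evidence = index.search(bug_report)
    return {
        "root_cause": evidence[0] if evidence else None,
        "evidence": evidence,
    }
```

In the real pipeline the reasoning step sits between retrieval and reporting; here it is collapsed into picking the top hit to keep the sketch short.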
| Layer | Technology |
|---|---|
| Backend | FastAPI |
| Frontend | Next.js |
| Code Embeddings | CodeBERT |
| Vector Store | ChromaDB |
| Search Engine | Elasticsearch |
| Agent Framework | LangGraph |
| Search Strategy | Hybrid (BM25 + Vector) |
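One common way to combine BM25 and vector rankings, as in the hybrid strategy above, is reciprocal rank fusion (RRF). This is a generic RRF sketch under the assumption that each retriever returns an ordered list of document ids; it is not CodeRAG's exact fusion code.

```python
def rrf_fuse(bm25_ranking, vector_ranking, k=60):
    """Merge two ranked lists of doc ids into one hybrid ranking.

    Each doc scores 1 / (k + rank) per list it appears in; k=60 is the
    smoothing constant commonly used in the RRF literature.
    """
    scores = {}
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

RRF needs only ranks, not raw scores, so it sidesteps the problem that BM25 scores and cosine similarities live on incompatible scales.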
What makes it different:
- Code-aware search: understands functions, classes, call graphs
- Cross-context reasoning: correlates logs, docs, and commits
- Multi-step agents: they don't guess, they trace
- Actionable output: fix suggestions with reasoning, not just pointers
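The "trace, don't guess" idea can be illustrated with a toy call-graph walk: start at the failing function and follow caller links outward. The names `trace_root_cause` and `call_graph` are illustrative, assuming caller relationships have already been extracted from the codebase.

```python
def trace_root_cause(error_fn, call_graph, max_hops=5):
    """Walk the call graph upward from the failing function.

    `call_graph` maps a function name to the list of its callers.
    Returns the candidate chain from the symptom back toward the cause.
    """
    visited = [error_fn]
    frontier = [error_fn]
    for _ in range(max_hops):
        # Expand one hop: callers of the current frontier not yet seen.
        nxt = [c for f in frontier
               for c in call_graph.get(f, [])
               if c not in visited]
        if not nxt:
            break
        visited.extend(nxt)
        frontier = nxt
    return visited
```

A bounded breadth-first walk like this keeps the evidence chain explicit, which is what lets the agent justify its suggested fix instead of pattern-matching on the error message alone.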
```python
class Sneha:
    status = "Shipping AI systems that retrieve, reason, and solve real problems"
    learning = ["Advanced RAG patterns", "Agentic system design", "Distributed systems"]
    exploring = ["Multi-agent orchestration", "Graph-based retrieval", "Eval frameworks for LLMs"]
    open_to = "Interesting problems worth solving"
```

- Winter of Blockchain: Open-source contributor in the Web3 ecosystem
- AI/ML Coursework: Deep Learning · NLP · Computer Vision · Generative AI
- Projects shipped: AI debugging systems, web apps, RAG pipelines
- GitHub Achievements: Pull Shark ×2 · Quickdraw
"Good software is a system, not a script."
I approach every project by asking:
- What breaks at scale? → Design for it upfront
- Where does reasoning fail? → Add structure and evaluation
- What's the actual problem? → Solve that, not the surface symptom
I prefer boring infrastructure that works over clever code that doesn't.

