Hi, and thanks for Semantic Kernel. It has become a key framework for building orchestrated AI apps on top of multiple models and tools.
I maintain an MIT-licensed open-source project called WFGY (~1.5k GitHub stars).
Its main diagnostic component is a 16-problem “ProblemMap” for RAG and LLM pipelines, which organizes failure modes across:
- data ingestion, normalization, and chunking
- embeddings and vector store configuration
- retrievers, ranking, and re-ranking
- planner, tool call, and kernel routing behavior
- evaluation gaps, logging, and guardrails
ProblemMap overview:
https://github.com/onestardao/WFGY/blob/main/ProblemMap/README.md
The ProblemMap is already referenced by external research projects and curated lists, including:
- ToolUniverse from Harvard MIMS Lab
- Multimodal RAG Survey from QCRI LLM Lab
- Rankify from University of Innsbruck
Because Semantic Kernel is used to compose complex RAG workflows, many developers end up asking whether a failure comes from their data layer, embeddings, planner, or prompt logic. The ProblemMap aims to provide a neutral checklist for exactly these questions.
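To illustrate the idea, here is a minimal, provider-agnostic sketch of routing a suspected failure to a checklist layer. All names (the `CHECKLIST` mapping, `next_check`) are hypothetical illustrations, not part of WFGY or any Semantic Kernel API:

```python
# Hypothetical sketch of a layer-based diagnostic checklist for a RAG pipeline.
# The layer names and check text below are illustrative, not official WFGY
# problem IDs or Semantic Kernel concepts.

CHECKLIST = {
    "data": "Verify ingestion, normalization, and chunk boundaries.",
    "embeddings": "Confirm the embedding model and vector store dimensions match.",
    "retrieval": "Inspect retriever top-k results and re-ranking scores.",
    "planner": "Trace planner, tool-call, and kernel routing decisions.",
    "prompt": "Review prompt templates and guardrail/evaluation coverage.",
}

def next_check(layer: str) -> str:
    """Return the diagnostic step for a suspected failure layer."""
    return CHECKLIST.get(
        layer, "Unknown layer; start from 'data' and work downstream."
    )

print(next_check("embeddings"))
```

The point of a fixed mapping like this is that a developer debugging "bad answers" can eliminate layers in order instead of guessing, without the checklist depending on any specific model provider.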
I would like to propose adding WFGY ProblemMap as an optional reference in the docs, for example:
- a short “RAG failure mode checklist” page under guidance or best practices
- a link from any existing troubleshooting sections that talk about retrieval quality
If this sounds useful, I would be glad to draft a PR that:
- maps the 16 problems to typical Semantic Kernel patterns
- shows how to use the checklist without coupling to any specific model provider
- keeps content concise and aligned with Microsoft documentation style
Thanks for considering this suggestion, and for all the work you do on Semantic Kernel.