This repo is a small playground that combines a standard Kedro example project with an MCP server and two different MCP agents.
The underlying project structure was generated with the `kedro new` command using the Spaceflights example. If you want a full walkthrough of Kedro concepts (project layout, pipelines, configuration, CLI, etc.), follow the official Kedro course rather than this README.
Below we focus only on the MCP/MCP-agent pieces: `mcp_server.py`, `agent.py`, and `agent_langgraph.py`.
The main way to use this project is to run the MCP server and connect it to VS Code's agent mode. The server wraps your Kedro project so that the agent inside VS Code can discover and execute pipelines as tools — all running in your preferred environment.
```
python mcp_server.py --transport streamable-http
```

This starts the server at http://localhost:8001/mcp.
- Open VS Code.
- Press ⇧⌘P (macOS) / Ctrl+Shift+P (Windows/Linux) to open the Command Palette.
- Choose MCP: Open User Configuration.
- Add the following configuration:
```json
{
  "servers": {
    "kedro": {
      "url": "http://localhost:8001/mcp"
    }
  }
}
```

- Switch to Agent Mode in the Copilot chat panel.
VS Code will connect to the running server and expose the Kedro tools (`list_pipelines`, `get_pipeline_info`, `run_pipeline`, `list_datasets`) to the agent. You can then ask the agent to inspect pipelines, run them, list datasets, and so on; it will call the MCP tools under the hood.
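For example, you might ask: "List the registered pipelines, then run `data_processing` and summarize the result." The agent chains `list_pipelines` and `run_pipeline` calls to answer (`data_processing` is one of the pipelines the Spaceflights starter registers).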
`mcp_server.py` turns the Kedro project into a Model Context Protocol (MCP) server using the `FastMCP` class from the `mcp` package:
- Uses a factory pattern (`create_mcp_server`) that bootstraps the Kedro project and returns a configured `FastMCP` instance.
- Includes a shared execution core (`_execute_pipeline`) that returns structured results (`run_id`, `status`, `duration_ms`) with optional error details and debug tracebacks.
- Exposes four MCP tools:
  - `list_pipelines` – returns a JSON array of registered pipeline names.
  - `get_pipeline_info` – returns detailed node-level information (inputs, outputs, tags) for a given pipeline.
  - `run_pipeline` – executes a pipeline with full parameter support (tags, node selection, from/to nodes, runner choice, namespaces, `only_missing_outputs`, etc.).
  - `list_datasets` – lists all datasets in the Kedro data catalog.
- Supports multiple transports, configurable via CLI flags:
  - `stdio` (default) – for subprocess-based usage by MCP clients.
  - `sse` – Server-Sent Events over HTTP.
  - `streamable-http` – streaming HTTP transport.
- Debug mode (`--debug` or `KEDRO_SERVER_DEBUG=1`) includes stack traces in error responses.
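The real server supports far more options (node selection, namespaces, runner choice, `--port`, debug tracebacks), but a minimal sketch of the same factory pattern, assuming the official `mcp` Python SDK and a Kedro project in the working directory, looks roughly like this:

```python
import argparse
import json
import time
import uuid
from pathlib import Path

from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project
from mcp.server.fastmcp import FastMCP


def create_mcp_server() -> FastMCP:
    """Factory: bootstrap the Kedro project and return a configured FastMCP instance."""
    bootstrap_project(Path.cwd())  # make the project's pipeline registry importable
    mcp = FastMCP("kedro")

    @mcp.tool()
    def list_pipelines() -> str:
        """Return a JSON array of registered pipeline names."""
        from kedro.framework.project import pipelines
        return json.dumps(sorted(pipelines))

    @mcp.tool()
    def run_pipeline(pipeline_name: str = "__default__",
                     tags: list[str] | None = None) -> str:
        """Execute a pipeline and return a structured result (run_id/status/duration_ms)."""
        run_id, start = str(uuid.uuid4()), time.perf_counter()
        try:
            with KedroSession.create() as session:
                session.run(pipeline_name=pipeline_name, tags=tags)
            status, error = "success", None
        except Exception as exc:  # the real server attaches tracebacks in --debug mode
            status, error = "error", str(exc)
        return json.dumps({"run_id": run_id, "status": status, "error": error,
                           "duration_ms": round((time.perf_counter() - start) * 1000)})

    return mcp


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--transport", default="stdio",
                        choices=["stdio", "sse", "streamable-http"])
    args = parser.parse_args()
    create_mcp_server().run(transport=args.transport)
```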
The agents start the server as a subprocess over stdio by default. You can also run it standalone:
```
python mcp_server.py                                   # stdio (default)
python mcp_server.py --transport sse                   # SSE on 127.0.0.1:8001
python mcp_server.py --transport streamable-http --port 9000
python mcp_server.py --debug                           # include tracebacks
```

`agent.py` is a simple but fully traced MCP agent:
- Starts `mcp_server.py` as a subprocess over stdio and opens an MCP `ClientSession`.
- Calls `session.list_tools()` to discover the tools exposed by the server.
- Implements a manual tool-calling loop on top of the OpenAI Chat Completions API (sketched below):
  - Converts MCP tool schemas into OpenAI tool definitions.
  - Sends user prompts and tool definitions to the model.
  - Reads `tool_calls` from the model's response and executes them on the MCP server with `session.call_tool`.
  - Feeds tool outputs back into the LLM until it returns a final natural-language answer.
- Integrates Langfuse for observability in an "old style" way:
  - Imports `AsyncOpenAI` from `langfuse.openai`.
  - All calls to `self.llm.chat.completions.create(...)` are automatically traced to your Langfuse project, assuming the usual `LANGFUSE_*` and `OPENAI_API_KEY` environment variables are set.
This file is the reference implementation if you want to see a working, end‑to‑end integration of MCP + Kedro + OpenAI + Langfuse without any LangChain/LangGraph abstractions.
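A condensed sketch of that loop, assuming the `mcp` stdio client and the Langfuse OpenAI drop-in (the single-query flow and variable names here are illustrative, not copied from `agent.py`):

```python
import asyncio
import json

from langfuse.openai import AsyncOpenAI  # drop-in replacement: auto-traces all calls
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Convert MCP tool schemas into OpenAI tool definitions.
            mcp_tools = await session.list_tools()
            tools = [{
                "type": "function",
                "function": {
                    "name": t.name,
                    "description": t.description or "",
                    "parameters": t.inputSchema,
                },
            } for t in mcp_tools.tools]

            llm = AsyncOpenAI()
            messages = [{"role": "user", "content": "Which pipelines are registered?"}]

            # Manual tool-calling loop: keep going until the model stops calling tools.
            while True:
                resp = await llm.chat.completions.create(
                    model="gpt-4o", messages=messages, tools=tools)
                msg = resp.choices[0].message
                if not msg.tool_calls:
                    print(msg.content)
                    break
                messages.append(msg)
                for call in msg.tool_calls:
                    result = await session.call_tool(
                        call.function.name, json.loads(call.function.arguments))
                    messages.append({
                        "role": "tool",
                        "tool_call_id": call.id,
                        "content": result.content[0].text,
                    })


asyncio.run(main())
```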
Run it from an activated virtual environment:
```
python agent.py
```

You'll get an interactive prompt where you can ask questions; the agent will inspect and run Kedro pipelines via MCP tools, and all LLM calls will appear as traces in Langfuse.
`agent_langgraph.py` is an alternative agent built on LangGraph and LangChain. It keeps the same MCP server but delegates orchestration to a graph-based agent:
- Starts and connects to `mcp_server.py` via the MCP stdio client (same as `agent.py`).
- Uses `langchain_mcp_adapters.tools.load_mcp_tools` to turn MCP tools into LangChain tools automatically.
- Creates a ReAct-style agent using LangGraph / LangChain (see the sketch below):
  - LLM: `ChatOpenAI` from `langchain_openai`.
  - Tools: the MCP tools returned by `load_mcp_tools`.
- Provides:
  - `process_query(...)` – run a single query through the LangGraph agent.
  - `chat()` – interactive loop with conversation memory (state carries previous messages across turns).
  - `stream_query(...)` – stream partial responses as they're produced.
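A minimal sketch of that wiring, assuming `langchain-mcp-adapters` and LangGraph's prebuilt ReAct agent (the model name and prompt are placeholders):

```python
import asyncio

from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Adapt the MCP tools into LangChain tools, then hand them
            # to a prebuilt LangGraph ReAct agent.
            tools = await load_mcp_tools(session)
            agent = create_react_agent(ChatOpenAI(model="gpt-4o"), tools)

            result = await agent.ainvoke(
                {"messages": [("user", "List the Kedro pipelines and their nodes.")]})
            print(result["messages"][-1].content)


asyncio.run(main())
```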
Run it with:
```
python agent_langgraph.py
```

You'll get a LangGraph-based conversational agent that can still inspect and run Kedro pipelines via MCP.