A meta web extension for pi that routes search, content extraction, quick grounded answers, and research through configurable per-tool providers, with explicit provider-specific option schemas for each managed tool.
Most web extensions hard-wire a single backend. pi-web-providers lets you
mix and match providers per tool instead, so web_search, web_contents,
web_answer, and web_research can each use a different backend or be turned
off entirely. Treat web_answer as a fast path for simple grounded questions,
not as a replacement for source inspection or deeper research.
- Multiple providers: Brave, Claude, Cloudflare, Codex, Exa, Firecrawl, Gemini, Linkup, Ollama, OpenAI, Perplexity, Parallel, Serper, Tavily, Valyu
- Provider-aware tool options: pi only exposes the provider settings that actually apply to the backend you selected, so tool calls are easier to discover and harder to get wrong
- Batched search and answers: run several related queries or questions in a single `web_search` or `web_answer` call and get grouped results back in one response
- Background contents prefetch: optionally start `web_contents` extraction from `web_search` results in the background and reuse the cached pages later for faster follow-up reads
```
pi install npm:pi-web-providers
```

Run:

```
/web-providers
```
This edits the global config file `~/.pi/agent/web-providers.json`. The settings UI mirrors the three sections below: tools, providers, and settings.
Each tool can be routed to any compatible provider:
Built-in local providers
| Provider | search | contents | answer | research | Auth |
|---|---|---|---|---|---|
| Claude | ✔ | | ✔ | | Local Claude Code auth |
| Codex | ✔ | | | | Local Codex CLI auth |
API-backed providers
| Provider | search | contents | answer | research | Auth |
|---|---|---|---|---|---|
| Brave | ✔ | | ✔ | ✔ | BRAVE_SEARCH_API_KEY / BRAVE_ANSWERS_API_KEY |
| Cloudflare | | ✔ | | | CLOUDFLARE_API_TOKEN + CLOUDFLARE_ACCOUNT_ID |
| Exa | ✔ | ✔ | ✔ | ✔ | EXA_API_KEY |
| Firecrawl | ✔ | ✔ | | | FIRECRAWL_API_KEY |
| Gemini | ✔ | | ✔ | ✔ | GOOGLE_API_KEY |
| Linkup | ✔ | ✔ | | | LINKUP_API_KEY |
| Ollama | ✔ | ✔ | | | OLLAMA_API_KEY |
| OpenAI | ✔ | | ✔ | ✔ | OPENAI_API_KEY |
| Parallel | ✔ | ✔ | | | PARALLEL_API_KEY |
| Perplexity | ✔ | | ✔ | ✔ | PERPLEXITY_API_KEY |
| Serper | ✔ | | | | SERPER_API_KEY |
| Tavily | ✔ | ✔ | | | TAVILY_API_KEY |
| Valyu | ✔ | ✔ | ✔ | ✔ | VALYU_API_KEY |
Advanced option: `custom` can route any managed tool through a local wrapper command using a JSON stdin/stdout contract.
See `example-config.json` for the minimal default configuration.
Each managed tool maps to one provider id under the top-level `tools` key. Removing a tool mapping turns that tool off. A tool is only exposed when it is mapped to a compatible provider and that provider is currently available. Shared defaults and tool-specific settings live under `settings`; search-specific settings live under `settings.search`, and async research uses `settings.researchTimeoutMs`. Provider option schemas are strict: only the keys shown for the active provider are accepted.
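The mapping above can be sketched as a config fragment. The provider choices here are illustrative, not defaults:

```json
{
  "tools": {
    "search": "tavily",
    "contents": "firecrawl",
    "answer": "perplexity"
  },
  "settings": {
    "requestTimeoutMs": 30000,
    "researchTimeoutMs": 1800000
  }
}
```

Because `research` is not mapped, `web_research` stays turned off in this sketch.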
Search the public web for up to 10 queries in one call. It returns grouped
titles, URLs, and snippets for each query. Batch related queries when grouped
comparison matters; use separate sibling web_search calls when independent
results should arrive as soon as they are ready.
Parameters and behavior
| Parameter | Type | Default | Description |
|---|---|---|---|
| `queries` | string[] | required | One or more search queries to run (max 10) |
| `maxResults` | integer | 5 | Result count per query, clamped to 1–20 |
| `options` | object | — | Provider-specific settings exposed by the selected provider schema |
`options` is omitted when the configured search provider has no per-call provider options. Runtime controls are not accepted in tool calls. Configure retry, timeout, and background contents prefetch under `settings` and `settings.search`; prefetch starts only when `settings.search.provider` is set.
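A hypothetical `web_search` call with batched queries might look like this; the argument names come from the table above, and the query strings are invented:

```json
{
  "queries": [
    "pi coding agent extensions",
    "pi-web-providers configuration"
  ],
  "maxResults": 10
}
```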
Read the main text from one or more web pages. It reuses cached pages when they
match and fetches only missing or stale URLs. Batch related pages when they are
meant to be read as one bundle; use separate sibling web_contents calls when
each page can be acted on independently.
Parameters and behavior
| Parameter | Type | Default | Description |
|---|---|---|---|
| `urls` | string[] | required | One or more URLs to extract |
| `options` | object | — | Provider-specific settings exposed by the selected provider schema |
web_contents reuses any matching cached pages already present in the local
in-memory cache—whether they came from prefetch or an earlier read—and only
fetches missing or stale URLs.
Answer one or more simple factual questions using web-grounded evidence. Use it
as a lightweight shortcut when you want a concise grounded answer without
manually selecting and reading sources. Prefer web_search plus web_contents
when source selection matters or you need to inspect primary sources directly;
prefer web_research for open-ended, controversial, or multi-step
investigations.
When you ask more than one question, the response is grouped into per-question sections. Batch related questions when the answers belong together; split them into sibling calls when earlier independent answers can unblock the next step.
Parameters and behavior
| Parameter | Type | Default | Description |
|---|---|---|---|
| `queries` | string[] | required | One or more questions to answer in one call (max 10) |
| `options` | object | — | Provider-specific settings exposed by the selected provider schema |
Investigate a topic across web sources and produce a longer report.
web_research is always asynchronous: it starts a background run, returns a
short dispatch notice immediately, and later posts a completion message with a
saved report path.
Parameters and behavior
| Parameter | Type | Default | Description |
|---|---|---|---|
| `input` | string | required | Research brief or question |
| `options` | object | — | Provider-specific settings exposed by the selected provider schema |
`options` is provider-specific. Equivalent concepts can use different field names across SDKs—for example Perplexity uses `country`, Exa uses `userLocation`, and Valyu uses `countryCode`. Runtime controls are not accepted in tool calls.

Unlike the other managed tools, `web_research` does not accept local timeout, retry, polling, or resume controls. Research has one opinionated execution style: pi starts it asynchronously, tracks it locally, and saves the final report under `.pi/artifacts/research/`.
The built-in providers below integrate with official SDKs or documented APIs.
Brave
- API: Brave Search API and Brave Answers API
- Supports `web_search` via Web Search, plus optional `llm_context`, `news`, `videos`, `images`, and `places` search modes
- Supports `web_answer` and `web_research` via Brave Answers streaming chat completions
- `web_contents` stays routed to URL-fetch providers; Brave LLM Context is query-based retrieval and is exposed as a search mode instead
- Brave Answers may require a different key or plan than Brave Search
Setup
```json
{
  "tools": {
    "search": "brave",
    "answer": "brave",
    "research": "brave"
  },
  "providers": {
    "brave": {
      "credentials": {
        "search": "BRAVE_SEARCH_API_KEY",
        "answers": "BRAVE_ANSWERS_API_KEY"
      }
    }
  }
}
```

Use `providers.brave.options.search.mode` or per-call search options to select `llm_context`, `news`, `videos`, `images`, or `places`. Places details and descriptions are opt-in because they can add calls, latency, and place-specific semantics.
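For example, a per-call options object selecting the places mode could look like this; the `mode` key mirrors the persisted `providers.brave.options.search.mode` naming, and the query string is invented:

```json
{
  "queries": ["coffee roasters in Lisbon"],
  "options": { "mode": "places" }
}
```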
Claude
- SDK: `@anthropic-ai/claude-agent-sdk`
- Uses Claude Code's built-in `WebSearch` and `WebFetch` tools with structured JSON output
- Exposes `model`, `thinking`, `effort`, `maxThinkingTokens`, `maxTurns`, and `maxBudgetUsd` as provider options for search and answer calls
- Great for search plus grounded answers if you already use Claude Code locally
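A config sketch routing search and answer through Claude; the option values are illustrative and assume the same `providers.<id>.options.<capability>` scoping the other providers use:

```json
{
  "tools": {
    "search": "claude",
    "answer": "claude"
  },
  "providers": {
    "claude": {
      "options": {
        "answer": {
          "maxTurns": 4,
          "maxBudgetUsd": 0.5
        }
      }
    }
  }
}
```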
Cloudflare
- SDK: `cloudflare`
- Supports `web_contents` via Cloudflare Browser Rendering's `/markdown` endpoint
- Good for JavaScript-heavy pages that need a real browser render before extraction
- Exposes `gotoOptions.waitUntil` as the provider-specific contents option
Setup
- In the Cloudflare dashboard, create an API token.
- Grant it this permission: `Account | Browser Rendering | Edit`
- Scope it to the account you want to use.
- Copy that account's Account ID from the Cloudflare dashboard.
- Configure pi with both values:
```json
{
  "tools": {
    "contents": "cloudflare"
  },
  "providers": {
    "cloudflare": {
      "credentials": {
        "api": "CLOUDFLARE_API_TOKEN"
      },
      "accountId": "CLOUDFLARE_ACCOUNT_ID"
    }
  }
}
```

If Cloudflare returns `401 Authentication error`, the token permission, token scope, or account ID is usually wrong.
Codex
- SDK: `@openai/codex-sdk`
- Runs in read-only mode with web search enabled
- Exposes `model`, `modelReasoningEffort`, and `webSearchMode` as provider options for `web_search`
- Best if you already use the local Codex CLI and auth flow
Exa
- SDK: `exa-js`
- Supports `web_search`, `web_contents`, `web_answer`, and `web_research`
- `web_research` is exposed through pi's async research workflow
- Neural, keyword, hybrid, and deep-research search modes
- Inline text-content extraction on search results
- Exposes search options such as `category`, `type`, date filters, `includeDomains`, `excludeDomains`, `userLocation`, and `contents`
- Persisted Exa defaults are scoped under `providers.exa.options.search`
- `web_contents`, `web_answer`, and `web_research` currently use fixed provider behavior with no extra per-call provider options
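A per-call `options` sketch using some of the Exa search options above; the query, category value, and domain are invented examples:

```json
{
  "queries": ["sparse autoencoder interpretability"],
  "options": {
    "type": "neural",
    "category": "research paper",
    "includeDomains": ["arxiv.org"]
  }
}
```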
Firecrawl
- SDK: `@mendable/firecrawl-js`
- Supports `web_search` and `web_contents`
- Search can optionally include Firecrawl scrape-backed result enrichment
- Contents extraction uses Firecrawl scrape with markdown-first defaults
- Exposes search options such as `lang`, `country`, `sources`, `categories`, `location`, `timeout`, and `scrapeOptions`
- Exposes contents options such as `formats`, `onlyMainContent`, `includeTags`, `excludeTags`, `waitFor`, `headers`, `location`, `mobile`, and `proxy`
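A `web_contents` call with Firecrawl-specific options might look like this; the URL is a placeholder and the values are illustrative:

```json
{
  "urls": ["https://example.com/docs"],
  "options": {
    "onlyMainContent": true,
    "waitFor": 2000,
    "mobile": false
  }
}
```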
Gemini
- SDK: `@google/genai`
- Supports `web_search`, `web_answer`, and `web_research`
- `web_research` is exposed through pi's async research workflow
- Google Search grounding for answers
- Deep-research agents via Google's Gemini API
- Exposes `model` and `generation_config` for search, `model` and `config` for answers, and only the conservative deep-research option `agent_config.thinking_summaries` for research
- Gemini research intentionally does not expose or send Interactions API `tools`, `response_format`, `response_modalities`, or `system_instruction` because the default deep-research agent rejects several of those fields
Linkup
- SDK: `linkup-sdk`
- Supports `web_search` via Linkup Search with fixed `searchResults` output
- Supports `web_contents` via Linkup Fetch and always returns markdown
- Exposes search options `depth`, `includeImages`, `includeDomains`, `excludeDomains`, `fromDate`, and `toDate`
- Exposes contents options `renderJs`, `includeRawHtml`, and `extractImages`
- Good fit for a simple search-plus-markdown setup without extra provider wiring
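A minimal config sketch for that setup, assuming the same `credentials.api` shape the Ollama and Serper examples use:

```json
{
  "tools": {
    "search": "linkup",
    "contents": "linkup"
  },
  "providers": {
    "linkup": {
      "credentials": {
        "api": "LINKUP_API_KEY"
      }
    }
  }
}
```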
Ollama
- API: Ollama Web Search and Fetch API
- Supports `web_search` via Ollama's `POST /api/web_search` endpoint
- Supports `web_contents` via Ollama's `POST /api/web_fetch` endpoint
- Authenticates with an Ollama API key using `OLLAMA_API_KEY` by default
- Optional `baseUrl` overrides the default `https://ollama.com` API host for proxies or compatible endpoints
- Ollama caps search requests at 10 results, so `web_search.maxResults` is clamped to 1–10 for this provider
Minimal config:
```json
{
  "tools": {
    "search": "ollama",
    "contents": "ollama"
  },
  "providers": {
    "ollama": {
      "credentials": {
        "api": "OLLAMA_API_KEY"
      }
    }
  }
}
```

OpenAI
- SDK: `openai`
- Supports `web_search`, `web_answer`, and `web_research`
- Uses the Responses API for structured web search, grounded answers, and deep-research runs
- Always enables OpenAI's built-in `web_search_preview` tool for search, answer, and research calls
- Exposes `model` and `instructions` for `web_search` and `web_answer`
- Exposes `model`, `instructions`, and `max_tool_calls` for `web_research`
- Good fit when you want official OpenAI web-grounded search, answers, and deep research behind pi's managed tool abstractions
Setup
- Create or reuse an OpenAI API key.
- Configure pi to route `web_search`, `web_answer`, `web_research`, or any subset of them to `openai`.
- Optionally set default models under `providers.openai.options.search.model`, `providers.openai.options.answer.model`, and `providers.openai.options.research.model`.
```json
{
  "tools": {
    "search": "openai",
    "answer": "openai",
    "research": "openai"
  },
  "providers": {
    "openai": {
      "credentials": {
        "api": "OPENAI_API_KEY"
      },
      "options": {
        "search": {
          "model": "gpt-4.1"
        },
        "answer": {
          "model": "gpt-4.1"
        },
        "research": {
          "model": "o4-mini-deep-research"
        }
      }
    }
  }
}
```

You can also set `instructions` as a provider default under `providers.openai.options.search`, `providers.openai.options.answer`, or `providers.openai.options.research`, and set `max_tool_calls` under `providers.openai.options.research`. All of them can also be overridden per call.
Perplexity
- SDK: `@perplexity-ai/perplexity_ai`
- Supports `web_search`, `web_answer`, and `web_research`
- `web_research` is exposed through pi's async research workflow
- Uses Perplexity Search for `web_search`
- Uses Sonar for `web_answer` and `sonar-deep-research` for `web_research`
- Exposes search options `country`, `search_mode`, `search_domain_filter`, and `search_recency_filter`
- Exposes `model` for answer and research calls
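A per-call search `options` sketch using the fields above; the query and option values are invented examples:

```json
{
  "queries": ["EU AI Act enforcement timeline"],
  "options": {
    "country": "DE",
    "search_recency_filter": "month"
  }
}
```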
Parallel
- SDK: `parallel-web`
- Agentic and one-shot search modes
- Page content extraction with excerpt and full-content toggles
- Exposes search option `mode`
- Exposes contents options `excerpts` and `full_content`
Serper
- API: Serper HTTP API
- Supports `web_search` via Serper's Google search endpoint
- Good fit for fast, straightforward Google-style organic search results
- Exposes search options `gl`, `hl`, `location`, `page`, and `autocorrect`
- Preserves rich metadata from Serper responses, including ranking position, sitelinks, attributes, and top-level response context such as `knowledgeGraph`, `answerBox`, `peopleAlsoAsk`, and `relatedSearches`
- Optional `baseUrl` overrides are supported for proxies and testing
Minimal config:
```json
{
  "tools": {
    "search": "serper"
  },
  "providers": {
    "serper": {
      "credentials": {
        "api": "SERPER_API_KEY"
      }
    }
  }
}
```

Tavily
- SDK:
@tavily/core - Supports
web_searchvia Tavily Search - Supports
web_contentsvia Tavily Extract - Good for pairing LLM-oriented web search with lightweight page extraction
- Exposes search options
topic,searchDepth,timeRange,country,exactMatch,includeAnswer,includeRawContent,includeImages,includeFavicon,includeDomains,excludeDomains, anddays - Exposes contents options
extractDepth,format,includeImages,query,chunksPerSource, andincludeFavicon
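Persisted Tavily defaults could be scoped the same way as the other providers' options; this sketch assumes the `providers.<id>.options.<capability>` convention shown elsewhere in this document, and the option values are illustrative:

```json
{
  "providers": {
    "tavily": {
      "credentials": {
        "api": "TAVILY_API_KEY"
      },
      "options": {
        "search": {
          "searchDepth": "advanced",
          "includeAnswer": false
        }
      }
    }
  }
}
```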
Valyu
- SDK: `valyu-js`
- Supports `web_search`, `web_contents`, `web_answer`, and `web_research`
- `web_research` is exposed through pi's async research workflow
- Web, proprietary, and news search types
- Exposes search options `searchType`, `responseLength`, and `countryCode`
- Exposes answer and research options `responseLength` and `countryCode`
- Persisted Valyu defaults are scoped under `providers.valyu.options.search`, `providers.valyu.options.answer`, and `providers.valyu.options.research`
- `web_contents` currently uses fixed provider behavior with no extra per-call provider options
The `custom` provider lets you bring your own wrapper command for any managed tool. Each capability can point at a different local command under `providers["custom"].options`.

`custom` does not expose standard per-call `options` fields. Put provider-specific behavior in the wrapper configuration or in the wrapper implementation.
The repo includes actual wrapper examples under `examples/custom/wrappers/`. They are small bash scripts that use `jq` for JSON handling. Each one uses a different backend pattern:

- `codex --search exec` for `web_search`
- Gemini API via `curl` for `web_contents`
- `claude -p` for `web_answer`
- Perplexity API via `curl` for `web_research`
Configuration example
Copy the example wrappers into a local ./wrappers/ directory, then configure:
```json
{
  "tools": {
    "search": "custom",
    "contents": "custom",
    "answer": "custom",
    "research": "custom"
  },
  "providers": {
    "custom": {
      "options": {
        "search": {
          "argv": ["bash", "./wrappers/codex-search.sh"]
        },
        "contents": {
          "argv": ["bash", "./wrappers/gemini-contents.sh"]
        },
        "answer": {
          "argv": ["bash", "./wrappers/claude-answer.sh"]
        },
        "research": {
          "argv": ["bash", "./wrappers/perplexity-research.sh"]
        }
      }
    }
  }
}
```

Those example wrappers deliberately use different local CLIs and APIs so you can see several wrapper styles in one setup without extra glue code.
Each capability can also set an optional `cwd` and `env` block. Use `cwd` when one wrapper must run from a specific directory. Use `env` for per-command variables; each value can be a literal string, an environment variable name, or `!command`.
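A sketch of one capability entry using those fields; the variable names and command are invented, and each `env` value illustrates one of the three accepted forms:

```json
{
  "search": {
    "argv": ["bash", "./wrappers/codex-search.sh"],
    "cwd": "./wrappers",
    "env": {
      "MODE": "fast",
      "API_KEY": "MY_SEARCH_API_KEY",
      "GIT_SHA": "!git rev-parse HEAD"
    }
  }
}
```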
`web_research` uses the same async workflow as every other research provider: pi starts the wrapper in the background, tracks the job locally, and writes the final report to a file when it finishes.
Wrapper contract:
- stdin: one JSON request object with `capability` plus the per-call managed inputs (`query`, `urls`, `input`, `maxResults`, `options`, `cwd`)
- stdout: one JSON response object
  - search: `{ "results": [{ "title", "url", "snippet" }] }`
  - contents: `{ "answers": [{ "url", "content"?: "...", "summary"?: unknown, "metadata"?: {}, "error"?: "..." }] }`
  - answer/research: `{ "text": "...", "summary"?: "...", "itemCount"?: 1, "metadata"?: {} }`
- stderr: optional progress lines
- exit code `0`: success
- non-zero exit code: failure
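The contract above can be sketched as a minimal `web_search` wrapper. This is a stub under stated assumptions: the result set is hard-coded, and a real wrapper would call an actual backend instead:

```shell
#!/usr/bin/env bash
# Minimal sketch of a custom web_search wrapper: one JSON request on stdin,
# one JSON response on stdout, progress on stderr, exit 0 on success.
set -euo pipefail

handle_search() {
  local request query
  request=$(cat)                                  # the single JSON request object
  query=$(jq -r '.query // empty' <<<"$request")  # per-call managed input
  echo "searching for: ${query}" >&2              # optional progress line
  # A real wrapper would query a backend here; this returns a stub result.
  jq -n --arg q "$query" '{
    results: [
      { title: ("Stub result for " + $q),
        url: "https://example.com/",
        snippet: "placeholder" }
    ]
  }'
}

# Example invocation with a sample request:
echo '{"capability":"search","query":"pi extensions","maxResults":3}' | handle_search
```

Pointing `providers.custom.options.search.argv` at a script like this is enough for pi to route `web_search` through it.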
See `examples/custom/README.md` for a copy-and-pasteable setup, and see `examples/custom/wrappers/` for the actual wrapper files.
The `settings` block holds shared execution defaults that apply to all providers unless overridden in a provider's own settings block:

| Field | Default | Description |
|---|---|---|
| `requestTimeoutMs` | 30000 | Maximum time for a single provider request |
| `retryCount` | 3 | Retries for transient failures |
| `retryDelayMs` | 2000 | Initial delay before retrying |
| `researchTimeoutMs` | 1800000 | Maximum total time for an async `web_research` job (30 min) |
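Put together, a `settings` block overriding two of those defaults might look like this; the values are illustrative:

```json
{
  "settings": {
    "requestTimeoutMs": 45000,
    "retryCount": 2,
    "researchTimeoutMs": 1800000
  }
}
```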
Use the opt-in live smoke runner to validate the configured providers with the same config-resolution and execution path the extension uses at runtime:
```
npm run smoke:live
```

Optional filters:

```
npm run smoke:live -- --provider gemini
npm run smoke:live -- --tool contents
npm run smoke:live -- --include-research
```

The default run exercises search, contents, and answer. Research probes are excluded unless you pass `--include-research`, because they are slower and may incur higher provider cost.