Agent-friendly structured data access for Data Feeds and Data Streams #3780
gfletcher-cll wants to merge 28 commits into
Summary
Adds agent-friendly, structured data access for Data Feeds and Data Streams, and improves dataset discoverability for correct programmatic usage.
Replaces large, component-rendered tables with deterministic `.txt` dataset endpoints, enabling scoped retrieval (per-network for feeds, per-category for streams) while aligning with how agents select and query data.

Key changes
Structured dataset endpoints (docs-native):
- Data Feeds:
  - `/data-feeds/feed-addresses/{type}.txt` (per-type index)
  - `/data-feeds/feed-addresses/{type}/{network}.txt` (per-network datasets)
- Data Streams:
  - `/data-streams/stream-ids/{type}.txt` (category-scoped stream IDs)
  - `/data-streams/networks.txt` (verifier proxy metadata)

Docs rendering (`.md.ts`):
- Replaced full data tables with a dataset link plus a small example
- Prevents large payloads and HTML scraping
Output normalization:
llms.txt updates (feeds + streams):
- Introduced explicit category/type → dataset mapping
- Encodes canonical retrieval pattern:
- Ensures agents choose the correct dataset (e.g. `smartdata` vs. `default`, `rwa` vs. `crypto`) instead of defaulting to broad datasets
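The category → dataset mapping idea can be sketched as a lookup that refuses to fall back to a broad dataset. The dictionary entries below are hypothetical examples following the endpoint patterns in this PR; the actual llms.txt contents may differ.

```python
# Sketch: explicit category/type -> dataset mapping (illustrative entries only).
STREAM_DATASETS = {
    "crypto": "/data-streams/stream-ids/crypto.txt",
    "rwa": "/data-streams/stream-ids/rwa.txt",
}
FEED_DATASETS = {
    "default": "/data-feeds/feed-addresses/default.txt",
    "smartdata": "/data-feeds/feed-addresses/smartdata.txt",
}


def select_dataset(kind: str, category: str) -> str:
    """Pick the scoped dataset for a category; fail loudly rather than
    silently defaulting to a broad dataset."""
    table = STREAM_DATASETS if kind == "streams" else FEED_DATASETS
    if category not in table:
        raise KeyError(f"no scoped dataset for {category!r}")
    return table[category]
```

Raising on unknown categories mirrors the goal stated above: an agent should select the correct scoped dataset or stop, never default to a broad one.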
How to validate
Data Feeds
- Type-specific datasets:
- Per-network dataset:
- Docs page (should show link + small example only):
Data Streams
- Category datasets:
- Network metadata:
- Docs page (should show link + example only):
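A validation pass over a fetched dataset can be sketched as a line-oriented parser. The pipe-delimited record layout here is an assumption for illustration; the PR's actual normalized output format may differ.

```python
# Sketch: check that a fetched .txt dataset is plain, line-oriented text
# and parse it into records. Assumed layout: "name | address" per line,
# with "#" comment lines and blank lines ignored.
def parse_dataset(text: str) -> list[dict[str, str]]:
    records = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "|" not in line:
            continue  # skip blanks, comments, and malformed lines
        name, address = (part.strip() for part in line.split("|", 1))
        records.append({"name": name, "address": address})
    return records
```

A dataset that parses cleanly this way is exactly the low-token, deterministic shape the Summary describes: no HTML, no component rendering, one record per line.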
Result
- Deterministic, low-token data access for agents
- Clear separation:
- Agents reliably select the correct dataset based on feed type or stream category