# Public Provider Configuration Tool

A TypeScript/Node.js tool that automatically fetches and standardizes AI model information from various providers.
## Quick Start

```bash
# Install dependencies
pnpm install

# Build the project (uses Vite for bundling)
pnpm build

# Run with all providers
pnpm start fetch-all

# Run with specific providers
pnpm start fetch-providers -p ppinfra,tokenflux,groq,ollama,siliconflow

# Development mode
pnpm run dev fetch-all

# Run with a custom output directory
pnpm start fetch-all -o custom_output

# Clean build artifacts
pnpm clean
```

## Verification

```bash
# Test PPInfra API endpoint
curl --request GET --url https://api.ppinfra.com/openai/v1/models

# Validate generated JSON
jq empty dist/*.json

# Check file sizes
du -h dist/*.json
```

## Project Structure

```
├── src/
│   ├── models/          # TypeScript interfaces and types
│   ├── providers/       # Provider implementations
│   ├── fetcher/         # Data fetching logic
│   ├── output/          # Output handling
│   ├── processor/       # Data processing logic
│   ├── config/          # Configuration management
│   └── cli.ts           # CLI entry point
├── dist/                # Generated JSON output files
├── build/               # Compiled JavaScript build output (Vite)
├── manual-templates/    # Manually maintained provider templates
├── config/              # Configuration files
├── docs/                # Documentation
└── dist_rust_backup/    # Backup of Rust version output for comparison
```
## Key Files

- `src/cli.ts` - CLI entry point with Commander.js
- `src/providers/PPInfraProvider.ts` - PPInfra API implementation
- `src/providers/TokenfluxProvider.ts` - Tokenflux implementation
- `src/providers/GroqProvider.ts` - Groq API implementation
- `manual-templates/ollama.json` - Ollama template definitions
- `manual-templates/siliconflow.json` - SiliconFlow template definitions
- `vite.config.ts` - Vite build configuration for library bundling
- `docs/architecture-overview.md` - Complete architecture documentation
- `.github/workflows/fetch-models.yml` - Automated fetching workflow (Node.js)
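The CLI wiring in `src/cli.ts` presumably follows the usage examples above; a hedged Commander.js sketch (command and option names taken from the Quick Start, internal structure assumed):

```typescript
// Sketch of the Commander.js wiring implied by the usage examples.
// The actual structure of src/cli.ts may differ.
import { Command } from 'commander';

const program = new Command();

program
  .command('fetch-all')
  .description('Fetch models from all providers')
  .option('-o, --output <dir>', 'output directory', 'dist')
  .action(async (options) => {
    // fetch from every registered provider and write JSON into options.output
  });

program
  .command('fetch-providers')
  .description('Fetch models from selected providers')
  .requiredOption('-p, --providers <list>', 'comma-separated provider ids')
  .option('-o, --output <dir>', 'output directory', 'dist')
  .action(async (options) => {
    const ids: string[] = options.providers.split(',');
    // instantiate and run only the providers named in ids
  });

program.parse();
```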
## Output Format

Single provider JSON:

```json
{
  "provider": "ppinfra",
  "providerName": "PPInfra",
  "lastUpdated": "2025-01-15T10:30:00Z",
  "models": [
    {
      "id": "model-id",
      "name": "Model Name",
      "contextLength": 32768,
      "maxTokens": 4096,
      "vision": false,
      "functionCall": true,
      "reasoning": {
        "supported": true,
        "default": true
      },
      "type": "chat"
    }
  ]
}
```
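For reference, the output shape corresponds roughly to the following TypeScript types (a sketch inferred from the JSON example above; the real definitions live under `src/models/`):

```typescript
// Sketch of the output schema, inferred from the JSON example above.
interface ReasoningSupport {
  supported: boolean;
  default: boolean;
}

interface OutputModel {
  id: string;
  name: string;
  contextLength: number;
  maxTokens: number;
  vision: boolean;
  functionCall: boolean;
  reasoning: ReasoningSupport;
  type: 'chat' | 'completion' | 'embedding';
}

interface ProviderOutput {
  provider: string;      // e.g. "ppinfra"
  providerName: string;  // e.g. "PPInfra"
  lastUpdated: string;   // ISO 8601 timestamp
  models: OutputModel[];
}
```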
## Adding a New Provider

1. **Create the provider implementation**
   - Create a new file in `src/providers/` (e.g., `src/providers/newprovider.ts`)
   - Implement the `Provider` interface with the required methods:
     - `async fetchModels(): Promise<ModelInfo[]>`
     - `providerId(): string`
     - `providerName(): string`

2. **Add the module export**
   - Add an export to `src/providers/index.ts`: `export { NewProviderProvider } from './newprovider';`

3. **Register in the CLI**
   - Import the provider in `src/cli.ts`: `import { NewProviderProvider } from './providers/newprovider';`
   - Add provider initialization in the appropriate command handler

4. **Update documentation**
   - Add a JSON link to the README.md "Available Model Data" section
   - Update the "Currently Supported Providers" section with the provider status
```typescript
import axios from 'axios';
import { Provider } from './provider';
import { ModelInfo, ModelType, createModelInfo } from '../models/model-info';

// Shape of a single model entry in the provider's raw API response.
interface NewProviderModel {
  id: string;
  name: string;
  contextSize: number;
  maxOutputTokens: number;
  features: string[];
  modelType: string;
}

interface NewProviderResponse {
  data: NewProviderModel[];
}

export class NewProviderProvider implements Provider {
  private apiUrl: string;
  private client = axios.create();

  constructor(apiUrl: string) {
    this.apiUrl = apiUrl;
  }

  // Map the provider-specific model shape onto the shared ModelInfo type.
  private convertModel(model: NewProviderModel): ModelInfo {
    // Derive capability flags from the provider's feature strings.
    const vision = model.features.some(f => f.includes('vision') || f.includes('image'));
    const functionCall = model.features.some(f => f.includes('function') || f.includes('tool'));
    const reasoning = model.features.some(f => f.includes('reasoning') || f.includes('thinking'));

    // Default to Chat for unknown (or explicitly 'chat') model types.
    const type = model.modelType.toLowerCase();
    const modelType =
      type === 'completion' ? ModelType.Completion :
      type === 'embedding' ? ModelType.Embedding :
      ModelType.Chat;

    return createModelInfo(
      model.id,
      model.name,
      model.contextSize,
      model.maxOutputTokens,
      vision,
      functionCall,
      reasoning,
      modelType
    );
  }

  async fetchModels(): Promise<ModelInfo[]> {
    try {
      const response = await this.client.get<NewProviderResponse>(this.apiUrl);
      return response.data.data.map(model => this.convertModel(model));
    } catch (error) {
      throw new Error(
        `Failed to fetch models from NewProvider: ${error instanceof Error ? error.message : String(error)}`
      );
    }
  }

  providerId(): string {
    return 'newprovider';
  }

  providerName(): string {
    return 'New Provider';
  }
}
```

Once registered, the system will automatically:
- Generate a `{provider_id}.json` file in the `dist/` directory
- Include the provider's data in the aggregated `all.json` file
- Update GitHub Actions to fetch from the new provider
- Create downloadable JSON links in releases
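Conceptually, the output step behaves like the following sketch (file layout as described above; `writeOutputs` is a hypothetical helper, and the real logic lives under `src/output/`):

```typescript
// Sketch: write one JSON file per provider plus an aggregated all.json.
// writeOutputs is a hypothetical helper, not the project's actual code.
import { writeFile } from 'node:fs/promises';
import path from 'node:path';

// Minimal stand-in for the per-provider output document shown earlier.
type ProviderOutput = { provider: string; [key: string]: unknown };

async function writeOutputs(outputDir: string, outputs: ProviderOutput[]): Promise<void> {
  for (const out of outputs) {
    // e.g. dist/ppinfra.json
    await writeFile(
      path.join(outputDir, `${out.provider}.json`),
      JSON.stringify(out, null, 2)
    );
  }
  // Aggregated file: dist/all.json
  await writeFile(path.join(outputDir, 'all.json'), JSON.stringify(outputs, null, 2));
}
```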
## GitHub Actions Workflow

The workflow automatically:

- Runs on workflow dispatch (manual trigger)
- Runs on tag push (`release-*.*.*`)
- Fetches the latest model data
- Uploads artifacts for manual triggers
- Creates releases with downloadable JSON files for tag triggers
## Environment Variables

Optional API keys can be set as GitHub secrets or as local environment variables:

- `GROQ_API_KEY` - Required for the Groq provider (API access)
- Add others as needed
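A provider can check for its key at startup and degrade gracefully, roughly like this (a sketch; how the Groq provider actually consumes the key is an assumption):

```typescript
// Sketch: skip a provider whose required API key is missing.
// The wiring into GroqProvider is an assumption, not the actual code.
const groqApiKey = process.env.GROQ_API_KEY;

if (!groqApiKey) {
  console.warn('GROQ_API_KEY is not set; skipping the Groq provider.');
} else {
  // e.g. pass the key through to the provider's HTTP client:
  // const groq = new GroqProvider(groqApiKey);
}
```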
## Currently Supported Providers

- PPInfra: ✅ No API key required - public API
- OpenRouter: ✅ Supplied by models.dev - no live fetch
- Vercel AI Gateway: ✅ Supplied by models.dev - no live fetch
- GitHub AI Models: ✅ Supplied by models.dev - no live fetch
- Tokenflux: ✅ No API key required - public marketplace API
- DeepSeek: ✅ Supplied by models.dev - no live fetch
- Ollama: ✅ No API key required - template-based provider
- SiliconFlow: ✅ No API key required - template-based provider
- Gemini: ✅ Supplied by models.dev - no live fetch
- Groq: ❌ API key required - private API access only
- OpenAI: ✅ Supplied by models.dev - no live fetch
- Anthropic: ✅ Supplied by models.dev - no live fetch
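The template-based providers (Ollama and SiliconFlow) load their model lists from the files under `manual-templates/` rather than calling a live API. A minimal sketch of that pattern, assuming the template JSON is an array matching `ModelInfo` (the real template-provider code may differ):

```typescript
// Sketch: a provider backed by a local template file rather than an API.
// Assumes the template JSON is an array matching ModelInfo.
import { readFile } from 'node:fs/promises';
import { Provider } from './provider';
import { ModelInfo } from '../models/model-info';

export class TemplateProvider implements Provider {
  constructor(
    private id: string,           // e.g. 'ollama'
    private name: string,         // e.g. 'Ollama'
    private templatePath: string  // e.g. 'manual-templates/ollama.json'
  ) {}

  async fetchModels(): Promise<ModelInfo[]> {
    const raw = await readFile(this.templatePath, 'utf8');
    return JSON.parse(raw) as ModelInfo[];
  }

  providerId(): string {
    return this.id;
  }

  providerName(): string {
    return this.name;
  }
}
```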
## Troubleshooting

- **Build failures**: Run `pnpm build` and check the Vite configuration and TypeScript compilation
- **JSON validation errors**: Check for API response format changes
- **Rate limiting**: Adjust the rate limits in the provider configurations
- **Network timeouts**: Increase the timeout values in the HTTP client
- **Coverage issues**: Use `pnpm coverage` to generate test coverage reports
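For the timeout case, axios accepts a per-instance timeout (a sketch; the 30-second value is illustrative, not the project's current default):

```typescript
// Sketch: raising the HTTP timeout on a shared axios client.
import axios from 'axios';

const client = axios.create({
  timeout: 30_000, // milliseconds; increase if providers respond slowly
});
```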
## Roadmap

- Migrate from Rust to TypeScript/Node.js
- Add OpenAI provider implementation (65+ models with template matching)
- Add Anthropic provider implementation (8 Claude models with API key support)
- Implement configuration file loading
- Add OpenRouter provider implementation
- Add Google Gemini provider implementation
- Add Vercel AI Gateway provider implementation
- Add GitHub AI Models provider implementation
- Add Tokenflux provider implementation
- Add Groq provider implementation
- Add DeepSeek provider implementation
- Add Ollama provider implementation (template-based)
- Add SiliconFlow provider implementation (template-based)
- Add rate limiting and retry logic
- Add comprehensive error handling
- Implement template validation system
- Add provider health check endpoints
- Add Azure OpenAI provider implementation
- Add AWS Bedrock provider implementation
- Add Cohere provider implementation
- Add Mistral AI provider implementation
- Add Stability AI provider implementation
- Add Replicate provider implementation
- Add Hugging Face provider implementation
- Add advanced caching system
- Add provider status monitoring
- Add web UI for model browsing