High-performance API aggregation and caching service for GitHub, LeetCode, and WakaTime developer statistics.
Tech: TypeScript • Express • Redis • REST • GraphQL • Docker • Cron Jobs • SVG Rendering
Public API Documentation • Live Deployment
GreenJ ReadMe Statistics is a backend-focused statistics aggregation service that retrieves, normalizes, caches, and renders developer metrics from multiple external platforms.
The system integrates with GitHub, LeetCode, and WakaTime APIs to generate dynamically rendered SVG statistic cards while minimizing upstream API load through Redis caching and scheduled background refresh jobs.
The project focuses on API orchestration, distributed caching strategies, dynamic SVG rendering, and infrastructure-aware backend system design.
- Express API layer written in TypeScript
- Provider-specific integrations for GitHub, LeetCode, and WakaTime
- Redis caching layer for response storage and refresh coordination
- SVG rendering pipeline for dynamically generated statistic cards
- Cron-based background refresh jobs for cache warming
- Dockerized deployment workflow for local and production environments
The system separates provider integrations, caching, rendering, and scheduling responsibilities so each layer can evolve independently.
graph TD
A[Client] --> B[Express API]
B --> C[Redis Cache]
C -- cache miss --> D[Provider APIs]
C --> F[SVG Renderer]
D --> E[Normalization Layer]
E --> F[SVG Renderer]
F --> G[Response]
- The API server is stateless
- Redis stores cached provider responses and refresh metadata
- Third-party APIs remain the source of truth for developer statistics
- SVG rendering occurs dynamically from normalized provider data
- Background refresh jobs reduce expensive synchronous requests
This architecture minimizes repeated upstream requests while maintaining responsive rendering performance.
- External API responses are cached in Redis to reduce repeated upstream requests
- Cache entries use expiration windows to balance freshness and performance
- Background refresh jobs proactively update commonly requested statistics
- Redis significantly reduces external API latency and rate-limit pressure
The caching layer is a core architectural component because upstream providers expose different latency profiles and rate-limiting constraints.
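The cache-aside flow described above can be sketched as follows. `CacheStore`, `MemoryStore`, and `getOrFetch` are hypothetical names for illustration: the in-memory store stands in for the project's Redis client, which would issue the equivalent GET/SET-with-TTL commands.

```typescript
// Hypothetical interface standing in for the Redis client used by the service.
interface CacheStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// In-memory stand-in for Redis, just enough to show the flow.
class MemoryStore implements CacheStore {
  private entries = new Map<string, { value: string; expiresAt: number }>();
  async get(key: string): Promise<string | null> {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt < Date.now()) return null;
    return entry.value;
  }
  async set(key: string, value: string, ttlSeconds: number): Promise<void> {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}

// Serve from cache when possible; on a miss, call the upstream provider
// and store the response under an expiration window.
async function getOrFetch<T>(
  store: CacheStore,
  key: string,
  ttlSeconds: number,
  fetchUpstream: () => Promise<T>,
): Promise<T> {
  const hit = await store.get(key);
  if (hit !== null) return JSON.parse(hit) as T;
  const fresh = await fetchUpstream();
  await store.set(key, JSON.stringify(fresh), ttlSeconds);
  return fresh;
}
```

The expiration window per key is where the freshness/performance trade-off mentioned above is tuned.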
- Users can register routes for scheduled refresh intervals
- Cron jobs periodically refresh cached statistics in the background
- Precomputed cache updates reduce expensive real-time recomputation
- Refresh workflows improve responsiveness during high request volume
The refresh system helps maintain responsive rendering performance while reducing direct dependency on live upstream API availability.
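A minimal sketch of the refresh registry, assuming routes register their refresh work under a key. The real service uses cron jobs; a plain timer is used here to keep the example self-contained, and all names (`RefreshScheduler`, `register`, `runAll`) are illustrative.

```typescript
type RefreshTask = () => Promise<void>;

class RefreshScheduler {
  private tasks = new Map<string, RefreshTask>();

  // A route registers its refresh work under a key, e.g. "github:GreenJ84".
  register(key: string, task: RefreshTask): void {
    this.tasks.set(key, task);
  }

  // Run every registered task once; failures are logged rather than
  // thrown so one provider outage cannot stall the whole cycle.
  async runAll(): Promise<void> {
    for (const [key, task] of this.tasks) {
      try {
        await task();
      } catch (err) {
        console.error(`refresh failed for ${key}`, err);
      }
    }
  }

  // Re-run all tasks on a fixed interval, keeping cached statistics warm
  // so user-facing requests rarely hit a cold entry.
  start(intervalMs: number): ReturnType<typeof setInterval> {
    return setInterval(() => void this.runAll(), intervalMs);
  }
}
```

Each task would typically re-fetch a provider's statistics and overwrite the corresponding cache entry.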
- GitHub statistics retrieved through REST APIs
- LeetCode data gathered through GraphQL queries and parsing strategies
- WakaTime metrics retrieved through authenticated API requests
- Provider responses normalized into a shared SVG rendering pipeline
Each provider exposes different schemas and response structures, requiring provider-specific transformation logic before rendering.
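The normalization step can be illustrated like this. The raw shapes below are simplified stand-ins for real GitHub REST and LeetCode GraphQL payloads, and `CardStats` is a hypothetical shared shape consumed by the renderer, not the project's actual types.

```typescript
// Hypothetical shared shape that every provider is normalized into.
interface CardStats {
  username: string;
  headline: number; // the card's main figure, e.g. repo count or problems solved
}

// GitHub's REST API returns snake_case fields.
function normalizeGitHub(raw: { login: string; public_repos: number }): CardStats {
  return { username: raw.login, headline: raw.public_repos };
}

// LeetCode's GraphQL API nests solved counts under submission stats;
// the shape here is simplified for illustration.
function normalizeLeetCode(raw: {
  username: string;
  submitStats: { acSubmissionNum: { count: number }[] };
}): CardStats {
  const solved = raw.submitStats.acSubmissionNum[0]?.count ?? 0;
  return { username: raw.username, headline: solved };
}
```

Once every provider maps into the same shape, the SVG renderer needs no provider-specific logic.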
- Statistic cards are rendered dynamically as SVG images
- Output scales cleanly across devices and resolutions
- Responses are lightweight, with no frontend application runtime required
- Rendering pipeline supports parameterized customization options
- Theme support allows light and dark mode rendering
- Generated images are designed for GitHub profiles and Markdown-compatible embedding
SVG rendering enables lightweight, embeddable statistic visualization without requiring frontend application hosting.
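A minimal sketch of themed SVG rendering. The palette values and card layout here are invented for illustration; the actual renderer exposes more customization parameters than a single stat line.

```typescript
type Theme = "light" | "dark";

// Illustrative palette; the real themes differ.
const palettes: Record<Theme, { bg: string; text: string }> = {
  light: { bg: "#ffffff", text: "#24292e" },
  dark: { bg: "#0d1117", text: "#c9d1d9" },
};

// Build the card as a plain SVG string: the response is just image markup,
// so no frontend runtime is needed and browsers can embed it directly.
function renderCard(
  username: string,
  statLabel: string,
  value: number,
  theme: Theme = "light",
): string {
  const { bg, text } = palettes[theme];
  return [
    `<svg xmlns="http://www.w3.org/2000/svg" width="300" height="100">`,
    `  <rect width="300" height="100" rx="8" fill="${bg}"/>`,
    `  <text x="16" y="40" fill="${text}" font-family="sans-serif" font-size="16">${username}</text>`,
    `  <text x="16" y="72" fill="${text}" font-family="sans-serif" font-size="14">${statLabel}: ${value}</text>`,
    `</svg>`,
  ].join("\n");
}
```

Serving the string with a `Content-Type: image/svg+xml` header is all that is needed for markdown image embeds to render it.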
- Redis caching minimizes repeated external API requests
- Background refresh jobs reduce synchronous computation costs
- SVG rendering avoids heavier frontend rendering workflows
- Dockerized deployment improves environment consistency
- Provider-specific handling helps mitigate external rate limits
- External API outages can temporarily impact statistic freshness
- Rate limiting from third-party providers requires aggressive caching strategies
- Cache expiration windows trade freshness for performance
- Background refresh timing must balance API utilization and data accuracy
- Provider schema changes may require parser and normalization updates
These operational constraints informed the decision to prioritize caching, normalization, and asynchronous refresh workflows.
- Distributed Redis clustering for higher cache throughput
- Queue-based refresh workers for decoupled background processing
- Rate-limit aware adaptive refresh scheduling
- Persistent metric snapshot storage for historical analytics
- Horizontal API scaling behind a reverse proxy/load balancer
- Application packaged as a Docker container for deployment consistency
- CI/CD pipeline validates builds and deployment readiness
- Production deployment currently hosted on Render
- Redis cloud instance used for distributed cache persistence
Containerization allows the application to maintain consistent runtime behavior across development and production environments.
- Jest used for functional and API testing
- Postman used for route validation and API inspection
- CI/CD pipeline validates production deployment readiness
- Clone the repository
- Navigate to the project directory
- Install dependencies
- Configure the application's environment variables from the template file
- Start the Redis instance (if using a local Redis server)
git clone https://github.com/GreenJ84/github-readme-stats-typescript.git # 1
cd github-readme-stats-typescript # 2
npm install # 3
mv .env.template .env # 4 - Fill in based on template instructions
brew services start redis # 5 - If using local redis on macOS
sudo systemctl start redis-server # 5 - If using local redis on Linux
sudo service redis-server start # 5 - If using local redis on Windows (via WSL)
redis-server # 5 - If using local redis with direct invocation (Linux/macOS)
- Node.js
- Redis instance (local or cloud)
npm run dev # 2
The API server will start on:
http://localhost:8000
- Docker Engine
- Build the Docker image
- Run the Docker container
docker build -t greenj-readme-stats . # 1
docker run -p 8000:8000 -d greenj-readme-stats # 2
This project is licensed under the MIT License - see the LICENSE.md file for details.
The MIT License is a permissive license that allows users to use, copy, modify, merge, publish, distribute, and sublicense the software, provided that they include the original copyright notice and disclaimer. It also disclaims any warranty, including fitness for a particular purpose, and limits the liability of the software's authors and contributors.
By using or contributing to this project, you agree to be bound by the terms and conditions of the MIT License.
If you have any questions about the license or would like to use this software under a different license, please contact the project maintainers.
Contributions are welcome!
Please refer to my profile Code of Conduct before contributing to this project.
My Contribution Guide has more details on how to get started contributing.
Feel free to open an issue or submit a pull request if you have a way to improve this project.
Make sure your request is meaningful, thought out, and that you have tested the app locally before submitting a pull request.
If you like this project, give it a ⭐ and share it with friends!
Made with TypeScript, Express, Redis and ❤️‍🔥