Vue 3 + Vite web interface for DGX Spark services. Runs as a Docker container with nginx.
- Image Generation - SDXL models (Pony, NoobAI, Illustrious) and Flux via ComfyUI
- Video Generation - LTX Video 2B/13B via ComfyUI
- Chat - Streaming chat with Ollama LLMs
- Voice Chat - Voice-in/voice-out with Ultravox + Chatterbox TTS
- Telemetry - Real-time GPU, CPU, memory, disk monitoring
- Management - Docker container control (start/stop/restart/logs)
- PWA Mobile App - Mobile-optimized interface, installable as home screen app
The web UI requires these services to be running on your DGX/server:
- Ollama - LLM inference engine (port 11434)
- ComfyUI - Image/video generation (port 8188)
- Ultravox - Speech LLM (port 8100)
- Chatterbox TTS - Text-to-speech (port 8004)
- Telemetry API - System stats (port 8006)
Ollama must allow cross-origin requests:
```bash
# If using snap:
sudo snap set ollama origins='*'
sudo snap restart ollama
```

If using systemd, edit `/etc/systemd/system/ollama.service` and add:

```ini
Environment="OLLAMA_ORIGINS=*"
```
Then reload and restart:

```bash
sudo systemctl daemon-reload && sudo systemctl restart ollama
```

Clone the repository:

```bash
git clone https://github.com/vybe/sparky
cd sparky
```

Copy the example files and make the deploy script executable:

```bash
cp .env.example .env
chmod +x deploy.sh
```

Edit `.env` with your server details:
```bash
DGX_HOST=your-server-ip            # Your DGX/server IP or hostname
DGX_USER=your-username             # SSH username
DGX_PASS=your-password             # SSH password (or use SSH keys)
REMOTE_DIR=/home/$USER/dgx-web-ui  # Optional: custom deployment path
```

Important: the UI displays hardware specs and network info. Update these for your system:
```js
export const network = {
  local: { name: 'Local', ip: '192.168.1.XXX' },   // Your local IP
  vpn: { name: 'VPN', ip: '100.XXX.XXX.XXX' }      // Your VPN IP (if applicable)
}

export const hardware = {
  gpu: { name: 'Your GPU', arch: 'Architecture', compute: 'Specs' },
  cpu: { name: 'Your CPU', detail: 'Details' },
  memory: { size: 'XX GB', type: 'Type', bandwidth: 'XXX GB/s' },
  storage: { size: 'X TB', free: '~X.X TB' }
}

export const availableModels = {
  llm: ['model1', 'model2'],   // Your installed LLM models
  video: ['video-model'],      // Your video generation models
  image: ['image-model'],      // Your image generation models
  audio: ['audio-model']       // Your audio models
}
```

Update the same network, hardware, and models objects to match your system.
Tip: search for `192.168.1.` and `100.xxx.` to find all the places where IPs need updating.
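One way to apply that tip from the repo root; the `src/` directory is an assumption about this repo's layout, so adjust the path if needed:

```bash
# List every file/line under src/ that still contains a placeholder IP prefix.
# Prints a fallback message when nothing matches (or src/ is missing).
grep -rnE '192\.168\.1\.|100\.[0-9x]+\.' src/ 2>/dev/null \
  || echo "no matches (already updated, or run this from the repo root)"
```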
Ensure these services are accessible on your server:
| Service | Default Port | How to Check |
|---|---|---|
| Ollama | 11434 | `curl http://localhost:11434/api/version` |
| ComfyUI | 8188 | `curl http://localhost:8188/system_stats` |
| Ultravox | 8100 | `curl http://localhost:8100/v1/models` |
| Chatterbox | 8004 | `curl http://localhost:8004/voices` |
| Telemetry | 8006 | `curl http://localhost:8006/stats` |
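The per-service checks above can be rolled into one loop. This is a sketch using the default ports and health endpoints from the table; adjust entries if your ports differ:

```bash
# Probe each service's health endpoint; prints one status line per service.
for entry in "Ollama:11434:api/version" "ComfyUI:8188:system_stats" \
             "Ultravox:8100:v1/models" "Chatterbox:8004:voices" \
             "Telemetry:8006:stats"; do
  name=${entry%%:*}; rest=${entry#*:}
  port=${rest%%:*}; path=${rest#*:}
  if curl -sf --max-time 2 "http://localhost:${port}/${path}" >/dev/null; then
    echo "${name}: OK"
  else
    echo "${name}: NOT reachable on port ${port}"
  fi
done
```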
If your ports differ, update `config.dgx.js` and `nginx.conf` accordingly.
```bash
./deploy.sh deploy
```

The web UI will be available at:

- Web UI: http://your-server-ip:3080
- API docs: http://your-server-ip:3081/docs
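To sanity-check a fresh deployment from your workstation (the host is a placeholder; a `000` status means the host could not be reached at all):

```bash
HOST=your-server-ip   # replace with your server's IP or hostname
# curl's %{http_code} write-out prints 000 when the connection fails.
curl -s -o /dev/null --max-time 5 -w "Web UI: %{http_code}\n" "http://${HOST}:3080/" || true
curl -s -o /dev/null --max-time 5 -w "API:    %{http_code}\n" "http://${HOST}:3081/docs" || true
```

A healthy deployment reports `200` for both.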
Add `?mobile=1` to the URL: http://your-server-ip:3080/?mobile=1
On iOS, use Safari → Share → Add to Home Screen for a standalone app experience.
Before deploying, ensure you've configured:
- `.env` file with your server credentials
- `deploy.sh` copied and made executable
- Network IPs updated in `src/constants/mobileConstants.js`
- Network IPs updated in `src/components/Dashboard.vue`
- Hardware specs updated for your system
- Model lists updated based on your installations
- Service ports verified and accessible
- Ollama CORS configured (see above)
📖 See CONFIGURATION.md for the complete configuration guide covering:
- Network IPs and VPN setup
- Hardware specifications display
- Model lists (LLM, image, video, audio)
- Service ports and links
- Backend API customization
- Docker Compose settings
For advanced customization, see SETUP.md
For local development with API tunneling:
```bash
# Install dependencies
npm install

# Run dev server
npm run dev
```

Then open http://localhost:3000. The development server expects services to be accessible on localhost ports (typically via SSH tunneling).
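One way to set up that tunneling: the local ports below match the `public/config.js` defaults shown in the next section, and the remote ports match the service table above; user and host are placeholders for your own values.

```bash
# Forward each backend service to the localhost port the dev config expects.
DGX=your-username@your-server-ip
ssh -f -N -o ConnectTimeout=5 \
  -L 11434:localhost:11434 \
  -L 11005:localhost:8188 \
  -L 11100:localhost:8100 \
  -L 11004:localhost:8004 \
  -L 11006:localhost:8006 \
  "$DGX" && echo "tunnels up" || echo "ssh failed (check host/credentials)"
```

`-f -N` backgrounds ssh without running a remote command, so the tunnels persist while you develop.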
Runtime config is loaded from `window.DGX_CONFIG` (defined in `public/config.js` or `config.dgx.js`).
Local development (`public/config.js`):

```js
window.DGX_CONFIG = {
  COMFYUI_URL: 'http://localhost:11005',
  OLLAMA_URL: 'http://localhost:11434',
  ULTRAVOX_URL: 'http://localhost:11100',
  CHATTERBOX_URL: 'http://localhost:11004',
  TELEMETRY_URL: 'http://localhost:11006',
  APP_NAME: 'DGX Spark UI',
  VERSION: '1.0.0'
};
```

Production (`config.dgx.js`): uses nginx proxies; all services are accessed via relative paths such as `/comfyui` and `/ollama`.
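A quick spot-check of the production proxies; the upstream endpoint paths appended after each proxy prefix are assumptions based on the health-check table earlier, so adjust them if your nginx routes differ:

```bash
HOST=your-server-ip   # replace with your deployed host
for path in "ollama/api/version" "comfyui/system_stats"; do
  # 000 means the request never reached the server.
  curl -s -o /dev/null --max-time 5 -w "/${path}: %{http_code}\n" \
    "http://${HOST}:3080/${path}" || true
done
```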
- Vue 3 with Composition API
- Vite 7 for build tooling
- Tailwind CSS 4 for styling
- PWA support (manifest + service worker)
- FastAPI (Python)
- Docker container management
- System monitoring
- Claude Code integration
- Multi-stage Docker build
- Nginx for static file serving
- Docker Compose for orchestration
```bash
./deploy.sh sync      # Sync files only
./deploy.sh build     # Rebuild Docker image
./deploy.sh start     # Start containers
./deploy.sh stop      # Stop containers
./deploy.sh restart   # Restart containers
./deploy.sh logs      # View container logs
./deploy.sh status    # Check container status
./deploy.sh deploy    # Full deployment (sync + build + start)
```

| Service | Default Port | Purpose |
|---|---|---|
| Web UI | 3080 | Frontend application |
| API | 3081 | Backend management API |
| ComfyUI | 8188 | Image/video generation |
| Ollama | 11434 | LLM inference |
| Ultravox | 8100 | Speech understanding |
| Chatterbox | 8004 | Text-to-speech |
| Telemetry | 8006 | System stats |
| Document | Purpose |
|---|---|
| GETTING_STARTED.md | Overview and preparation checklist |
| CONFIGURATION.md | Complete configuration guide |
| SETUP.md | Advanced setup and customization |
| SECURITY.md | Security best practices and policies |
| CONTRIBUTING.md | How to contribute to this project |
If chat is not responding, Ollama CORS is likely not configured; see the Ollama CORS configuration above.
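You can confirm whether CORS is the culprit with a header check; the `Origin` value here is just an example, and a correctly configured Ollama should echo back an allow-origin header:

```bash
# Look for an Access-Control-Allow-Origin header in Ollama's response.
curl -s -D - -o /dev/null --max-time 5 \
  -H "Origin: http://your-server-ip:3080" \
  http://localhost:11434/api/version | grep -i "access-control-allow-origin" \
  || echo "no CORS header (or Ollama not reachable)"
```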
Check that ComfyUI is running:

```bash
sudo docker ps | grep comfyui
sudo docker logs comfyui --tail 50
```

Restart if needed:

```bash
sudo docker restart comfyui
```

The Ultravox container must be running with `VLLM_USE_V1=1`:
```bash
sudo docker ps | grep ultravox
sudo docker restart ultravox-vllm
```

The backend API must be running:

```bash
sudo docker ps | grep dgx-api
sudo docker logs dgx-api --tail 50
```

- The backend API requires Docker socket access for container management
- Claude Code integration runs with `--dangerously-skip-permissions` for autonomous execution
- SSH passwords in `.env` should be secured; consider using SSH keys instead
- For production, use a reverse proxy (nginx/Caddy) with HTTPS
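A sketch of moving to key-based auth; the key path is an arbitrary choice and the user/host are placeholders. After the key is installed, clear `DGX_PASS` in `.env`:

```bash
# Generate a dedicated key once, then install it on the server.
[ -f ~/.ssh/dgx_key ] || { mkdir -p ~/.ssh && ssh-keygen -q -t ed25519 -f ~/.ssh/dgx_key -N ""; }
ssh-copy-id -i ~/.ssh/dgx_key.pub your-username@your-server-ip \
  || echo "ssh-copy-id failed; append ~/.ssh/dgx_key.pub to the server's authorized_keys manually"
```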
- Vue 3 + Composition API
- Vite 7
- Tailwind CSS 4
- Docker + nginx (production)
- PWA (manifest + service worker)
- FastAPI (backend)
MIT
Pull requests welcome! Please ensure:
- Code follows existing style
- Documentation is updated
- No hardcoded credentials or environment-specific values
See CONTRIBUTING.md for details.
Built with Claude Code