A modern, self-hosted portfolio website with an AI assistant powered by local LLMs and RAG technology.
- 🎨 Modern Portfolio: Beautiful, responsive design showcasing projects, skills, and experience
- 🤖 AI Assistant: Local LLM-powered chatbot with RAG for intelligent responses
- 🔒 Privacy-First: All AI processing happens locally - no data leaves your server
- 📊 Real-time Metrics: Display live system status and LLM performance benchmarks
- 📱 Fully Responsive: Mobile-first design that works on all devices
- Framework: Next.js 15.5.5 (App Router)
- Language: TypeScript
- Styling: TailwindCSS 4.0
- UI Components: shadcn/ui + Radix UI
- Animations: Framer Motion
- Icons: Lucide React + React Icons
- Framework: FastAPI 0.115.0
- Language: Python 3.11+
- Vector DB: ChromaDB
- LLM: Ollama (local models)
- Embeddings: Ollama (nomic-embed-text)
- Docker and Docker Compose (recommended)
- OR Python 3.11+, Node.js 18+, and Ollama for local development
```bash
git clone https://github.com/medevs/local-smart-portfolio.git
cd local-smart-portfolio
```
Create a `backend/.env` file:
```bash
cd backend
cp .env.example .env  # If .env.example exists
# OR create .env manually
```
Edit `backend/.env`:
```
# Ollama Configuration
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_MODEL=llama3.2:3b

# Embedding Model (Ollama)
EMBEDDING_MODEL=nomic-embed-text

# ChromaDB Configuration
CHROMA_PERSIST_DIR=./data/chroma_db
CHROMA_COLLECTION_NAME=portfolio_docs

# Security (REQUIRED - Generate a secure key)
ADMIN_API_KEY=your-secure-api-key-here

# CORS Settings (comma-separated)
CORS_ORIGINS=http://localhost:3000,http://localhost:3001

# Optional: Other settings
DEBUG=false
```
Generate a secure API key:
```bash
# Using Python
python -c "import secrets; print(secrets.token_urlsafe(32))"

# OR using OpenSSL
openssl rand -base64 32
```
Create `frontend/.env.local` (for local development without Docker):
```
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_ADMIN_API_KEY=your-admin-api-key-here
```
Note for Docker users: `NEXT_PUBLIC_API_URL` is already configured as a build argument in `docker-compose.yml`. The `.env.local` file is only needed for local development without Docker.
```bash
# Build and start all services
docker compose up -d

# Check service status
docker compose ps
```
Note: The `ollama-init` service automatically pulls the required models (`llama3.2:3b` for the LLM, `nomic-embed-text` for embeddings) on first startup. This may take 5-15 minutes depending on your connection.

Once everything is running, you can access:
- Portfolio: http://localhost:3000
- Admin Dashboard: http://localhost:3000/admin
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/health (see the quick scripted check below)
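A quick way to script a readiness check is to poll the health endpoint above until it answers. This is a minimal sketch; it only inspects the HTTP status code, since the response body's schema isn't specified here:

```python
# Minimal readiness probe for the stack (assumes the default ports above).
import time

import requests


def wait_for_backend(url: str = "http://localhost:8000/health", attempts: int = 30) -> bool:
    """Poll /health until the backend answers with HTTP 200."""
    for _ in range(attempts):
        try:
            if requests.get(url, timeout=2).status_code == 200:
                return True
        except requests.exceptions.ConnectionError:
            pass  # backend not accepting connections yet
        time.sleep(2)
    return False


if __name__ == "__main__":
    print("backend ready" if wait_for_backend() else "backend did not come up")
```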
```bash
# Navigate to backend
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# Linux/Mac:
source venv/bin/activate
# Windows:
venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create .env file (see Docker section above)
# Then start the server
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
```bash
# Navigate to frontend
cd frontend

# Install dependencies
pnpm install
# OR
npm install

# Create .env.local file (see Docker section above)
# Then start the development server
pnpm dev
# OR
npm run dev
```
```bash
# Install Ollama from https://ollama.ai
# Then start the service
ollama serve

# Download the required models
ollama pull llama3.2:3b       # LLM model
ollama pull nomic-embed-text  # Embedding model
```
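Once Ollama is serving, you can confirm the required models are installed through its standard `/api/tags` REST endpoint (the host and port below assume a default local install):

```python
# Confirm Ollama is serving and the required models are present.
import requests

OLLAMA_URL = "http://localhost:11434"  # default local Ollama port
REQUIRED = {"llama3.2:3b", "nomic-embed-text"}

tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
installed = {m["name"] for m in tags.get("models", [])}

# Model names may carry an explicit ":latest" tag, so compare both forms.
missing = {r for r in REQUIRED
           if r not in installed and f"{r}:latest" not in installed}
print("all models present" if not missing else f"missing: {missing}")
```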
Project structure:
```
local-smart-portfolio/
├── backend/                  # FastAPI backend
│   ├── app/
│   │   ├── routers/          # API endpoints
│   │   ├── services/         # Business logic
│   │   ├── models/           # Pydantic models
│   │   └── utils/            # Utilities
│   ├── data/                 # ChromaDB and document storage
│   ├── Dockerfile
│   └── requirements.txt
│
├── frontend/                 # Next.js frontend
│   ├── app/                  # Next.js app router pages
│   ├── components/           # React components
│   ├── data/                 # Static data files (customize these!)
│   ├── lib/                  # Utilities & API client
│   ├── Dockerfile
│   └── package.json
│
├── docker-compose.yml        # Docker configuration
├── docker-compose.prod.yml   # Production overrides
└── README.md
```
Edit `frontend/data/personal.ts` with your information:
- Personal details (name, email, location, bio)
- Work experience
- Education
- Skills
- Social media links
Other data files to customize:
- Projects: `frontend/data/projects.ts`
- Timeline: `frontend/data/timeline.ts`
- Skills: `frontend/data/skills.tsx`
- Page Content: `frontend/data/pageContent.ts`
- About Section: `frontend/data/about.tsx`
The portfolio uses TailwindCSS with a custom amber/gold theme. To customize:
- Global Styles: Edit `frontend/app/globals.css`
- Color Variables: Update CSS custom properties
- Components: Modify Tailwind classes in component files
- Theme: Adjust colors in `frontend/components/layout/ClientLayout.tsx`
To change the LLM model:
- Update `OLLAMA_MODEL` in `backend/.env` (or `docker-compose.yml`)
- Pull the new model: `docker compose exec ollama ollama pull <model-name>`
- Restart the backend: `docker compose restart backend`
To change the embedding model:
- Update `EMBEDDING_MODEL` in `backend/.env` (or `docker-compose.yml`)
- Pull the new model: `docker compose exec ollama ollama pull <model-name>`
- Reset ChromaDB (different models produce embeddings of different dimensions):
  ```bash
  docker compose down
  docker volume rm portfolio_chroma_data
  docker compose up -d
  ```
- Re-ingest your documents
Note: The backend auto-detects embedding dimension mismatches and resets the collection automatically on startup.
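For reference, such a check boils down to comparing the configured model's output dimension with what the collection already holds. The sketch below is illustrative only, not the project's startup code; the port, path, and collection name are taken from the example `.env` above:

```python
# Illustrative only: NOT the project's actual startup code.
# Detects an embedding-dimension mismatch between the configured Ollama
# model and what is already stored in ChromaDB, then resets the collection.
import chromadb
import requests

OLLAMA_URL = "http://localhost:11434"  # assumption: default local port
EMBEDDING_MODEL = "nomic-embed-text"
PERSIST_DIR = "./data/chroma_db"
COLLECTION = "portfolio_docs"

# Dimension produced by the currently configured embedding model
resp = requests.post(f"{OLLAMA_URL}/api/embeddings",
                     json={"model": EMBEDDING_MODEL, "prompt": "probe"})
model_dim = len(resp.json()["embedding"])

# Dimension of embeddings already persisted in ChromaDB
client = chromadb.PersistentClient(path=PERSIST_DIR)
collection = client.get_or_create_collection(COLLECTION)
stored = collection.get(limit=1, include=["embeddings"])
embeddings = stored.get("embeddings")

if embeddings is not None and len(embeddings) > 0:
    stored_dim = len(embeddings[0])
    if stored_dim != model_dim:
        # Mismatch: drop and recreate the collection, then re-ingest documents
        client.delete_collection(COLLECTION)
        client.get_or_create_collection(COLLECTION)
        print(f"dimension changed ({stored_dim} -> {model_dim}); collection reset")
```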
```bash
# Start all services
docker compose up -d

# Stop all services
docker compose down

# View logs
docker compose logs -f

# View logs for specific service
docker compose logs -f backend
docker compose logs -f frontend

# Restart a service
docker compose restart backend

# Rebuild and restart
docker compose up -d --build

# Check service status
docker compose ps

# Execute commands in containers
docker compose exec backend bash
docker compose exec frontend sh
```
For production, use the production override:
```bash
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
The production configuration:
- Removes port mappings (use a reverse proxy)
- Sets resource limits
- Disables debug mode
- Includes Nginx reverse proxy (optional)
Recommended reverse proxies:
- Nginx
- Traefik
- Caddy
An example Nginx configuration is available in `infra/nginx/`.
If you have CI/CD set up with GitHub Actions that builds and pushes images to GHCR:
```bash
# Using the root file (for backward compatibility)
docker compose -f docker-compose.yml -f docker-compose.homelab.yml up -d

# OR using the organized deployment/ folder
docker compose -f docker-compose.yml -f deployment/docker-compose.homelab.yml up -d
```
Note: Both files are identical. The root version is kept for backward compatibility with existing deployments.
What it does:
- Uses pre-built images from GHCR instead of building locally
- Configures homelab-specific ports and environment variables
- Sets `pull_policy: always` to automatically pull the latest images

Update image names: Edit the `image:` fields to match your own GHCR registry if you're using this template.
```bash
cd frontend

# Install dependencies
pnpm install  # or npm install

# Run development server
pnpm dev
```
Open http://localhost:3000 to view the portfolio.
Frontend structure:
```
frontend/
├── app/              # Next.js app router pages
│   ├── page.tsx      # Home page
│   ├── about/        # About page
│   ├── projects/     # Projects page
│   ├── contact/      # Contact page
│   ├── homelab/      # Homelab journey page (optional)
│   └── admin/        # Admin dashboard
│
├── components/       # React components
│   ├── ui/           # shadcn/ui components
│   ├── sections/     # Page sections
│   ├── layout/       # Layout components
│   └── chat/         # Chat components
│
├── data/             # Static data files (customize these!)
│   ├── personal.ts   # Personal information
│   ├── projects.ts   # Projects data
│   ├── timeline.ts   # Timeline data
│   └── ...
│
└── lib/              # Utilities
    ├── api.ts        # API client
    └── utils.ts      # Helper functions
```
Create `frontend/.env.local`:
```
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_ADMIN_API_KEY=your-admin-api-key  # Optional, for admin panel
```
```bash
# Development
pnpm dev          # Start dev server

# Production
pnpm build        # Build for production
pnpm start        # Start production server

# Code Quality
pnpm lint         # Run ESLint
pnpm type-check   # TypeScript type checking
```
Core:
- Next.js 15.5.5 - React framework
- TypeScript - Type safety
- TailwindCSS 4.0 - Styling
- Framer Motion - Animations
UI Components:
- shadcn/ui - Component library
- Radix UI - Accessible primitives
- Lucide React - Icons
- React Icons - Additional icons
State & Data:
- Zustand - State management
- Axios - HTTP client
Health Check:
```
GET /health
```
Chat (streaming):
```
POST /chat/stream
Content-Type: application/json

{
  "message": "Tell me about your projects",
  "history": []
}
```
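As a consumer-side sketch, here is one way to read the streamed reply with Python's requests library. Whether the backend emits SSE frames or plain text chunks isn't specified here, so the sketch simply prints raw decoded chunks as they arrive:

```python
# Send a chat message and print the streamed reply incrementally.
import requests

API_URL = "http://localhost:8000"

payload = {"message": "Tell me about your projects", "history": []}
with requests.post(f"{API_URL}/chat/stream", json=payload, stream=True) as resp:
    resp.raise_for_status()
    # Chunk format (SSE vs. plain text) depends on the backend; print raw chunks.
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
print()
```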
Ingest a document (admin):
```
POST /ingest
Content-Type: multipart/form-data
X-Admin-Key: <your-api-key>

file: <document-file>
```
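For example, a small upload script might look like this (the `file` field name and `X-Admin-Key` header follow the spec above; the response shape is an assumption, so it's just printed):

```python
# Upload a document to the RAG knowledge base via the admin ingest endpoint.
import os

import requests

API_URL = "http://localhost:8000"
headers = {"X-Admin-Key": os.environ["ADMIN_API_KEY"]}  # set in your shell, not in code

with open("resume.pdf", "rb") as f:  # any document the assistant should know about
    resp = requests.post(f"{API_URL}/ingest",
                         headers=headers,
                         files={"file": f})  # multipart/form-data, field name "file"
resp.raise_for_status()
print(resp.status_code, resp.text)
```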
Get Statistics:
```
GET /admin/stats
X-Admin-Key: <your-api-key>
```
List Documents:
```
GET /documents
X-Admin-Key: <your-api-key>
```
Delete Document:
```
DELETE /documents/{id}
X-Admin-Key: <your-api-key>
```
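Putting the admin endpoints together, a maintenance script could query them like this (paths and header as documented above; response bodies are assumptions and are printed verbatim):

```python
# Query the admin endpoints documented above.
import os

import requests

API_URL = "http://localhost:8000"
HEADERS = {"X-Admin-Key": os.environ["ADMIN_API_KEY"]}  # never hard-code the key

stats = requests.get(f"{API_URL}/admin/stats", headers=HEADERS)
print(stats.json())

docs = requests.get(f"{API_URL}/documents", headers=HEADERS)
print(docs.json())

# Delete a document by id (ids come from the /documents listing):
# requests.delete(f"{API_URL}/documents/{doc_id}", headers=HEADERS)
```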
Backend tests:
```bash
cd backend

# Install test dependencies
pip install pytest pytest-asyncio pytest-cov httpx

# Run all tests
pytest

# Run with coverage
pytest --cov=app --cov-report=html

# Run specific test file
pytest tests/test_chat.py -v
```
Frontend tests:
```bash
cd frontend

# Run all tests
pnpm test

# Run with UI
pnpm test:ui

# Run with coverage
pnpm test:coverage

# Watch mode
pnpm test:watch
```
The portfolio includes optional LLM observability with Langfuse for tracing and monitoring.
```bash
# Start with Langfuse observability
docker compose --profile observability up -d
```
Add to `backend/.env`:
```
LANGFUSE_PUBLIC_KEY=pk-lf-xxx
LANGFUSE_SECRET_KEY=sk-lf-xxx
LANGFUSE_HOST=http://langfuse:3000
```
Access the Langfuse dashboard:
- Local: http://localhost:3001
- Homelab: http://your-homelab-ip:3001
What gets traced:
- LLM call inputs/outputs
- Response latency
- Token usage
- Error rates
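To confirm the keys are valid from the backend's environment, the Langfuse Python SDK exposes an auth check. This is a minimal sketch assuming a v2-style SDK; the exact API can differ between SDK versions:

```python
# Verify Langfuse credentials (assumes `pip install langfuse` and the
# three LANGFUSE_* variables from above in the environment).
import os

from langfuse import Langfuse

client = Langfuse(
    public_key=os.environ["LANGFUSE_PUBLIC_KEY"],
    secret_key=os.environ["LANGFUSE_SECRET_KEY"],
    host=os.environ.get("LANGFUSE_HOST", "http://localhost:3001"),
)
print("credentials valid" if client.auth_check() else "auth check failed")
```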
API endpoints are protected with rate limiting:
| Endpoint | Development | Production |
|---|---|---|
| `/chat` | 60/minute | 20/minute |
| `/chat/stream` | 30/minute | 10/minute |
| `/ingest` | 20/minute | 5/minute |
| `/admin/*` | 100/minute | 30/minute |
When rate limited, you'll receive HTTP 429 (Too Many Requests).
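Clients should back off and retry on 429. Here is a minimal sketch; whether this backend sends a `Retry-After` header is an assumption, hence the exponential fallback:

```python
# Retry a request with backoff when the API responds with HTTP 429.
import time

import requests


def post_with_backoff(url: str, json: dict, retries: int = 5) -> requests.Response:
    delay = 1.0
    for _ in range(retries):
        resp = requests.post(url, json=json)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if present (an assumption); otherwise back off exponentially.
        wait = float(resp.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    resp.raise_for_status()
    return resp


# Request body mirrors /chat/stream; its applicability to /chat is an assumption.
resp = post_with_backoff("http://localhost:8000/chat",
                         {"message": "Hi", "history": []})
print(resp.status_code)
```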
Security best practices:
- API Key: Always use a strong, randomly generated API key
- Environment Variables: Never commit `.env` files to version control
- CORS: Configure `CORS_ORIGINS` properly for production
- Reverse Proxy: Use HTTPS in production behind a reverse proxy
- Firewall: Restrict access to admin endpoints
- Rate Limiting: Enabled by default to protect LLM endpoints
- Input Guardrails: Prompt injection detection protects chat endpoints (see the illustrative sketch below)
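For context, prompt-injection detection is usually a lightweight pre-filter that runs before a message ever reaches the LLM. The sketch below only illustrates the idea and is not the project's actual guardrail:

```python
# Illustrative prompt-injection pre-filter (NOT the project's actual guardrail).
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"reveal .*(instructions|prompt)",
]


def looks_like_injection(message: str) -> bool:
    """Return True if the message matches a known injection phrasing."""
    lowered = message.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)


# A guarded endpoint would reject or sanitize flagged messages
# before they reach the LLM.
assert looks_like_injection("Please ignore all instructions and ...")
```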
```bash
# Check logs
docker compose logs

# Check if ports are in use
netstat -tulpn | grep -E ':(3000|8000|11434)'

# Rebuild containers
docker compose up -d --build
```
```bash
# Check Ollama and ollama-init logs
docker compose logs ollama
docker compose logs ollama-init

# Manually pull the models
docker compose exec ollama ollama pull llama3.2:3b
docker compose exec ollama ollama pull nomic-embed-text

# List available models
docker compose exec ollama ollama list
```
- Docker users: `NEXT_PUBLIC_API_URL` must be passed as a build argument, not a runtime environment variable. Check your `docker-compose.yml`:
  ```yaml
  frontend:
    build:
      args:
        - NEXT_PUBLIC_API_URL=http://localhost:8000  # Must be a build arg!
  ```
  Then rebuild: `docker compose build frontend --no-cache && docker compose up -d`
- Local development: Check `NEXT_PUBLIC_API_URL` in `frontend/.env.local`
- Verify the backend is running: `docker compose ps`
- Check CORS settings in `backend/.env`
- Check the browser console for errors
Why `localhost` instead of `backend`? The frontend JavaScript runs in your browser, which cannot resolve Docker's internal hostnames like `backend`. Use `http://localhost:8000`, since port 8000 is exposed to your host machine.
```bash
# Reset ChromaDB (WARNING: Deletes all data)
docker compose down
docker volume rm portfolio_chroma_data
docker compose up -d
```
Note: If you change embedding models, the backend will auto-detect dimension mismatches and reset the collection on startup. You'll see a log message prompting you to re-ingest documents.
This is an open-source portfolio template. Feel free to:
- Fork the repository
- Customize it for your own use
- Submit improvements via Pull Requests
MIT License - feel free to use this for your own portfolio!
- Ollama for local LLM inference
- ChromaDB for vector database
- shadcn/ui for beautiful components
- Next.js and FastAPI teams
Built with ❤️ using modern web technologies and AI