πŸš€ AI-Powered Portfolio Website

A modern, self-hosted portfolio website with an AI assistant powered by local LLMs and RAG technology.


Showcase your work with a beautiful portfolio and intelligent AI assistant


✨ Features

  • 🎨 Modern Portfolio: Beautiful, responsive design showcasing projects, skills, and experience
  • πŸ€– AI Assistant: Local LLM-powered chatbot with RAG for intelligent responses
  • πŸ”’ Privacy-First: All AI processing happens locally - no data leaves your server
  • πŸ“Š Real-time Metrics: Display live system status and LLM performance benchmarks
  • πŸ“± Fully Responsive: Mobile-first design that works on all devices

πŸ› οΈ Tech Stack

Frontend

  • Framework: Next.js 15.5.5 (App Router)
  • Language: TypeScript
  • Styling: TailwindCSS 4.0
  • UI Components: shadcn/ui + Radix UI
  • Animations: Framer Motion
  • Icons: Lucide React + React Icons

Backend

  • Framework: FastAPI 0.115.0
  • Language: Python 3.11+
  • Vector DB: ChromaDB
  • LLM: Ollama (local models)
  • Embeddings: Ollama (nomic-embed-text)

πŸš€ Quick Start

Prerequisites

  • Docker and Docker Compose (recommended)
  • OR Python 3.11+, Node.js 18+, and Ollama for local development

🐳 Docker Deployment (Recommended)

Step 1: Clone the Repository

git clone https://github.com/medevs/local-smart-portfolio.git
cd local-smart-portfolio

Step 2: Configure Environment Variables

Create a backend/.env file:

cd backend
cp .env.example .env  # If .env.example exists
# OR create .env manually

Edit backend/.env:

# Ollama Configuration
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_MODEL=llama3.2:3b

# Embedding Model (Ollama)
EMBEDDING_MODEL=nomic-embed-text

# ChromaDB Configuration
CHROMA_PERSIST_DIR=./data/chroma_db
CHROMA_COLLECTION_NAME=portfolio_docs

# Security (REQUIRED - Generate a secure key)
ADMIN_API_KEY=your-secure-api-key-here

# CORS Settings (comma-separated)
CORS_ORIGINS=http://localhost:3000,http://localhost:3001

# Optional: Other settings
DEBUG=false
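As a rough illustration of the comma-separated format, the backend presumably splits CORS_ORIGINS into a list roughly like this (the function name and exact parsing are assumptions, not the actual backend code):

```python
# Illustrative only: turning a comma-separated CORS_ORIGINS value into
# a list of allowed origins. The real backend parsing may differ.
def parse_cors_origins(raw: str) -> list[str]:
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

origins = parse_cors_origins("http://localhost:3000, http://localhost:3001")
print(origins)  # ['http://localhost:3000', 'http://localhost:3001']
```

Stray whitespace around the commas is tolerated in this sketch, but keeping the value compact (no spaces) is the safer convention.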

Generate a secure API key:

# Using Python
python -c "import secrets; print(secrets.token_urlsafe(32))"

# OR using OpenSSL
openssl rand -base64 32

Step 3: Configure Frontend

Create frontend/.env.local (for local development without Docker):

NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_ADMIN_API_KEY=your-admin-api-key-here

Note for Docker users: NEXT_PUBLIC_API_URL is already configured as a build argument in docker-compose.yml. The .env.local file is only needed for local development without Docker.

Step 4: Start Services

# Build and start all services
docker compose up -d

# Check service status
docker compose ps

Step 5: Access the Application

  • Frontend: http://localhost:3000
  • Backend API docs (FastAPI Swagger UI): http://localhost:8000/docs

Note: The ollama-init service automatically pulls the required models (llama3.2:3b for the LLM, nomic-embed-text for embeddings) on first startup. This may take 5-15 minutes depending on your connection.


πŸ’» Local Development (Without Docker)

Backend Setup

# Navigate to backend
cd backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# Linux/Mac:
source venv/bin/activate
# Windows:
venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create .env file (see Docker section above)
# Then start the server
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

Frontend Setup

# Navigate to frontend
cd frontend

# Install dependencies
pnpm install
# OR
npm install

# Create .env.local file (see Docker section above)
# Then start the development server
pnpm dev
# OR
npm run dev

Start Ollama

# Install Ollama from https://ollama.ai
# Then start the service
ollama serve

# Download the required models
ollama pull llama3.2:3b        # LLM model
ollama pull nomic-embed-text   # Embedding model

πŸ“ Project Structure

local-smart-portfolio/
β”œβ”€β”€ backend/              # FastAPI backend
β”‚   β”œβ”€β”€ app/
β”‚   β”‚   β”œβ”€β”€ routers/      # API endpoints
β”‚   β”‚   β”œβ”€β”€ services/     # Business logic
β”‚   β”‚   β”œβ”€β”€ models/       # Pydantic models
β”‚   β”‚   └── utils/        # Utilities
β”‚   β”œβ”€β”€ data/             # ChromaDB and document storage
β”‚   β”œβ”€β”€ Dockerfile
β”‚   └── requirements.txt
β”‚
β”œβ”€β”€ frontend/             # Next.js frontend
β”‚   β”œβ”€β”€ app/              # Next.js app router pages
β”‚   β”œβ”€β”€ components/       # React components
β”‚   β”œβ”€β”€ data/             # Static data files (customize these!)
β”‚   β”œβ”€β”€ lib/              # Utilities & API client
β”‚   β”œβ”€β”€ Dockerfile
β”‚   └── package.json
β”‚
β”œβ”€β”€ docker-compose.yml     # Docker configuration
β”œβ”€β”€ docker-compose.prod.yml # Production overrides
└── README.md

🎯 Customization

Update Personal Information

Edit frontend/data/personal.ts with your information:

  • Personal details (name, email, location, bio)
  • Work experience
  • Education
  • Skills
  • Social media links

Update Portfolio Content

  • Projects: frontend/data/projects.ts
  • Timeline: frontend/data/timeline.ts
  • Skills: frontend/data/skills.tsx
  • Page Content: frontend/data/pageContent.ts
  • About Section: frontend/data/about.tsx

Frontend Styling Customization

The portfolio uses TailwindCSS with a custom amber/gold theme. To customize:

  1. Global Styles: Edit frontend/app/globals.css
  2. Color Variables: Update CSS custom properties
  3. Components: Modify Tailwind classes in component files
  4. Theme: Adjust colors in frontend/components/layout/ClientLayout.tsx

Change AI Models

To change the LLM model:

  1. Update OLLAMA_MODEL in backend/.env (or docker-compose.yml)
  2. Pull the new model: docker compose exec ollama ollama pull <model-name>
  3. Restart the backend: docker compose restart backend

To change the embedding model:

  1. Update EMBEDDING_MODEL in backend/.env (or docker-compose.yml)
  2. Pull the new model: docker compose exec ollama ollama pull <model-name>
  3. Reset ChromaDB (embeddings have different dimensions):
    docker compose down
    docker volume rm portfolio_chroma_data
    docker compose up -d
  4. Re-ingest your documents

Note: The backend auto-detects embedding dimension mismatches and resets the collection automatically on startup.


πŸ”§ Docker Commands

# Start all services
docker compose up -d

# Stop all services
docker compose down

# View logs
docker compose logs -f

# View logs for specific service
docker compose logs -f backend
docker compose logs -f frontend

# Restart a service
docker compose restart backend

# Rebuild and restart
docker compose up -d --build

# Check service status
docker compose ps

# Execute commands in containers
docker compose exec backend bash
docker compose exec frontend sh

πŸš€ Production Deployment

Using Production Override

For production, use the production override:

docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

The production configuration:

  • Removes port mappings (use a reverse proxy)
  • Sets resource limits
  • Disables debug mode
  • Includes Nginx reverse proxy (optional)

Using a Reverse Proxy

Recommended reverse proxies:

  • Nginx
  • Traefik
  • Caddy

Example Nginx configuration is available in infra/nginx/.

Homelab CI/CD Deployment (Optional)

If you have CI/CD set up with GitHub Actions that builds and pushes images to GHCR:

# Using the root file (for backward compatibility)
docker compose -f docker-compose.yml -f docker-compose.homelab.yml up -d

# OR using the organized deployment/ folder
docker compose -f docker-compose.yml -f deployment/docker-compose.homelab.yml up -d

Note: Both files are identical. The root version is kept for backward compatibility with existing deployments.

What it does:

  • Uses pre-built images from GHCR instead of building locally
  • Configures homelab-specific ports and environment variables
  • Sets pull_policy: always to automatically get latest images

Update image names: Edit the image: fields to match your own GHCR registry if using this template.


πŸ’» Frontend Development

Local Development Setup

cd frontend

# Install dependencies
pnpm install  # or npm install

# Run development server
pnpm dev

Open http://localhost:3000 to view the portfolio.

Frontend Project Structure

frontend/
β”œβ”€β”€ app/                  # Next.js app router pages
β”‚   β”œβ”€β”€ page.tsx         # Home page
β”‚   β”œβ”€β”€ about/           # About page
β”‚   β”œβ”€β”€ projects/        # Projects page
β”‚   β”œβ”€β”€ contact/         # Contact page
β”‚   β”œβ”€β”€ homelab/         # Homelab journey page (optional)
β”‚   └── admin/           # Admin dashboard
β”‚
β”œβ”€β”€ components/          # React components
β”‚   β”œβ”€β”€ ui/              # shadcn/ui components
β”‚   β”œβ”€β”€ sections/        # Page sections
β”‚   β”œβ”€β”€ layout/          # Layout components
β”‚   └── chat/            # Chat components
β”‚
β”œβ”€β”€ data/                # Static data files (customize these!)
β”‚   β”œβ”€β”€ personal.ts      # Personal information
β”‚   β”œβ”€β”€ projects.ts      # Projects data
β”‚   β”œβ”€β”€ timeline.ts      # Timeline data
β”‚   └── ...
β”‚
└── lib/                 # Utilities
    β”œβ”€β”€ api.ts           # API client
    └── utils.ts         # Helper functions

Frontend Environment Variables

Create frontend/.env.local:

NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_ADMIN_API_KEY=your-admin-api-key  # Optional, for admin panel

Frontend Available Scripts

# Development
pnpm dev              # Start dev server

# Production
pnpm build            # Build for production
pnpm start            # Start production server

# Code Quality
pnpm lint             # Run ESLint
pnpm type-check       # TypeScript type checking

Frontend Dependencies

Core:

  • Next.js 15.5.5 - React framework
  • TypeScript - Type safety
  • TailwindCSS 4.0 - Styling
  • Framer Motion - Animations

UI Components:

  • shadcn/ui - Component library
  • Radix UI - Accessible primitives
  • Lucide React - Icons
  • React Icons - Additional icons

State & Data:

  • Zustand - State management
  • Axios - HTTP client

πŸ”§ API Endpoints

Health Check

GET /health

Chat (Streaming)

POST /chat/stream
Content-Type: application/json

{
  "message": "Tell me about your projects",
  "history": []
}

Document Upload (RAG)

POST /ingest
Content-Type: multipart/form-data
X-Admin-Key: <your-api-key>

file: <document-file>
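For scripting uploads without extra dependencies, a multipart/form-data body can be assembled by hand. This stdlib-only sketch builds the body and its Content-Type header; the endpoint and field name come from the example above, everything else is illustrative:

```python
import uuid

# Build a minimal multipart/form-data body for a single file field,
# returning (body, content_type). Illustrative; a client library like
# httpx or requests does this for you.
def build_multipart(field: str, filename: str, content: bytes) -> tuple[bytes, str]:
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

body, ctype = build_multipart("file", "notes.md", b"# My notes")
```

Send `body` in a POST to /ingest with the `Content-Type` set to `ctype` and the X-Admin-Key header attached.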

Admin Operations

Get Statistics:

GET /admin/stats
X-Admin-Key: <your-api-key>

List Documents:

GET /documents
X-Admin-Key: <your-api-key>

Delete Document:

DELETE /documents/{id}
X-Admin-Key: <your-api-key>
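All admin calls share the same X-Admin-Key header, so a small stdlib helper keeps scripts tidy. A sketch, assuming a local deployment on port 8000:

```python
import urllib.request

# Build an authenticated request for the admin endpoints listed above.
# The default base URL is an assumption for a local Docker deployment.
def admin_request(path: str, api_key: str,
                  base_url: str = "http://localhost:8000") -> urllib.request.Request:
    req = urllib.request.Request(base_url + path)
    req.add_header("X-Admin-Key", api_key)
    return req

req = admin_request("/admin/stats", "your-api-key")
# send with: urllib.request.urlopen(req)
```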

πŸ”¬ Testing

Backend Tests (pytest)

cd backend

# Install test dependencies
pip install pytest pytest-asyncio pytest-cov httpx

# Run all tests
pytest

# Run with coverage
pytest --cov=app --cov-report=html

# Run specific test file
pytest tests/test_chat.py -v

Frontend Tests (Vitest)

cd frontend

# Run all tests
pnpm test

# Run with UI
pnpm test:ui

# Run with coverage
pnpm test:coverage

# Watch mode
pnpm test:watch

πŸ“Š Observability (Langfuse)

The portfolio includes optional LLM observability with Langfuse for tracing and monitoring.

Enable Langfuse

# Start with Langfuse observability
docker compose --profile observability up -d

Configuration

Add to backend/.env:

LANGFUSE_PUBLIC_KEY=pk-lf-xxx
LANGFUSE_SECRET_KEY=sk-lf-xxx
LANGFUSE_HOST=http://langfuse:3000

Access Langfuse UI

Once the observability services are running, open the Langfuse dashboard in your browser; the host port mapping is defined in docker-compose.yml.

What's Tracked

  • LLM call inputs/outputs
  • Response latency
  • Token usage
  • Error rates

πŸ›‘οΈ Rate Limiting

API endpoints are protected with rate limiting:

Endpoint       Development   Production
/chat          60/minute     20/minute
/chat/stream   30/minute     10/minute
/ingest        20/minute     5/minute
/admin/*       100/minute    30/minute

When rate limited, you'll receive HTTP 429 (Too Many Requests).
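A client can handle 429 responses by retrying with exponential backoff. The delay schedule can be computed like this (the base delay, factor, and retry count are illustrative, not values taken from the backend):

```python
# Seconds to wait before each retry after an HTTP 429 response.
# Doubling the delay each attempt backs off quickly without giving up.
def backoff_delays(base: float = 1.0, factor: float = 2.0,
                   retries: int = 4) -> list[float]:
    return [base * factor**attempt for attempt in range(retries)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0]
```

Sleep for the next delay in the list after each 429, and fail only once the list is exhausted.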


πŸ”’ Security Notes

  1. API Key: Always use a strong, randomly generated API key
  2. Environment Variables: Never commit .env files to version control
  3. CORS: Configure CORS_ORIGINS properly for production
  4. Reverse Proxy: Use HTTPS in production with a reverse proxy
  5. Firewall: Restrict access to admin endpoints
  6. Rate Limiting: Enabled by default to protect LLM endpoints
  7. Input Guardrails: Prompt injection detection protects chat endpoints

πŸ› Troubleshooting

Services won't start

# Check logs
docker compose logs

# Check if ports are in use
netstat -tulpn | grep -E ':(3000|8000|11434)'

# Rebuild containers
docker compose up -d --build

Ollama model not loading

# Check Ollama and ollama-init logs
docker compose logs ollama
docker compose logs ollama-init

# Manually pull the models
docker compose exec ollama ollama pull llama3.2:3b
docker compose exec ollama ollama pull nomic-embed-text

# List available models
docker compose exec ollama ollama list

Frontend can't connect to backend

  1. Docker users: NEXT_PUBLIC_API_URL must be passed as a build argument, not a runtime environment variable. Check your docker-compose.yml:

    frontend:
      build:
        args:
          - NEXT_PUBLIC_API_URL=http://localhost:8000  # Must be build arg!

    Then rebuild: docker compose build frontend --no-cache && docker compose up -d

  2. Local development: Check NEXT_PUBLIC_API_URL in frontend/.env.local

  3. Verify backend is running: docker compose ps

  4. Check CORS settings in backend/.env

  5. Check browser console for errors

Why localhost instead of backend? The frontend JavaScript runs in your browser, which cannot resolve Docker's internal hostnames like backend. Use http://localhost:8000 since port 8000 is exposed to your host machine.

ChromaDB issues

# Reset ChromaDB (WARNING: Deletes all data)
docker compose down
docker volume rm portfolio_chroma_data
docker compose up -d

Note: If you change embedding models, the backend will auto-detect dimension mismatches and reset the collection on startup. You'll see a log message prompting you to re-ingest documents.


πŸ“š Additional Resources


🀝 Contributing

This is an open-source portfolio template. Feel free to:

  1. Fork the repository
  2. Customize it for your own use
  3. Submit improvements via Pull Requests

πŸ“„ License

MIT License - feel free to use this for your own portfolio!


πŸ™ Acknowledgments

  • Ollama for local LLM inference
  • ChromaDB for vector database
  • shadcn/ui for beautiful components
  • Next.js and FastAPI teams

Built with ❀️ using modern web technologies and AI
