This document explains how to test your application against your local Ollama instance instead of the Dockerized Ollama container, so you can compare performance between the two setups.
The helper script `test-local-ollama.sh` switches between the two setups:

```bash
# Test with local Ollama
./test-local-ollama.sh local

# Test with Docker Ollama
./test-local-ollama.sh docker

# Restore original configuration
./test-local-ollama.sh restore
```
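For reference, here is a minimal sketch of what such a switching script might do, based on the manual commands shown below; the actual `test-local-ollama.sh` in this repo may differ:

```bash
#!/usr/bin/env bash
# Hypothetical sketch - the real test-local-ollama.sh may behave differently.
set -euo pipefail

case "${1:-}" in
  local)
    # Make sure the local-Ollama .env is in place, then run only the API service
    if [ -f .env.backup ]; then mv .env.backup .env; fi
    docker compose -f docker-compose.local-ollama.yml up --build api
    ;;
  docker)
    # Park the local-Ollama env file and run the full Docker stack
    if [ -f .env ]; then mv .env .env.backup; fi
    docker compose up --build
    ;;
  restore)
    # Put the original configuration back
    if [ -f .env.backup ]; then mv .env.backup .env; fi
    ;;
  *)
    echo "Usage: $0 {local|docker|restore}" >&2
    exit 1
    ;;
esac
```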
To run the local-Ollama setup by hand instead of via the script:

```bash
# Make sure your local Ollama is running
ollama serve  # if not already running

# Start only the API service with the local Ollama configuration
docker compose -f docker-compose.local-ollama.yml up --build api
```
To test the full Docker stack (Docker-hosted Ollama) by hand:

```bash
# Temporarily disable the local configuration
mv .env .env.backup

# Start the full Docker stack
docker compose up --build
```

The testing setup consists of these files:

- `.env` - Environment configuration pointing to local Ollama
- `docker-compose.local-ollama.yml` - Docker compose file without the Ollama service
- `test-local-ollama.sh` - Helper script for easy switching
- `LOCAL_OLLAMA_TESTING.md` - This documentation
Key differences from the standard setup:

- Ollama URL: `http://localhost:11434`
- Network: uses `network_mode: host` for Linux compatibility (see the compose sketch below)
- Dependencies: removes the Docker Ollama service dependency
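For orientation, here is a minimal sketch of what `docker-compose.local-ollama.yml` might look like; the service name, build context, and exact settings are assumptions, and the real file in this repo may differ:

```yaml
# Hypothetical sketch - the actual docker-compose.local-ollama.yml may differ.
services:
  api:
    build: .                # build the API image locally (assumed build context)
    network_mode: host      # share the host network so localhost:11434 reaches local Ollama
    env_file: .env          # supplies OLLAMA_URL=http://localhost:11434
    environment:
      - OLLAMA_URL=http://localhost:11434  # explicit setting alongside env_file
    # No ollama service and no depends_on: the host's instance is used instead
```

Note that with `network_mode: host`, any `ports:` mappings are ignored; the API listens directly on the host (port 8000 in this setup).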
Benefits of testing against local Ollama:

- No Docker overhead for Ollama
- Direct host access to your local Ollama instance
- Faster startup - no need to initialize the Ollama container
- Resource efficiency - saves the 6-8 GB Docker memory allocation
Prerequisites:

- Local Ollama installed and running:

  ```bash
  # Check if Ollama is running
  curl http://localhost:11434/api/tags

  # Start Ollama if needed
  ollama serve
  ```

- Required models available locally (a non-interactive variant follows this list):

  ```bash
  # Pull the required model if it is not available
  ollama pull mistral:latest
  ```
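For scripted setups, the pull can be made conditional; a small sketch, assuming `mistral:latest` is the only model the API needs:

```bash
# Pull mistral:latest only if it is not already present locally
ollama list | grep -q '^mistral:latest' || ollama pull mistral:latest
```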
Suggested testing flow:

- Start with local Ollama:

  ```bash
  ./test-local-ollama.sh local
  ```

- Test your API endpoints (in another terminal):

  ```bash
  # Health check
  curl http://localhost:8000/health

  # Test the generation endpoint (adjust as needed)
  curl -X POST http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Test prompt"}'
  ```

- Compare with Docker Ollama (see the timing sketch after these steps):

  ```bash
  # Stop the local test with Ctrl+C, then:
  ./test-local-ollama.sh docker
  ```

- Restore the original setup:

  ```bash
  ./test-local-ollama.sh restore
  ```
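To put a rough number on each setup, `curl`'s built-in timing is enough. The sketch below times the `/generate` endpoint shown above; run it once against each setup and compare:

```bash
# Print total request time in seconds for a single generation call
curl -s -o /dev/null -w 'total: %{time_total}s\n' \
  -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Test prompt"}'
```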
If the API cannot reach Ollama, start with the basics (combined into a preflight sketch below):

- Ensure Ollama is running: `ollama serve`
- Check that Ollama is listening on port 11434: `lsof -i :11434`
- Verify models are available: `ollama list`
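A minimal preflight snippet that rolls these checks together, assuming `mistral:latest` is the model your API needs:

```bash
# Fail fast if the local Ollama instance is not usable
if ! curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is not responding on :11434 - start it with 'ollama serve'" >&2
  exit 1
fi
if ! ollama list | grep -q '^mistral:latest'; then
  echo "mistral:latest is missing - run 'ollama pull mistral:latest'" >&2
  exit 1
fi
echo "Local Ollama looks ready."
```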
Networking notes:

- Fixed: the compose file now uses `network_mode: host`, which eliminates Docker network isolation
- Previous issue: `host.docker.internal` doesn't work on Linux systems by default
- Solution: direct localhost access via host networking mode
Environment configuration:

- Ensure the `.env` file exists and contains `OLLAMA_URL=http://localhost:11434` (quick check below)
- The docker compose file now includes both `env_file` and explicit `environment` settings
- Check the container logs: `docker compose -f docker-compose.local-ollama.yml logs api`
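A host-side sanity check for the first point (a sketch; adjust if your variable name differs):

```bash
# Verify the .env file points at the local instance
grep '^OLLAMA_URL=' .env || echo "OLLAMA_URL is missing from .env" >&2
```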
Script issues:

- Make the script executable: `chmod +x test-local-ollama.sh`
If the container still cannot reach local Ollama:

- Test local Ollama directly: `curl http://localhost:11434/api/tags`
- With host networking, the container can directly access localhost services (see the in-container check below)
- If issues persist, ensure no firewall is blocking port 11434
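To verify connectivity from inside the container itself, something like the following can help; it assumes the service is named `api` (as in the compose commands above) and that `curl` is installed in the image:

```bash
# From inside the api container, localhost:11434 should reach the host's Ollama
docker compose -f docker-compose.local-ollama.yml exec api \
  curl -s http://localhost:11434/api/tags
```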
Platform notes:

- Host networking: the container shares the host's network stack
- Port conflicts: ensure port 8000 is not already in use on your system (quick check below)
- Firewall: some Linux distributions may require firewall configuration for localhost access
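Quick checks for the last two points; `ufw` is only an example, substitute your distribution's firewall tool:

```bash
# Is anything already bound to the API port?
lsof -i :8000

# Example firewall status check on ufw-based systems (e.g., Ubuntu)
sudo ufw status
```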
To completely remove the testing setup:
```bash
# Remove testing files
rm .env docker-compose.local-ollama.yml test-local-ollama.sh LOCAL_OLLAMA_TESTING.md

# Remove any backup files
rm .env.backup 2>/dev/null || true
```

When testing, pay attention to:
- Startup time - Local should be much faster
- Response latency - Local should have lower latency
- Memory usage - Local uses less Docker memory
- CPU utilization - May vary depending on your setup
The local Ollama setup should provide noticeably better performance, especially for development and testing workflows.
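As a sketch of how to quantify the latency difference, the loop below averages `curl`'s total time over several requests to the `/generate` endpoint used earlier; run it once per setup and compare the averages:

```bash
# Average total request time over N runs (adjust N and the payload as needed)
N=5
total=0
for i in $(seq 1 "$N"); do
  t=$(curl -s -o /dev/null -w '%{time_total}' \
    -X POST http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Test prompt"}')
  echo "run $i: ${t}s"
  total=$(echo "$total + $t" | bc)
done
echo "average: $(echo "scale=3; $total / $N" | bc)s"
```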