This guide covers running LearnFlake (ROS2 Humble + RoboSuite) on Windows using Docker Desktop with WSL2.
Prerequisites:

- Windows 10 version 21H2+ or Windows 11
- WSL2 enabled with a Linux distribution installed
- Docker Desktop for Windows (WSL2 backend)
- Download the installer from the [Docker Desktop](https://www.docker.com/products/docker-desktop/) page
- During installation, ensure "Use WSL 2 instead of Hyper-V" is selected
- After installation, go to Settings > Resources > WSL Integration
- Enable integration with your WSL2 distro
If you have an NVIDIA GPU and want CUDA acceleration:
- Install the latest NVIDIA drivers on Windows
- In WSL2, install the NVIDIA Container Toolkit:

```bash
# Inside WSL2 terminal
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```

- Restart Docker Desktop
Build the images:

```bash
docker compose -f docker-compose.ubuntu.yml build
```

CPU only (recommended for most use cases):

```bash
docker compose -f docker-compose.ubuntu.yml run --rm rover_cpu bash
```

With GPU support:

```bash
docker compose -f docker-compose.ubuntu.yml run --rm rover_gpu bash
```

For RL training:

```bash
docker compose -f docker-compose.ubuntu.yml run --rm rover_rl bash
```

| Service | Description | GPU | GUI |
|---|---|---|---|
| `rover_base` | Minimal container, headless | No | No |
| `rover_cpu` | CPU with WSLg GUI support | No | Yes |
| `rover_rl` | RL training focused | No | Yes |
| `rover_gpu` | NVIDIA GPU with CUDA | Yes | Yes |
| `rover_dev` | Development with exposed ports | No | Yes |
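As an illustration, the capability table above can be encoded as a small lookup to pick a service programmatically. This is only a sketch: the capability flags are copied from the table, but the helper itself is hypothetical and not part of the repository.

```python
# Capabilities copied from the service table above (illustrative helper)
SERVICES = {
    "rover_base": {"gpu": False, "gui": False},
    "rover_cpu":  {"gpu": False, "gui": True},
    "rover_rl":   {"gpu": False, "gui": True},
    "rover_gpu":  {"gpu": True,  "gui": True},
    "rover_dev":  {"gpu": False, "gui": True},
}

def pick_service(need_gpu: bool, need_gui: bool) -> str:
    """Return the first service whose GPU flag matches and whose
    GUI support covers the request."""
    for name, caps in SERVICES.items():
        if caps["gpu"] == need_gpu and caps["gui"] >= need_gui:
            return name
    raise ValueError("no matching service")
```

For example, `pick_service(False, False)` selects the minimal headless `rover_base`, while `pick_service(True, True)` selects `rover_gpu`.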
Common Docker Compose operations:

```bash
# Start a service in the background
docker compose -f docker-compose.ubuntu.yml up -d rover_cpu

# Attach a shell to the running container
docker compose -f docker-compose.ubuntu.yml exec rover_cpu bash

# Stop and remove containers
docker compose -f docker-compose.ubuntu.yml down

# Rebuild the images from scratch
docker compose -f docker-compose.ubuntu.yml build --no-cache
```

Inside the container, source ROS2 and build the workspace:

```bash
source /opt/ros/humble/setup.bash
cd /LearnFlake
colcon build --symlink-install
source install/setup.bash
```

Run the RoboSuite demo:

```bash
cd /LearnFlake/src/external_pkgs/RoboSuite
python -m robosuite.demos.demo_random_action
```

Run RL training:

```bash
cd /LearnFlake/src/rl_autonomy
python scripts/run_ppo.py
```

Windows 11 and Windows 10 21H2+ include WSLg (Windows Subsystem for Linux GUI), which allows running Linux GUI apps seamlessly.
```bash
# Inside the container
apt-get update && apt-get install -y x11-apps
xeyes  # Should show animated eyes
```

MuJoCo supports three rendering backends, selected via the `MUJOCO_GL` environment variable:

- `MUJOCO_GL=egl` - Headless rendering (default, fastest)
- `MUJOCO_GL=glfw` - Window-based rendering (requires GUI)
- `MUJOCO_GL=osmesa` - Software rendering (slowest, no GPU)
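Because MuJoCo reads `MUJOCO_GL` at import time, any backend choice must happen before the import. A minimal Python sketch of doing this from inside a script (the helper name and fallback logic are illustrative, not part of the repository):

```python
import os

def select_mujoco_backend(prefer_gui: bool = False) -> str:
    """Pick a MuJoCo rendering backend before mujoco is imported.

    Respects an explicit MUJOCO_GL override; otherwise uses glfw only
    when a GUI was requested and a display (e.g. WSLg) is available,
    falling back to the headless egl default.
    """
    backend = os.environ.get("MUJOCO_GL")
    if not backend:
        backend = "glfw" if prefer_gui and os.environ.get("DISPLAY") else "egl"
        os.environ["MUJOCO_GL"] = backend
    return backend

# Call this before `import mujoco` / `import robosuite`
```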
To change rendering mode:

```bash
export MUJOCO_GL=glfw
python your_script.py
```

If GUI apps fail to appear, ensure WSLg is working:

```bash
# In WSL2 (not Docker)
echo $DISPLAY  # Should show something like :0
```

If empty, try:

```bash
export DISPLAY=:0
```

If the GPU is not detected:

- Verify NVIDIA drivers are installed on Windows
- Check Docker Desktop > Settings > Resources > GPU
- Ensure NVIDIA Container Toolkit is installed in WSL2
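The last check can also be scripted from Python. A hedged sketch that reports whether the current environment can see the GPU by probing for `nvidia-smi` (the function name is illustrative):

```python
import shutil
import subprocess

def cuda_visible() -> bool:
    """Return True when nvidia-smi is on PATH and exits successfully,
    i.e. the NVIDIA runtime has exposed a GPU to this environment."""
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False  # driver / container toolkit not exposed here
    try:
        return subprocess.run([exe], capture_output=True, timeout=10).returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False
```

Inside a CPU-only container this returns `False`; in `rover_gpu` with the toolkit configured it should return `True`.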
```bash
# Verify GPU is visible
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```

The Dockerfile continues even if RoboSuite fails to install (it uses `|| true`). If RoboSuite is missing, install it manually inside the container:

```bash
pip install -e /LearnFlake/src/external_pkgs/RoboSuite
```

Performance tips:

- Use `MUJOCO_GL=egl` for headless rendering
- For GPU containers, ensure the NVIDIA runtime is active
- Close unnecessary applications on the host
```
LearnFlake/
├── Dockerfile.ubuntu             # Ubuntu 22.04 LTS Dockerfile
├── docker-compose.ubuntu.yml     # Windows-optimized compose file
├── docker/
│   └── entrypoint.sh             # Container startup script
├── src/
│   ├── external_pkgs/
│   │   └── RoboSuite/            # Physics simulation
│   └── rl_autonomy/              # RL algorithms
└── DOCKER_WINDOWS.md             # This file
```
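To sanity-check that the volume mount brought the repository into the container, the layout above can be verified with a few lines of Python. The paths are copied from the tree; the helper itself is only an illustrative sketch.

```python
from pathlib import Path

# Key paths from the repository layout shown above
EXPECTED = [
    "Dockerfile.ubuntu",
    "docker-compose.ubuntu.yml",
    "docker/entrypoint.sh",
    "src/external_pkgs/RoboSuite",
    "src/rl_autonomy",
]

def missing_paths(root: str = "/LearnFlake") -> list:
    """Return the expected paths that do not exist under root."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]
```

An empty result means the mount looks healthy; anything listed points at a broken or incomplete checkout.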
- **Live code editing**: The `./:/LearnFlake` volume mount means changes on Windows are immediately reflected in the container.
- **Persist installed packages**: If you install additional packages, commit the container:

  ```bash
  docker commit learnflake_cpu myusername/learnflake:custom
  ```

- **Use VS Code Remote**: Install the "Dev Containers" extension in VS Code to develop directly inside the container.
- **TensorBoard**: The `rover_dev` service exposes port 6006 for TensorBoard:

  ```bash
  # Inside the container
  tensorboard --logdir=/LearnFlake/logs --bind_all
  # Access at http://localhost:6006 on Windows
  ```