WARNING: Installation and use are not supported on virtual machines. Please use a full Linux or Windows computer, or let us know if you succeed in running the Docker container on a VM.
The Docker container embeds all the prerequisites, which makes it easier to install and compatible with most operating systems.
Notes:
- Currently only tested on Linux. HPC users should use the Singularity version. Mac computers with Apple silicon (M chips) need to use the native installation (install_native).
- You will need ~13 GB of free space to install the container.
- The Docker image contains Miniconda 3, Freesurfer V7.2, Fastsurfer V1.1.2 and torch 1.10.0. The whole image is ~13 GB.
- The prediction stage can use over 20 GB of RAM/VRAM, so we recommend a computer with at least 24 GB of RAM (for CPU) or VRAM (for GPU).
Here is the video tutorial detailing how to install Docker - Docker Installation.
You will need to have Docker installed. You can check whether Docker is installed on your computer by running:

```bash
docker --version
```

If this command displays the Docker version, it is already installed. If not, please follow the guidelines to install Docker on your machine.
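If you are unsure, a small shell sketch can check for both Docker and the Compose plugin in one go; it prints a hint instead of failing when either is missing (the hint text here is just illustrative):

```shell
# Check whether docker and the compose plugin are available on PATH.
# Prints a hint instead of failing if either is missing.
if command -v docker >/dev/null 2>&1; then
    docker --version
    docker compose version || echo "docker compose plugin not found"
else
    echo "docker not installed - see the Docker installation guidelines"
fi
```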
:::{admonition} Windows
:class: tip
On Windows, Docker should be using WSL2.
:::
Enabling your computer's GPU accelerates the brain segmentation (when using Fastsurfer) and the predictions. Ensure you have installed a MELD Graph version compatible with GPU (see release versions), then follow the instructions for your operating system.
::::{tab-set}
:::{tab-item} Linux
:sync: linux
Install the nvidia container toolkit.
:::
:::{tab-item} Windows
:sync: windows
Follow the instructions for enabling NVIDIA CUDA on WSL. If you have a recent NVIDIA driver, CUDA should already be installed.
:::
::::
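Once the toolkit or WSL CUDA support is in place, you can optionally confirm that containers actually see the GPU. A guarded sketch (the plain `ubuntu` image with `nvidia-smi` is a common smoke test; it skips gracefully if Docker is absent or no GPU is exposed):

```shell
# Optional sanity check for GPU passthrough: run nvidia-smi inside a container.
# Skips gracefully if docker is not installed or no GPU is available.
if command -v docker >/dev/null 2>&1; then
    docker run --rm --gpus all ubuntu nvidia-smi || echo "GPU passthrough not available"
else
    echo "docker not installed; skipping GPU check"
fi
```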
You will need to download a Freesurfer license.txt to enable Freesurfer/Fastsurfer to perform the segmentation. Please follow the guidelines to download the file and keep a record of the path where you saved it.
In order to run MELD Graph you need to have a meld_license.txt in the meld graph folder. To get this file, please fill out the MELD registration form. Once submitted, your application will be automatically reviewed and the meld_license.txt file will be sent to your email.
In order to run the Docker container, you will need to configure a few files:
- Download `meld_graph_X.X.X.zip` (where X.X.X is the version) from the latest GitHub release and extract it.
- Copy the Freesurfer `license.txt` into the extracted folder (see above for how to get the Freesurfer license).
- Copy the MELD `meld_license.txt` into the extracted folder (see above for how to get the MELD license).
- Create the meld_data folder, if it doesn't exist already. This is the folder where you would like to store the MRI data used to run the classifier.
- In the extracted `meld_graph_X.X.X` folder, open and edit `compose.yml` to add the path to the meld_data folder. The initial `compose.yml` file looks like this:
```yaml
services:
  meld_graph:
    image: meldproject/meld_graph:latest
    platform: "linux/amd64"
    volumes:
      - ./docker-data:/data
    environment:
      - FS_LICENSE=/run/secrets/license.txt
    secrets:
      - license.txt
    user: $DOCKER_USER
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
              count: 0
secrets:
  license.txt:
    file: ./license.txt
```
Change the line below `volumes:` to point to your meld_data folder. Do not delete the `:/data` at the end. For example, if you want the folder to be on a mounted drive such as "/mnt/datadrive/meld-data", change the line as shown below:
```yaml
volumes:
  - /mnt/datadrive/meld-data:/data
```
:::{admonition} Windows
:class: tip
On Windows, if you're using absolute paths, use forward slashes and quotes:

```yaml
volumes:
  - "C:/Users/John/Desktop/meld-data:/data"
```
:::
- WARNING: If you do not have a GPU on your computer (e.g. a Mac laptop), you will need to open the `compose.yml` file and remove the six lines of the `deploy` section (everything from `deploy:` down to `count: 0`). Your file should then look like this:
```yaml
services:
  meld_graph:
    image: meldproject/meld_graph:latest
    platform: "linux/amd64"
    volumes:
      - ./docker-data:/data
    environment:
      - FS_LICENSE=/run/secrets/license.txt
    secrets:
      - license.txt
    user: $DOCKER_USER
secrets:
  license.txt:
    file: ./license.txt
```
- WARNING: If you are running Docker with Docker Desktop, ensure that the memory limit allowed by Docker is set to the maximum, as Docker Desktop halves the available memory by default. To do this, go into the Docker Desktop settings and change the memory limit (more help in this post).
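After editing `compose.yml`, you can optionally sanity-check it before moving on: `docker compose config` parses the file and reports YAML or indentation mistakes without starting anything. A guarded sketch (run from inside the extracted folder; the messages are just illustrative):

```shell
# Validate compose.yml without starting any container.
# Run from inside the extracted meld_graph_X.X.X folder.
if command -v docker >/dev/null 2>&1; then
    docker compose config >/dev/null \
        && echo "compose.yml looks valid" \
        || echo "compose.yml has errors - re-check indentation and paths"
else
    echo "docker not installed; skipping validation"
fi
```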
Before being able to use the classifier on your data, data paths need to be set up and the pretrained model needs to be downloaded.

- Make sure you have 12 GB of storage space available for the Docker image, and 2 GB available for the MELD data.
- Run this command to download the Docker image and the training data:
::::{tab-set}
:::{tab-item} Linux
:sync: linux
```bash
DOCKER_USER="$(id -u):$(id -g)" docker compose run meld_graph python scripts/new_patient_pipeline/prepare_classifier.py
```
:::
:::{tab-item} Windows
:sync: windows
```powershell
docker compose run meld_graph python scripts/new_patient_pipeline/prepare_classifier.py
```
:::
::::
:::{note}
Append --skip-download-data to the python call to skip downloading the test data.
:::
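As a quick way to confirm the storage requirement mentioned above before downloading, here is a shell sketch that reports free space on the current drive (it assumes GNU `df`; the 14 GB threshold is just the image and data figures added together, so adjust it to your setup):

```shell
# Report free space (GB) on the current drive; the image plus data need
# roughly 14 GB in total (threshold here is illustrative).
need_gb=14
avail_gb=$(df -BG --output=avail . | tail -n 1 | tr -dc '0-9')
if [ "$avail_gb" -ge "$need_gb" ]; then
    echo "OK: ${avail_gb} GB free"
else
    echo "WARNING: only ${avail_gb} GB free, ${need_gb} GB recommended"
fi
```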
To verify that you have installed all packages, set up paths correctly, and downloaded all data, this verification script will run the lesion prediction pipeline on a test patient. It takes approximately 15 minutes to run.
::::{tab-set}
:::{tab-item} Linux
:sync: linux
```bash
DOCKER_USER="$(id -u):$(id -g)" docker compose run meld_graph pytest
```
:::
:::{tab-item} Windows
:sync: windows
```powershell
docker compose run meld_graph pytest
```
:::
::::
If you run into errors at this stage and need help, you can re-run using the command below, which saves the terminal output in a text file. Please send pytest_errors.log to us so we can work with you to solve any problems. How best to reach us.
::::{tab-set}
:::{tab-item} Linux
:sync: linux
```bash
DOCKER_USER="$(id -u):$(id -g)" docker compose run meld_graph pytest -s | tee pytest_errors.log
```
:::
:::{tab-item} Windows
:sync: windows
```powershell
docker compose run meld_graph pytest -s | tee -FilePath ./pytest_errors.log
```
:::
::::
You will find pytest_errors.log in the folder where you launched the command.
You can test that the pipeline works with your GPU by changing `count` to `all` in the `compose.yml` file. The `deploy` section should look like this to enable GPUs:
```yaml
deploy:
  resources:
    reservations:
      devices:
        - capabilities: [gpu]
          count: all
```
To disable GPUs, change it back to 0.
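If you toggle this often, a one-liner with GNU `sed` can flip the value. The sketch below demonstrates it on a throwaway copy so your real `compose.yml` stays untouched; adapt the path for your setup:

```shell
# Demonstrate flipping the GPU device count with sed, on a throwaway copy.
printf 'devices:\n  - capabilities: [gpu]\n    count: 0\n' > /tmp/compose_demo.yml
sed -i 's/count: 0/count: all/' /tmp/compose_demo.yml   # enable GPUs
grep 'count:' /tmp/compose_demo.yml                     # -> count: all
```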
Please see our FAQ page for common installation problems and questions.
If you encounter any errors, please contact the MELD team for support at meld.study@gmail.com