diff --git a/README.md b/README.md index 5c00cbf6c..a0ba4916e 100644 --- a/README.md +++ b/README.md @@ -5,10 +5,9 @@ This is the repository for the [MeVisLab Tutorials and Examples GitHub pages](ht ## Configuration ### Local Deployment -* Checkout the code +* Check out the code * Install _extended_ hugo from the [Hugo Website](https://gohugo.io/) -* Install npm e.g. from [npmjs Website](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) -* Change to the *mevislab.github.io* folder and run `npm install` -* Stay in this folder and run `hugo server -d public --baseURL //localhost/examples/` (the `/examples/` path is not needed, - but it helps to find problems that might appear on the published website) -* Open the given URL in your favorite browser +* Install npm, e.g., from [npmjs Website](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) +* Change to the *mevislab.github.io* folder and run `npm install` +* Stay in this folder and run `hugo server -d public --baseURL //localhost/examples/` (The `/examples/` path is not strictly necessary locally, but it helps to find problems that might appear on the production website.) +* Open the given URL in your favorite browser diff --git a/mevislab.github.io/content/about/about.md b/mevislab.github.io/content/about/about.md index 1ed31f16d..b99e66ffe 100644 --- a/mevislab.github.io/content/about/about.md +++ b/mevislab.github.io/content/about/about.md @@ -5,6 +5,7 @@ draft: false status: "OK" tags: ["Symbols", "Glossary", "Overview"] --- + ## Symbols We embedded three symbols, referencing additional info, tasks, and warnings: {{}} diff --git a/mevislab.github.io/content/contact.md b/mevislab.github.io/content/contact.md index e70e8fdb8..de7bda0e6 100644 --- a/mevislab.github.io/content/contact.md +++ b/mevislab.github.io/content/contact.md @@ -3,6 +3,7 @@ title: "Contact" date: 2022-06-15T08:54:53+02:00 draft: false --- + ### Feedback is Valuable and Always Appreciated! #### MeVisLab Licensing diff --git a/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md b/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md index d6cea488e..4f5bfc392 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/contour_filter/index.md @@ -11,7 +11,7 @@ This example shows how to create a contour filter. Images are loaded via `ImageLoad` module and visualized unchanged in a `View2D` module *View2D1*. Additionally, the images are modified by a local macro module `Filter` and shown in another `View2D` viewer *View2D*. -In order to display the same slice (unchanged and changed), the module `SyncFloat` is used to synchronize the field value *startSlice* in both viewers. The `SyncFloat` module duplicates the value *Float1* to the field *Float2*. +In order to display the same slice (unchanged and changed), the module `SyncFloat` is used to synchronize the field value *startSlice* in both viewers. The `SyncFloat` module duplicates the value *Float1* to the field *Float2* if it differs by *Epsilon*. 
![Screenshot](examples/basic_mechanisms/contour_filter/image.png) diff --git a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md index b94f94556..aa3f30bbd 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example1/index.md @@ -1,11 +1,11 @@ --- layout: post -title: "Panel for the contour filter" +title: "Panel for the Contour Filter" category: "basic_mechanisms" --- # Example 1: Panel for the Contour Filter -This example contains a whole package structure. Inside you can find the example contour filter for which a panel was created. +This example contains an entire package structure. Inside, you can find the example contour filter for which a panel was created. ## Summary A new macro module `Filter` has been created. Initially, macro modules do not provide an own panel containing user interface elements such as buttons. The *Automatic Panel* is shown on double-clicking the module providing the name of the module. diff --git a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md index 6c143a0cf..39ee2a25f 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/macro_modules_and_module_interaction/example2/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Python scripting" +title: "Python Scripting" category: "basic_mechanisms" --- @@ -10,7 +10,7 @@ This example shows how to create module interactions via Python scripting. ## Summary A new macro module `IsoCSOs` is created providing two viewers in its internal network, `View2D` and `SoExaminerViewer`. Both viewers are included in the panel of the module. -To showcase how Python functions can be implemented in MeVisLab and called from within a module, additional buttons to browse directories and create contours via the `CSOIsoGenerator` are added. Lastly, a field listener is implemented reacting to field changes by colorizing contours when the user hovers over them with the mouse. +To showcase how Python functions can be implemented in MeVisLab and called from within a module, additional buttons to browse directories and create contours via the `CSOIsoGenerator` are added. Lastly, a field listener is implemented that reacts to field changes by colorizing contours when the user hovers over them with the mouse. 
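Since the summary above only describes the field listener in prose, here is a minimal, hypothetical sketch of how such a listener is commonly wired up in a MeVisLab macro module. It is not the shipped `IsoCSOs` code; the module, field, and function names are placeholders, and it assumes the usual pattern of declaring a `FieldListener` in the macro's `.script` file that calls a Python function in the module context (where `ctx` is provided by MeVisLab).

```python
# Hypothetical sketch -- not the actual IsoCSOs implementation.
# Assumed .script hookup (names are placeholders):
#   Commands {
#     source        = $(LOCAL)/IsoCSOs.py
#     FieldListener csoIdUnderMouse { command = onCsoUnderMouse }
#   }

def onCsoUnderMouse(field):
    # The listened field is assumed to carry the id of the CSO currently
    # under the mouse cursor (0 meaning "no CSO hovered").
    hovering = int(field.value) != 0
    # Colorize by switching a color field on a visualization settings module;
    # the instance and field names below are assumptions about the network.
    color = "1 0 0" if hovering else "1 1 0"
    ctx.field("SoCSOVisualizationSettings.pathPointColor").setStringValue(color)
```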
![Screenshot](examples/basic_mechanisms/macro_modules_and_module_interaction/example2/image2.png) diff --git a/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md b/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md index fab2f837d..870902907 100644 --- a/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md +++ b/mevislab.github.io/content/examples/basic_mechanisms/viewer_application/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Creating a simple application" +title: "Creating a Simple Application" category: "basic_mechanisms" --- diff --git a/mevislab.github.io/content/examples/data_objects/contours/example1/index.md b/mevislab.github.io/content/examples/data_objects/contours/example1/index.md index b10a3d2e1..10754a876 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example1/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example1/index.md @@ -35,9 +35,9 @@ In this example, contours are created and colors and styles of these CSOs are cu ![Screenshot](examples/data_objects/contours/example1/image.png) ## Summary -+ Contours are stored as their own abstract data type called Contour Segmentation Objects (often abbreviated to *CSO*). -+ The `SoCSO\*Editor` module group contains several useful modules to create, interact with or modify CSOs. -+ Created CSOs are temporarily stored and can be managed using the `CSOManager`. +* Contours are stored as their own abstract data type called Contour Segmentation Objects (often abbreviated to *CSO*). +* The `SoCSO\*Editor` module group contains several useful modules to create, interact with, or modify CSOs. +* Created CSOs are temporarily stored and can be managed using the `CSOManager`. # Download The example network can be downloaded [here](examples/data_objects/contours/example1/ContourExample1.mlab) diff --git a/mevislab.github.io/content/examples/data_objects/contours/example2/index.md b/mevislab.github.io/content/examples/data_objects/contours/example2/index.md index 4203a33ac..180e1c675 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example2/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example2/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Contour interpolation" +title: "Contour Interpolation" category: "data_objects" --- @@ -8,7 +8,7 @@ category: "data_objects" This example shows how to interpolate CSOs across slices. ## Summary -In this example, semi-automatic countours are created using the `SoCSOLiveWireEditor` module and their visualization is modified using the `SoCSOVisualizationSettings` module. +In this example, semiautomatic contours are created using the `SoCSOLiveWireEditor` module and their visualization is modified using the `SoCSOVisualizationSettings` module. Additional contours between the manually created ones are generated by the `CSOSliceInterpolator` and added to the `CSOManager`. Different groups of contours are created for the left and right lobe of the lung and colored respectively. 
diff --git a/mevislab.github.io/content/examples/data_objects/contours/example3/index.md b/mevislab.github.io/content/examples/data_objects/contours/example3/index.md index 29f253617..030b6f8f9 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example3/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example3/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "2D and 3D visualization of contours" +title: "2D and 3D Visualization of Contours" category: "data_objects" --- diff --git a/mevislab.github.io/content/examples/data_objects/contours/example4/index.md b/mevislab.github.io/content/examples/data_objects/contours/example4/index.md index 6ff2da024..cea3e1e4b 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example4/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example4/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Annotation of images" +title: "Annotation of Images" category: "data_objects" --- @@ -8,7 +8,7 @@ category: "data_objects" This example shows how to add annotations to an image. ## Summary -In this example, the network of **Contour Example 3** is extended so that the volume of the 3D mask generated by the `VoxelizeCSO` module is calculated. The `CalculateVolume` module counts the number of voxels in the given mask and returns the correct volume in ml. The calculated volume will be used for a custom `SoView2DAnnotation` displayed in the `View2D`. +In this example, the network of **Contour Example 3** is extended, so that the volume of the 3D mask generated by the `VoxelizeCSO` module is calculated. The `CalculateVolume` module counts the number of voxels in the given mask and returns the correct volume in ml. The calculated volume will be used for a custom `SoView2DAnnotation` displayed in the `View2D`. ![Screenshot](examples/data_objects/contours/example4/image.png) diff --git a/mevislab.github.io/content/examples/data_objects/contours/example5/index.md b/mevislab.github.io/content/examples/data_objects/contours/example5/index.md index dae91cd32..29bfd52c8 100644 --- a/mevislab.github.io/content/examples/data_objects/contours/example5/index.md +++ b/mevislab.github.io/content/examples/data_objects/contours/example5/index.md @@ -1,16 +1,16 @@ --- layout: post -title: "Contours and ghosting" +title: "Contours and Ghosting" category: "data_objects" --- # Contour Example 5: Contours and Ghosting -This image shows how to automatically create CSOs based on isovalues. In addition, the visualization of CSOs of previous and subsequent slices is shown. +This image shows how to automatically create CSOs based on isovalues. In addition, the visualization of CSOs on previous and subsequent slices is shown. ## Summary -In this example, the `CSOIsoGenerator` is used to generate contours based on a given isovalue of the image. Contours are generated in the image where the given isovalue is close to the one configured. These contours are stored in the `CSOManager` and ghosting is activated in the `SoCSOVisualizationSettings`. +In this example, the `CSOIsoGenerator` is used to generate contours based on a given isovalue out of the image. Contours are generated in the image where the given isovalue is close to the one configured. These contours are stored in the `CSOManager` and ghosting is activated in the `SoCSOVisualizationSettings`. -Ghosting means not only showing contours available on the currently visible slice but also contours of the neighbouring slices with increasing transparency. 
+"Ghosting" means not only showing contours available on the currently visible slice but also contours on the neighboring slices with increasing transparency. The contours are also displayed in a three-dimensional `SoExaminerViewer` by using the `SoCSO3DRenderer`. diff --git a/mevislab.github.io/content/examples/data_objects/curves/example1/index.md b/mevislab.github.io/content/examples/data_objects/curves/example1/index.md index 7e89a52a6..fac8f47db 100644 --- a/mevislab.github.io/content/examples/data_objects/curves/example1/index.md +++ b/mevislab.github.io/content/examples/data_objects/curves/example1/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Drawing curves" +title: "Drawing Curves" category: "data_objects" --- diff --git a/mevislab.github.io/content/examples/data_objects/markers/example1/index.md b/mevislab.github.io/content/examples/data_objects/markers/example1/index.md index 1641fe1cc..83ee2e3e9 100644 --- a/mevislab.github.io/content/examples/data_objects/markers/example1/index.md +++ b/mevislab.github.io/content/examples/data_objects/markers/example1/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Distance between markers" +title: "Distance Between Markers" category: "data_objects" --- diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md index f01acebf2..e6388808b 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example2/index.md @@ -1,10 +1,10 @@ --- layout: post -title: "Processing and modification of WEMs" +title: "Processing and Modification of WEMs" category: "data_objects" --- -# Surface Example 2: Processing and Modification of WEMs +# Surface Example 2: Processing and Modification of WEMs This example shows how to process and modify WEMs using the modules `WEMModify`, `WEMSmooth`, and `WEMSurfaceDistance`. ![Screenshot](examples/data_objects/surface_objects/example2/DO7_03.png) diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md index f75fc5513..f72a0e4b8 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example3/index.md @@ -1,10 +1,11 @@ --- layout: post -title: "Apply transformations on a 3D WEM object via mouse interactions" +title: "Apply Transformations to a 3D WEM Object Via Mouse Interactions" category: "data_objects" --- -# Surface Example 3: Interactions with WEM +# Surface Example 3: Interactions With WEM + ## Scale, Rotate, and Move a WEM in a Scene In this example, we are using a `SoTransformerDragger` module to apply transformations on a 3D WEM object via mouse interactions. 
![Screenshot](examples/data_objects/surface_objects/example3/image.png) diff --git a/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md b/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md index 72b67c83c..d169e8863 100644 --- a/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md +++ b/mevislab.github.io/content/examples/data_objects/surface_objects/example4/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Interactively moving WEM" +title: "Interactively Moving WEM" category: "data_objects" --- diff --git a/mevislab.github.io/content/examples/howto.md b/mevislab.github.io/content/examples/howto.md index c4eddf11e..99a205019 100644 --- a/mevislab.github.io/content/examples/howto.md +++ b/mevislab.github.io/content/examples/howto.md @@ -1,5 +1,5 @@ --- -title: "Using provided examples" +title: "Using Provided Examples" date: 2022-06-15T08:56:33+02:00 draft: false status: "OK" @@ -9,14 +9,14 @@ menu: weight: 649 parent: "examples" --- -### Structure +### Structure Each tutorial chapter was used as an umbrella theme to structure related examples that are linked in a list. After clicking any of the linked examples, you will be forwarded to a short description of the feature and have the option to download the resource that produces your desired effect. The provided files are usually either *.mlab* files or *.zip* archives. You will find a short tutorial on how to add those files into your MeVisLab application to work with them below. -### MeVisLab (\*.mlab) files +### MeVisLab (*.mlab*) Files MeVisLab files are networks stored as *.mlab* files.
{{}} @@ -25,7 +25,7 @@ Double-clicking the left mouse button within your MeVisLab workspace works as a Files can also be opened using the menu option {{< menuitem "File" "Open">}}. -### Archives (\*.zip files) +### Archive (*.zip*) Files Archives mostly contain macro modules.
To use those macro modules, you will need to know how to handle user packages. @@ -49,7 +49,7 @@ Feel free to create certain directories if they do not exist yet, but make sure Continuing on your MeVisLab workspace: You might need to reload the module cache after adding macro modules out of *.zip* archives for them to be displayed and ready to be used. To do so, open {{< menuitem "Extras" "Reload Module Database (Clear Cache)" >}}. -### Python (\*.py) or Script (\*.script) Files +### Python (*.py*) or Script (*.script*) Files In the rare case that a *.py* or *.script* file is provided, make sure to firstly follow the tutorials related to macro modules and test cases. {{}} diff --git a/mevislab.github.io/content/examples/image_processing/example2/index.md b/mevislab.github.io/content/examples/image_processing/example2/index.md index 1f5ac6916..88c8e2330 100644 --- a/mevislab.github.io/content/examples/image_processing/example2/index.md +++ b/mevislab.github.io/content/examples/image_processing/example2/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Masking images" +title: "Masking Images" category: "image_processing" --- @@ -8,7 +8,7 @@ category: "image_processing" In this example, we create a simple mask on an image, so that background voxels are not affected by changes of the window/level values. ## Summary -We are loading images by using the `LocalImage` module and show them in a `SynchroView2D`. The same image is shown in the right viewer of the `SynchroView2D` but with a `Threshold` based `Mask`. +We are loading images by using the `LocalImage` module and show them in a `SynchroView2D`. The same image is shown in the right viewer of the `SynchroView2D` but with a `Threshold`-based `Mask`. ![Screenshot](examples/image_processing/example2/image.png) diff --git a/mevislab.github.io/content/examples/image_processing/example4/index.md b/mevislab.github.io/content/examples/image_processing/example4/index.md index 89aa6a307..b6686e3ea 100644 --- a/mevislab.github.io/content/examples/image_processing/example4/index.md +++ b/mevislab.github.io/content/examples/image_processing/example4/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Subtract 3D objects" +title: "Subtract 3D Objects" category: "image_processing" --- @@ -8,7 +8,7 @@ category: "image_processing" In this example, we subtract a sphere from another WEM. ## Summary -We are loading images by using the `LocalImage` module and render them as a 3D scene in a `SoExaminerViewer`. We also add a sphere that is then subtracted from the original image. +We are loading images by using the `LocalImage` module and render them as a 3D scene in a `SoExaminerViewer`. We also add a sphere that is then subtracted from the original surface. ![Screenshot](examples/image_processing/example4/image.png) diff --git a/mevislab.github.io/content/examples/open_inventor/example1/index.md b/mevislab.github.io/content/examples/open_inventor/example1/index.md index 649fd41fd..8d9a84bbf 100644 --- a/mevislab.github.io/content/examples/open_inventor/example1/index.md +++ b/mevislab.github.io/content/examples/open_inventor/example1/index.md @@ -10,7 +10,7 @@ In this example, a simple Open Inventor scene is created. The Open Inventor scen ## Summary A `SoExaminerViewer` is used to render Open Inventor scenes in 3D. The `SoBackground` module defines the background of the whole scene. -Three 3D objects are created (`SoCone`, `SoSphere`, and `SoCube`) having a defined `SoMaterial` module for setting the *DiffuseColor* of the object. 
The cube and the cone are also transformed by a `SoTransform` module so that they are located next to the centered sphere. +Three 3D objects are created (`SoCone`, `SoSphere`, and `SoCube`) having a defined `SoMaterial` module for setting the *DiffuseColor* of the object. The cube and the cone are also transformed by a `SoTransform` module, so that they are located next to the centered sphere. In the end, all three objects including their materials and transformations are added to the `SoExaminerViewer` by a `SoGroup`. diff --git a/mevislab.github.io/content/examples/open_inventor/example2/index.md b/mevislab.github.io/content/examples/open_inventor/example2/index.md index 136957af3..4c81d6b60 100644 --- a/mevislab.github.io/content/examples/open_inventor/example2/index.md +++ b/mevislab.github.io/content/examples/open_inventor/example2/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Mouse interactions in an Open Inventor scene" +title: "Mouse Interactions in an Open Inventor Scene" category: "open_inventor" --- diff --git a/mevislab.github.io/content/examples/open_inventor/example4/index.md b/mevislab.github.io/content/examples/open_inventor/example4/index.md index f8fdf2ace..82ac6f097 100644 --- a/mevislab.github.io/content/examples/open_inventor/example4/index.md +++ b/mevislab.github.io/content/examples/open_inventor/example4/index.md @@ -1,10 +1,10 @@ --- layout: post -title: "Camera interaction with collision detection" +title: "Camera Interaction With Collision Detection" category: "open_inventor" --- -# Open Inventor Example 4: Camera Interaction with Collision Detection +# Open Inventor Example 4: Camera Interaction With Collision Detection This example shows how to implement a camera flight using keyboard shortcuts. Collisions with anatomical structures are detected and the flight stops. In addition to that, the camera object and direction is rendered in another viewer. This example has been taken from the [MeVisLab forum](https://forum.mevislab.de/index.php?topic=3947.0). diff --git a/mevislab.github.io/content/examples/testing/Example1/index.md b/mevislab.github.io/content/examples/testing/Example1/index.md index e20ef1e47..cd25ba467 100644 --- a/mevislab.github.io/content/examples/testing/Example1/index.md +++ b/mevislab.github.io/content/examples/testing/Example1/index.md @@ -1,5 +1,5 @@ --- -title: "Writing a simple test case in MeVisLab" +title: "Writing a Simple Test Case in MeVisLab" date: 2022-06-15T08:56:33+02:00 category: "testing" --- diff --git a/mevislab.github.io/content/examples/testing/example3/index.md b/mevislab.github.io/content/examples/testing/example3/index.md index 3ede87d2a..9655fb97e 100644 --- a/mevislab.github.io/content/examples/testing/example3/index.md +++ b/mevislab.github.io/content/examples/testing/example3/index.md @@ -1,10 +1,10 @@ --- -title: "Iterative tests in MeVisLab with Screenshots" +title: "Iterative Tests in MeVisLab With Screenshots" date: 2022-06-15T08:56:33+02:00 category: "testing" --- -# Testing Example 3: Iterative Tests in MeVisLab with Screenshots +# Testing Example 3: Iterative Tests in MeVisLab With Screenshots In this example you will learn how to write iterative tests in MeVisLab. In addition to that, we create a screenshot of a viewer and add the image to the test report. 
# Download diff --git a/mevislab.github.io/content/examples/thirdparty.md b/mevislab.github.io/content/examples/thirdparty.md index 45b150477..96e4991b3 100644 --- a/mevislab.github.io/content/examples/thirdparty.md +++ b/mevislab.github.io/content/examples/thirdparty.md @@ -1,5 +1,5 @@ --- -title: "ThirdParty Examples" +title: "Third-party Examples" date: 2022-06-15T08:56:33+02:00 draft: false status: "OK" @@ -10,5 +10,5 @@ menu: parent: "examples" --- -## ThirdParty Examples: +## Third-party Examples: {{< childpages >}} diff --git a/mevislab.github.io/content/examples/thirdparty/example1/index.md b/mevislab.github.io/content/examples/thirdparty/example1/index.md index 4f71ceeea..1b252f060 100644 --- a/mevislab.github.io/content/examples/thirdparty/example1/index.md +++ b/mevislab.github.io/content/examples/thirdparty/example1/index.md @@ -1,10 +1,10 @@ --- -title: "OpenCV Webcam access" +title: "OpenCV Webcam Access" date: 2022-06-15T08:56:33+02:00 category: "thirdparty" --- -# ThirdParty Example 1: OpenCV Webcam Access +# Third-party Example 1: OpenCV Webcam Access This Python file shows how to access the webcam via OpenCV and use the video via `PythonImage` module in MeVisLab. # Download diff --git a/mevislab.github.io/content/examples/thirdparty/example2/index.md b/mevislab.github.io/content/examples/thirdparty/example2/index.md index 9f384b50c..a5ad10ca5 100644 --- a/mevislab.github.io/content/examples/thirdparty/example2/index.md +++ b/mevislab.github.io/content/examples/thirdparty/example2/index.md @@ -1,10 +1,10 @@ --- -title: "Face detection in OpenCV" +title: "Face Detection in OpenCV" date: 2022-06-15T08:56:33+02:00 category: "thirdparty" --- -# ThirdParty Example 2: Face Detection in OpenCV +# Third-party Example 2: Face Detection in OpenCV This Python file shows how to access the webcam and detect faces in the video stream via OpenCV. # Download diff --git a/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md b/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md index f899924df..51998f30e 100644 --- a/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md +++ b/mevislab.github.io/content/examples/thirdparty/pytorch1/index.md @@ -1,11 +1,11 @@ --- -title: "PyTorch segmentation" +title: "PyTorch Segmentation" date: 2022-06-15T08:56:33+02:00 category: "thirdparty" --- -# ThirdParty Example 5: Segmentation in Webcam Stream by using PyTorch -This macro module segments a person shown in a webcam stream by using a pre-trained network from PyTorch (torchvision). +# Third-party Example 5: Segmentation in Webcam Stream by using PyTorch +This macro module segments a person shown in a webcam stream by using a pretrained network from PyTorch (torchvision). ![Screenshot](images/tutorials/thirdparty/pytorch_example3_10.png) diff --git a/mevislab.github.io/content/examples/visualization/example1/index.md b/mevislab.github.io/content/examples/visualization/example1/index.md index 43fd8ebbd..4cdb94be3 100644 --- a/mevislab.github.io/content/examples/visualization/example1/index.md +++ b/mevislab.github.io/content/examples/visualization/example1/index.md @@ -1,11 +1,11 @@ --- layout: post -title: "Synchronous view of two images" +title: "Synchronous View of Two Images" category: "visualization" --- # Visualization Example 1: Synchronous View of Two Images -This very simple example shows how to load an image and apply a basic `Convolution` filter to the image. 
The image with and without filter is shown in a Viewer and scrolling is synchronized so that the same slice is shown for both images. +This simple example shows how to load an image and apply a basic `Convolution` filter to the image. The image with and without filter is shown in a viewer and scrolling is synchronized, so that the same slice is shown for both images. ![Screenshot](examples/visualization/example1/image.png) diff --git a/mevislab.github.io/content/examples/visualization/example2/index.md b/mevislab.github.io/content/examples/visualization/example2/index.md index af8bbd5bc..bd293c06d 100644 --- a/mevislab.github.io/content/examples/visualization/example2/index.md +++ b/mevislab.github.io/content/examples/visualization/example2/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Creating a magnifier" +title: "Creating a Magnifier" category: "visualization" --- diff --git a/mevislab.github.io/content/examples/visualization/example3/index.md b/mevislab.github.io/content/examples/visualization/example3/index.md index a51f1a5da..fb717ee4e 100644 --- a/mevislab.github.io/content/examples/visualization/example3/index.md +++ b/mevislab.github.io/content/examples/visualization/example3/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Image overlays" +title: "Image Overlays" category: "visualization" --- diff --git a/mevislab.github.io/content/examples/visualization/example4/index.md b/mevislab.github.io/content/examples/visualization/example4/index.md index 6ba607646..dddacfc30 100644 --- a/mevislab.github.io/content/examples/visualization/example4/index.md +++ b/mevislab.github.io/content/examples/visualization/example4/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Display images converted to Open Inventor scene objects" +title: "Display Images Converted to Open Inventor Scene Objects" category: "visualization" --- diff --git a/mevislab.github.io/content/examples/visualization/example5/index.md b/mevislab.github.io/content/examples/visualization/example5/index.md index 68e85669b..c2418db88 100644 --- a/mevislab.github.io/content/examples/visualization/example5/index.md +++ b/mevislab.github.io/content/examples/visualization/example5/index.md @@ -1,6 +1,6 @@ --- layout: post -title: "Volume rendering and interactions" +title: "Volume Rendering and Interactions" category: "visualization" --- diff --git a/mevislab.github.io/content/glossary.md b/mevislab.github.io/content/glossary.md index db2056f8e..c4d6a75ce 100644 --- a/mevislab.github.io/content/glossary.md +++ b/mevislab.github.io/content/glossary.md @@ -4,5 +4,6 @@ date: 2023-04-19T08:56:33+02:00 draft: false status: "OK" --- + ## Glossary {{}} \ No newline at end of file diff --git a/mevislab.github.io/content/introduction/introduction.md b/mevislab.github.io/content/introduction/introduction.md index e861e08ff..4f9926906 100644 --- a/mevislab.github.io/content/introduction/introduction.md +++ b/mevislab.github.io/content/introduction/introduction.md @@ -8,13 +8,12 @@ tags: ["Tutorial", "Introduction", "Glossary", "Modules", "ML Module", "Filetype menu: main: identifier: "tutorial_introduction" - title: "Overview of MeVisLab Tutorials and general information about User Interface, modules, types of modules, searching for modules and Glossary including filetypes." 
+ title: "Overview of MeVisLab Tutorials and General Information About User Interface, Modules, Types of Modules, Searching for Modules, and Glossary Including Filetypes" weight: 310 parent: "tutorials" --- ## Tutorial Introduction {#tutorial_introduction} - Welcome to [MeVisLab](glossary/#mevislab)! More than 20 years of experience and the continuous implementation of adaptations made MeVisLab one of @@ -32,17 +31,16 @@ for visual programming and the advanced text editor [*MATE*](glossary/#mevislab- scripting, providing code completion, debugging, profiling, and automated test development as well as execution. -You can re-use thousands of pre-defined [*Modules*](glossary/#module) for image processing +You can reuse thousands of predefined [*Modules*](glossary/#module) for image processing (2D up to 6D images) and visualization, combine them, or even build your own. A quick introduction on available modules and [example networks](glossary/#example-network) will be given in the following tutorials. ### Structure and Usage of Provided Tutorials - This tutorial is a hands-on training. You will learn about basic mechanics and features of MeVisLab. -Starting with this introduction, we will be leading you through all relevant aspects of the user interface, +Starting with this introduction, we will lead you through all relevant aspects of the user interface, commonly used functionalities, and provide you with all the basic knowledge you need to build your own web applications. Additional information is accessible through embedded links, forwarding you to a related glossary entry or tutorial and shortcuts, advice and hints will be highlighted as shown [here](about/about/). @@ -52,7 +50,6 @@ You find them at the end of the tutorial or, also sorted by chapters, under the The examples under the designated menu entry are more suitable if you already have a little experience and rather search for inspiration than for explanations. ### Starting MeVisLab for the First Time - Right after installation of MeVisLab, you will find some new icons on your Desktop (if selected during setup). ![MeVisLab Desktop Icons](images/tutorials/basicmechanics/WindowsIcons.png "MeVisLab Desktop Icons (Windows)") @@ -64,19 +61,16 @@ Maybe postpone the usage of the *QuickStart* icons as they can cause created pac {{}} ### MeVisLab IDE User Interface {#tutorial_ide} - -First, start the MeVisLab IDE. After the Welcome Screen, the standard user interface opens. +First, start the MeVisLab IDE. After showing a Welcome Screen, the standard user interface opens. ![MeVisLab IDE User Interface](images/tutorials/introduction/IDE1.png "MeVisLab IDE User Interface") #### Workspace - By default, MeVisLab starts with an empty [workspace](glossary/#workspace). -This is where you will be developing and editing networks. Essentially, networks form the base of all processing and visualization pipelines, so the workspace is where the visual programming is done. +This is where you will develop and edit networks. Essentially, networks form the base of all processing and visualization pipelines, so the workspace is where the visual programming is done. #### Views Area - The standard [Views Area](glossary/#views-area) contains the [Output Inspector and Module Inspector](./tutorials/basicmechanisms#The_Output_Inspector_and_the_Module_Inspector "Output Inspector and Module Inspector"). With the help of the Output Inspector, you can visualize the modules output. 
{{}} @@ -84,28 +78,25 @@ Further information on each module, e.g., about [module parameters](glossary/#fi {{}} #### Debug Output - Debugging information can be found using the [Debug Output](glossary/#debug-output). The MeVisLab IDE and its layout are completely configurable. You can rearrange the items and add new views via {{< menuitem "Main Menu" "View" "Views" >}}. ### File Types Used in, for, and With MeVisLab - {{< bootstrap-table table_class="table table-striped" >}} |
Extension
| Description | | --- | --- | -| `.mlab` | Network file, includes all information about the networks modules, their settings, their connections, and module groups. Networks developed using the `MeVisLab SDK` are stored as `.mlab` files and can only be opened having a valid SDK license. | -| `.def` | Module definition file, necessary for a module to be added to the common MeVisLab module database. May also include all MDL script parts (if they are not sourced out to the `.script` file). | +| `.mlab` | Network file, includes all information about the networks modules, their settings, their connections, and module groups. Networks developed using the `MeVisLab SDK` are stored as *.mlab* files and can only be opened having a valid SDK license. | +| `.def` | Module definition file, necessary for a module to be added to the common MeVisLab module database. May also include all MDL script parts (if they are not sourced out to the *.script* file). | | `.script` | `MDL` script file, typically includes the user interface definition of panels. See [Chapter GUI Development](./tutorials/basicmechanisms/macromodules/guidesign#Example_Paneldesign "GUI Development") for an example on GUI programming. | | `.mlimage` | MeVisLab internal image format for 6D images saved with all DICOM tags, lossless compression, and in all data types. | -| `.mhelp` | File with descriptions of all fields and possible use-cases of a module, edit- and creatable by using `MATE`. See [Help files](./tutorials/basicmechanisms/macromodules/helpfiles "Help files") for details. | +| `.mhelp` | File with descriptions of all fields and possible use cases of a module, edit- and creatable by using `MATE`. See [Help files](./tutorials/basicmechanisms/macromodules/helpfiles "Help files") for details. | | `.py` | Python file, used for scripting in macro modules. See [Python scripting](./tutorials/basicmechanisms/macromodules/pythonscripting#TutorialPythonScripting "Python scripting") for an example on macro programming. | | `.dcm` | DCM part of the imported DICOM file, see [Importing DICOM Data](./tutorials/basicmechanisms/dataimport#DICOMImport "Importing DICOM Data"). | {{< /bootstrap-table >}} ### Module Types {#Module_Types} - {{}} [Modules](glossary/#module) are the basic entities the MeVisLab concept is built upon.
They provide the functionalities to process, display, and interact with images. @@ -122,7 +113,6 @@ The three existing module types (ML, [Open Inventor](glossary/#open-inventor), a {{< /bootstrap-table >}} ### Invalid Modules - If a module is invalid, it is displayed in bright red. This might happen if the module itself is not available for your system. {{< bootstrap-table table_class="table table-striped" >}} @@ -140,25 +130,25 @@ Once the debug console is cleared, the warning and error indicators next to the module are also cleared. {{
}} -Informational messages are indicated in a similar matter on the same spot, but in a subtle gray color. +Informational messages are indicated in a similar manner on the same spot, but in a subtle gray color. ### Module Interactions Through the Context Menu Each module has a context menu, providing the following options: ![Context Menu of a module](images/tutorials/introduction/ModuleContextMenu.png "Context Menu of a module") -* **Show Internal Network:** [Macro modules](glossary/#macro-module) provide an entry to open the internal network. You can see what happens inside a macro module. The internal network may also contain other macro modules. Changes in the internal network are applied to the currently running instance of your module but not saved permanently. -* **Show Window:** If a module does not provide a User Interface, you will see the automatic panel, showing the module's name. Modules may additionally have one or more windows that can be opened. You can also open the Scripting Console of a module to integrate Python. +* **Show Internal Network:** [Macro modules](glossary/#macro-module) provide an entry to open the internal network. You can see what happens inside a macro module. The internal network may also contain other macro modules. +* **Show Window:** If a module does not provide a user interface, you will see the automatic panel showing the module's name. Modules may additionally have one or more windows that can be opened. You can also open the Scripting Console of a module to integrate Python. * **Instance Name:** You can edit or copy the instance name. Renaming can be useful if the same module appears more than once in one network and/or if you want to access and distinguish the modules in your Python script. * **Help:** The menu entry Help provides access to the Module Help pages and to an example network where the module is used. This example network often helps to understand which additional modules can be added to create your desired effect. * **Extras:** Automated tests written for the specific module can be executed here. You can also run this module in a separate process. -* **Reload Definition:** In the case you are currently working on a module, you may need to reload the definition so that your changes are applied on the module (for example, attached Python scripts). -* **Related Files:** Related files allows a quick access to the modules *.script* or *.py* files. The files are automatically opened in [MATE](glossary/#mevislab-mate) for editing. Changes to the *.mlab* file are applied permanently for your module. +* **Reload Definition:** In the case you are currently working on a module, you may need to reload the definition, so that your changes are applied on the module (for example, attached Python scripts). +* **Related Files:** Related files allows a quick access to the modules *.script* or *.py* files. The files are automatically opened in [MATE](glossary/#mevislab-mate) for editing. * **Show Enclosing Folder:** This entry opens the directory where your module is stored. * **Grouping:** Multiple modules can be clustered and the groups can be named. This adds clarity to the structure of your network. In addition to that, grouped modules can be converted to local or global macro modules easily. ### Input and Output Connectors {#Module_Connectors} -As the creation of a network requires connected modules, each module has input and output connectors, located on their top and bottom side. 
Data is transmitted from the output connector on the top side of one module to the input connector on another module's bottom side. +As the creation of a network requires connected modules, each module has input and output connectors, located on their top and bottom side. Data is generally transmitted from the output connector on the top side of one module to the input connector on another module's bottom side. Once again, three types can be distinguished: @@ -174,7 +164,7 @@ Once again, three types can be distinguished: A connection can be established by dragging one module close to the other. {{
}} -Some modules even contain hidden connectors in addition to the ones displayed on the module's surface. Click on the workspace and press {{< keyboard "SPACE" >}} to see the hidden connectors as well as the internal networks of each module. You can now also use the hidden connectors for building connections. +Some modules even contain hidden connectors in addition to the ones displayed on the module's surface. Click on the workspace and press {{< keyboard "SPACE" >}} to see the hidden connectors as well as internal networks of each macro module. You can now also use the hidden connectors for building connections. For more information about connectors and different types of connections, click {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch03s03.html" "here" >}}.
If you want to know more about establishing, removing, moving, and replacing connections, have a look at {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch03s04.html" "this." >}} @@ -187,20 +177,18 @@ An exemplary use case for a parameter connection is synchronization. Have a look {{
}} ### Macro Modules {#Macro_Modules} - {{}} The creation of macros is explained in more detail in [Tutorial Chapter I - Example 2.2](tutorials/basicmechanisms/macromodules/globalmacromodules) {{}} -### Adding Modules to your Workspace {#Searching_and_Adding_Modules} - +### Adding Modules to Your Workspace {#Searching_and_Adding_Modules} There are several ways to add a module to your current network: -- via the menu bar entry {{< menuitem "Modules" >}} -- via {{< menuitem "Quick Search" >}} -- via the View Module Search -- via the View Module Browser -- via copy and paste from another network -- by scripting, see the {{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/index.html" "Scripting Reference" >}} +* via the menu bar entry {{< menuitem "Modules" >}} +* via {{< menuitem "Quick Search" >}} +* via the View Module Search +* via the View Module Browser +* via copy and paste from another network +* by scripting, see the {{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/index.html" "Scripting Reference" >}} Both the menu entry{{< menuitem "Modules" >}} and the Module Browser display all available modules. The modules are sorted hierarchically by topic and name, as defined in the file `Genre.def`. diff --git a/mevislab.github.io/content/tutorials/basicmechanisms.md b/mevislab.github.io/content/tutorials/basicmechanisms.md index c73ffb130..b3ecdcb43 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms.md @@ -8,13 +8,13 @@ tags: ["Beginner", "Tutorial", "Macro", "Macro modules", "Local Macro"] menu: main: identifier: "basicmechanisms" - title: "Examples explaining the basic mechanisms of MeVisLab like using modules and connecting them to Networks for viewing images." + title: "Examples Explaining the Basic Mechanisms of MeVisLab Such as Using Modules and Connecting Them to Networks for Viewing Images" weight: 350 parent: "tutorials" --- -## Basic Mechanics of MeVisLab (Example: Building a Contour Filter) {#TutorialBasicMechanics} -In this chapter you will learn the basic mechanisms of the MeVisLab IDE. You will learn how to re-use existing modules to load and view data and you will build your first processing pipeline. +## Basic Mechanisms of MeVisLab (Example: Building a Contour Filter) {#TutorialBasicMechanics} +In this chapter you will learn the basic mechanisms of the MeVisLab IDE. You will learn how to reuse existing modules to load and view data, and you will build your first processing pipeline. {{< youtube "hRspMChITE4">}} @@ -25,7 +25,6 @@ Additional information on the basics of MeVisLab are explained {{< docuLinks "/R [//]: <> (MVL-651) ### Loading Data {#TutorialLoadingData} - First, we need to load the data we would like to work on, e.g., a CT scan. In MeVisLab, modules are used to perform their associated specific task: they are the basic entities you will be working with. Each module has a different functionality for processing, visualization, and interaction. Connecting modules enables the development of complex processing pipelines. You will get to know different types of modules throughout the course of this tutorial. Starting off, we will add the module `ImageLoad` to our network to load our data. The module can be found by typing its name into the search bar on the top-right corner and is added to your network by clicking it. 
@@ -45,8 +44,7 @@ For a more detailed description on loading DICOM images, see {{< docuLinks "/Res [//]: <> (MVL-651) -### The Output-Inspector and the Module Inspector {#The_Output_Inspector_and_the_Module_Inspector} - +### The Output Inspector and the Module Inspector {#The_Output_Inspector_and_the_Module_Inspector} To inspect and visualize the loaded data, we can use the Output Inspector located in the {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch04s09.html" "Views" >}} area. You can already interact with the image using the mouse wheel {{< mousebutton "middle" >}} and mouse buttons {{< mousebutton "left" >}} / {{< mousebutton "right" >}}. To preview the image, click on the triangle on the top side of the module `ImageLoad`, which offers the module's output. All module outputs can be found at the top side of the respective module. You can now inspect your image in 2D: @@ -55,9 +53,9 @@ You can now inspect your image in 2D: ![Output Inspector](images/tutorials/basicmechanics/BM_03.png "Output Inspector") -Your image does not look like this? One reason might be that the slice of the image you are looking at has no information. Click on the Output Inspector and scroll through the slices (This process is called "Slicing") by using the mouse wheel {{< mousebutton "middle" >}}. Still not seeing anything? Then try to adjust the contrast of the given image by keeping the right mouse button {{< mousebutton "right" >}} pressed while moving the mouse. +Your image does not look like this? One reason might be that the slice of the image you are looking at has no information. Click on the Output Inspector and scroll through the slices (this process is called "Slicing") by using the mouse wheel {{< mousebutton "middle" >}}. Still not seeing anything? Then, try to adjust the contrast of the given image by keeping the right mouse button {{< mousebutton "right" >}} pressed while moving the mouse. -You are not restricted to 2D. The Output Inspector offers a 3D View of most loaded images. Try to click on the 3D-tab located in the Output Inspector. The 3D display of the image can be rotated by left-clicking on the image and moving the courser around. The little cube in the lower right corner of the viewer shows the orientation of the image. +You are not restricted to 2D. The Output Inspector offers a 3D View of most loaded images. Try to click on the 3D tab located in the Output Inspector. The 3D display of the image can be rotated by left-clicking on the image and moving the courser around. The little cube in the lower right corner of the viewer shows the orientation of the image. {{}} * A = anterior, front @@ -68,15 +66,14 @@ You are not restricted to 2D. The Output Inspector offers a 3D View of most load * F = feet {{}} -Below the Output Inspector, you'll find the Module Inspector. The Module Inspector displays properties and parameters of the selected module. Parameters are stored in so called **Fields**. Using the Module Inspector you can examine different fields of your `ImageLoad` module. The module has, for example, the fields *filename* (the path, the loaded image is stored in), as well as *sizeX*, *sizeY* and *sizeZ* (the size of the loaded image). +Below the Output Inspector, you'll find the Module Inspector. The Module Inspector displays properties and parameters of the selected module. Parameters are stored in so called **Fields**. Using the Module Inspector, you can examine different fields of your `ImageLoad` module. 
The module has, for example, the fields *filename* (the path the loaded image is stored in), as well as *sizeX*, *sizeY*, and *sizeZ* (the extent of the loaded image). ![Module Inspector](images/tutorials/basicmechanics/BM_04.png "Module Inspector") ### Viewer {#TutorialViewer} - Instead of using the Output Inspector to inspect images, we'd suggest to add another viewer to the network. Search for the module `View2D` and add it to your workspace. Most modules have different connector options. Data is generally transmitted from the top side of a module to another modules bottom side. -The module `View2D` has one input connector for voxel images (triangle-shaped) and three other possible input connectors (Shaped like half-circles) on the bottom. The half-circle-shaped input connectors will be explained later on. Generally, module outputs can be connected to module inputs with the same symbol and thus transmit information and data between those modules. +The module `View2D` has one input connector for voxel images (triangle-shaped) and three other possible input connectors (shaped like half-circles) on the bottom. The half-circle-shaped input connectors will be explained later on. Generally, module outputs can be connected to module inputs with the same symbol and thus transmit information and data between those modules. ![2D Viewer](images/tutorials/basicmechanics/BM_05.png "2D Viewer") @@ -88,7 +85,7 @@ You can now display the loaded image in the newly added viewer module by connect 3. Check if the connection is well-defined (green line). -4. Release the mouse button on the input connector of your `View2D`-module to establish the connection. +4. Release the mouse button on the input connector of your `View2D` module to establish the connection. ![Establish connection](images/tutorials/basicmechanics/BM_06.png "Establish connection") @@ -107,7 +104,6 @@ Connecting, Disconnecting, Moving, and Replacing Connections is explained in mor [//]: <> (MVL-653) ### Image Processing {#TutorialImageProcessing} - An average kernel will be used to smooth the image as our next step will be to actually process our image. Add the `Convolution` module to your workspace and disconnect the `View2D` module from the `ImageLoad` module by clicking on the connection and pressing {{< keyboard "DEL" >}}. Now, you can build new connections from the module `ImageLoad` to the module `Convolution` and the `Convolution` module to `View2D`. ![Convolution Module](images/tutorials/basicmechanics/BM_08.png "Convolution Module") @@ -118,19 +114,18 @@ Open the panel of the `Convolution` module by double-clicking it. The panel allo The module `View2D` is now displaying the smoothed image. -To compare the processed and unprocessed image, click on the output connector of the module `ImageLoad` to display the original image in the Output Inspector. The Output Inspectors greatest advantage is that it's able to display the output of any connector in the process chain (as long as an interpretable format is used). Simply click the connector or connection to find out more about the module output. +To compare the processed and unprocessed image, click on the output connector of the module `ImageLoad` to display the original image in the Output Inspector. The Output Inspectors greatest advantage is that it's able to display the output of any connector in the process chain (as long as an interpretable format is used). Simply click the connector or connection to find out more about the module's output. 
-You can also inspect changes between processed (output connector) and unprocessed (input connector) images by adding a second or even third viewer to your network. "Layers" of applied changes can be inspected next to each other using more than one viewer and placing as well as connecting them accordingly. We will be using a second `View2D` module. Notice how the second viewer is numbered for you to be able to distinguish them better. It might be important to know at this point that numerous connections can be established from one output connector but an input connector can only receive one stream of data. Please connect the module `ImageLoad` to the second viewer to display the images twice. You can now scroll through the slices of both viewers and inspect the images. +You can also inspect changes between processed (output connector) and unprocessed (input connector) images by adding a second or even third viewer to your network. "Layers" of applied changes can be inspected next to each other using more than one viewer and placing as well as connecting them accordingly. We will be using a second `View2D` module. Notice how the second viewer is numbered for you to be able to distinguish them better. It might be important to know at this point that numerous connections can be established from one output connector but an input connector can only receive one stream of data. Connect the module `ImageLoad` to the second viewer to display the images twice. You can now scroll through the slices of both viewers and inspect the images. ![Multiple Viewers](images/tutorials/basicmechanics/BM_10.png "Multiple Viewers") ### Parameter Connection for Synchronization {#TutorialParameterConnection} - You're now able to scroll through the slices of the image in two separate windows. To examine the effect of the filter even better, we will now synchronize both viewers. We already know data connections between module inputs and outputs. Besides module connections, it is also possible to connect the fields within the panels of the modules via parameter connection. The values of connected fields are synchronized, which means that the changing value of one field will be adapted to all other connected fields. -In order to practise establishing parameter connections, add the `SyncFloat` module to your workspace. +In order to practice establishing parameter connections, add the `SyncFloat` module to your workspace. ![SyncFloat Module](images/tutorials/basicmechanics/BM_11.png "SyncFloat Module") @@ -145,7 +140,7 @@ Search for the field *startSlice*. The field indicates which slice is currently Now, double-click the module `SyncFloat` to open its panel. -Click on the label *startSlice* in the automatic panel of the module `View2D`, keep the button pressed and drag the connection to the label *Float1* in the panel of the module `SyncFloat`. +Click on the label *startSlice* in the automatic panel of the module `View2D`, keep the button pressed, and drag the connection to the label *Float1* in the panel of the module `SyncFloat`. 
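The drag-and-drop steps above can also be reproduced from a Python scripting console, which is sometimes handy when a network is set up programmatically. The sketch below is only an illustration under assumptions: the module instance names follow the network shown here, the internal field names of `SyncFloat` (`float1`/`float2`) are guesses based on the panel labels, and it assumes the `connectFrom()` field method of the MeVisLab scripting API.

```python
# Hedged sketch: establishing the same parameter connections via scripting
# instead of dragging field labels. Instance and field names are assumptions.
source = ctx.field("View2D.startSlice")    # slice index of the first viewer
float1 = ctx.field("SyncFloat.float1")     # panel label *Float1* (name assumed)
float2 = ctx.field("SyncFloat.float2")     # panel label *Float2* (name assumed)
target = ctx.field("View2D1.startSlice")   # slice index of the second viewer

float1.connectFrom(source)  # View2D.startSlice -> SyncFloat.Float1
target.connectFrom(float2)  # SyncFloat.Float2  -> View2D1.startSlice
```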
![Synchronize StartSlice](images/tutorials/basicmechanics/BM_13.png "Synchronize StartSlice") @@ -161,19 +156,18 @@ As a result, scrolling through the slices with the mouse wheel {{< mousebutton " ![Your final Network](images/tutorials/basicmechanics/BM_16.png "Your final Network") -It is also possible to use the pre-defined module `SynchroView2D` to accomplish a similar result.(`SynchroView2D`'s usage is described in more detail in [this chapter](tutorials/visualization/visualizationexample1/) ). +It is also possible to use the predefined module `SynchroView2D` to accomplish a similar result (`SynchroView2D`'s usage is described in more detail in [this chapter](tutorials/visualization/visualizationexample1/)). ### Grouping Modules {#TutorialGroupingModules} - -A contour filter can be created based on our previously created network. To finalize the filter, add the modules `Arithmetic2` and `Morphology` to your workspace and connect the modules as shown below. Double-click {{< mousebutton "left" >}} the module `Arithmetic2` to open its panel. Change the field *Function* of the module `Arithmetic2` to use the function *subtract* in the panel of the module. The contour filter is done now. You can inspect each processing step using the Output Inspector by clicking on the input and output connectors of the respective modules. The final results can be displayed using the viewer modules. If necessary, adjust the contrast by pressing the right arrow key and moving the cursor. +A contour filter can be created based on our previously created network. To finalize the filter, add the modules `Arithmetic2` and `Morphology` to your workspace and connect the modules as shown below. Double-click the module `Arithmetic2` to open its panel. Change the field *Function* of the module `Arithmetic2` to use the function *subtract* in the panel of the module. The contour filter is done now. You can inspect each processing step using the Output Inspector by clicking on the input and output connectors of the respective modules. The final results can be displayed using the viewer modules. If necessary, adjust the contrast by pressing the right mouse button and moving the cursor. ![Grouping modules](images/tutorials/basicmechanics/BM_17.png "Grouping modules") -If you'd like to know more about specific modules, search for help. You can do this by right-clicking {{< mousebutton "right" >}} the module and select help, which offers an example network and further information about the selected module in particular. +If you'd like to know more about specific modules, search for help. You can do this by right-clicking the module and selecting Help, which offers an example network and further information about the selected module in particular. ![Module Help](images/tutorials/basicmechanics/BM_18.png "Module Help") -To be able to better distinguish the image processing pipeline, you can encapsulate it in a group: Select the three modules, for example by dragging a selection rectangle around them. Then right-click {{< mousebutton "right" >}} the selection to open the context menu and select {{< menuitem "Add to New Group" >}}. +To be able to better distinguish the image processing pipeline, you can encapsulate it in a group: select the three modules, for example, by dragging a selection rectangle around them. Then, right-click the selection to open the context menu and select {{< menuitem "Add to New Group" >}}. 
![Add modules to new group](images/tutorials/basicmechanics/BM_19.png "Add to new group") @@ -192,24 +186,21 @@ More information on module groups can be found {{< docuLinks "/Resources/Documen [//]: <> (MVL-653) ### Macro Modules {#TutorialMacroModules} - You have probably already noticed how the modules differ in color. Each color represents another type of module: - * The blue modules are called ML modules: they process voxel images. - * Green modules are OpenInventor modules: they enable visual 3D scene graphs. - * The brown modules are called macro modules. Macro modules encapsulate a whole network in a single module. + * Blue modules are called ML modules: they process voxel images. + * Green modules are Open Inventor modules: they enable visual 3D scene graphs. + * Brown modules are called macro modules. Macro modules encapsulate a whole network in a single module. -To condense our filter into one single module, we will now be creating a macro module out of it. To do that, right-click {{< mousebutton "right" >}} on the group title and select *Convert To Local Macro*. Name your new macro module and finish. You just created a local macro module. Local macros can only be used from networks in the same or any parent directory. +To condense our filter into one single module, we will now be creating a macro module out of it. To do that, right-click on the group's title and select *Convert To Local Macro*. Name your new macro module and finish. You just created a local macro module. Local macros can only be used from networks in the same or any parent directory. ![Convert to local macro](images/tutorials/basicmechanics/BM_21.png "Convert to local macro") ![Your first local macro](images/tutorials/basicmechanics/BM_22.png "Your first local macro") -Right-click the macro module and select *Show Internal Network* to inspect and change the internal network. You can change the properties of the new macro module by changing the properties in the internal network. You can, for example, right-click {{< mousebutton "right" >}} the module `Convolution` and change the kernel. These changes will be applied for the currently running instance. +Right-click the macro module and select *Show Internal Network* to inspect and change the internal network. You can change the properties of the new macro module by changing the properties in the internal network. You can, for example, right-click the module `Convolution` and change the kernel. These changes will be preserved. ![Internal Network of your local macro](images/tutorials/basicmechanics/BM_23.png "Internal Network of your local macro") -If you want to change the permanent behavior or the module, right-click {{< mousebutton "right" >}} and select {{< menuitem "Related Files" "Filter.mlab" >}}. The network file of the module opens. Changes applied to this file are saved permanently. - {{< youtube "VmK6qx-vKWk">}} {{}} @@ -220,10 +211,10 @@ More information on macro modules can be found {{< docuLinks "/Resources/Documen [//]: <> (MVL-651) ## Summary -* MeVisLab provides pre-defined modules you can re-use and connect for building more or less complex networks. +* MeVisLab provides predefined modules you can reuse and connect for building more or less complex networks. * Each module's output can be previewed using the Output Inspector. * Each module provides example networks to explain their usage. -* Parameters of each module can be changed in the Module Inspector or Automatic Panel of the module. 
+* Parameters of each module can be changed in the Module Inspector or automatic panel of the module.
* Parameter connections can be established to synchronize the values of these parameters.
* Modules can be clustered. Clustered modules can be encapsulated into local or global macro modules.
* Macro modules encapsulate networks. Internal networks can be shown and modified. Any changes of the internal network are applied to the macro module on-the-fly, changes in the *.mlab* file change the permanent behavior of your module.
diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md
index de3655670..775d50db5 100644
--- a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md
+++ b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems.md
@@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Data Import", "DICOM", "Coordinate Systems"]
menu:
  main:
    identifier: "coordinatesystems"
-    title: "The different coordinate systems in MeVisLab: World-, Voxel- and Device coordinates."
+    title: "The Different Coordinate Systems in MeVisLab: World, Voxel, and Device Coordinates."
    weight: 365
    parent: "data_import"
---
@@ -38,21 +38,21 @@ You can show the world coordinates in MeVisLab by using the following example ne

![World Coordinates in MeVisLab](images/tutorials/basicmechanics/WorldCoordinates.png "World Coordinates in MeVisLab")

-The `ConstantImage` module generates an artificial image with a certain size, data type, and a constant fill value. The origin of the image is at the origin of the world coordinate system, therefore the `SoCoordinateSystem` module shows the world coordinate system. In order to have a larger z-axis, open the panel of the `ConstantImage` module and set *IMage Size* for *Z* to *256*.
+The `ConstantImage` module generates an artificial image with a certain size, data type, and a constant fill value. The origin of the image is at the origin of the world coordinate system; therefore, the `SoCoordinateSystem` module shows the world coordinate system. In order to have a larger z-axis, open the panel of the `ConstantImage` module and set *Image Size* for *Z* to *256*.

![ConstantImage Info](images/tutorials/basicmechanics/ConstantImageInfo.png "ConstantImage Info")

-Placing an object into the Open Inventor Scene of the `SoExaminerViewer`, in this case a `SoCube` with *width*, *height*, and *depth* of 10, places the object to the origin of the world coordinate system.
+Placing an object into the Open Inventor scene of the `SoExaminerViewer`, in this case a `SoCube` with *width*, *height*, and *depth* of 10, places the object at the origin of the world coordinate system.

![SoCube in world coordinate system](images/tutorials/basicmechanics/SoCubeWorldCoordinates.png "SoCube in world coordinate system")

### Translations
-You can move an object in your scene, for example by using a `SoTranslation` module. Update your network and add the module before your cube. Defining a translation vector 50, 0, 0 moves your cube by 50 in x-direction based on the origin of the world coordinate system.
+You can move an object in your scene, for example, by using a `SoTranslation` module. Update your network and add the module before your cube. Defining a translation vector (50, 0, 0) moves your cube by 50 in the x-direction based on the origin of the world coordinate system.
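+
+The effect of such a translation (and of the more general transformations in the next section) is an affine map applied to world coordinates. A minimal NumPy sketch, independent of any MeVisLab API and with a purely illustrative file name, showing how the (50, 0, 0) translation moves the cube's origin in homogeneous coordinates:
+
+{{< highlight filename="translation_sketch.py" >}}
+import numpy as np
+
+# 4x4 homogeneous translation matrix for the vector (50, 0, 0)
+T = np.eye(4)
+T[:3, 3] = [50, 0, 0]
+
+# The cube initially sits at the origin of the world coordinate system
+origin = np.array([0.0, 0.0, 0.0, 1.0])
+
+print(T @ origin)  # [50. 0. 0. 1.] -> the cube is moved by 50 along the x-axis
+{{< /highlight >}}
+
+Rotations, scalings, and shears (as offered by `SoTransform`) are expressed as further 4x4 matrices and combined by matrix multiplication.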
![SoTranslation](images/tutorials/basicmechanics/SoTranslation.png "SoTranslation") ### Transformations -More complex transformations can be done by using the `SoTransform` module. You can not only translate an existing object, but also rotate, scale, and apply many other transformations. +More complex transformations can be done by using the `SoTransform` module. Not only can you translate an existing object, but you can also rotate, scale, and shear. ![SoTransform](images/tutorials/basicmechanics/SoTransform.png "SoTransform") @@ -61,7 +61,7 @@ More complex transformations can be done by using the `SoTransform` module. You ## Voxel Coordinates Voxel coordinates are: * Relative to an image -* Continuous from [0..x,0..y,0..z], voxel center at 0.5 +* Continuous from [0..x, 0..y, 0..z], voxel center at 0.5 * Direct relation to voxel location in memory ### Voxel Coordinates in MeVisLab @@ -69,17 +69,17 @@ You can show the voxel coordinates in MeVisLab by using the following example ne ![Voxel Coordinates](images/tutorials/basicmechanics/VoxelCoordinates.png "Voxel Coordinates") -Load the file *Liver1_CT_venous.small.tif* .The `Info` module shows detailed information about the image loaded by the `LocalImage`. Opening the `SoExaminerViewer` shows the voxel coordinate system of the loaded image. You may have to change the LUT in `SoGVRVolumeRenderer` so that the image looks better. +Load the file *Liver1_CT_venous.small.tif*. The `Info` module shows detailed information about the image loaded by the `LocalImage`. Opening the `SoExaminerViewer` shows the voxel coordinate system of the loaded image. You may have to change the LUT in `SoGVRVolumeRenderer`, so that the image looks better. ![Voxel coordinates of the loaded image](images/tutorials/basicmechanics/SoExaminerViewer_Voxel.png "Voxel coordinates of the loaded image") -The *Advanced* tab of the `Info` module shows the world coordinates of the image. In this case, the origin of the voxel coordinate system is located at -186.993, -173.993, -249.993. +The *Advanced* tab of the `Info` module shows the world coordinates of the image. In this case, the origin of the voxel coordinate system is located at (-186.993, -173.993, -249.993). In addition to that, you can see a scaling that has been done on the image. The voxel sizes are shown in the diagonal values of the matrix as 3.985792, 3.985792, 3.985798. ![World coordinates of the loaded image](images/tutorials/basicmechanics/ImageInfo_Advanced.png "World coordinates of the loaded image") -You can change the scaling to 1 by adding a `Resample3D` module to the network: set the voxel size to 1, 1, 1 and inspect the `Info` module. +You can change the scaling to 1 by adding a `Resample3D` module to the network: set the voxel size to (1, 1, 1) and inspect the `Info` module. ![Resample3D](images/tutorials/basicmechanics/Resample3D.png "Resample3D") @@ -99,9 +99,9 @@ Opening the `SoExaminerViewer` shows the world coordinate system in white and th ![World and Voxel coordinates](images/tutorials/basicmechanics/SoExaminerViewer_both.png "World and Voxel coordinates") -On the yellow axis, we can see that the coordinate systems are located as already seen in the `Info` module *Advanced* tab. On the x-axis, the voxel coordinate origin is translated by -186.993 and on the y-axis by -173.993. +On the yellow axis, we can see that the coordinate systems are located as already seen in the `Info` module *Advanced* tab. 
On the x-axis, the voxel coordinate origin is translated by -186.993 and on the y-axis, it is translated by -173.993. -You can also add a `SoVertexProperty` and a `SoLineSet` module and configure a line from the origin of the world coordinate system 0, 0, 0 to the origin of the voxel coordinate system as defined by the image -186.993, -173.993, -249.993. +You can also add a `SoVertexProperty` and a `SoLineSet` module and configure a line from the origin of the world coordinate system (0, 0, 0) to the origin of the voxel coordinate system as defined by the image (-186.993, -173.993, -249.993). ![SoVertexProperty](images/tutorials/basicmechanics/Arrow.png "SoVertexProperty") @@ -111,8 +111,8 @@ You can also add a `SoVertexProperty` and a `SoLineSet` module and configure a l Device coordinates are: * 2D coordinates in OpenGL viewport * Measured in pixel -* Have their origin (0,0) in the top left corner of the device (with x-coordinates increasing to the right and y-coordinates increasing downwards) +* Have their origin (0, 0) in the top left corner of the device (with x-coordinates increasing to the right and y-coordinates increasing downward) The viewport is the rectangle in pixels on your screen you want to render to. Affine transformations map abstract coordinates from your scene to physical pixels on your device. -All triangular vertices go through a projection matrix and end in a normalized range from -1 to 1 representing your field of view. To find which pixels the triangles actually cover on screen, those coordinates get linearly remapped from [−1, 1] to the range of the viewport rectangle in pixels. Technically that kind of mapping is called an [*affine transformation*](https://en.wikipedia.org/wiki/Affine_transformation). +All triangular vertices go through a projection matrix and end in a normalized range from -1 to 1 representing your field of view. To find which pixels the triangles actually cover on screen, those coordinates get linearly remapped from [−1, 1] to the range of the viewport rectangle in pixels. Technically, that kind of mapping is called an [*affine transformation*](https://en.wikipedia.org/wiki/Affine_transformation). diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md index d95e33bdf..31a89fa19 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/coordinatesystems/coordinatesystems2.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Data Import", "DICOM", "Coordinate Systems"] menu: main: identifier: "coordinatesystems2" - title: "The different coordinate systems in DICOM." + title: "The Different Coordinate Systems in DICOM." weight: 366 parent: "data_import" --- @@ -24,7 +24,7 @@ World coordinates also refer to the patient axes. They are: ![World Coordinates in Context of the Human Body](images/tutorials/visualization/V2_00.png "World Coordinates in Context of the Human Body") -The DICOM (Digital Imaging and Communications in Medicine) standard defines a data format that groups information into data sets. This way, the image data is always kept together with all meta information like patient ID, study time, series time, acquisition data, etc. The image slice is represented by another tag with pixel information. 
+The Digital Imaging and Communications in Medicine (DICOM) standard defines a data format that groups information into data sets. This way, the image data is always kept together with all meta information like patient ID, study time, series time, acquisition data, etc. The image slice is represented by another tag with pixel information. DICOM tags have unique numbers, encoded as two 16-bit numbers, usually shown in hexadecimal notation as two four-digit numbers (xxxx,xxxx). These numbers are the data group number and the data element number. @@ -34,12 +34,12 @@ Although DICOM is a standard, often the data that is received/recorded does not Some typical modules for DICOM handling: * `DirectDicomImport` is a module for DICOM import that generates 3D or 4D images (as ML images) from a list of DICOM files which can directly be used by other modules. It has a lot of options to control the import process, which can, e.g., determine which slices are combined into an image stack. -* `DicomImport` is a new module for DICOM import. The new implementation does not yet provide all known functionalities from `DirectDicomImport`, most of them will be added in future releases. Its main advantage is that the import process is faster and happens asynchronously. +* `DicomImport` is a fast and more lightweight module for DICOM import. Its main advantage is that the import process is faster and happens asynchronously. * You can view the the DICOM tags of a DICOM image or a processed ML image with the module `DicomTagBrowser`. * You can view and cut out frame-specific tags with the module `DicomFrameSelect`. * You can modify DICOM tags with the module `DicomTagModify`. * You can also create a new DICOM header for an image file with the `ImageSave` module, tab Options, Save DICOM header file only. -* Saving of loaded DICOM data to the filesystem or sending to a PACS (Picture Archiving and Communication System) is possible with the `DicomTool` macro module. +* Saving of loaded DICOM data to the filesystem or sending to a Picture Archiving and Communication System (PACS) is possible with the `DicomTool` macro module. * Basic support for querying and receiving DICOM data from a PACS is available via the `DicomQuery` and `DicomReceiver` modules. {{}} @@ -49,19 +49,19 @@ Another option for Python is [pydicom](https://pydicom.github.io/). {{}} ## Orthogonal Views -The module `OrthoView2D` provides a 2D view displaying the input image in three orthogonal viewing directions. By default, the view is configured as *Cube* where the transverse view is placed in the top right segment, sagittal in bottom left and coronal in bottom right segment. Use the left mouse button to set a position in the data set. This position will be displayed in all available views and is available as field *worldPosition*. +The module `OrthoView2D` provides a 2D view displaying the input image in three orthogonal viewing directions. By default, the view is configured as *Cube* where the transverse view is placed in the top right segment, sagittal in bottom left, and coronal in bottom right segment. Use the left mouse button to set a position in the data set. This position will be displayed in all available views and is available as field *worldPosition*. ![OrthoView2D](images/tutorials/basicmechanics/OrthoView2D.png "OrthoView2D") -As already learned in the previous example [1.1: MeVisLab Coordinate Systems](tutorials/basicmechanisms/coordinatesystems/coordinatesystems), world and voxel positions are based on different coordinate systems. 
Selecting the top left corner of any of your views will not show a world position of 0, 0, 0. You can move the mouse cursor to the voxel position 0, 0, 0 as seen in the image information of the viewers in brackets *(x, y, z)*. The field *worldPosition* then shows the location of the image in world coordinate system (see `Info` module). +As already learned in the previous example [1.1: MeVisLab Coordinate Systems](tutorials/basicmechanisms/coordinatesystems/coordinatesystems), world and voxel positions are based on different coordinate systems. Selecting the top left corner of any of your views will not show a world position of (0, 0, 0). You can move the mouse cursor to the voxel position (0, 0, 0) as seen in the image information of the viewers in brackets *(x, y, z)*. The field *worldPosition* then shows the location of the image in world coordinate system (see `Info` module). ![OrthoView2D Voxel- and World Position](images/tutorials/basicmechanics/OrthoView2D_WorldPosition.png "OrthoView2D Voxel- and World Position") -Another option is to use the module `OrthoReformat3` which transforms the input image (by rotating and/or flipping) into the three main views commonly used: +Another option is to use the module `OrthoReformat3` that transforms the input image (by rotating and/or flipping) into the three main views commonly used: * Output 0: Sagittal view * Output 1: Coronal view -* Output 2: Transverse view +* Output 2: Transverse view (aka Axial view) ![OrthoReformat3](images/tutorials/basicmechanics/OrthoReformat3.png "OrthoReformat3") -The general `View2D` always uses the original view from the image data without reconstructing another view. In case of *ProbandT1*, this is the sagittal view. +The general `View2D` always uses the original view from the image data without reconstructing another view. In the case of *ProbandT1*, this is the sagittal view. diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md b/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md index d1ddc44a1..7755800f0 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/dataimport.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Data import in MeVisLab" +title: "Example 1: Data Import in MeVisLab" date: 2022-06-15T08:54:53+02:00 status: "OK" draft: false @@ -8,13 +8,13 @@ tags: ["Beginner", "Tutorial", "Data Import", "DICOM"] menu: main: identifier: "data_import" - title: "How to import several data formats into MeVisLab like DICOM, Contours, Surface Objects or 3D Scenes." + title: "How to Import Different Data Formats Into MeVisLab like DICOM, Contours, Surface Objects, or 3D Scenes." weight: 360 parent: "basicmechanisms" --- # Example 1: Data Import in MeVisLab -MeVisLab provides several pre-defined modules to import data for processing in your networks. +MeVisLab provides several predefined modules to import data for processing in your networks. {{}} The easiest way to load data in MeVisLab is to drop the file onto the MeVisLab workspace. MeVisLab will try to find a module that is capable of loading your file automatically. @@ -48,38 +48,38 @@ The `ImageLoad` module can import the following formats: * JPEG * MLImageFileFormat -Basic information of the imported images are available on the Panel which opens via double-click. +Basic information of the imported images is available on the panel that opens via double-click. 
-## DICOM data {#DICOMImport}
+## DICOM Data {#DICOMImport}

{{}}
Additional information about **Digital Imaging and Communications in Medicine (DICOM)** can be found at [Wikipedia](https://en.wikipedia.org/wiki/DICOM "DICOM Format")
{{< /alert >}}

-Even if the above explained `ImageLoad` is able to import DICOM data, a much better way is to use one of the specialized modules for DICOM images such as `DicomImport`.
+Even though the `ImageLoad` module explained above is able to import DICOM data, a much better way is to use one of the specialized modules for DICOM images, such as `DicomImport`.

-The `DicomImport` module allows to define a directory containing DICOM files to import as well as a list of files which can be dropped to the UI and imported. After import, the volumes are shown in a patient tree providing the following patient, study, series and volume information (depending on the availability in the DICOM file(s)):
+The `DicomImport` module allows you to define a directory containing DICOM files to import as well as a list of files that can be dropped to the UI and imported. After import, the volumes are shown in a patient tree providing the following patient, study, series, and volume information (depending on the availability in the DICOM file(s)):
  * **PATIENT LEVEL** Patient Name (0010,0010) - Patient Birthdate (0010,0030)
  * **STUDY LEVEL** Study Date (0008,0020) - Study Description (0008,1030)
-  * **SERIES/VOLUME LEVEL** Modality (0008,0060) - Series Description (0008,103e) - Rows (0028,0010) - Columns (0028,0011) - number of slices in volume - number of time points in volume
+  * **SERIES/VOLUME LEVEL** Modality (0008,0060) - Series Description (0008,103e) - Rows (0028,0010) - Columns (0028,0011) - number of slices in volume - number of timepoints in volume

![DicomImport Module](images/tutorials/basicmechanics/DicomImport.png "DicomImport Module")

### Configuration
-The `DicomImport` module generates volumes based on the **Dicom Processor Library (DPL)** which allows to define sorting and partitioning options.
+The `DicomImport` module generates volumes based on the **Dicom Processor Library (DPL)** that allows you to define sorting and partitioning options.

![DicomImport Sort Part Configuration](images/tutorials/basicmechanics/DicomImportSortPart.png "DicomImport Sort Part Configuration")

-### DicomTree information
+### DicomTree Information
In order to get all DICOM tags from your currently imported and selected volume, you can connect the `DicomImport` module to a `DicomTagBrowser`.

![DicomTagBrowser Module](images/tutorials/basicmechanics/DicomTagBrowser.png "DicomTagBrowser Module")

-In MeVisLab versions later than 4.2.0 the *Output Inspector* provides the option to show the DICOM tags of the currently selected output directly. You do not need to add a separate `DicomTagBrowser` module anymore.
+In MeVisLab versions later than 4.2.0, the *Output Inspector* provides the option to show the DICOM tags of the currently selected output directly. You do not need to add a separate `DicomTagBrowser` module anymore.

![DICOM Information in Output Inspector](images/tutorials/basicmechanics/OutputInspectorDICOM.png "DICOM Information in Output Inspector")

## Segmentations / 2D Contours {#2DContours}
-2-dimensional contours in MeVisLab are handled via *CSO*s (**C**ontour **S**egmentation **O**bjects).
+Two-dimensional contours in MeVisLab are handled via *CSO*s (**C**ontour **S**egmentation **O**bjects).
{{}} Tutorials for CSOs are available [here](../../dataobjects/contours/contour-objects) @@ -104,23 +104,23 @@ CSOs can be created by the existing `SoCSO*Editor` modules. The following module For saving and loading existing CSOs, the modules `CSOSave` and `CSOLoad` can be used. -## 3D data / meshes {#3DMeshes} +## 3D Data / Meshes {#3DMeshes} ### Winged Edge Mesh (WEM) -3-dimensional meshes in MeVisLab are handled via *WEM*s (**W**inged **E**dge **M**esh). +Three-dimensional meshes in MeVisLab are handled via *WEM*s (**W**inged **E**dge **M**esh). -The module `WEMLoad` loads different 3D mesh file formats like: -* Object File Format (\*.off \*.geom) -* Wavefront (\*.obj) -* Polygon File Format (\*.ply) -* Standard Tessellation Language (\*.stl) -* VRML (\*.wrl) -* Winged Edge Mesh (\*.wem) +The module `WEMLoad` loads different 3D mesh file formats, for example: +* Object File Format (*.off* *.geom*) +* Wavefront (*.obj*) +* Polygon File Format (*.ply*) +* Standard Tessellation Language (*.stl*) +* VRML (*.wrl*) +* Winged Edge Mesh (*.wem*) ![WEMLoad Module](images/tutorials/basicmechanics/WEMLoad.png "WEMLoad Module") WEMs can be rendered via Open Inventor by using the modules `SoExaminerViewer` or `SoRenderArea` and `SoCameraInteraction`. -Before visualizing a WEM, it needs to be converted to a Scene Object via `SoWEMRenderer`. +Before visualizing a WEM, it needs to be converted to a scene object via `SoWEMRenderer`. ![SoWEMRenderer Module](images/tutorials/basicmechanics/SoWEMRenderer.png "SoWEMRenderer Module") @@ -128,8 +128,8 @@ Before visualizing a WEM, it needs to be converted to a Scene Object via `SoWEMR Tutorials for WEMs are available [here](../../dataobjects/surfaces/surfaceobjects). {{}} -### Loading arbitrary 3D files -The `SoSceneLoader` module is able to load external 3D formats. MeVisLab uses the integrated *assimp* ThirdParty library which is able to import most common 3D file types. The currently integrated assimp version can be found {{< docuLinks "/../MeVis/ThirdParty/Documentation/Publish/ThirdPartyReference/index.html" "here" >}} +### Loading Arbitrary 3D Files +The `SoSceneLoader` module is able to load external 3D formats. MeVisLab uses the integrated *assimp* third-party library that is able to import most common 3D file types. The currently integrated *assimp* version can be found {{< docuLinks "/../MeVis/ThirdParty/Documentation/Publish/ThirdPartyReference/index.html" "here" >}} {{}} Supported file formats of the assimp library are documented on their [website](https://github.com/assimp/assimp/blob/master/doc/Fileformats.md). 
@@ -137,7 +137,7 @@ Supported file formats of the assimp library are documented on their [website](h ![SoSceneLoader Module](images/tutorials/basicmechanics/SoSceneLoader.png "SoSceneLoader Module") -The {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoSceneLoader.html" "SoSceneLoader" >}} module generates a 3D scene from your loaded files which can be rendered via {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoExaminerViewer.html" "SoExaminerViewer" >}} or {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoRenderArea.html" "SoRenderArea" >}} and {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoCameraInteraction.html" "SoCameraInteraction" >}} +The {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoSceneLoader.html" "SoSceneLoader" >}} module generates a 3D scene from your loaded files that can be rendered via {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoExaminerViewer.html" "SoExaminerViewer" >}} or {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoRenderArea.html" "SoRenderArea" >}} and {{< docuLinks "/../MeVisLab/Standard/Documentation/Publish/ModuleReference/SoCameraInteraction.html" "SoCameraInteraction" >}} {{}} Example usage is explained in the tutorials for [Open Inventor](tutorials/openinventor). diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md index a6365898d..1e8483517 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules.md @@ -1,5 +1,5 @@ --- -title: "Example 2: Macro modules and Module Interaction" +title: "Example 2: Macro Modules and Module Interaction" date: 2025-05-19 draft: false weight: 370 @@ -8,12 +8,12 @@ tags: ["Beginner", "Tutorial", "Macro", "Macro modules"] menu: main: identifier: "macro_modules" - title: "Examples for Creating Macro modules, adding User Interfaces and Python scripting." + title: "Examples for Creating Macro Modules, Adding User Interfaces, and Python Scripting" weight: 370 parent: "basicmechanisms" --- -# Example 2: Macro modules {#TutorialChapter6} +# Example 2: Macro Modules {#TutorialChapter6} ## What is a Macro Module? A macro module can be used to develop your own functionality in MeVisLab. @@ -22,7 +22,7 @@ Like all other standard MeVisLab modules, macro modules have a defined interface Macro modules are primarily defined using the *MeVisLab Definition Language (MDL)* and often incorporate Python scripting for added functionality, especially for dynamic user interfaces. Importantly, you don't need to write C++ code to create them. -The internal network of a macro module is saved in a .mlab file, often referred to as the macro network. The interface and other definitions are stored in .def and .script files. +The internal network of a macro module is saved in an *.mlab* file, often referred to as the macro network. The interface and other definitions are stored in *.def* and *.script* files. You have two main options for developing a macro module: @@ -30,19 +30,19 @@ You have two main options for developing a macro module: An example can be found in chapter [Basic Mechanics of MeVisLab (Example: Building a Contour Filter)](tutorials/basicmechanisms#TutorialMacroModules). 
-* **Without Internal Networks**: Use a macro module to add your own Python code. If MeVisLab is missing a specific functionality, you can write your own Python script and execute it it in a macro module. This allows you to extend the software with custom behavior that integrates smoothly into your project. You can see the Python code at the right side of the image below. +* **Without Internal Networks**: Use a macro module to add your own Python code. If MeVisLab is missing a specific functionality, you can write your own Python script and execute it in a macro module. This allows you to extend the software with custom behavior that integrates smoothly into your project. You can see the Python code at the right side of the image below. The image shows these two options. The left side encapsulates a network of three modules into one macro module. The right side shows a macro module without an internal network, only containing your custom Python script. -A typical example for macro modules without an internal network is the execution of a pre-trained AI model on an input ML image, see [Example 2: Brain Parcellation using PyTorch](tutorials/thirdparty/pytorch/pytorchexample2/) for details. +A typical example for macro modules without an internal network is the execution of a pretrained AI model on an input ML image, see [Example 2: Brain Parcellation using PyTorch](tutorials/thirdparty/pytorch/pytorchexample2/) for details. It is also possible to combine both approaches. You can add internal networks and additionally write Python code for user interaction and processing. ![Internal Processing and Python Interaction](images/tutorials/basicmechanics/with.png "Internal Processing and Python Interaction") -### Benefits of Macro Modules: +### Benefits of Macro Modules * **Encapsulation:** -Macro modules take an existing network of modules or Python code. To the user, interacting with the macro module, it appears as a single entity with its own defined inputs, outputs, and parameters. You don't need to know or interact with the internal functionality unless you specifically open the macro module for editing. +Macro modules take an existing network of modules or Python code. To the user interacting with the macro module, it appears as a single entity with its own defined inputs, outputs, and parameters. You don't need to know or interact with the internal functionality unless you specifically open the macro module for editing. * **Reusability:** Once created, macro modules can be easily added to different networks, saving time and effort in rebuilding common processing pipelines. See below for the scope of a macro module. It can be local or global. * **Organization and Clarity:** @@ -56,82 +56,83 @@ Macro modules allow you to create custom modules tailored to your specific needs * **GUI Development:** They are often used to encapsulate dynamic user interfaces built with scripting, sometimes without any underlying image processing network. -### Scope of Macro Modules: -#### Local Macro Module: -A Local Macro module in MeVisLab exists within the context of the current network document - i.e. it’s defined *locally* rather than being installed into the global module database. It does not require a package. It lives inside the directory of the current network file (*\*.mlab*) you’re working on. 
+### Scope of Macro Modules + +#### Local Macro Module +A Local Macro module in MeVisLab exists within the context of the current network document - i.e., it’s defined *locally* rather than being installed into the global module database. It does not require a package. It lives inside the directory of the current network file (*.mlab*) you’re working on. * A local macro is visible and editable in the directory of your current network. * A local macro is not listed in the Modules panel and module search. * A local macro can only be reused elsewhere by copying it into another folder with your network file. -#### Global Macro Module: +#### Global Macro Module A global macro module is stored in a central location within your MeVisLab installation. The directory is called Package. Once a global macro module is created, it appears in the module browser and can be used in any MeVisLab network you open. See [Package creation](tutorials/basicmechanisms/macromodules/package/) for details about Packages. - Local macro modules can be converted to global macro modules. MeVisLab then adds a definition file containing the name and package of the module and copies the content to your selected package directory. Package directories are loaded automatically when you start MeVisLab in case they have been added to your user packages via main menu {{< menuitem "Edit" "Preferences" "Packages" >}}. + Local macro modules can be converted to global macro modules. MeVisLab then adds a definition file containing the name and package of the module and copies the content to your selected package directory. Package directories are loaded automatically when you start MeVisLab in the case they have been added to your user packages via main menu {{< menuitem "Edit" "Preferences" "Packages" >}}. * A global macro can be used in any MeVisLab network. * A global macro is listed in the Modules panel and module search. {{}} -Packages are the way MeVisLab organizes different development projects. You can organize your own modules, test cases or C++ modules in a package. +Packages are the way MeVisLab organizes different development projects. You can organize your own modules, test cases, or C++ modules in a package. {{}} -### Inputs, Outputs, and Fields: +### Inputs, Outputs, and Parameter Fields Macro modules can have input and output connectors that receive data and/or provide the results of the processing performed by their internal networks or Python scripts. -They are typically defined in the macro module's *\*.script* file. +They are typically defined in the macro module's *.script* file. -#### Inputs: +#### Inputs Input connectors accept data from other modules in the network. These inputs define what information the encapsulated network or Python script within the macro module receives and processes. -Data input connectors, represented by triangles for ML images, half-circles for Open Inventor scenes, or squares for base objects, receive data objects from other modules. The type of data an input accepts is determined by the modules within the macro that are connected to this input. +Data input connectors, represented by triangles for ML images, half-circles for Open Inventor scenes, or squares for Base objects, receive data objects from other modules. The type of data an input accepts is determined by the modules within the macro that are connected to this input. -#### Outputs: +#### Outputs Output connectors provide the results of the processing performed by their internal networks. 
These outputs can then be connected to the inputs of other modules.

Data Outputs (triangle, half-circle, square) provide the processed data from the internal network or Python file. The type of data an output provides depends on the outputs of the modules within the macro that are connected to this output.

-#### Fields:
+#### Parameter Fields
Parameter Fields allow users to control the behavior of the internal network. They can be connected to the parameters/fields of other modules or manually adjusted by the user. They also allow other modules to read values or states from within the encapsulated network or Python file.

You have two options when adding fields to your macro module:

-* **Define your own fields:** You can define your own fields by specifying their name, type, and default value in the *\*.script* file. This allows you to provide custom parameters for your macro module, tailored to your specific needs. These parameters can be use as input from the user or output from the modules processing.
+* **Define your own fields:** You can define your own fields by specifying their name, type, and default value in the *.script* file. This allows you to provide custom parameters for your macro module, tailored to your specific needs. These parameters can be used as input from the user or as output from the module's processing.
* **Reuse fields from the internal network:** Instead of defining your own field, you can expose an existing field from one of the modules of your internal network. To do this, you reference the *internalName* of the internal field you want to reuse. This makes the internal field accessible at the macro module level, allowing users to interact with it directly without duplicating parameters. Changes of the field value are automatically applied in your internal network.

![Inputs, Outputs, and Fields](images/tutorials/basicmechanics/fields.png "Inputs, Outputs, and Fields")

-### Files Associated with a Macro Module:
+### Files Associated with a Macro Module
Macro modules typically need the following files:

-* **Definition file (*\*.def*):** The module definition file contains the definition and information about the module like name, author, package, etc. **Definition files are only available for global macro modules**.
-* **Script file (*\*.script*):** The script file defines inputs, outputs, fields and the user interface of the macro module. In case you want to add Python code, it includes the reference to the Python file. The *\*.script* file allows you to define Python functions to be called on field changes and user interactions.
+* **Definition file (*.def*):** The module definition file contains the definition and information about the module like name, author, or package. **Definition files are only available for global macro modules**.
+* **Script file (*.script*):** The script file defines inputs, outputs, parameter fields, and the user interface of the macro module. If you want to add Python code, it includes the reference to the Python file. The *.script* file allows you to define short Python functions to be called on field changes and user interactions.

![user interface and the internal interface](images/tutorials/basicmechanics/mycountourFilter.png "user interface and the internal interface")

-* **Python file (*\*.py*):** *(Optional)* The Python file contains the Python code that is used by the module.
See section [Python functions and Script files](tutorials/basicmechanisms/macromodules#PythonAndScripts) for different options to add Python functions to user interactions. -* **Internal network file (*\*.mlab*):** *(Optional)* Stores the internal network of the module if available. This file essentially defines the macro module's internal structure and connections. -* **Macro module help file (*\*.mhelp*):** *(Optional)* Provides help documentation for the macro module. This file is used to display information to users about the module’s functionality, usage, and any specific instructions. +* **Python file (*.py*):** *(Optional)* The Python file contains the Python code that is used by the module. See section [Python functions and Script files](tutorials/basicmechanisms/macromodules#PythonAndScripts) for different options to add Python functions to user interactions. +* **Internal network file (*.mlab*):** *(Optional)* Stores the internal network of the module if available. This file essentially defines the macro module's internal structure and connections. +* **Macro module help file (*.mhelp*):** *(Optional)* Provides help documentation for the macro module. This file is used to display information to users about the module’s functionality, usage, and any specific instructions. -Additionally a macro module may provide an additional Python (*\*.py*) and network (*\*.mlab*) that defines your automated test(s). Both files are also stored in your Package and can only be added for global macro modules. +Additionally, a macro module may provide an additional Python (*.py*) and network (*.mlab*) that defines your automated test(s). Both files are also stored in your Package and can only be added for global macro modules. -### Python functions and Script files: {#PythonAndScripts} +### Python Functions and Script Files {#PythonAndScripts} Python functions can be executed on any user interaction with your macro module. Examples are: * **Module initialization**: You can add the *initCommand* to the *Commands* section and the given Python function is called whenever the module is added to the workspace or reloaded. * **Window creation**: You can add the *initCommand* to the *Window* section and the given Python function is called whenever the panel of the module is opened. -* **User interaction**: You can add commands to any user interface element like *Buttons* to call Python functions on user interactions with this element. The image below shows you the user interface and the internal interface: -* **Field changes**: You can also react on any changes of fields in your module and create Field Listeners. See section [Field Listeners](tutorials/basicmechanisms/macromodules#FieldListeners) for details. +* **User interaction**: You can add commands to any user interface element like *Buttons* to call Python functions on user interactions with this element. The image below shows you the user interface and the internal interface. +* **Field changes**: You can also react on any changes of fields in your module and create field listeners. See section [Field Listeners](tutorials/basicmechanisms/macromodules#FieldListeners) for details. -### Field Listeners: {#FieldListeners} -Field listeners are mechanisms to execute Python code automatically anytime the value of a field changes. This allows you to create dynamic responses to user interactions in the module's parameter panel. 
+### Field Listeners {#FieldListeners}
+Field listeners are mechanisms to execute Python code automatically any time the value of a field changes. This allows you to create dynamic responses to user interactions in the module's parameter panel.

-You can define field listeners within the *Commands* sections of the *\*.script* file. You get a reference to the field object and then use a method to add a callback function that will be executed when the field's value is modified.
+You can define field listeners within the *Commands* section of the *.script* file. You get a reference to the field object and then use a method to add a callback function that will be executed when the field's value is modified.

For an example see [Example 2.5.2: Module interactions via Python scripting](tutorials/basicmechanisms/macromodules/scriptingexample2/).

## Summary
* Macro modules allow you to add your own functionality to MeVisLab. You can add inputs and outputs and connect existing modules to your new macro module.
* Macro modules may contain an internal network to encapsulate this functionality or Python code to implement your own functionalities to MeVisLab.
-* Benefits are encapsulation, reusability, abtraction of complexity and the possibility to add your own functionality to MeVisLab.
+* Benefits are encapsulation, reusability, abstraction of complexity, and the possibility to add your own functionality to MeVisLab.
* There are different types of macro modules:
-  * Local macro modules are only available in the directory of your current network
-  * Global macro modules are available in all projects but must be part of a package
\ No newline at end of file
+  * Local macro modules are only available in the directory of your current network.
+  * Global macro modules are available in all projects but must be part of a package.
\ No newline at end of file
diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md
index 3bec6f67e..a1ce2fc38 100644
--- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md
+++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md
@@ -1,5 +1,5 @@
---
-title: "Example 2.2: Creation of global macro modules"
+title: "Example 2.2: Creation of Global Macro Modules"
date: 2022-06-15T08:58:44+02:00
status: "OK"
draft: false
@@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Macro", "Macro modules", "Global Macro"]
menu:
  main:
    identifier: "globalmacromodules"
-    title: "Creation of global macro modules from a local macro using the Project Wizard"
+    title: "Creation of Global Macro Modules From a Local Macro Using the Project Wizard"
    weight: 390
    parent: "macro_modules"
---
@@ -18,10 +18,10 @@ menu:

{{< youtube "M4HnA0d1V5k">}}

## Introduction
-
In this chapter you will learn how to create global macro modules. There are many ways to do this. You can convert local macros into global macro modules or you can directly create global macro modules using the *Project Wizard*. In contrast to local macro modules, global macro modules are commonly available throughout projects and can be found via module search and under {{< menuitem "Modules" >}}.
## Steps to Do + ### Transform a Local Macro Module into a Global Macro Module To transform our local macro module `Filter` from [Chapter I](tutorials/basicmechanisms#TutorialMacroModules) into a global macro module, right-click {{< mousebutton "right" >}} the macro module to open the context menu and select {{< menuitem "Extras" "Convert To Global Module..." >}} @@ -29,7 +29,6 @@ right-click {{< mousebutton "right" >}} the macro module to open the context men ![Convert local macro to global macro](images/tutorials/basicmechanics/GUI_03.png "Convert local macro to global macro") ### Define Module Properties - 1. Choose a unique module name. 2. State the module's author. @@ -48,7 +47,7 @@ right-click {{< mousebutton "right" >}} the macro module to open the context men *\\MyPackageGroup\\General\\Modules\\Macros\\MyProject*. {{}} -If you are working with MeVisLab versions before 5.0, make sure to chose *Directory Structure* as *self-contained*. This makes sure that all files of your module are stored in a single directory. Later versions always use *self-contained*. +If you are working with MeVisLab versions before 5.0, make sure to choose *Directory Structure* as *self-contained*. This makes sure that all files of your module are stored in a single directory. Later versions always use *self-contained*. Also keep in mind that Python files are only created automatically if selected in the Project Wizard. Converting a local macro to a global macro does NOT create a Python file automatically. {{}} @@ -59,7 +58,6 @@ Also keep in mind that Python files are only created automatically if selected i Instead of converting a local macro module into a global macro module, you can also use the *Project Wizard* to create new macro modules. Open the Project Wizard via {{< menuitem "File" "Run Project Wizard ..." >}}. Then, select {{< menuitem "Modules (Scripting)" "Macro module" >}} and *Run Wizard*. ### Define Module Properties - 1. Choose a unique module name. 2. State the module's author. @@ -76,7 +74,7 @@ Instead of converting a local macro module into a global macro module, you can a *\\MyPackageGroup\\General\\Modules\\Macros\\MyProject*. {{}} -Make sure to chose *Directory Structure* as *self-contained*. This ensures that all files of your module are stored in a single directory. +Make sure to choose *Directory Structure* as *self-contained*. This ensures that all files of your module are stored in a single directory. {{}} Press *Next >* to edit further properties. You have the opportunity to directly define the internal network of the macro module, for example, by copying an existing network. In this case, we could copy the network of the local macro module `Filter` we already created. In addition, you have the opportunity to directly create a Python file. Python scripting can be used for the implementation of module interactions and other module functionalities. More information about Python scripting can be found [here](./tutorials/basicmechanisms/macromodules/pythonscripting). @@ -85,9 +83,9 @@ Make sure to chose *Directory Structure* as *self-contained*. This ensures that ## Structure of Global Macro Modules After creating your global macro module, you can find the created project *MyProject* in your package. This project contains your macro module `Filter`. 
For the macro module exist three files: -* *Filter.def*: Module definition file -* *Filter.mlab*: Network file which contains the internal network of your macro module -* *Filter.script*: MDL script file, which defines in- and outputs of your macro module as well as fields. This file defines the module panel, as well as references to python scripts. +* *Filter.def*: module definition file +* *Filter.mlab*: network file that contains the internal network of your macro module +* *Filter.script*: MDL script file that defines inputs and outputs of your macro module as well as fields. This file defines the module panel, as well as references to Python scripts. In addition, two folders may be created: * *mhelp*: contains the help files of all modules of this project @@ -102,14 +100,13 @@ macro, the new module can be found via {{< menuitem "Modules" "Filters" >}}. In ![Find module in menu](images/tutorials/basicmechanics/GUI_05.png "Find module in menu") - {{}} If you do not find your new global macro module, try to reload the module database. ![Reload module database](images/tutorials/basicmechanics/GUI_05_2.png "Reload module database") {{}} ## Summary -* Via right-click {{< mousebutton "right" >}} {{< menuitem "Extras" "Convert To Global Module..." >}} global macro modules can be created out of local macro modules. +* Via right-click {{< mousebutton "right" >}} {{< menuitem "Extras" "Convert To Global Module..." >}}, global macro modules can be created out of local macro modules. * You can use the Project Wizard to create new macro modules. * You need to have a package structure to store your global macro module. * Global macro modules are available throughout projects and can be found via *Module Search* and under menu item {{< menuitem "Modules" >}}. \ No newline at end of file diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md index c53e3ef2b..3a9e515a1 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/guidesign.md @@ -1,5 +1,5 @@ --- -title: "Example 2.4: GUI development" +title: "Example 2.4: GUI Development" date: 2022-06-15T08:58:44+02:00 status: "OK" draft: false @@ -8,17 +8,18 @@ tags: ["Beginner", "Tutorial", "Macro", "Macro modules", "Global Macro", "User I menu: main: identifier: "gui_development" - title: "Custom User Interfaces for macro modules." + title: "Custom User Interfaces for Macro Modules" weight: 410 parent: "macro_modules" --- + # Example 2.4: Building a Panel Layout: Interactions with Macro Modules {{< youtube "tdQUkkROWBg">}} ## Introduction This chapter will give you an introduction into the creation of module panels and user -interfaces. For the implementation you will need to +interfaces. For the implementation, you will need to use the {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html" "MeVisLab Definition Language (MDL)">}}. {{}} @@ -29,20 +30,21 @@ More information about GUI design in MeVisLab can be found {{< docuLinks "/Resou [//]: <> (MVL-651) ## Creating a Panel for the Macro Module Filter {#Example_Paneldesign} -### Creation of a module panel + +### Creation of a Module Panel In [Example 2.2](tutorials/basicmechanisms/macromodules/globalmacromodules) we created the global macro module `Filter`. By now, this module does not have a proper panel. 
When double-clicking {{< mousebutton "left" >}} the module, the *Automatic Panel* is shown. -The *Automatic Panel* contains fields, as well as module in and outputs. In this case, no fields exists except the *instanceName*. Accordingly, there is no possibility to interact with the module. Only the input and the output of the module are given. +The *Automatic Panel* contains fields, as well as module inputs and outputs. In this case, no fields exists except the *instanceName*. Accordingly, there is no possibility to interact with the module. Only the input and the output of the module are given. ![Automatic Panel](images/tutorials/basicmechanics/GUI_10.png "Automatic Panel") -To add and edit a panel, open the context menu and select {{< menuitem "Related Files" "Filter.script" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the file *Filter.script*, which you can edit to define a custom User Interface for the Module. +To add and edit a panel, open the context menu and select {{< menuitem "Related Files" "Filter.script" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the file *Filter.script*, which you can edit to define a custom user interface for the module. ![Module script file](images/tutorials/basicmechanics/GUI_11.png "Module script file") ### Module Interface -Per default, the *.script* file contains the {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Interface" "interface" >}} of the module. -In the interface section (everything insight the curled brackets behind the name *Interface*) you can define the module inputs, the module outputs, and also all module fields (or *Parameters*). +By default, the *.script* file contains the {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Interface" "interface" >}} of the module. +In the interface section (everything inside the curled brackets behind the name *Interface*) you can define the module inputs, the module outputs, and also all module fields (or *Parameters*). [//]: <> (MVL-653) {{< highlight filename="Filter.script" >}} @@ -63,28 +65,25 @@ Interface { {{}} ##### Module Inputs and Outputs - -To create an input/output, you need to define a *Field* in the respective input/output environment. Each input/output gets a name (here *input0/output0*) that you can use to reference this field. The module input maps to an input of the internal network. You need to define this mapping. In this case, the input of the macro module `Filter` maps to the input of the module `Convolution` of the internal network (*internalName = Convolution.input0*). Similarly, you need to define which output of the internal network maps to the output of the macro module `Filter`. In this example, the output of the internal module `Arithmethic2` maps to the output of our macro module `Filter` (*internalName = Arithmetic2.output0*). +To create an input/output, you need to define a *Field* in the respective input/output section. Each input/output gets a name (here *input0/output0*) that you can use to reference this field. The module input maps to an input of the internal network. You need to define this mapping. In this case, the input of the macro module `Filter` maps to the input of the module `Convolution` of the internal network (*internalName = Convolution.input0*). 
Similarly, you need to define which output of the internal network maps to the output of the macro module `Filter`. In this example, the output of the internal module `Arithmetic2` maps to the output of our macro module `Filter` (*internalName = Arithmetic2.output0*). Creating an input/output causes: 1. Input/output connectors are added to the module. 2. You can find placeholders for the input and output in the internal network (see image). 3. Input/output fields are added to the automatic panel. -4. A description of the input/output fields is automatically added to the module help file, when opening the *\*.mhelp* file after input/output creation. Helpfile creation is explained in [Example 2.3](tutorials/basicmechanisms/macromodules/helpfiles/). +4. A description of the input/output fields is automatically added to the module help file when opening the *.mhelp* file after input/output creation. Help file creation is explained in [Example 2.3](tutorials/basicmechanisms/macromodules/helpfiles/). ![Internal Network of your macro module](images/tutorials/basicmechanics/BM_23.png "Internal Network of your macro module") ##### Module Fields - -In the environment *Parameters* you can define *fields* of your macro module. These fields may map to existing fields of the internal network (*internalName = ...* ), but they do not need to and can also be completely new. You can reference these fields when creating a panel, to allow interactions with these fields. All fields appear in the *Automatic Panel*. +In the *Parameters* section, you can define *fields* of your macro module. These fields may map to existing fields of the internal network (*internalName = ...* ), but they do not need to and can also be completely new. You can reference these fields when creating a panel to allow interactions with these fields. All fields appear in the *Automatic Panel*. ### Module Panel Layout - -To create your own User Interface, we need to create a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Window" "Window" >}}. A window is one of the layout elements that exist in MDL. These layout elements are called {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#Controls" "controls" >}}. The curled brackets define the window environment, in which you can define properties of the window and insert further controls like a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Box" "Box" >}}. +To create your own user interface, we need to create a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Window" "Window" >}}. A window is one of the layout elements that exist in MDL. These layout elements are called {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#Controls" "controls" >}}. The curly brackets define the window section, in which you can define properties of the window and insert further controls like a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Box" "Box" >}}. Initially, we call the window *MyWindowTitle*, which can be used to reference this window. -Double-clicking {{< mousebutton "left" >}} on your module now opens your first self developed User Interface. +Double-clicking {{< mousebutton "left" >}} on your module now opens your first self-developed user interface.
[//]: <> (MVL-653) {{< highlight filename="Filter.script" >}} @@ -117,17 +116,17 @@ Window MyWindowName { ![Module Panel](images/tutorials/basicmechanics/ModulePanel.png "Module Panel") -You can define different properties of your control. For a window, you can for example define a title, or whether the +You can define different properties of your control. For a window, you can, for example, define a title, or whether the window should be shown in full screen (*fullscreen = True*). These properties are called {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/#SyntaxTagsAndValues" "tags" >}} and are individually different for each control. Which tags exist for the control window can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Window" "here" >}}. -The control box has different tags. You can for example define a +The control box has different tags. You can, for example, define a title for the box, but you can not define whether to present the box in full screen. If you like to add more than one control to your window, -for example one box and one label, you can specify their design like in +for example, one box and one label, you can specify their design like in the following examples: [//]: <> (MVL-653) @@ -174,13 +173,12 @@ Window MyWindowName { ![Horizontal layout of Box and Text](images/tutorials/basicmechanics/HorizontalLayout.png "Horizontal layout of Box and Text") -There are much more controls, which can be used. For example a CheckBox, +There are much more controls that can be used. For example, a CheckBox, a Table, a Grid, or a Button. To find out more, take a look into the {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#Controls" "MDL Reference" >}}. [//]: <> (MVL-653) ### Module Interactions {#mdlInteractions} - Until now, we learned how to create the layout of a panel. As a next step, we like to get an overview over interactions. {{}} @@ -188,8 +186,7 @@ You can add the module `GUIExample` to your workspace and play around with is. {{}} #### Access to Existing Fields of the Internal Network - -To interact with fields of the internal network in your User Interface, we +To interact with fields of the internal network in your user interface, we need to access these fields. To access the field of the internal module `Convolution`, which defines the kernel, we need to use the internal network name. To find the internal field name, open the internal network of the macro module `Filter` (click on the module using the middle mouse button {{< mousebutton "middle" >}}). @@ -221,7 +218,7 @@ Window MyWindowName { ![Selecting the kernel](images/tutorials/basicmechanics/SelectingKernel.png "Selecting the kernel") -As an alternative, you can define the field *kernel* in the *Parameters* environment, and reference the defined field by its name. The result in the panel is the same. You can see a difference in the Automatic Panel. All fields that are defined in the interface in the *Parameters* environment appear in the Automatic Panel. Fields of the internal network, which are used but not declared in the section *Parameters* of the module interface, do not appear in the Automatic Panel. +As an alternative, you can define the field *kernel* in the *Parameters* section, and reference the defined field by its name. The result in the panel is the same. You can see a difference in the automatic panel. All fields that are defined in the interface in the *Parameters* section appear in the automatic panel. 
Fields of the internal network that are used but not declared in the *Parameters* section of the module interface do not appear in the automatic panel. {{< highlight filename="Filter.script" >}} ```Stan @@ -252,8 +249,7 @@ Window MyWindowName { {{}} #### Commands - -We cannot only use existing functionalities, but also add new interactions via Python scripting. +We can not only use existing functionalities, but also add new interactions via Python scripting. In the example below, we added a *wakeupCommand* to the Window and a simple *command* to the Button. @@ -276,7 +272,7 @@ Both commands reference a Python function that is executed whenever both actions If you like to learn more about Python scripting, take a look at [Example 2.5](tutorials/basicmechanisms/macromodules/pythonscripting). -We need to define the Python script, which contains our Python functions. In order to do this, add a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Commands" "Command">}} section outside your window and define the tag source. +We need to define the Python script that contains our Python functions. In order to do this, add a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Commands" "Command">}} section outside your window and define the tag source. **Example:** {{< highlight filename="Filter.script" >}} @@ -288,7 +284,7 @@ Commands { {{}} {{}} -The section *Source* should already be available and generated automatically in case you enable the Wizard to add a Python file to your module. +The section *Source* should already be available and generated automatically if you enabled the Wizard option to add a Python file to your module. {{}} [//]: <> (MVL-653) diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md index 8ce8979d4..6db1b06dc 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/helpfiles.md @@ -1,5 +1,5 @@ --- -title: "Example 2.3: Creation of module help" +title: "Example 2.3: Creation of Module Help" date: 2022-06-15T08:58:44+02:00 status: "OK" draft: false @@ -8,22 +8,22 @@ tags: ["Beginner", "Tutorial", "Macro", "Macro modules", "Global Macro", "Help"] menu: main: identifier: "helpfiles" - title: "Creation of module help files in MATE" + title: "Creation of Module Help Files in MATE" weight: 400 parent: "macro_modules" --- -# Example 2.3: Creation of Module Help +# Example 2.3: Creation of Module Help Generating help of a macro module is part of the video about macro modules from [Example 2: Creation of global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules) {{< youtube "M4HnA0d1V5k">}} ## Introduction - In this chapter, you will learn how to create a help page and an example network. For hands-on training, we will use the macro module `Filter`, which was created in the [previous chapter](tutorials/basicmechanisms/macromodules/globalmacromodules). Depending on the way the macro module was created, the default help page and example network might or might not exist. In the case they exist, the help page only contains information about module inputs and outputs as well as module fields. The example network only contains the macro module itself. Both, the help page and the example network, can be created and edited after module creation.
## Steps to Do + ### Creation of Help Files Using MeVisLab MATE We will start by creating a help file using the built-in text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MeVisLab MATE">}} (MeVisLab Advanced Text Editor). If you open the context menu of your global macro module and select {{< menuitem "Help" >}}, it might be that no help page is given. We will start to create a help file by selecting {{< menuitem "Help" "Create Help" >}}. If a help page already exists, select {{< menuitem "Help" "Edit Help" >}}. @@ -69,7 +69,7 @@ Depending on the way the macro module was created, more or less features are aut {{}} ### Creation of an Example Network -To add an example network to your module, you need to add a reference to the respective *\*.mlab* file to the module definition file (.def). Open the file *Filter.def*. You can find the line *exampleNetwork = "$(LOCAL)/networks/FilterExample.mlab"*, which defines the reference to the *.mlab* file containing the example network. By default, the name of the example network is *ModulenameExample.mlab*. An *.mlab* file containing only the module *Filter* is created inside the folder *networks*. +To add an example network to your module, you need to add a reference to the respective *.mlab* file to the module definition file (*.def*). Open the file *Filter.def*. You can find the line *exampleNetwork = "$(LOCAL)/networks/FilterExample.mlab"*, which defines the reference to the *.mlab* file containing the example network. By default, the name of the example network is *ModulenameExample.mlab*. An *.mlab* file containing only the module *Filter* is created inside the folder *networks*. It is possible that the reference to the example network or the file *FilterExample.mlab* is missing. One reason could be that its creation was not selected when creating the macro module. In this case, add the reference and the file manually. @@ -79,10 +79,9 @@ To create the example network, open the file *FilterExample.mlab* in MeVisLab an ![Example Network](images/tutorials/basicmechanics/ExpNetwork_02.png "Example Network") - ## Summary -* {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MeVisLab MATE">}} is a build-in text editor which can be used to create module help files, module panels or to create module interactionss via Python scripting. -* You can create help files via the module context menu using MeVisLab Mate. -* You can add an example network to your macro module via the .def file. +* {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MeVisLab MATE">}} is a built-in text editor that can be used to create module help files and module panels, or to create module interactions via Python scripting. +* You can create help files via the module context menu using MeVisLab's MATE. +* You can add an example network to your macro module via the *.def* file. 
[//]: <> (MVL-653) \ No newline at end of file diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md index a2a7629df..f0023c3a5 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md @@ -1,5 +1,5 @@ --- -title: "Example 7: Creating your own ItemModel by using the ItemModelView" +title: "Example 7: Creating Your Own ItemModel by Using the ItemModelView" date: 2025-06-03 status: "OK" draft: false @@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "ItemModel", "ItemModelView"] menu: main: identifier: "itemmodel" - title: "Creating your own ItemModel by using the ItemModelView" + title: "Creating Your Own ItemModel by Using the ItemModelView" weight: 465 parent: "basicmechanisms" --- + # Example 7: Creating Your Own ItemModel by Using the ItemModelView ## Introduction @@ -194,7 +195,7 @@ class MyItem: ``` {{}} -Now we implement a very simple and basic model named *MyItemModel*. Initially, we create a new *MLBase* object using the existing *StandardItemModel* and define the structure of our items as already done using the attributes. +Now, we implement a very simple and basic model named *MyItemModel*. Initially, we create a new *MLBase* object using the existing *StandardItemModel* and define the structure of our items as already done using the attributes. Some additional functions are necessary to get the root item and the selected index of the model. We also need functions to add and insert items and to clear all items. @@ -279,7 +280,7 @@ Window { ``` {{}} -#### Fill the Model with Your Data +#### Fill the Model With Your Data Now, we can implement the function *imageChanged*. {{< highlight filename="MyItemModelView.py" >}} @@ -361,7 +362,7 @@ If you now open the panel of your module, you can already see the results. The first line shows the information of the patient, the study and the series and each child item represents a single slice of the image. -## Interact with Your Model +## Interact With Your Model We can now add options to interact with the *ItemModelView*. Open the *.script* file of your module and go to the *Commands* section. We add a *FieldListener* to our *selection* field. Whenever the user selects a different item in our view, the Python function *itemClicked* in the *FieldListener* is executed. {{< highlight filename="MyItemModelView.script" >}} @@ -387,7 +388,7 @@ def getItemByID(self, id): It uses *id* to find the selected item and returns all values of this item. -Now add the Python function of our *FieldListener* to your Python script: +Now, add the Python function of our *FieldListener* to your Python script: {{< highlight filename="MyItemModelView.py" >}} ```Python diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md index c2e19f33e..27ef795ff 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/package.md @@ -8,18 +8,16 @@ tags: ["Beginner", "Tutorial", "Package"] menu: main: identifier: "packageCreation" - title: "Creation of packages necessary for macro modules." + title: "Creation of Packages Necessary for Macro Modules." 
weight: 380 parent: "macro_modules" --- - # Example 2.1: Package Creation {{< youtube "1wrGsYtAs3g">}} ## Introduction - Packages are the way MeVisLab organizes different development projects. Macro modules and projects are stored in packages. If you like to create a global macro module, you need a package in which this macro module can be stored in. In this chapter, we will create our own package. We start our package creation by creating a package group, because every package needs to be stored in a package group. You can find detailed information about packages and package groups {{< docuLinks "/Resources/Documentation/Publish/SDK/GettingStarted/ch08.html" "here" >}} and in the {{< docuLinks "/Resources/Documentation/Publish/SDK/PackageStructure/index.html" "package documentation" >}}. @@ -35,7 +33,7 @@ To create packages and package groups, we will use the Project Wizard. Open the Next you need to: -1. Find a name for your package group, for example your company name or +1. Find a name for your package group, for example, your company name or in our example the name *MyPackageGroup*. 2. Find a name for your package, in our example we call it *General*. diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md index bb8df56d2..ebda6360d 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md @@ -12,6 +12,7 @@ menu: weight: 455 parent: "basicmechanisms" --- + # Example 5: Debugging Python Files in MATE {{< youtube "ccLDQUrlzjU">}} @@ -20,7 +21,7 @@ menu: MeVisLab provides the powerful integrated text editor MATE. By default, MATE is used to create/edit files like Python scripts. In this tutorial, we want to show you how to debug Python scripts in MeVisLab. ## Prepare Your Network -We are using a very simple network of pre-defined modules, but you can also debug your self-written Python scripts. Add a `LocalImage` module to your workspace and connect it to a `DicomTagBrowser` module. The `DicomTagBrowser` module shows a table containing the DICOM tags of your currently opened file. +We are using a very simple network of predefined modules, but you can also debug your self-written Python scripts. Add a `LocalImage` module to your workspace and connect it to a `DicomTagBrowser` module. The `DicomTagBrowser` module shows a table containing the DICOM tags of your currently opened file. ![Example Network](images/tutorials/basicmechanics/Debug1.png "Example Network") @@ -34,7 +35,7 @@ MATE only opens Python files if the default configuration in *MeVisLab/Preferenc ![MATE](images/tutorials/basicmechanics/Debug2.png "MATE") {{}} -You cannot only debug your own files, but also Python scripts of pre-defined MeVisLab modules. +Not only can you debug your own files, but you can also debug Python scripts of predefined MeVisLab modules. {{}} The user interface of MATE provides some relevant views for debugging. @@ -106,16 +107,16 @@ Use the *Debugging* panel (fifth button *Step to next line*) or press {{< keyboa ![Watches panel](images/tutorials/basicmechanics/Debug7b.png "Watches panel") -The *Variables* panel now shows all currently available local and global variables including their value(s). 
The *Stack Trace* panel shows that the *copyCurrentTagName* function has been called after the *DicomTagBrowser.MenuItem.command* from the \*.script file of the `DicomTagBrowser` module. +The *Variables* panel now shows all currently available local and global variables including their value(s). The *Stack Trace* panel shows that the *copyCurrentTagName* function has been called after the *DicomTagBrowser.MenuItem.command* from the *.script* file of the `DicomTagBrowser` module. ![Variables/Watches panel](images/tutorials/basicmechanics/Debug7a.png "Variables/Watches panel") ## Conditions for Breakpoints -You can also define conditions for your breakpoints. Remove breakpoint in line 180 and set a new one in line 181. In case you only want to stop the execution of your script if a specific condition is met, right click {{< mousebutton "right" >}} on your breakpoint and select {{< menuitem "Set Condition for Breakpoint" >}}. A dialog opens where you can define your condition. Enter **item.text(1) == 'SOPClassUID'** as condition. +You can also define conditions for your breakpoints. Remove the breakpoint in line 180 and set a new one in line 181. If you want to stop the execution of your script only when a specific condition is met, right-click {{< mousebutton "right" >}} on your breakpoint and select {{< menuitem "Set Condition for Breakpoint" >}}. A dialog opens where you can define your condition. Enter **item.text(1) == 'SOPClassUID'** as condition. ![Conditions for Breakpoints](images/tutorials/basicmechanics/Debug8.png "Conditions for Breakpoints") -Now, the code execution is only stopped if you copy the tag name *SOPClassUID*. In case another line is copied, the execution does not stop and just continues. +Now, the code execution is only stopped if you copy the tag name *SOPClassUID*. If another line is copied, the execution does not stop and just continues. ## Evaluate Expression The *Evaluate Expression* tab allows you to modify variables during execution. In our example you can set the result **item.text(1)** to something like **item.setText(1, "Hello")**. If you now step to the next line via {{< keyboard "F10" >}}, your watched value shows *"Hello"* instead of *"SOPClassUID"*. @@ -123,7 +124,7 @@ The *Evaluate Expression* tab allows you to modify variables during execution. I {{< imagegallery 2 "images/tutorials/basicmechanics" "Debug9" "Debug9a" >}} ## Summary -* MATE allows debugging of any Python files including files pre-defined in MeVisLab. +* MATE allows debugging of any Python files including files predefined in MeVisLab. * Values of variables can be watched. * It is possible to define conditions for breakpoints, so that the execution is only stopped if the condition is met. * It is possible to change values of variables while program execution is stopped via *Evaluate Expression* panel.
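The breakpoint condition and the *Evaluate Expression* step above are ordinary Python expressions, so their combined effect can be sketched outside MATE as well. The snippet below is a minimal, hypothetical illustration: `FakeItem` and `copy_current_tag_name` merely stand in for the Qt-style tree items and the function used in the `DicomTagBrowser` script, and only `item.text(1)`, `'SOPClassUID'`, and `item.setText(1, "Hello")` are taken from the tutorial.

```Python
# Hypothetical stand-in for the two-column tree items the DicomTagBrowser script
# iterates over; the real objects are Qt-style items, this class only mimics
# text()/setText().
class FakeItem:
    def __init__(self, columns):
        self._columns = list(columns)

    def text(self, column):
        return self._columns[column]

    def setText(self, column, value):
        self._columns[column] = value


def copy_current_tag_name(item):
    # The breakpoint condition item.text(1) == 'SOPClassUID' acts like this guard:
    # execution would only pause for items whose second column holds 'SOPClassUID'.
    if item.text(1) == 'SOPClassUID':
        # At this point MATE would stop; "Evaluate Expression" can then rewrite
        # the value on the fly, exactly as in the tutorial:
        item.setText(1, "Hello")
    return item.text(1)


print(copy_current_tag_name(FakeItem(["(0008,0016)", "SOPClassUID"])))     # prints: Hello
print(copy_current_tag_name(FakeItem(["(0008,0018)", "SOPInstanceUID"])))  # unchanged
```

Running it shows that only the item whose second column is *SOPClassUID* is affected, which is exactly the behavior the conditional breakpoint produces in MATE.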
diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md index a08104c98..30ddad0df 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonpip.md @@ -1,5 +1,5 @@ --- -title: "Example 4: Installing additional Python packages using the PythonPip module" +title: "Example 4: Installing Additional Python Packages Using the PythonPip Module" date: 2023-05-16 status: "OK" draft: false @@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "Python", "PythonPip", "pip"] menu: main: identifier: "pythonpip" - title: "Installing additional Python packages using the PythonPip module" + title: "Installing Additional Python Packages Using the PythonPip Module" weight: 450 parent: "basicmechanisms" --- + # Example 4: Installing Additional Python Packages Using the PythonPip Module ## Introduction MeVisLab already comes with a lot of integrated third-party software tools ready to use. Nevertheless, it might be necessary to install additional Python packages for your specific needs. This example will walk you through the process of adding packages through usage of/using the `PythonPip` module. @@ -53,10 +54,10 @@ We strongly recommend to install the packages into a MeVisLab user package. This The only disadvantage: Python commands will not be recognized outside of MeVisLab by default. {{}} -Thirdparty information and *.mli* files are updated automatically. +Third-party information and *.mli* files are updated automatically. ### Using the Commandline -Another option is using the commandline tool provided by MeVisLab. Under Windows, you need to change to directory *Packages\MeVis\ThirdParty\Python* first. +Another option is using the commandline tool provided by MeVisLab. On Windows, you need to change to directory *Packages\MeVis\ThirdParty\Python* first. {{< highlight filename="commandline" >}} ```cmd diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md index 7d2416de4..1ff224fdf 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythonscripting.md @@ -1,5 +1,5 @@ --- -title: "Example 2.5: Interactions via Python scripting" +title: "Example 2.5: Interactions via Python Scripting" date: 2022-06-15T08:58:44+02:00 status: "OK" draft: false @@ -8,32 +8,31 @@ tags: ["Beginner", "Tutorial", "Macro", "Macro modules", "Global Macro", "Python menu: main: identifier: "pythonscripting" - title: "Interactions with macro modules via Python scripting" + title: "Interactions with Macro Modules via Python Scripting" weight: 420 parent: "macro_modules" --- + # Example 2.5: Module Interactions Using Python Scripting {#TutorialPythonScripting} + ## Introduction This chapter will give you an overview over Python scripting in MeVisLab. Here, no introduction into Python will be given. However, basic knowledge in Python is helpful. Instead, we will show how to integrate and use Python in the MeVisLab SDK. -In fact, nearly everything in MeVisLab can be done via Python scripting: You can add modules to your network, or remove modules, you can dynamically establish and remove connections and so on. 
But, much more important: You can access module inputs and outputs, as well as module fields to process their parameters and data. You can equip user interfaces and panel with custom functionalities. Python can be used to implement module interactions. When you open a panel or you press a button in a panel, the executed actions are implemented via Python scripting. +In fact, nearly everything in MeVisLab can be done via Python scripting: You can add modules to your network, or remove modules, you can dynamically establish and remove connections, and so on. But, much more important: You can access module inputs and outputs, as well as module fields to process their parameters and data. You can equip user interfaces and panel with custom functionalities. Python can be used to implement module interactions. When you open a panel or you press a button in a panel, the executed actions are implemented via Python scripting. ## Basics +To see how to access modules, fields, and so on, open the *Scripting Console* via {{< menuitem "Scripting" "Show Scripting Console" >}}. -To see how to access modules, fields, and so on, open the *Scripting Console* Via {{< menuitem "Scripting" "Show Scripting Console" >}}. ### Internal Field Names - -You can find the internal name of one module field in the respective network. Open a panel, for example the Automatic Panel and right-click {{< mousebutton "right" >}} the field's title to open the field's context menu. Now, you can select *Copy Name*, to copy the internal name of the field. This name can be used to access the field via scripting. +You can find the internal name of one module field in the respective network. Open a panel, for example, the automatic panel and right-click {{< mousebutton "right" >}} the field's title to open the field's context menu. Now, you can select *Copy Name*, to copy the internal name of the field. This name can be used to access the field via scripting. ### Scripting Context - -When entering *ctx* to the console, you can see the context you are working with. In the context of the *Scripting Console*, you have access to your workspace, meaning the whole network, its modules, and the module fields. +When entering *ctx* to the console, you can see the context you are working with. In the context of the *Scripting Console*, you have access to your workspace, meaning the whole network, its modules, and the modules' fields. ![Scripting context](images/tutorials/basicmechanics/Scripting_02.png "Scripting context") ### Editing the Workspace - In the *Scripting Console*, you can add and connect modules using the following commands: * *ctx.addModule("*< ModuleName >*")* : Add the desired module to your workspace. @@ -49,44 +48,39 @@ It is also possible to add notes to your workspace. ![Add a note to the workspace](images/tutorials/basicmechanics/Scripting_04.png "Add a note to your workspace") ### Access Modules and Module Fields - You can access modules via *ctx.module("* < ModuleName > *")*. From this object, you can access module fields, module inputs and outputs, and everything in context of this module. -You can also directly access a module field via *ctx.field("* < ModuleName.FieldName > *")*. Different methods can be called on this object. Take a look at the {{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/group__scripting.html" "Scripting Reference" >}} to find out which methods can be called for which object or class. You can for example access the value of the respective field. 
+You can also directly access a module field via *ctx.field("* < ModuleName.FieldName > *")*. Different methods can be called on this object. Have a look at the {{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/group__scripting.html" "Scripting Reference" >}} to find out which methods can be called for which object or class. You can, for example, access the value of the respective field. [//]: <> (MVL-653) ![Access modules and module fields](images/tutorials/basicmechanics/Scripting_05.png "Access modules and module fields") ### Python Scripting Reference - -{{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/group__scripting.html" "Here" >}}, you can find the Scripting Reference. In the Scripting Reference you can find information about different Python classes used in MeVisLab and their methods. +{{< docuLinks "/Resources/Documentation/Publish/SDK/ScriptingReference/group__scripting.html" "Here" >}} you can find the Scripting Reference. In the Scripting Reference you can find information about different Python classes used in MeVisLab and their methods. [//]: <> (MVL-653) ## Where and How to Use Python Scripting + #### Scripting View Under {{< menuitem "View" "Views" "Scripting" >}} you can find the View *Scripting*. The view offers a standard Python console, without any meaningful network or module context. This means only general Python functionalities can be tested and used. Access to modules or your network is not possible. #### Scripting Console - You can open the *Scripting Console* via {{< menuitem "Scripting" "Show Scripting Console" >}}. In the context of your workspace, you can access your network and modules. #### Scripting Console of Modules - Every module offers a scripting console. Open the context menu of a module and select {{< menuitem "Show Window" "Scripting Console" >}}. You can work in the context (*ctx.*) of this module. #### Module `RunPythonScript` - The module `RunPythonScript` allows to execute Python scripts from within a MeVisLab network. You can draw parameter connection from modules to `RunPythonScript` and back, to process parameter fields using Python scripting. An example for the usage of `RunPythonScript` can be found [here](../scriptingexample1/). #### Module Interactions via Python Scripting - -You can reference to a Python function inside a *.script* file of a macro module. With this, you can for example execute a Python function, whenever you open a panel, define the action that is executed when pressing a button or specify the command triggered by a [field listener](tutorials/basicmechanisms/macromodules/scriptingexample2). An example for module interactions via Python scripting is given in the same example. +You can reference to a Python function in a *.script* file of a macro module. With this, you can, for example, execute a Python function whenever you open a panel, or define the action that is executed when pressing a button or specify the command triggered by a [field listener](tutorials/basicmechanisms/macromodules/scriptingexample2). An example for module interactions via Python scripting is given in the same example. #### Python Scripting in Network Files (*.mlab*) -If you do not want to create a macro module, you can also execute Python scripts in a network file (*.mlab*). Save your network using a defined name, for example *mytest.mlab*. Then create a *.script* and a *.py* file in the same directory, using the same names (*mytest.script* and *mytest.py*). 
+If you do not want to create a macro module, you can also execute Python scripts in a network file (*.mlab*). Save your network using a defined name, for example, *mytest.mlab*. Then, create a *.script* and a *.py* file in the same directory, using the same names (*mytest.script* and *mytest.py*). Open the *.script* file and add a *Commands* section defining the name of the Python file. @@ -98,7 +92,7 @@ Commands { ``` {{}} -Now you can enter your Python code to the file *mytest.py*, for example: +Now, you can enter your Python code into the file *mytest.py*, for example: {{< highlight filename="IsoCSOs.py" >}} ```Python @@ -116,7 +110,7 @@ If you now use the menu item {{< menuitem "Scripting" "Start Network Script" >}} Under {{< menuitem "View" "Views" "Scripting Assistant" >}} you can find the view *Scripting Assistant*. In this view, the actions you execute in the workspace are translated into Python script. -For example: Open the *Scripting Assistant*. Add the module `WEMInitialize` to your workspace. You can select a *Model*, for example the cube. In addition, you can change the *Translation* and press *Apply*. All these actions can be seen in the *Scripting Assistant*, translated into Python code. Therefore, the *Scripting Assistant* is a powerful tool to help you to script you actions. +For example: Open the *Scripting Assistant*. Add the module `WEMInitialize` to your workspace. You can select a *Model*, for example, the cube. In addition, you can change the *Translation* and press *Apply*. All these actions can be seen in the *Scripting Assistant* translated into Python code. Therefore, the *Scripting Assistant* is a powerful tool to help you to script your actions.
+The module `RunPythonScript` allows to execute Python scripts from within a MeVisLab network. You can draw parameter connection from modules to `RunPythonScript` and back to process parameter fields using Python scripting. ## Steps to Do -### Develop Your Network +### Develop Your Network In this example, we like to dynamically change the color of a cube in an Open Inventor scene. For that, add and connect the following modules as shown. ![RunPythonScript Example](images/tutorials/basicmechanics/Scripting_06.png "RunPythonScript") ### Scripting Using the Module `RunPythonScript` - Open the panel of `RunPythonScript`. There is an option to display input and output fields. For that, tick the box *Fields* on the top left side of the panel. -You can also name these fields individually, by ticking the box *Edit field titles*. Call the first input field *TimeCounter* and draw a parameter connection from the field *Value* of the panel of `TimeCounter` to the input field *TimeCounter* of the module `RunPythonScript`. +You can also name these fields individually by ticking the box *Edit field titles*. Call the first input field *TimeCounter* and draw a parameter connection from the field *Value* of the panel of `TimeCounter` to the input field *TimeCounter* of the module `RunPythonScript`. We can name the first output field *DiffuseColor* and draw a parameter connection from this field to the field *Diffuse Color* in the panel of the module `SoMaterial`. ![TimeCounter](images/tutorials/basicmechanics/Scripting_07.png "TimeCounter") @@ -57,7 +55,6 @@ You can now see a color change in the viewer `SoExaminerViewer` every time the ` ![Triggered color change](images/tutorials/basicmechanics/Scripting_08.png "Triggered color change") - ## Summary * The module `RunPythonScript` can be used to process module fields in your network using Python scripting. * Use the methods *updateOutputValue(name, value)* or *setOutputValue(name, value)* to update output fields of `RunPythonScript`. \ No newline at end of file diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md index dd0963bc0..e960e87cd 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/scriptingexample2.md @@ -1,5 +1,5 @@ --- -title: "Example 2.5.2: Module interactions via Python scripting" +title: "Example 2.5.2: Module Interactions via Python Scripting" date: 2022-06-15T08:58:44+02:00 status: "OK" draft: false @@ -8,29 +8,29 @@ tags: ["Advanced", "Tutorial", "Macro", "Macro modules", "Global Macro", "Python menu: main: identifier: "scriptingexample2" - title: "Module interactions via Python scripting" + title: "Module Interactions via Python Scripting" weight: 440 parent: "macro_modules" --- + # Example 2.5.2: Module Interactions Via Python Scripting {{< youtube "hGq6vA7Ll9Q" >}} ## Introduction - In this example, you will learn how to add Python scripting to your user interface. The network used in [Chapter V](tutorials/dataobjects/contours/contourexample5/) will be used for creating the macro module. ## Steps to Do + ### Creating the Macro Module -First, we condense the example network into a macro module and then we create a panel for that module. To create a macro module use the -Project Wizard, which you find under {{< menuitem "File" "Run Project Wizard" >}}. 
Select +First, we condense the example network into a macro module and then we create a panel for that module. To create a macro module, use the Project Wizard, which you find under {{< menuitem "File" "Run Project Wizard" >}}. Select *Macro module* and press *Run*. Now, you have to edit: 1. Name: The name of your module 2. Package: Select the package you like to save the macro module in 3. Directory Structure: Change to *Self-contained* (this setting is only available in MeVisLab versions before 5.0.0, later versions always use *self-contained*) -4. Project: Select you project name +4. Project: Select your project name Press *Next* and edit the following: @@ -43,15 +43,14 @@ Now, create your macro module and reload MeVisLab. You can find your module via ![Enable Python scripting](images/tutorials/basicmechanics/EnablePythonScripting.png "Enable Python scripting") -To design a panel and create a user interface for the macro module, open the *.script* file. You can see that a *Command* environment exist, which defines the Python file as source for all commands. +To design a panel and create a user interface for the macro module, open the *.script* file. You can see that a *Commands* section exists, which defines the Python file as source for all commands. ![Open the script file](images/tutorials/basicmechanics/OpenScriptFile.png "Open the script file") ![Script file](images/tutorials/basicmechanics/ScriptFile.png "Script file") -### Creating a Panel with Tabs and Viewers - -At first, we create a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Window" "Window" >}} with two {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html##mdl_TabView" "Tabs" >}}. One *Main* tab, in which both viewers of the network are represented and one tab for *Settings*. For generating tabs, we can use the control {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_TabView" "TabView" >}}, with its items {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_TabViewItem" "TabViewItem" >}}. The control *TabView* enables to add a command, which is executed when opening the tab. For adding the viewers to the panel, we use the Control *Viewer*. +### Creating a Panel With Tabs and Viewers +First, we create a {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Window" "Window" >}} with two {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html##mdl_TabView" "Tabs" >}}: the *Main* tab, in which both viewers of the network are represented, and the tab for *Settings*. For generating tabs, we can use the control {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_TabView" "TabView" >}}, with its items {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_TabViewItem" "TabViewItem" >}}. The control *TabView* enables to add a command, which is executed when opening the tab. For adding the viewers to the panel, we use the control *Viewer*. [//]: <> (MVL-653) @@ -84,17 +83,23 @@ Window { ![Panel with Tabs and Viewers](images/tutorials/basicmechanics/PanelWithTabsAndViewers.png "Panel with Tabs and Viewers") ### Edit Viewer Settings in the Panel - -You may want to change the design setting of the right viewer. This is possible via the network file of the macro module. Open the context menu {{< mousebutton "right" >}} and select {{< menuitem "Related Files" "IsoCSOs.mlab" >}} on the module. 
In the network file, open the Automatic Panel of the module `SoExaminerViewer` via context menu {{< menuitem "Show Windows" "Automatic Panel" >}} and change the field *decoration* to *False*. Keep in mind, as we did not create CSOs by now, the right viewer stays black. +You may want to change the design setting of the right viewer. This is +still possible via the internal network of the macro module. Open the +internal network either via the context menu or using the middle mouse +button {{< mousebutton "middle" >}} and click on the module. After that, open the automatic panel of +the module `SoExaminerViewer` via context menu {{< menuitem "Show Windows" "Automatic Panel" >}} and change the field *decoration* to *False*. Keep in mind that, as we have not created any CSOs yet, the right viewer stays black. ![Change viewer settings](images/tutorials/basicmechanics/ChangeViewerSettings.png "Change viewer settings") ![Changed viewer settings](images/tutorials/basicmechanics/ChangedViewerSettings.png "Changed viewer settings") ### Selection of Images - Next, we like to add the option to browse through the folders and select -the image, we like to create CSOs from. This functionality is already present in the internal network in the module `LocalImage`. We can copy this functionality from `LocalImage` and add this option to the panel above both viewers. But, how should we know which field name we reference to? To find this out, open the network file of your macro module again. Now you are able to open the panel of the module `LocalImage`. Right-click {{< mousebutton "right" >}} the desired field: In this case, right-click the label *Name:*. Select *Copy Name* to copy the internal name of this field. +the image, we like to create CSOs from. This functionality is already given in the internal network in the module `LocalImage`. We can copy this functionality from `LocalImage` and add this option to the panel above both viewers. But, how should we know which field name we +reference to? To find this out, open the +internal network of your macro module. Now you are able to open the panel of +the module `LocalImage`. Right-click {{< mousebutton "right" >}} the desired field: In this case, right-click the label +*Name:*. Select *Copy Name* to copy the internal name of this field. ![Copy the field name](images/tutorials/basicmechanics/GUI_Exp_09.png "Copy the field name") @@ -132,8 +137,7 @@ Window { ![Add name field](images/tutorials/basicmechanics/AddNameField.png "Add name field") -### Add Buttons to your Panel - +### Add Buttons to Your Panel As a next step, we like to add a *Browse\...* button, like in the module `LocalImage`, and also a button to create the CSOs. @@ -145,11 +149,11 @@ To create the *Browse\...* button: To create the Iso Generator Button: -We like to copy the field of the Update-Button from the internal module +We like to copy the field of the *Update* button from the internal module `IsoCSOGenerator`, but not its layout so: 1. Create a new Field in the interface, called *IsoGenerator*, which contains the internal field *Update* from the module `IsoCSOGenerator`. -2. Create a new Button in your Window which uses the field *IsoGenerator*. +2. Create a new Button in your Window that uses the field *IsoGenerator*. After these steps, you can use the Iso Generator button to create CSOs.
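The MDL *Field* definitions above map panel controls to internal fields, and the same internal fields can also be driven from the macro module's Python context. The sketch below is hypothetical: it only assumes the internal instance names used in this example (`LocalImage`, `IsoCSOGenerator`) and a trigger field called *update* behind the *Update* button, so verify the exact names with *Copy Name* as described above. It is meant to run where MeVisLab provides `ctx`, for example in the module's scripting console.

```Python
# Hypothetical sketch: drive the same internal fields from Python instead of MDL.
# Assumptions: internal instance names LocalImage and IsoCSOGenerator, and a
# trigger field named 'update' behind the Update button (check via Copy Name).
# 'ctx' is provided by MeVisLab inside the macro module's scripting context.

def loadImageAndGenerateCSOs(path):
    ctx.field("LocalImage.name").value = path    # same field the Name control maps to
    ctx.field("IsoCSOGenerator.update").touch()  # same effect as pressing Update
```

Functionally this mirrors what the *Browse...* field and the Iso Generator button do through the panel; it can be convenient for quick tests in the module's scripting console.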
@@ -220,10 +224,8 @@ def fileDialog(): ![Automatically generate CSOs based on Iso value](images/tutorials/basicmechanics/GUI_Exp_14.png "Automatically generate CSOs based on Iso value") ### Colorizing CSOs - We like to colorize the CSO we hover over with our -mouse in the 2D viewer. Additionally, when clicking a CSO with the left mouse key {{< mousebutton "left" >}}, this CSO shall be -colorized in the 3D viewer. This functionality can be implemented via Python +mouse in the 2D viewer. Additionally, when clicking a CSO with the left mouse button {{< mousebutton "left" >}}, this CSO shall be colorized in the 3D viewer. This functionality can be implemented via Python scripting (even though MeVisLab has a build-in function to do that). We can do this in the following way: @@ -231,13 +233,13 @@ can do this in the following way: ![Scripting Assistant](images/tutorials/basicmechanics/GUI_Exp_15.png "Scripting Assistant") -2. Enable a functionality that allows us to notice the id of the CSO we are currently hovering over with our mouse. For this open the network file of our macro module. We will use the module `SoView2DCSOExtensibleEditor`. Open its panel and select the tab *Advanced*. You can check a box to enable *Update CSO id under mouse*. If you now hover over a CSO, you can see its id in the panel. We can save the network to save this functionality, but we can also solve our problem via scripting. The Scripting Assistant translated our action into code, which we can use. +2. Enable a functionality that allows us to notice the ID of the CSO we are currently hovering over with our mouse. For this, open the internal network of our macro module. We will use the module `SoView2DCSOExtensibleEditor`. Open its panel and select the tab *Advanced*. You can check a box to enable *Update CSO id under mouse*. If you now hover over a CSO, you can see its ID in the panel. We can save the internal network to save this functionality, but we can also solve our problem via scripting. The Scripting Assistant translated our action into code that we can use. ![Enabling CSO id identification](images/tutorials/basicmechanics/GUI_Exp_16.png "Enabling CSO id identification") - We like to activate this functionality when opening the panel of our macro module `IsoCSOs`. Thus, we add a starting command to the control Window. We can call this command for example *enableFunctionalities*. + We like to activate this functionality when opening the panel of our macro module `IsoCSOs`. Thus, we add a starting command to the control Window. We can call this command, for example, *enableFunctionalities*. - In the *\*.script* file: + In the *.script* file: {{< highlight filename="IsoCSOs.script" >}} ```Stan @@ -261,7 +263,7 @@ def enableFunctionalities(): ``` {{}} -3. Implement a field listener. This field listener will detect when you hover over a CSO and the CSO id changes. Triggered by a CSO id change, a colorization function will be executed, which will colorize the selected CSO. +3. Implement a field listener. This field listener will detect when you hover over a CSO and the CSO ID changes. Triggered by a CSO ID change, a colorization function will be executed that will colorize the selected CSO. In the *.script* file: @@ -310,7 +312,7 @@ def colorizeCSO(): {{}} Reload your module ({{< keyboard "F5" >}}) and open the panel. After generating CSOs, the CSO under your mouse is marked. Clicking this CSO {{< mousebutton "left" >}} enables the marking in the 3D viewer. 
If you like, you can add some settings to your *Settings* -page. For example +page. For example: {{< highlight filename="IsoCSOs.script" >}} ```Stan @@ -329,6 +331,6 @@ TabViewItem Settings { * The control *Button* creates a button executing a Python function when pressed. * The tag *WindowActivationCommand* of the control Window triggers Python functions executed when opening the panel. * Field listeners can be used to activate Python functions triggered by a change of defined parameter fields. -* Use the view *Scripting Assistant* can be used to translate actions into Python code. +* Use the view *Scripting Assistant* to translate actions into Python code. {{< networkfile "examples/basic_mechanisms/macro_modules_and_module_interaction/example2/ScriptingExample2.zip" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md index 26837a967..0cd4a1a5a 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/soviewportregion.md @@ -1,5 +1,5 @@ --- -title: "Example 6: Creating Multi-View Layouts Using SoViewportRegion" +title: "Example 6: Creating Multi View Layouts Using SoViewportRegion" date: 2025-04-22 status: "OK" draft: false @@ -8,11 +8,12 @@ tags: ["Beginner", "Tutorial", "SoViewportRegion", "Layout", "Multi-View"] menu: main: identifier: "soviewportregion" - title: "Creating Multi-View Layouts Using SoViewportRegion" + title: "Creating Multi View Layouts Using SoViewportRegion" weight: 460 parent: "basicmechanisms" --- -# Example 6: Creating Multi-View Layouts Using SoViewportRegion + +# Example 6: Creating Multi View Layouts Using SoViewportRegion ## Introduction In this guide, we will show how to use the `SoViewportRegion` module to create custom layouts within the `SoRenderArea` module. This allows you to display multiple views or slices in a single window. @@ -29,13 +30,13 @@ Add an `ImageLoad` module to your workspace and select a 3D image like *./MeVisL ![Image Display Setup](images/tutorials/basicmechanics/E6_1.png "Image Display Setup") -Opening the three `View2D` module panels now shows the data in three orthogonal views. The module `OrthoReformat3` transforms the input image (by rotating and/or flipping) into the three main views commonly used. +Opening the three `View2D` module panels now shows the image data in three orthogonal views. The module `OrthoReformat3` transforms the input image (by rotating and/or flipping) into the three main views commonly used. ![3 Views in 3 Viewers](images/tutorials/basicmechanics/E6_2.png "3 Views in 3 Viewers") The module `SoViewportRegion` divides the render window into multiple areas, allowing different views or slices to be shown in the same window. It's useful in medical applications, like displaying MRI or CT images from different angles (axial, sagittal, coronal) at once, making data analysis easier and faster. -Add three `SoViewportRegion` modules and connect each one to a `View2D` module. To display the hidden outputs of the `View2D` module, press {{< keyboard "SPACE" >}} and connect the output to the input of `SoViewportRegion`, as shown below. +Add three `SoViewportRegion` modules and connect each one to a `View2D` module. To display the hidden outputs of the `View2D` module, press {{< keyboard "SPACE" >}} and connect the output to the input of `SoViewportRegion` as shown below. 
![Connect SoViewportRegion with View2D](images/tutorials/basicmechanics/E6_3.png "Connect SoViewportRegion with View2D") @@ -64,7 +65,7 @@ We want to create a layout with the following setting: ![Target Layout](images/tutorials/basicmechanics/E6_6.png "Target Layout") -Now open the left `SoViewportRegion` module and change settings: +Now, open the left `SoViewportRegion` module and change settings: * **X-Position and Width** * *Left Border* to 0 @@ -114,11 +115,11 @@ In the next example, the `SoRenderArea` will display four views at the same time ![3D View Layout](images/tutorials/basicmechanics/E6_11.png "3D View Layout") -These views will be arranged in a single panel, split into two sides, with each side showing two images. To add the 3D view, insert a `View3D` module and connect it to the `ImageLoad` module. Then connect the `View3D` to `SoCameraInteraction`, connect that to another `SoViewportRegion`, and finally to `SoRenderArea`. +These views will be arranged in a single panel that is split into two sides with each side showing two images. To add the 3D view, insert a `View3D` module and connect it to the `ImageLoad` module. Then, connect the `View3D` to `SoCameraInteraction`, connect that to another `SoViewportRegion`, and finally to `SoRenderArea`. ![3D View Network](images/tutorials/basicmechanics/E6_10.png "3D View Network") -Now open the left `SoViewportRegion` module and change settings: +Now, open the left `SoViewportRegion` module and change settings: * **X-Position and Width** * *Left Border* to 0 @@ -144,7 +145,7 @@ Open the right `SoViewportRegion` connected to the `SoCameraInteraction` module * *Domain* Fraction of height * *Reference* Upper window border -This setup will let you interact with the 3D view and display all four views together, as shown in the figure below. +This setup will let you interact with the 3D view and display all four views together as shown in the figure below. ![3D View](images/tutorials/basicmechanics/E6_12.png "3D View") @@ -152,8 +153,8 @@ You will see that the orientation cube of the 3D viewer appears in the bottom ri ![Final Network](images/tutorials/basicmechanics/E6_13.png "Final Network") -## Alternative Using `SoView2D` -In case you want the same dataset to be visualized in multiple viewers, the module `SoView2D` already provides this functionality. +## Alternatively Using `SoView2D` +In the case you want the same dataset to be visualized in multiple viewers, the module `SoView2D` already provides this functionality. ![Initial SoView2D](images/tutorials/basicmechanics/SoView2D_1.png "Initial SoView2D") @@ -163,7 +164,7 @@ By default, you will see your images in a single viewer the same way as if you u ![Multiple slices in SoView2D](images/tutorials/basicmechanics/SoView2D_2.png "Multiple slices in SoView2D") -Changing the *number of columns* to *3* and the *Number of Slices* to *9* results in a 3 x 3 layout. +Changing the *number of columns* to *3* and the *Number of Slices* to *9* results in a 3x3 layout. ![Multiple slices and columns in SoView2D](images/tutorials/basicmechanics/SoView2D_3.png "Multiple slices and columns in SoView2D") @@ -179,6 +180,6 @@ You can play around with the different `SoViewportRegion` modules to create your ![Exercise](images/tutorials/basicmechanics/E6_14.png "Exercise") ## Summary -* Own layouts can be created by using multiple `SoViewportRegion` modules +* Own layouts can be created by using multiple `SoViewportRegion` modules. 
{{< networkfile "examples/basic_mechanisms/soviewportregion.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md index 7e24a265b..7f25e1e7d 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/viewerexample.md @@ -1,5 +1,5 @@ --- -title: "Example 3: Creating a simple application" +title: "Example 3: Creating a Simple Application" date: 2022-06-15T08:58:44+02:00 status: "OK" draft: false @@ -8,23 +8,27 @@ tags: ["Advanced", "Tutorial", "Macro", "Macro modules", "Global Macro", "Python menu: main: identifier: "viewerexample" - title: "Adding viewer to your UI and implement a field listener in Python" + title: "Adding Viewer to Your UI and Implement a Field Listener in Python" weight: 445 parent: "basicmechanisms" --- + # Example 3: Creating a Simple Application + ## Introduction In the previous examples, you already learned how to create macro modules, user interfaces, and how to interact with your UI via Python scripting. In this example, you will learn how to create a simple prototype application in MeVisLab including a user interface with 2D and 3D viewer. You will learn how to implement field listeners and react on events. ## Steps to Do + ### Create Your Network -Start with an empty network and add the Module `ImageLoad` to your workspace. Then, add a `View2D` and `View3D` to your workspace and connect the modules as seen below. +Start with an empty network and add the module `ImageLoad` to your workspace. Then, add the modules `View2D` and `View3D` to your workspace and connect them as seen below. ![Loading and viewing images](images/tutorials/basicmechanics/SimpleApp_01.png "Loading and viewing images") + ### Load an Image -Now double-click {{< mousebutton "left" >}} on the `ImageLoad` module and open any image. You can use the included file *./MeVisLab/Resources/DemoData/MRI_Head.dcm*. +Now, double-click {{< mousebutton "left" >}} on the `ImageLoad` module and open any image. You can use the included file *./MeVisLab/Resources/DemoData/MRI_Head.dcm*. Opening your viewers should now show the images in 2D and 3D. @@ -75,7 +79,7 @@ Interface { ``` {{}} -We now re-use the *filepath* field from the `ImageLoad` module for our interface. Add a *Window* and a *Vertical* to the bottom of your *.script* file. Add the just created parameter field *filepath* inside your *Vertical* as seen below. +We now reuse the *filepath* field from the `ImageLoad` module for our interface. Add a *Window* and a *Vertical* to the bottom of your *.script* file. Add the just created parameter field *filepath* inside your *Vertical* as seen below. {{< highlight filename="MyViewerApplication.script" >}} ``` Stan @@ -138,7 +142,7 @@ Window { We have a vertical layout having two items placed horizontally next to each other. The new *Button* gets the title *Reset* but does nothing yet, because we did not add a Python function to a command. -Additionally, we added the `View2D` and the `View3D` to our *Window* and defined the *height*, *width* and the *expandX/Y* property to *yes*. This leads our viewers to resize together with our *Window*. +Additionally, we added the `View2D` and the `View3D` to our *Window* and defined the *height*, *width*, and the *expandX/Y* property to *yes*. This leads our viewers to resize together with our *Window*. 
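The *Reset* command and the field listener added in the following sections are ordinary Python functions living next to the *.script* file and running in the macro module's script context. As a hedged preview only (the field name *filepath* and the logging call are assumptions that must match your own interface definition):

```Python
# Minimal sketch of the module's Python side (runs inside the MeVisLab module
# context, where 'ctx' is available). Field names are assumptions.

def reset():
    # Clear the re-used file path parameter; the viewers then show no image.
    ctx.field("filepath").setStringValue("")

def printCurrentSliceNumber(field):
    # Called by a FieldListener; 'field' is the watched field object.
    print(f"Current slice: {field.value}")
```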
{{}} Additional information about the `View2D` and `View3D` options can be found in the MeVisLab {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#mdl_Viewer" "MDL Reference">}} @@ -149,7 +153,7 @@ You can now play around with your module in MeVisLab SDK. Open the *Window* and ![2D and 3D viewers in our application](images/tutorials/basicmechanics/SimpleApp_09.png "2D and 3D viewers in our application") ### Develop a Python Function for Your Button -Next we want to reset the filepath to an empty string on clicking our *Reset* button. Add the *reset* command to your Button. +Next, we want to reset the filepath to an empty string on clicking our *Reset* button. Add the *reset* command to your Button. {{< highlight filename="MyViewerApplication.script" >}} ``` Stan ... @@ -175,7 +179,6 @@ def reset(): Clicking on *Reset* in your module now clears the filename field and the viewers do not show any images anymore. #### Field Listeners {#fieldlisteners} - A field listener watches a given field in your network and reacts on any changes of the field value. You can define Python functions to execute in the case a change has been detected. In order to define such a listener, you need to add it to the *Commands* section in your *.script* file. @@ -207,8 +210,8 @@ def printCurrentSliceNumber(field): Scrolling through slices in the `View2D` module now logs a message containing the slice number currently visible to the MeVisLab Debug Output. ## Summary -* You can add any Viewers to your application UI by reusing them in MDL. -* Parameter Fields using the internalName of an existing field in your network allows re-using this UI element in your own UI. Changes in your UI are applied to the field in the module. +* You can add any viewers to your application UI by reusing them in MDL. +* Parameter fields using the *internalName* of an existing field in your network allows reusing this UI element in your own UI. Changes in your UI are applied to the field in the module. * Field Listeners allow reacting on changes of a field value in Python. {{< networkfile "examples/basic_mechanisms/viewer_application/viewerexample.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects.md b/mevislab.github.io/content/tutorials/dataobjects.md index e7dde76ed..b5969568e 100644 --- a/mevislab.github.io/content/tutorials/dataobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects.md @@ -8,17 +8,17 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "Contours", "3D", "Surfaces menu: main: identifier: "dataobjects" - title: "Examples for handling Data Objects like Contours, Surfaces and Markers in MeVisLab." + title: "Examples for Handling Data Objects like Contours, Surfaces, and Markers in MeVisLab" weight: 650 parent: "tutorials" --- -## Data Objects in MeVisLab {#TutorialDataObjects} -MeVisLab provides pre-defined data objects, e.g.: -* [Contour Segmentation Objects (CSOs)](tutorials/dataobjects/contourobjects)
, -which are three-dimensional objects encapsulating formerly defined contours within images. +## Data Objects in MeVisLab {#TutorialDataObjects} +MeVisLab provides predefined data objects, for example: +* [Contour Segmentation Objects (CSOs)](tutorials/dataobjects/contourobjects)
+three-dimensional objects encapsulating formerly defined contours within images. * [Surface Objects (Winged Edge Meshes or WEMs)](tutorials/dataobjects/surfaceobjects)
- represent the surface of geometrical figures and allow the user to manipulate them. +represent the surface of geometrical figures and allow the user to manipulate them. * [Markers](tutorials/dataobjects/markerobjects)
are used to mark specific locations or aspects of an image and allow to process those later on. * [Curves](tutorials/dataobjects/curves)
diff --git a/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md b/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md index e76f121fd..5e11d5cdd 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md @@ -8,14 +8,16 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "Contours", "CSO"] menu: main: identifier: "contours" - title: "Contour Segmented Objects (CSOs) in MeVisLab" + title: "Contour Segmentation Objects (CSOs) in MeVisLab" weight: 660 parent: "dataobjects" --- -# Contour Segmented Objects (CSOs) in MeVisLab {#CSO} + +# Contour Segmentation Objects (CSOs) in MeVisLab {#CSO} + ## Introduction -### Structure of CSOs +### Structure of CSOs MeVisLab provides modules to create contours in images. 3D objects that encapsulate these contours are called Contour Segmentation Objects (CSOs). In the next image, you can see a rectangular shaped CSO. The pink circles you can see are called *Seed Points*. @@ -31,7 +33,6 @@ In general, the *Seed Points* are created interactively using an editor module a ![Contour Segmented Object (CSO)](images/tutorials/dataobjects/contours/CSO_Expl_01.png "Contour Segmented Object (CSO)") #### CSO Editors {#CSOEditors} - As mentioned, when creating CSOs, you can do this interactively by using an editor. The following images show editors available in MeVisLab for drawing CSOs: @@ -46,8 +47,7 @@ The `SoCSOIsoEditor` and `SoCSOLiveWireEditor` are special, because they are usi {{
}} ### CSO Lists and CSO Groups - -All created CSOs are stored in CSO lists that can be saved and loaded on demand. The lists can not only store the coordinates of the CSOs, but also additional information in the form of name-value pairs (using specialized modules or Python scripting). +All created CSOs are stored in CSO lists that can be saved and loaded on demand. The lists can store not only the coordinates of the CSOs, but also additional information in the form of name-value pairs (using specialized modules or Python scripting). ![Basic CSO Network](images/tutorials/dataobjects/contours/BasicCSONetwork.png "Basic CSO Network") diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md index c4ceb4763..8b235e177 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample1.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "Contours", "CSO"] menu: main: identifier: "contourexample1" - title: "Creation of simple Contours changing their appearance" + title: "Creation of Simple Contours Changing Their Appearance" weight: 665 parent: "contours" --- @@ -18,17 +18,17 @@ menu: {{< youtube "ygYJMmQ95v8">}} ## Introduction - We like to start with the creation of CSOs. To create CSOs, you need a `SoCSO*`-Editor. There are several different editors that can be used to create CSOs (see [here](tutorials/dataobjects/contourobjects#CSOEditors)). Some of them are introduced in this example. ## Steps to Do + ### Develop Your Network For this example, we need the following modules. Add the modules to your workspace, connect them as shown below, and load the example image *$(DemoDataPath)/BrainMultiModal/ProbandT1.tif*. ![Data Objects Contours Example 1](images/tutorials/dataobjects/contours/DO1_01.png "Data Objects Contours Example 1") ### Edit Rectangular CSO -Now, open the module `View2D`. Use your left mouse key {{< mousebutton "left" >}}, to draw a rectangle, which is your first CSO. +Now, open the module `View2D`. Use your left mouse button {{< mousebutton "left" >}} to draw a rectangle as your first CSO. ![Rectangle Contour](images/tutorials/dataobjects/contours/DO1_02.png "Rectangle Contour") @@ -73,7 +73,7 @@ If you want to fill the shapes, you can simply add a `SoCSOFillingRenderer` modu Create CSOs with green color and ellipsoid shapes. ## Summary -* CSOs can be created using a SoCSO-Editor. +* CSOs can be created using a SoCSO\*-Editor. * CSOs of different shapes can be created. * A list of CSOs can be stored in the `CSOManager`. * Properties of CSOs can be changed using `SoCSOVisualizationSettings`.
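Because CSO lists are exposed to Python scripting, their contents can also be inspected programmatically. The following is only a sketch: it assumes a module exposing an *outCSOList* field (such as the `CSOListContainer` used in a later example) and the wrapper methods `getNumCSOs()`/`getCSOAt()`; check the CSO scripting reference of your MeVisLab version for the exact API.

```Python
# Hedged sketch: inspect the CSOs stored in a CSO list from module scripting.
# The output field name and the methods getNumCSOs()/getCSOAt() are assumptions.
csoList = ctx.field("CSOListContainer.outCSOList").object()
if csoList is not None:
    for index in range(csoList.getNumCSOs()):
        cso = csoList.getCSOAt(index)
        print(f"CSO {cso.getId()}: length = {cso.getLength():.1f} mm")
```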
diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md index 1f927b4e5..87451b88a 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample2.md @@ -8,18 +8,18 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "Contours", "CSO", "Interpo menu: main: identifier: "contourexample2" - title: "Creating Contours using Live Wire and linear Interpolation, grouping CSOs for different colors" + title: "Creating Contours Using Live Wire and Linear Interpolation, Grouping CSOs for Different Colors" weight: 670 parent: "contours" --- + # Contour Example 2: Creating Contours using Live Wire and Interpolation {#TutorialContoursExample2} {{< youtube "l2ih_maKfSw">}} ## Introduction - In this example, we like to create CSOs using the **Live Wire -Algorithm**, which allows semi-automatic CSO creation. The algorithm +Algorithm**, which allows semiautomatic CSO creation. The algorithm uses edge detection to support the user creating CSOs. We also like to interpolate CSOs over slices. That means additional CSOs are @@ -28,13 +28,14 @@ generated between manual segmentations based on a linear interpolation. As a last step, we will group together CSOs of the same anatomical unit. ## Steps to Do + ### Develop Your Network and Create CSOs In order to do that, create the shown network. You can use the network from the previous example and exchange the `SoCSO`-Editor. In addition to that, load the example image *$(DemoDataPath)/Thorax1_CT.small.tif* . -Now, create some CSOs on different, not consecutive slices. Afterwards, -hover over the `CSOManager` and press the emerging plus-sign. This +Now, create some CSOs on different, not consecutive slices. Afterward, +hover over the `CSOManager` and press the emerging *plus* symbol. This displays the amount of existing CSOs. ![Data Objects Contours Example 2](images/tutorials/dataobjects/contours/DO2_02.png "Data Objects Contours Example 2") @@ -84,7 +85,7 @@ As a last step, we need to disconnect the module `SoCSOVisualizationSettings`, a ![Interpolated CSOs](images/tutorials/dataobjects/contours/DO2_11.png "Interpolated CSOs") ## Summary -* `SoCSOLiveWireEditor` can be used to create CSOs semi-automatically. +* `SoCSOLiveWireEditor` can be used to create CSOs semiautomatically. * CSO interpolations can be created using `CSOSliceInterpolator`. * CSOs can be grouped together using the `CSOManager`. diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md index 9eafb3b78..57342d785 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample3.md @@ -12,12 +12,12 @@ menu: weight: 675 parent: "contours" --- + # Contour Example 3: Overlay Creation and 3D Visualization of Contours {#TutorialContoursExample3} {{< youtube "6NmKQagTDKg">}} ## Introduction - In this example, we'd like to use the created CSOs to display an overlay. This allows us to mark one of two lungs. In addition to that, we will display the whole segmented lobe of the lung in a 3D @@ -32,7 +32,7 @@ shown. The module `VoxelizeCSO` allows to convert CSOs into voxel images. 
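A voxelized CSO is simply a binary mask in the image's voxel grid, which is also what makes volume measurements (as in the next example) straightforward. The NumPy sketch below is only an illustration of that idea with an assumed voxel spacing; it is not how `VoxelizeCSO` or `CalculateVolume` are implemented internally.

```Python
import numpy as np

# Placeholder segmentation mask standing in for a voxelized CSO.
mask = np.zeros((40, 40, 40), dtype=bool)
mask[10:30, 10:30, 12:20] = True

voxel_size_mm = (0.7, 0.7, 2.0)              # assumed voxel spacing in mm
voxel_volume_mm3 = np.prod(voxel_size_mm)
volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0  # 1 ml = 1000 mm^3
print(f"Segmented volume: {volume_ml:.1f} ml")
```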
![Data Objects Contours Example 3](images/tutorials/dataobjects/contours/DO3_02.png "Data Objects Contours Example 3") ### Convert CSOs into Voxel Images -Update the module `VoxelizeCSOs` to create overlays based on your CSOs. +Update the module `VoxelizeCSOs` to create voxel masks based on your CSOs. The result can be seen in `View2D1`. ![Overlay](images/tutorials/dataobjects/contours/DO3_03.png "Overlay") diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md index 9eb121124..b62f0e4c7 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample4.md @@ -8,21 +8,23 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "Contours", "CSO", "Annotat menu: main: identifier: "contourexample4" - title: "Calculate the volume of your segmentation and display ml value on your image in viewer" + title: "Calculate the Volume of Your Segmentation and Display Milliliter Value on Your Image in the Viewer" weight: 680 parent: "contours" --- + # Contour Example 4: Annotation of Images {#TutorialContoursExample4} {{< youtube "bT2ZprYcuOU">}} ## Introduction In this example we like to calculate the volume of our object, in this -case the part of the lung we have segmented. +case, the part of the lung we have segmented. ## Steps to Do + ### Develop Your Network and Calculate the Lung Volume -Add the module `CalculateVolume` and `SoView2DAnnotation` to your workspace +Add the modules `CalculateVolume` and `SoView2DAnnotation` to your workspace and connect both modules as shown. Update the module `CalculateVolume`, which directly shows the volume of our object. @@ -31,7 +33,7 @@ which directly shows the volume of our object. ### Display the Lung Volume in the Image We now like to display the volume in the image viewer. For this, open the panel of the modules `CalculateVolume` and `SoView2DAnnotation`. -Open the tab *Input* in the panel of the module `SoView2DAnnotation`. Now +Open the tab *Input* in the panel of the module `SoView2DAnnotation`. Now, establish a parameter connection between *Total Volume* calculated in the module `CalculateVolume` and the *input00* of the module `SoView2DAnnotation`. This connection projects the *Total Volume* to the diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md index 0972a4b03..73ed7b82f 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample5.md @@ -8,26 +8,27 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "Contours", "CSO"] menu: main: identifier: "contourexample5" - title: "Visualizing Contours on currently visible and neighboring slices (ghosting)" + title: "Visualizing Contours on Currently Visible and Neighboring Slices (Ghosting)" weight: 685 parent: "contours" --- + # Contour Example 5: Visualizing Contours and Images {#TutorialContoursExample5} {{< youtube "6fHmy57P3yQ">}} ## Introduction - In this example, we like to automatically create CSOs based on a predefined isovalue. ## Steps to Do + ### Develop Your Network Add the following modules to your workspace and connect them as shown. Load the example image *Bone.tiff*. 
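Conceptually, the isocontours generated in the next step separate voxels below the isovalue from voxels at or above it. A minimal sketch of that criterion on placeholder data (plain NumPy, not the `CSOIsoGenerator` API):

```Python
import numpy as np

iso_value = 1200
slice_2d = np.random.randint(0, 3000, size=(8, 8))  # placeholder CT slice values
inside = slice_2d >= iso_value                       # region the closed CSOs enclose
print(inside.astype(int))
```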
### Automatic Creation of CSOs Based on the Isovalue Now, open the panel of `CSOIsoGenerator` to set the *Iso Value* to 1200. If you press *Update* in -the panel, you can see the creation of CSOs on every slide when opening +the panel, you can see the creation of CSOs on each image slice when opening the module `View2D`. In addition to that, the number of CSOs is displayed in the `CSOManager`. The module `CSOIsoGenerator` generates isocontours for each slice at a fixed isovalue. This means that closed CSOs are formed based on the detection of the voxel value of 1200 on every slice. @@ -35,7 +36,7 @@ voxel value of 1200 on every slice. ![Data Objects Contours Example 5](images/tutorials/dataobjects/contours/DO5_02.png "Data Objects Contours Example 5") ### Ghosting -Now, we like to make CSOs of previous and subsequent slices visible (Ghosting). In +Now, we like to make CSOs of previous and subsequent slices visible (ghosting). In order to do that, open the panel of `SoCSOVisualizationSettings` and open the tab *Misc*. Increase the parameter `Ghosting depth in voxel`, which shows you the number of slices above and below the current slice in @@ -47,7 +48,7 @@ viewer. ### Display Created CSOs At last, we like to make all CSOs visible in a 3D viewer. To do that, add the modules `SoCSO3DRenderer` and `SoExaminerViewer` to your network -and connect them as shown. In the viewer `SoExaminerViewer` you can see +and connect them as shown. In the viewer `SoExaminerViewer`, you can see all CSOs together. In this case all scanned bones can be seen. ![CSOs in 3D View](images/tutorials/dataobjects/contours/DO5_05.png "CSOs in 3D View") diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md index a481dac56..7e1fcab5c 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md @@ -12,18 +12,18 @@ menu: weight: 690 parent: "contours" --- + # Contour Example 6: Adding Labels to Contours {#TutorialContoursExample6} {{< youtube "-ACAoeK2Fm8">}} ## Introduction - In this example, we are adding a label to a contour. The label provides information about measurements and about the contour itself. The label remains connected to the contour and can be moved via mouse interactions. ## Steps to Do -### Develop Your Network -Add a `LocalImage` and a `View2D` module to your workspace and connect them as shown below. Load the file *ProbandT1.dcm* from MeVisLab demo data. In order to create contours (CSOs), we need a `SoView2DCSOExtensibleEditor` module. It manages attached CSO editors, renderers and offers an optional default renderer for all types of CSOs. +### Develop Your Network +Add the modules `LocalImage` and `View2D` to your workspace and connect them as shown below. Load the file *ProbandT1.dcm* from MeVisLab demo data. In order to create contours (CSOs), we need a `SoView2DCSOExtensibleEditor` module. It manages attached CSO editors, renderers and offers an optional default renderer for all types of CSOs. The first CSO we want to create is a distance line. Add a `SoCSODistanceLineEditor` to the `SoView2DCSOExtensibleEditor`. It renders and interactively generates CSOs that consist of a single line segment. The line segment can be rendered as an arrow; it can be used to measure distances. @@ -46,8 +46,8 @@ We now want to customize the details to be shown for each distance line. 
Open th Enter the following to the panel of the `CSOLabelRenderer` module: {{< highlight filename="CSOLabelRenderer" >}} ```Python -labelString = f"Length: {cso.getLength()}" -labelName = f"Distance: {cso.getId()}" +labelString = f'Length: {cso.getLength()} mm' +labelName = f'ID: {cso.getId()}' deviceOffsetX = 0 deviceOffsetY = 0 ``` @@ -68,11 +68,11 @@ In order to see all possible parameters of a CSO, add a `CSOInfo` module to your ![CSOInfo](images/tutorials/dataobjects/contours/Ex6_CSOInfo.png "CSOInfo") -For labels shown on grayscale images, it makes sense to add a shadow. Open the panel of the `SoCSOVisualizationSettings` module and on tab *Misc* check the option *Should render shadow*. This increases the readability of your labels. +For labels shown on gray value images, it makes sense to add a shadow. Open the panel of the `SoCSOVisualizationSettings` module and on tab *Misc* check the option *Should render shadow*. This increases the readability of your labels. {{< imagegallery 2 "images/tutorials/dataobjects/contours/" "Ex6_NoShadow" "Ex6_Shadow" >}} -If you want to define your static text as a parameter in multiple labels, you can open the panel of the `CSOLabelRenderer` module and define text as User Data. The values can then be used in Python via *userData*. +If you want to define your static text as a parameter in multiple labels, you can open the panel of the `CSOLabelRenderer` module and define text as *User Data*. The values can then be used in Python via *userData*. ![User Data](images/tutorials/dataobjects/contours/Ex6_Parameters.png "User Data") diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md index 08e0c0f6e..bf388a3e1 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample7.md @@ -12,23 +12,23 @@ menu: weight: 691 parent: "contours" --- + # Contour Example 7: Using the CSOListContainer {#TutorialContoursExample7} {{< youtube "4quJcvvt-GQ">}} ## Introduction - In this example, we are using the module `CSOListContainer` instead of the `CSOManager`. The `CSOManager` is a heavyweight, UI driven module. You can use it to see all of your CSOs and CSOGroups in the module panel. The `CSOListContainer` is a lightweight module with focus on Python scripting. We recommend to use this module for final application development, because Python provides much more flexibility in handling CSO objects. ![CSOManager](images/tutorials/dataobjects/contours/Example_7_1.png "CSOManager") ![CSOListContainer](images/tutorials/dataobjects/contours/Example_7_2.png "CSOListContainer") -We will create multiple CSOs by using the `SoCSOEllipseEditor` and dynamically add these CSOs to different groups via Python scripting depending on their size. CSOs larger than a configurable threshold will be red, small CSOs will be drawn green. The colors will also be adapted if we manually resize the contours. +We will create multiple CSOs by using the `SoCSOEllipseEditor` and dynamically add these CSOs to different groups via Python scripting depending on their size. CSOs larger than a configurable threshold will be drawn in red, small CSOs will be drawn in green. The colors will also be adapted if we manually resize the contours. 
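Before building the network, it helps to see how simple the grouping rule itself is. The sketch below only illustrates the decision; in the actual implementation described in the following steps, the area comes from the CSO and the threshold from the module's *areaThreshold* field, and the area unit is an assumption here.

```Python
def group_for_cso(cso_area, area_threshold=2000.0):
    """Return the group a contour belongs to, given its enclosed area.
    Mirrors the rule used below: large CSOs are drawn red, small ones green."""
    return "large" if cso_area > area_threshold else "small"

print(group_for_cso(350.0))   # -> small (green)
print(group_for_cso(4200.0))  # -> large (red)
```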
## Steps to Do -### Develop Your Network +### Develop Your Network Add a `LocalImage` and a `View2D` module to your workspace and connect them as shown below. Load the file *ProbandT1.dcm* from MeVisLab demo data. In order to create contours (CSOs), we need a `SoView2DCSOExtensibleEditor` module. It manages attached CSO editors, renderers, and offers an optional default renderer for all types of CSOs. Add a `SoCSOEllipseEditor` and a `CSOListContainer` to the `SoView2DCSOExtensibleEditor` @@ -40,7 +40,7 @@ You are now able to draw CSOs. Create a separate directory for this tutorial and save your network in this empty directory. This makes the final structure easier to read. ### Create a Local Macro Module -Select the module `CSOListContainer` and open menu {{}}. Enter some details about your new local macro module and click finish. Leave the already defined output as is. +Select the module `CSOListContainer` and open menu {{}}. Enter some details about your new local macro module and click *Finish*. Leave the already defined output as is. ![Create Local Macro](images/tutorials/dataobjects/contours/Example_7_4.png "Create Local Macro") @@ -52,7 +52,7 @@ The behavior of your network does not change. You can still draw the same CSOs a Open the context menu of your `csoList` module {{< mousebutton "right" >}} and select {{}}. -The MeVisLab text editor MATE opens, showing your script file. You can see the output of your module as *CSOListContainer.outCSOList*. We want to define a threshold for the color of our CSOs. For this, add another field to the *Parameters* section of your script file named *areaThreshold*. Define the *type* as *Float* and *value* as *2000.0*. +The MeVisLab text editor MATE opens, showing your *.script* file. You can see the output of your module as *CSOListContainer.outCSOList*. We want to define a threshold for the color of our CSOs. For this, add another field to the *Parameters* section of your script file named *areaThreshold*. Define the *type* as *Float* and *value* as *2000.0*. In order to call Python functions, we also need a Python file. Add a *Commands* section and define the *source* of the Python file as *$(LOCAL)/csoList.py*. Also add an *initCommand* as *initCSOList*. The initCommand defines the Python function that is called whenever the module is added to the workspace or reloaded. @@ -95,21 +95,21 @@ def setupCSOList(): csoGroupLarge = csoList.addGroup("large") csoGroupSmall.setUsePathPointColor(True) - csoGroupSmall.setPathPointColor((0,0,1)) + csoGroupSmall.setPathPointColor((0, 1, 0)) csoGroupLarge.setUsePathPointColor(True) - csoGroupLarge.setPathPointColor((1,0,0)) + csoGroupLarge.setPathPointColor((1, 0, 0)) def _getCSOList(): return ctx.field("CSOListContainer.outCSOList").object() ``` {{}} -The function gets the current CSOList from the output field of the `CSOListContainer`. Initially it should be empty. If not, we want to start with an empty list, therefore we remove all existing CSOs. +The function gets the current CSOList from the output field of the `CSOListContainer`. Initially, it should be empty. If not, we want to start with an empty list; therefore, we remove all existing CSOs. We also create two new CSO lists: one list for small contours, one list for larger contours, depending on the defined *areaThreshold* from the modules parameter. -Additionally, we also want to define different colors for the CSOs in the lists. Small contours shall be drawn green, large contours shall be drawn red. 
+Additionally, we also want to define different colors for the CSOs in the lists. Small contours shall be drawn in green, large contours shall be drawn in red. In order to listen for changes on the contours, we need to register for notifications. Create a new function *registerForNotification*. @@ -136,11 +136,11 @@ def _getAreaThreshold(): The function gets all currently existing CSOs from the `CSOListContainer`. Then, we register for notifications on this list. Whenever the notification *NOTIFICATION_CSO_FINISHED* is sent in the current context, we call the function *csoFinished*. -The *csoFinished* function again needs all existing contours. We walk through all single CSOs in the list and remove it from all groups. As we do not know which CSO has been changed from the notification, we evaluate the area of each CSO and add them to the correct list again. +The *csoFinished* function again needs all existing contours. We walk through each CSO in the list and remove it from all groups. As we do not know which CSO has been changed from the notification, we evaluate the area of each CSO and add them to the correct list again. The function *getAreaThreshold* returns the current value of our parameter field *areaThreshold*. -Now we can call our functions in the *initCSOList* function and test our module. +Now, we can call our functions in the *initCSOList* function and test our module. {{< highlight filename="csoList.py" >}} ```Python @@ -155,10 +155,10 @@ def setupCSOList(): csoGroupLarge = csoList.addGroup("large") csoGroupSmall.setUsePathPointColor(True) - csoGroupSmall.setPathPointColor((0,1,0)) + csoGroupSmall.setPathPointColor((0, 1, 0)) csoGroupLarge.setUsePathPointColor(True) - csoGroupLarge.setPathPointColor((1,0,0)) + csoGroupLarge.setPathPointColor((1, 0, 0)) def registerForNotification(): csoList = _getCSOList() diff --git a/mevislab.github.io/content/tutorials/dataobjects/curves.md b/mevislab.github.io/content/tutorials/dataobjects/curves.md index d73bfb28b..131145515 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/curves.md +++ b/mevislab.github.io/content/tutorials/dataobjects/curves.md @@ -12,7 +12,9 @@ menu: weight: 775 parent: "dataobjects" --- + # Curves in MeVisLab {#CurvesInMeVisLab} + ## Introduction Curves can be used in MeVisLab to print the results of a function as two-dimensional mathematical curves into a diagram. diff --git a/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md b/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md index f45358f81..2320f86f2 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Drawing curves" +title: "Example 1: Drawing Curves" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,27 +8,29 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "Curves"] menu: main: identifier: "curvesexample1" - title: "Draw one or more curves into a diagram." + title: "Draw One or More Curves Into a Diagram" weight: 780 parent: "curves" --- + # Example 1: Drawing Curves {{< youtube "sj6muyInkRc">}} ## Introduction - In this example, you will draw one or more curves into a diagram and define different styles for the curves. ## Steps to Do + ### Develop Your Network A curve requires x- and y-coordinates to be printed. You can use the `CurveCreator` module as input for these coordinates. 
The `SoDiagram2D` draws the curves into a `SoRenderArea`. You can also define the style of the curves by using the `StylePalette` module. Add the modules to your workspace and connect them as seen below. ![Example Network](images/tutorials/dataobjects/curves/example_network.png "Example Network") + ### Creating a Curve -Click on the output of the CurveCreator and open the Output Inspector. +Click on the output of the `CurveCreator` module and open the Output Inspector. ![Empty Output Inspector](images/tutorials/dataobjects/curves/OutputInspector_empty.png "Empty Output Inspector") @@ -53,15 +55,15 @@ Enter the following into the *Curve Table*: ``` {{}} -Now your *Output Inspector* shows a yellow line through the previously entered coordinates. Exactly the same curve is shown in the `SoRenderArea`. +Now, your *Output Inspector* shows a yellow line through the previously entered coordinates. Exactly the same curve is shown in the `SoRenderArea`. ![SoRenderArea](images/tutorials/dataobjects/curves/SoRenderArea.png "SoRenderArea") ### Creating Multiple Curves -Now, update the *Curve Table* so that you are using three columns and click *Update* {{}}: +Now, update the *Curve Table*, so that you are using three columns and click *Update* {{}}: {{< highlight filename="Curve Table" >}} ```Text -# My first curve +# My first curves 0 0 0 1 1 2 2 2 4 @@ -86,17 +88,17 @@ Let's do this. Open the panel of the `SoDiagram2D` module and check *Draw legend You can also define a different location of the legend and set font sizes. -Now open the panel of the `StylePalette` module. +Now, open the panel of the `StylePalette` module. ![StylePalette](images/tutorials/dataobjects/curves/StylePalette.png "StylePalette") -The `StylePalette` allows you to define twelve different styles for curves. Initially, without manual changes, the styles are applied one after the other. The first curve gets style 1, the second curve style 2, and so on. +The `StylePalette` module allows you to define twelve different styles for curves. Initially, without manual changes, the styles are applied one after the other. The first curve gets style 1, the second curve style 2, and so on. -Open the Panel of your `CurveCreator` again and define *Curve Style(s)* as *"3 6"*. *Update* {{}} your curves. +Open the panel of your `CurveCreator` module again and define *Curve Style(s)* as *"3 6"*. *Update* {{}} your curves. ![StylePalette applied](images/tutorials/dataobjects/curves/StylePalette_applied.png "StylePalette applied") -You now applied the style three for your first curve and six for the second. This is how you can create twelve different curves with unique appearance. +You now applied the style three for your first curve and style six for the second. This is how you can create twelve different curves with unique appearance. ### Using Multiple Tables for Curve Generation In addition to adding multiple columns for different y-coordinates, you can also define multiple tables as input, so that you can also have different x-coordinates for multiple curves. @@ -104,7 +106,7 @@ In addition to adding multiple columns for different y-coordinates, you can also Update the *Curve Table* as defined below and click *Update* {{}}: {{< highlight filename="Curve Table" >}} ```Text -# My first curve +# My first curves 0 0 0 1 1 2 2 2 4 @@ -114,7 +116,7 @@ Update the *Curve Table* as defined below and click *Update* {{}} -For more complex visualizations, you can also use Matplotlib. 
See examples at [Thirdparty - Matplotlib](tutorials/thirdparty/matplotlib "Thirdparty - Matplotlib"). +For more complex visualizations, you can also use *Matplotlib*. See examples at [Third-party - Matplotlib](tutorials/thirdparty/matplotlib "Third-party - Matplotlib"). {{
}} ## Summary * Curves can be created to draw two-dimensional diagrams. -* The `StylePalette` allows you to define the appearance of a curve. +* The `StylePalette` module allows you to define the appearance of a curve. * Details of the different curves can be visualized by using the `SoDiagram2D` module. {{}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md b/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md index bca9a02c0..b14c4c4d7 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/markerobjects.md @@ -12,33 +12,33 @@ menu: weight: 750 parent: "dataobjects" --- + # Markers in MeVisLab {#MarkersInMeVisLab} -In MeVisLab you can equip images and other data objects with markers. In this example you will see how to create, process, and use markers. +In MeVisLab you can attach markers to images and other data objects. In this example you will see how to create, process, and use markers. ## Creation and Rendering To create markers, you can use a marker editor, for example, the `SoView2DMarkerEditor`. Connect this editor to a viewer as shown below. Now you can interactively create new markers. Connect the module `XMarkerListContainer` to your marker editor to store markers in a list. ![Create Markers](images/tutorials/dataobjects/markers/DO_Markers_01.png "Create Markers") -Using the module `StylePalette`, you can define a style for your markers. In order to set different styles for different markers, change the field *Color Mode* in the Panel of `SoView2DMarkerEditor` to *Index*. +Using the `StylePalette` module, you can define a style for your markers. In order to set different styles for different markers, change the field *Color Mode* in the panel of `SoView2DMarkerEditor` to *Index*. ![Style of Markers](images/tutorials/dataobjects/markers/DO_Markers_08.png "Style of Markers") -With the help of the module `So3DMarkerRenderer`, markers of an `XMarkerList` can be rendered. +With the help of the module `So3DMarkerRenderer`, markers of an `XMarkerList` can be rendered in 3D. ![Rendering of Markers](images/tutorials/dataobjects/markers/DO_Markers_09.png "Rendering of Markers") -## Working with Markers - +## Working With Markers {{}} It is possible to convert other data objects into markers and also to convert markers into other data objects. It is, for example, possible to set markers by using the `MaskToMarkers` module and later on generate a surface object from a list of markers using the `MaskToSurface` module. Marker conversion can also be done by various other modules, listed in [/Modules/Geometry/Markers]. {{}} -Learn how to convert markers by building the following network. Press the *Reload* buttons of the modules `MaskToMarkers` and `MarkersToSurface` to enable the conversion. Now you can see both the markers and the created surface in the module `SoExaminerViewer`. Use the toggle options of `SoToggle` and `SoWEMRenderer` to enable or disable the visualization of markers and surface. +Learn how to convert markers by building the following network. Press the *Reload* buttons of the modules `MaskToMarkers` and `MarkersToSurface` to enable the conversion. Now you can see both the markers and the created surface in the module `SoExaminerViewer`. Use the toggle options of the modules `SoToggle` and `SoWEMRenderer` to enable or disable the visualization of markers and surface. 
{{}} -Make sure to set *Lower Threshold* of the `MaskToMarkers` module to 1000 so that the 3D object is rendered correctly. +Make sure to set *Lower Threshold* of the `MaskToMarkers` module to 1000, so that the 3D object is rendered correctly. {{}} ![Convert Markers](images/tutorials/dataobjects/markers/DO_Markers_02.png "Convert Markers") diff --git a/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md b/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md index 112aa3aab..96da8809d 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/markers/markerexample1.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Distance between Markers" +title: "Example 1: Distance Between Markers" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,19 +8,20 @@ tags: ["Beginner", "Tutorial", "Data Objects", "2D", "3D", "Marker"] menu: main: identifier: "markerexample1" - title: "Calculate the distance between Marker objects." + title: "Calculate the Distance Between Marker Objects" weight: 755 parent: "markers" --- + # Example 1: Calculating the Distance Between Markers {{< youtube "xYR5Qkze0lE">}} ## Introduction - In this example, we will measure the distance between one position in an image to a list of markers. ## Steps to Do + ### Develop Your Network Add the following modules and connect them as shown. @@ -30,24 +31,24 @@ We changed the names of the modules `SoView2DMarkerEditor` and `XMarkerListConta As a next step, add two more modules: `SoView2DMarkerEditor` and `XMarkerListContainer`. -Change their names and the marker color to *green* and connect them as shown. We also like to change the mouse button you need to press in order to create a marker. This allows to place both types of markers, the red ones and the green ones. In order to do this, open the panel of `GreenMarker`. Under *Buttons* you can adjust which button needs to be pressed in order to place a marker. Select the *Button2* (the middle button of your mouse {{< mousebutton "middle" >}}) instead of *Button1* (the left mouse button {{< mousebutton "left" >}}). +Change their names and the marker color to *green* and connect them as shown. We also like to change the mouse button you need to press in order to create a marker. This allows to place both types of markers, the red ones and the green ones. In order to do this, open the panel of `GreenMarker`. Under *Buttons*, you can adjust which button needs to be pressed in order to place a marker. Select the *Button2* (the middle button of your mouse {{< mousebutton "middle" >}}) instead of *Button1* (the left mouse button {{< mousebutton "left" >}}). In addition to that, we like to allow only one green marker to be present. If we place a new marker, the old marker should vanish. For this, select the *Max Size* to be one and select *Overflow Mode: Remove All*. ![Marker Editor Settings](images/tutorials/dataobjects/markers/DO_Markers_04.png "Marker Editor Settings") ### Create Markers of Different Type -Now we can place as many red markers as we like, using the left mouse button {{< mousebutton "left" >}} and one green marker using the middle mouse button {{< mousebutton "middle" >}}. +Now, we can place as many red markers as we like, using the left mouse button {{< mousebutton "left" >}} and only one green marker using the middle mouse button {{< mousebutton "middle" >}}. 
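The distance computed by `DistanceFromXMarkerList` in the next step is plain Euclidean geometry. As an illustration only (pure Python with made-up coordinates, not the module's API), the minimum and maximum distance from the single green marker to all red markers could be computed like this:

```Python
import math

def distances(green, red_markers):
    """Euclidean distances from one 3D position to a list of 3D positions."""
    return [math.dist(green, red) for red in red_markers]

green_marker = (10.0, 20.0, 30.0)                      # example coordinates
red_markers = [(12.0, 18.0, 30.0), (40.0, 5.0, 22.0)]  # example coordinates
d = distances(green_marker, red_markers)
print(f"min: {min(d):.2f} mm, max: {max(d):.2f} mm")
```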
![Two Types of Markers](images/tutorials/dataobjects/markers/DO_Markers_05.png "Two Types of Markers") ### Calculate the Distance Between Markers -We like to calculate the minimum and maximum distance of the green marker to all the red markers. In order to do this, add the module `DistanceFromXMarkerList` and connect it to `RedMarkerList`. Open the panels of `DistanceFromXMarkerList` and `GreenMarkerList`. Now, draw a parameter connection from the coordinates of the green marker, which are stored in the field *Current Item -> Position* in the panel of `GreenMarkerList`, to the field *Position* of `DistanceFromXMarkerList`. You can now press *Calculate Distance* in the panel of `DistanceFromXMatkerList` to see the result, meaning the distance of the green marker to all the red markers in the panel of `DistanceFromXMarkerList`. +We like to calculate the minimum and maximum distance of the green marker to all red markers. In order to do this, add the module `DistanceFromXMarkerList` and connect it to `RedMarkerList`. Open the panels of `DistanceFromXMarkerList` and `GreenMarkerList`. Now, draw a parameter connection from the coordinate of the green marker, which is stored in the field *Current Item -> Position* in the panel of `GreenMarkerList`, to the field *Position* of `DistanceFromXMarkerList`. You can now press *Calculate Distance* in the panel of `DistanceFromXMatkerList` to see the result, meaning the distance of the green marker to all red markers in the panel of `DistanceFromXMarkerList`. ![Module DistanceFromXMarkerList](images/tutorials/dataobjects/markers/DO_Markers_06.png "Module DistanceFromXMarkerList") ### Automation of Distance Calculation -To automatically update the calculation when placing a new marker, we need to tell the module `DistanceFromXMarkerList` **when** a new green marker is placed. Open the panels of `DistanceFromXMarkerList` and `GreenMarker` and draw a parameter connection from the field *Currently busy* in the panel of `GreenMarker` to *Calculate Distance* in the panel of `DistanceFromXMarkerList`. If you now place a new green marker, the distance from the new green marker to all red markers is automatically calculated. +To automatically update the calculation when placing a new marker, we need to tell the module `DistanceFromXMarkerList` **when** a new green marker is placed. Open the panels of `DistanceFromXMarkerList` and `GreenMarker` and draw a parameter connection from the field *Currently busy* in the panel of `GreenMarker` to *Calculate Distance* in the panel of `DistanceFromXMarkerList`. If you now place a new green marker, the distance from the new green marker to all red markers is calculated automatically. ![Calculation of Distance between Markers](images/tutorials/dataobjects/markers/DO_Markers_07.png "Calculation of Distance between Markers") {{}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md b/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md index af9aed73e..755182d77 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md @@ -12,13 +12,14 @@ menu: weight: 700 parent: "dataobjects" --- + # Surface Objects (WEMs){#WEMs} ## Introduction -In MeVisLab it is possible to create, visualize, process, and manipulate surface objects, also known as polygon meshes. Here, we call surface objects *Winged Edge Mesh*, in short WEM. In this chapter you will get an introduction into WEMs. 
In addition, you will find examples on how to work with WEMs. For more information on WEMs, take a look at the {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/WEMDataStructure.html" "MeVislab Toolbox Reference" >}}. If you like to know which WEM formats can be imported into MeVisLab, take a look at the *assimp* documentation [here](https://github.com/assimp/assimp). +In MeVisLab it is possible to create, visualize, process, and manipulate surface objects, also known as polygon meshes. Here, we call surface objects *Winged Edge Mesh*, in short WEM. In this chapter you will get an introduction into WEMs. In addition, you will find examples on how to work with WEMs. For more information on WEMs, take a look at the {{< docuLinks "/Resources/Documentation/Publish/SDK/ToolBoxReference/WEMDataStructure.html" "MeVisLab Toolbox Reference" >}}. If you like to know which WEM formats can be imported into MeVisLab, take a look at the *assimp* documentation [here](https://github.com/assimp/assimp). [//]: <> (MVL-653) -## WEM Explained with MeVisLab +## WEM in MeVisLab Explained To explain WEMs in MeVisLab, we will build a network that shows the structure and the characteristics of WEMs. We will start the example by generating a WEM forming a cube. With this, we will explain structures of WEMs called *Edges*, *Nodes*, *Surfaces*, and *Normals*. ### Initialize a WEM @@ -27,7 +28,6 @@ Add the module `WEMInitialize` to your workspace, open its panel, and select a * ![WEM initializing](images/tutorials/dataobjects/surfaces/WEM_01_1.png "WEM initializing") ### Rendering of WEMs - For rendering WEMs, you can use the module `SoWEMRenderer` in combination with the viewer `SoExaminerViewer`. Add both modules to your network and connect them as shown. A background is always a nice feature to have. ![WEM rendering](images/tutorials/dataobjects/surfaces/WEM_01_2.png "WEM rendering") @@ -39,7 +39,7 @@ Add and connect the module `SoWEMRendererEdges` to your workspace to enable the ![WEM Edges](images/tutorials/dataobjects/surfaces/WEM_01_3.png "WEM Edges") #### Nodes -Nodes mark the corner points of each surface. Therefore, nodes define the geometric properties of every WEM. To visualize the nodes, add and connect the module `SoWEMRendererNodes` as shown. By default, the nodes are visualized with an offset to the position they are located in. We reduced the offset to be zero, increased the point size, and changed the color. +Nodes mark the corner points of each polygon. Therefore, nodes define the geometric properties of every WEM. To visualize the nodes, add and connect the module `SoWEMRendererNodes` as shown. By default, the nodes are visualized with an offset to the position they are located in. We reduced the offset to be zero, increased the point size, and changed the color. 
![WEM Nodes](images/tutorials/dataobjects/surfaces/WEM_01_4.png "WEM Nodes") #### Faces diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md index 57b50009b..8ea3d5d5a 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample1.md @@ -8,11 +8,12 @@ tags: ["Beginner", "Tutorial", "Data Objects", "3D", "Surfaces", "Meshes", "WEM" menu: main: identifier: "surfaceexample1" - title: "Creation of Surface objects (WEMs) from an image via WEMIsoSurface module" + title: "Creation of Surface Objects (WEMs) From an Image Via WEMIsoSurface Module" weight: 705 parent: "surfaces" --- -# Surface Example 1: Create Winged Edge Mesh out of voxel images and CSOs + +# Surface Example 1: Create Winged Edge Mesh out of Voxel Images and CSOs {{< youtube "-KnZ5a27T0c">}} @@ -22,7 +23,6 @@ In this example you will learn how to create a Winged Edge Mesh (WEM). There are ## Steps to Do ### From Image to Surface: Generating WEMs out of Voxel Images - At first, we will create a WEM out of a voxel image using the module `WEMIsoSurface`. Add and connect the shown modules. Load the image *$(DemoDataPath)/Bone.tiff* and set the *Iso Min. Value* in the panel of `WEMIsoSurface` to 1200. Tick the box *Use image max. value*. The module `WEMIsoSurface` creates surface objects out of all voxels with an isovalue equal or above 1200 (and smaller than the image max value). The module `SoWEMRenderer` can now be used to generate an Open Inventor scene, which can be displayed by the module `SoExaminerViewer`. ![WEM](images/tutorials/dataobjects/surfaces/DO6_01.png "WEM") @@ -33,8 +33,7 @@ It is not only possible to create WEMs out of voxel images. You can also transfo ![WEM](images/tutorials/dataobjects/surfaces/DO6_02.png "WEM") ### From Contour to Surface: Generating WEMs out of CSOs - -Now we like to create WEMs out of CSOs. To create CSOs, load the network from [Contour Example 2](tutorials/dataobjects/contours/contourexample2) and create some CSOs. +Now, we like to create WEMs out of CSOs. To create CSOs, load the network from [Contour Example 2](tutorials/dataobjects/contours/contourexample2) and create some CSOs. Next, add and connect the module `CSOToSurface` to convert CSOs into a surface object. To visualize the created WEM, add and connect the modules `SoWEMRenderer` and `SoExaminerViewer`. @@ -44,7 +43,7 @@ It is also possible to display the WEM in 2D in addition to the original image. ![WEM](images/tutorials/dataobjects/surfaces/DO6_04.png "WEM") -If you like to transform WEMs back into CSOs, take a look at the module `WEMClipPlaneToCSO`. +If you like to transform WEMs back into CSOs, have a look at the module `WEMClipPlaneToCSO`. ## Summary * Voxel images can be transformed into WEMs using `WEMIsoSurface`. 
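If you prefer to drive these settings from scripting instead of the panel, module parameters can be set via `ctx.field(...)` as in the other scripting examples. The field names used below (*isoMinValue*, *useImageMaxValue*, *apply*) are assumptions for illustration only; look up the actual names in the `WEMIsoSurface` module help.

```Python
# Hedged sketch: configure the iso surface extraction from Python.
# Field names are assumptions; verify them in the WEMIsoSurface module help.
ctx.field("WEMIsoSurface.isoMinValue").setValue(1200)
ctx.field("WEMIsoSurface.useImageMaxValue").setValue(True)
ctx.field("WEMIsoSurface.apply").touch()  # assumed trigger field for Apply/Update
```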
diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md index e2d2e3a07..28e9bc639 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample2.md @@ -8,11 +8,12 @@ tags: ["Beginner", "Tutorial", "Data Objects", "3D", "Surfaces", "Meshes", "WEM" menu: main: identifier: "surfaceexample2" - title: "Examples for modification, smoothing and annotations on WEM" + title: "Examples for Modification, Smoothing, and Annotations of WEM" weight: 710 parent: "surfaces" --- -# Surface Example 2: Processing and Modification of WEM + +# Surface Example 2: Processing and Modifying of WEM {{< youtube "lVbldzanvfE">}} @@ -20,9 +21,11 @@ menu: In this example, you will learn how to modify and process WEMs. ## Steps to Do + ### Develop Your Network + #### Modification of WEMs -Use the module `WEMLoad` to load the file *venus.off*. Then add and connect the shown modules. We like to display the WEM *venus* two times, one time this WEM is modified. You can use the module `WEMModify` to apply modifications. In its panel, change the scale and the size of the WEM. Now you see two times the `venus` next to each other. +Use the module `WEMLoad` to load the file *venus.off*. Then, add and connect the shown modules. We like to display the WEM *venus* two times, one time this WEM is modified. You can use the module `WEMModify` to apply modifications. In its panel, change the scale and the size of the WEM. Now, you see two times the `venus` next to each other. ![WEMModify](images/tutorials/dataobjects/surfaces/DO7_01.png "WEMModify") @@ -51,7 +54,7 @@ Next, open the tab *Input* and draw parameter connections from the results of th ![Define annotation parameters](images/tutorials/dataobjects/surfaces/DO7_07.png "Define annotation parameters") -You can design the annotation overlay as you like in the tab *User*. We decided to only display the minimum and maximum distance between both WEMs. +You can design the annotation overlay as you like in the tab *User*. We decided to only display the minimal (the minimum minimum) and maximal (the maximum minimum) distance between both WEMs. ![Annotation design](images/tutorials/dataobjects/surfaces/DO7_04.png "Annotation design") @@ -64,8 +67,7 @@ Now, you can see the result in the viewer. If the annotations are not visible, p ## Summary * There are several modules to modify and process WEMs, e.g., `WEMModify`, `WEMSmooth`. -* To calculate the minimal and maximal surface distance between two WEMs, use the module `WEMSurfaceDistance`. +* To calculate the minimal and maximal (the maximum minimum) surface distance between two WEMs, use the module `WEMSurfaceDistance`. * To create annotations in 3D, the module `SoView2DAnnotation` can be used when adapted to be used in combination with a 3D viewer. 
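The minimal (the minimum minimum) and maximal (the maximum minimum) distances reported by `WEMSurfaceDistance` can be pictured as follows: for every node of the first WEM, take the distance to the closest node of the second WEM, then report the smallest and largest of these values. A rough NumPy sketch of that idea (vertex-to-vertex only, with random placeholder vertices, not the module's actual implementation):

```Python
import numpy as np

def min_distances(nodes_a, nodes_b):
    """For each node of mesh A, the distance to the nearest node of mesh B."""
    diff = nodes_a[:, None, :] - nodes_b[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

a = np.random.rand(100, 3)  # placeholder vertex positions of WEM A
b = np.random.rand(120, 3)  # placeholder vertex positions of WEM B
d = min_distances(a, b)
print("minimal distance:", d.min(), "maximal distance:", d.max())
```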
- {{< networkfile "examples/data_objects/surface_objects/example2/SurfaceExample2.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md index 4a8310d0b..028f6ba79 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample3.md @@ -1,5 +1,5 @@ --- -title: "Surface Example 3: Interactions with WEM" +title: "Surface Example 3: Interactions With WEM" date: "2023-03-21" status: "OK" draft: false @@ -8,23 +8,24 @@ tags: ["Beginner", "Tutorial", "Data Objects", "3D", "Surfaces", "Meshes", "WEM" menu: main: identifier: "surfaceexample3" - title: "Interactions with WEM" + title: "Interactions With WEM" weight: 715 parent: "surfaces" --- + # Surface Example 3: Interactions with WEM {{< youtube "YDOEqCOmUFw">}} ## Introduction In these examples, we are showing two different possibilities to interact with a WEM: -* Scale, rotate and move a WEM in a scene +* Scale, rotate, and move a WEM in a scene * Modify a WEM in a scene ### Scale, Rotate, and Move a WEM in a Scene We are using a `SoTransformerDragger` module to apply transformations on a 3D WEM object via mouse interactions. -Add a `SoCube` and a `SoBackground` module and connect both to a `SoExaminerViewer`. For a better understanding, you should also add a `SoCoordinateSystem` module and connect it to the viewer. Change the *User Transform Mode* to *User Transform Instead Of Input* and set *User Scale* to 2 for *x*, *y* and *z*. +Add a `SoCube` and a `SoBackground` module and connect both to a `SoExaminerViewer`. For a better understanding, you should also add a `SoCoordinateSystem` module and connect it to the viewer. Change the *User Transform Mode* to *User Transform Instead Of Input* and set *User Scale* to 2 for *x*, *y*, and *z*. ![Initial Network](images/tutorials/dataobjects/surfaces/WEMExample3_1.png "Initial Network") @@ -32,7 +33,7 @@ The `SoExaminerViewer` shows your cube and the world coordinate system. You can ![Initial Cube](images/tutorials/dataobjects/surfaces/WEMExample3_2.png "Initial Cube") -Scaling, rotating, and translating the cube itself can be done by using the module `SoTransformerDragger`. Additionally, add a `SoTransform` module to your network. Add all modules but the `SoCoordinateSystem` to a `SoSeparator` so that transformations are not applied to the coordinate system. +Scaling, rotating, and translating the cube itself can be done by using the module `SoTransformerDragger`. Additionally, add a `SoTransform` module to your network. Add all modules except the `SoCoordinateSystem` to a `SoSeparator`, so that transformations are not applied to the coordinate system. ![SoTransformerDragger and SoTransform](images/tutorials/dataobjects/surfaces/WEMExample3_3.png "SoTransformerDragger and SoTransform") @@ -41,10 +42,10 @@ Draw parameter connections from *Translation*, *Scale Factor*, and *Rotation* of Opening your SoExaminerViewer now allows you to use handles of the `SoTransformerDragger` to scale, rotate, and move the cube. You can additionally interact with the camera as already done before. {{}} -You need to change the active tool on the right side of the `SoExaminerViewer`. Use the *Pick Mode* for applying transformations and the *View Mode* for using the camera. +You need to change the active tool on the right side of the `SoExaminerViewer`. 
Use the *Pick Mode* for applying transformations and the *View Mode* for adjusting the camera. {{}} -![Moved, Rotated and Scaled Cube](images/tutorials/dataobjects/surfaces/WEMExample3_4.png "Moved, Rotated and Scaled Cube") +![Moved, Rotated, and Scaled Cube](images/tutorials/dataobjects/surfaces/WEMExample3_4.png "Moved, Rotated, and Scaled Cube") You can also try the other `So*Dragger` modules in MeVisLab for variations of the `SoTransformerDragger`. @@ -53,7 +54,7 @@ You can also try the other `So*Dragger` modules in MeVisLab for variations of th ### Interactively Modify WEMs We are using the `WEMBulgeEditor` module to interactively modify the WEM via mouse interactions. -Add a `WEMInitialize`, a `SoWEMRenderer`, and a `SoBackground` module to your workspace and connect them to a `SoExaminerViewer` as seen below. Select model *Icosahedron* for the `WEMInitialize` module. +Add the modules `WEMInitialize`, `SoWEMRenderer`, and `SoBackground` to your workspace and connect them to a `SoExaminerViewer` as seen below. Select model *Icosahedron* for the `WEMInitialize` module. ![WEMLoad and SoWEMRenderer](images/tutorials/dataobjects/surfaces/WEMExample3_5.png "WEMLoad and SoWEMRenderer") @@ -87,14 +88,14 @@ Open the panel of the `SoLUTEditor`. Configure *New Range Min* as -1 and *New Ra ![SoLUTEditor](images/tutorials/dataobjects/surfaces/WEMExample3_9.png "SoLUTEditor") -Now your Primitive Value List is used to colorize the affected region for your tansformations. You can see the region by the color on hovering the mouse over the WEM. +Now, your Primitive Value List is used to colorize the affected region for your transformations. You can see the affected region highlighted in color when hovering the mouse over the WEM. ![Affected region colored](images/tutorials/dataobjects/surfaces/Affected_Region.png "Affected region colored") The size of the region can be changed via {{}} and mouse wheel {{< mousebutton "middle" >}}. Make sure that the *Influence Radius* in `WEMBulgeEditor` is larger than 0. {{}} -You need to change the active tool on the right side of the `SoExaminerViewer`. Use the *Pick Mode* for applying transformations and the *View Mode* for using the camera. +You need to change the active tool on the right side of the `SoExaminerViewer`. Use the *Pick Mode* for applying transformations and the *View Mode* for adjusting the camera. {{}} ![Modify WEM](images/tutorials/dataobjects/surfaces/Modify.png "Modify WEM") @@ -111,6 +112,6 @@ For other interaction possibilities, you can play around with the example networ ## Summary * MeVisLab provides multiple options to interact with 3D surfaces. -* Modules of the `So*Dragger` family allow to scale, rotate, and translate a WEM. +* Modules of the `So\*Dragger` family allow you to scale, rotate, and translate a WEM (see the scripting sketch below). * You can always use a `SoCoordinateSystem` to see the current world coordinates. * The `WEMBulgeEditor` allows you to interactively modify a WEM via mouse.
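The parameter connections drawn by hand above can also be created from Python. The snippet below is a hypothetical sketch: it assumes the usual MeVisLab network scripting context (`ctx`), that the modules keep their default instance names, and that the dragger and transform expose the Inventor field names *translation*, *scaleFactor*, and *rotation*; verify the actual module and field names in your own network before using it.

```python
# Hypothetical sketch: create the dragger -> transform parameter connections
# from a network script. Assumes the standard MeVisLab `ctx` scripting context
# and default module/field names; adjust them to match your own network.
def connect_dragger_to_transform(ctx):
    for name in ("translation", "scaleFactor", "rotation"):
        source = ctx.field("SoTransformerDragger." + name)
        target = ctx.field("SoTransform." + name)
        target.connectFrom(source)  # target now follows the dragger field
```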
diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md index 6bfe60690..9c54f5b92 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample4.md @@ -1,5 +1,5 @@ --- -title: "Surface Example 4: Interactively moving WEM" +title: "Surface Example 4: Interactively Moving WEM" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,21 +8,21 @@ tags: ["Beginner", "Tutorial", "Data Objects", "3D", "Surfaces", "Meshes", "WEM" menu: main: identifier: "surfaceexample4" - title: "Example for implementing WEM translations via mouse interaction" + title: "Example for Implementing WEM Translations Via Mouse Interaction" weight: 720 parent: "surfaces" --- + # Surface Example 4: Interactively Moving WEM {{< youtube "WKiCddNGKrw">}} ## Introduction - In this example, we like to interactively move WEMs using `SoDragger` modules inside a viewer. ### Develop Your Network -### Interactively Translating Objects in 3D Using SoDragger Modules +### Interactively Translating Objects in 3D Using SoDragger Modules Add and connect the following modules as shown. On the panel of the module `WEMInitialize`, select the *Model* *Octasphere*. After that, open the viewer `SoExaminerViewer` and make sure to select the *Interaction Mode*. Now, you are able to click on the presented *Octasphere* and move it alongside one axis. The following modules are involved in the interactions: * `SoMITranslate1Dragger`: This module allows interactive translation of the object alongside one axis. You can select the axis for translation in the panel of the module. @@ -31,10 +31,10 @@ Add and connect the following modules as shown. On the panel of the module `WEMI ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_01.png "Interactive dragging of objects") ### Interactively Translating a WEM Alongside Three Axes -We like to be able to interactively move a WEM alongside all three axes. In MeVisLab, there is the module `SoMITranslate2Dragger`, which allows translations alongside two axis, but there is no module which allows object translation in all three directions. Therefore, we will create a network that solves this task. The next steps will show you how you create three planes intersecting the objects. Dragging one plane, will drag the object alongside one axis. In addition, these planes will only be visible when hovering over them. +We like to be able to interactively move a WEM alongside all three axes. In MeVisLab, there is the module `SoMITranslate2Dragger`, which allows translations alongside two axes, but there is no module that allows object translation in all three directions. Therefore, we will create a network that solves this task. The next steps will show you how you create three planes intersecting the objects. Dragging one plane will drag the object alongside one axis. In addition, these planes will only be visible when hovering over them. #### Creation of Planes Intersecting an Object -We start creating a plane that will allow dragging in x-direction. In order to do that, modify your network as shown: Add the modules `WEMModify`, and `SoBackground` and connect the module `SoCube` to the dragger modules. You can select the translation direction in the panel of `SoMITranslate1Dragger`. +We start creating a plane that will allow dragging in x-direction. 
In order to do that, modify your network as shown: Add the modules `WEMModify` and `SoBackground`, and connect the module `SoCube` to the dragger modules. You can select the translation direction in the panel of `SoMITranslate1Dragger`. ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_02.png "Interactive dragging of objects") @@ -55,8 +55,7 @@ The result can be seen in the next image. You can now select the plane in the *I ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_05.png "Interactive dragging of objects") #### Modifying the Appearance of the Plane - -For changing the visualization of the dragger plane, add the modules `SoGroup`, `SoSwitch`, and `SoMaterial` to your network and connect them as shown. In addition, group together all the modules that are responsible for the translation in the x-direction. +For changing the visualization of the dragger plane, add the modules `SoGroup`, `SoSwitch`, and `SoMaterial` to your network and connect them as shown. In addition, group all modules together that are responsible for the translation in the x-direction. ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_06.png "Interactive dragging of objects") @@ -73,14 +72,13 @@ When hovering over the plane, the plane becomes visible and the option to move t ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_08.png "Interactive dragging of objects") #### Interactive Object Translation in Three Dimensions - We do not only want to move the object in one direction, we like to be able to do interactive object translations in all three dimensions. For this, copy the modules responsible for the translation in one direction and change the properties to enable translations in other directions. We need to change the size of `SoCube1` and `SoCube2` to form planes that cover surfaces in x- and z-, as well as x- and y-directions. To do that, draw the respective parameter connections from `DecomposeVector3` to the fields of the modules `SoCube`. In addition, we need to adapt the field *Direction* in the panels of the modules `SoMITranslate1Dragger`. ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_09.png "Interactive dragging of objects") -Change width, height, and depth of the three cubes so that each of them represents one plane. The values need to be set to (0, 2, 2), (2, 0, 2) and (2, 2, 0). +Change width, height, and depth of the three cubes, so that each of them represents one plane. The values need to be set to (0, 2, 2), (2, 0, 2), and (2, 2, 0). As a next step, we like to make sure that all planes always intersect the object, even though the object is moved. To do this, we need to synchronize the field *Translation* of all `SoMIDraggerContainer` modules and the module `WEMModify`. Draw parameter connections from one *Translation* field to the next, as shown below. @@ -96,7 +94,7 @@ To enable transformations in all directions, we need to connect the modules `SoM ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_12.png "Interactive dragging of objects") -As a next step, we like to enlarge the planes to make them exceed the object. For that, add the module `CalculateVectorFromVectors` to your network. Open its panel and connect the field *Size* of `WEMInfo` to *Vector 1*. We like to enlarge the size by one, so we add the vector (1,1,1), by editing the field *Vector 2*. 
Now, connect the *Result* to the field *V* of the module `DecomposeVector3`. +As a next step, we like to enlarge the planes to make them exceed the object. For that, add the module `CalculateVectorFromVectors` to your network. Open its panel and connect the field *Size* of `WEMInfo` to *Vector 1*. We like to enlarge the size by one, so we add the vector (1, 1, 1), by editing the field *Vector 2*. Now, connect the *Result* to the field *V* of the module `DecomposeVector3`. ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_13.png "Interactive dragging of objects") @@ -108,9 +106,7 @@ The result can be seen in the next image. This module can now be used for intera ![Interactive dragging of objects](images/tutorials/dataobjects/surfaces/DO10_15.png "Interactive dragging of objects") - ## Summary * A family of `SoDragger` modules is available that can be used to interactively modify Open Inventor objects. - {{< networkfile "examples/data_objects/surface_objects/example4/SurfaceExample4.zip" >}} diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md index 26c00ad12..b1f6a416c 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md @@ -8,22 +8,23 @@ tags: ["Advanced", "Tutorial", "Data Objects", "3D", "Surfaces", "Meshes", "WEM" menu: main: identifier: "surfaceexample5" - title: "Examples how to calculate distances between WEM objects" + title: "Examples How to Calculate Distances Between WEM Objects" weight: 725 parent: "surfaces" --- + # Surface Example 5: WEM - Primitive Value Lists {{< youtube "Rap1RY6l5Cc">}} ## Introduction -WEMs do not only contain the coordinates of nodes and surfaces, they can also contain additional information. That information are stored in so-called *Primitive Value Lists* (PVLs). Every node, every surface, and every edge can contains such a list. In these lists, you can for example store the color of the node or specific patient information. This information can be used for visualization or for further statistical analysis. +WEMs do not only contain the coordinates of nodes and surfaces, they can also contain additional information. That information is stored in so-called *Primitive Value Lists* (PVLs). Every node, every surface, and every edge can contain such a list. In these lists, you can, for example, store the color of the node or specific patient information. This information can be used for visualization or for further statistical analysis. In this example we like to use PVLs to color-code and visualize the distance between two WEMs. ## Steps to Do -### Develop Your Network +### Develop Your Network We start our network by initializing two WEMs using `WEMInitialize`. We chose an *Octasphere* and a resized *Cube*. Use the modules `SoWEMRenderer`, `SoExaminerViewer`, and `SoBackground` to visualize the WEMs. ![WEMInitialize](images/tutorials/dataobjects/surfaces/DO12_01.png "WEMInitialize") @@ -85,5 +86,4 @@ The result can be seen in the next image. * The module `WEMSurfaceDistance` stores the minimum distance between nodes of different WEMs in PVLs as LUT values. * PVLs containing LUT values can be used to color-code additional information on the WEM surface. 
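The summary above describes per-node values (PVLs) that are mapped through a LUT for display. As a purely conceptual illustration of what such a mapping boils down to (plain NumPy, not the MeVisLab PVL or `SoLUTEditor` API, with an arbitrarily chosen value range and colors):

```python
# Conceptual sketch: map a per-node value list (a "PVL") through a simple
# two-color LUT to obtain RGB colors. Range and colors are arbitrary examples.
import numpy as np

def lut_colorize(values, vmin=0.0, vmax=1.0,
                 low=(0.0, 0.0, 1.0), high=(1.0, 0.0, 0.0)):
    t = np.clip((np.asarray(values, float) - vmin) / (vmax - vmin), 0.0, 1.0)
    return (1.0 - t)[:, None] * np.array(low) + t[:, None] * np.array(high)

node_distances = [0.0, 0.2, 0.5, 0.9]   # stand-in for a per-node PVL
print(lut_colorize(node_distances))     # blue for near nodes, red for far nodes
```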
- {{< networkfile "examples/data_objects/surface_objects/example5/SurfaceExample5.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/image_processing.md b/mevislab.github.io/content/tutorials/image_processing.md index 7b072e4eb..2ffae0046 100644 --- a/mevislab.github.io/content/tutorials/image_processing.md +++ b/mevislab.github.io/content/tutorials/image_processing.md @@ -8,10 +8,11 @@ tags: ["Beginner", "Tutorial", "Image Processing"] menu: main: identifier: "imageprocessing" - title: "Examples for processing images in MeVisLab." + title: "Examples for Processing Images in MeVisLab." weight: 600 parent: "tutorials" --- + # Image Processing in MeVisLab {#TutorialImageProcessing} Digital image processing is the use of a digital computer to process digital images through an algorithm (see [Wikipedia](https://en.wikipedia.org/wiki/Digital_image_processing)). diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing1.md b/mevislab.github.io/content/tutorials/image_processing/image_processing1.md index aca69710f..3b118a682 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing1.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing1.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Arithmetic operations on two images" +title: "Example 1: Applying Scalar Functions to Two Images" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,19 +8,19 @@ tags: ["Beginner", "Tutorial", "Image Processing", "Arithmetic"] menu: main: identifier: "imageprocessing1" - title: "In this example, you will apply scalar functions on two images like Add, Multiply, Subtract, etc." + title: "Applying Scalar Functions to Two Images" weight: 605 parent: "imageprocessing" --- # Example 1: Arithmetic Operations on Two Images - {{< youtube "ToTQ3XRPmlk" >}} ## Introduction We are using the `Arithmetic2` module to apply basic scalar functions on two images. The module provides two inputs for images and one output image for the result. ## Steps to Do + ### Develop Your Network Add two `LocalImage` modules to your workspace for the input images. Select *$(DemoDataPath)/BrainMultiModal/ProbandT1.dcm* and *$(DemoDataPath)/BrainMultiModal/ProbandT2.dcm* from MeVisLab demo data and add a `SynchroView2D` to your network. diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing2.md b/mevislab.github.io/content/tutorials/image_processing/image_processing2.md index 5c4ec4810..fac3d6432 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing2.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing2.md @@ -1,5 +1,5 @@ --- -title: "Example 2: Masking images" +title: "Example 2: Masking Images" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Image Processing", "Mask"] menu: main: identifier: "imageprocessing2" - title: "In this example, you will apply a mask on an image, so that contrast changes are not applied on black background pixels" + title: "Masking Images" weight: 610 parent: "imageprocessing" --- @@ -25,6 +25,7 @@ Being in a dark room using a large screen, the user might be blended by these la Image masking is a very good way to select a defined region where image processing shall be applied. A mask allows to define a region (the masked region) to allow image modifications whereas voxels outside the mask remain unchanged. 
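Before building the network, the masking idea itself can be summarized in a few lines. The sketch below is a conceptual NumPy illustration, not the `Threshold`/`Mask` modules themselves, and the threshold and window/level values are made up for the example:

```python
# Conceptual sketch: build a binary mask with a threshold and restrict a
# window/level (contrast) change to the masked region (NumPy, not MeVisLab).
import numpy as np

def window_level(img, center, width):
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

img = np.random.randint(0, 300, size=(128, 128)).astype(float)  # toy image
mask = img > 60                              # threshold: background -> False
contrast = window_level(img, center=150, width=100)
result = np.where(mask, contrast, 0.0)       # background stays black no matter
                                             # which window/level is chosen
```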
## Steps to Do + ### Develop Your Network Add a `LocalImage` and a `SynchroView2D` module to your network and connect the modules as seen below. @@ -44,11 +45,11 @@ Add a `Mask` and a `Threshold` module to your workspace and connect them as seen ![Example Network](images/tutorials/image_processing/network_example2b.png "Example Network") -Changing the window/level values in your viewer still also changes background voxels. The `Threshold` module still leaves the voxels as is because the threshold value is configured as larger than 0. Open the Automatic Panel of the modules `Threshold` and `Mask` via double-click {{< mousebutton "left" >}} and set the values as seen below. +Changing the window/level values in your viewer still also changes background voxels. The `Threshold` module still leaves the voxels as is because the threshold value is configured as larger than 0. Open the panels of the modules `Threshold` and `Mask` via double-click {{< mousebutton "left" >}} and set the values as seen below. {{< imagegallery 2 "images/tutorials/image_processing" "Threshold" "Mask">}} -Now all voxels having a HU value lower or equal 60 are set to 0, all others are set to 1. The resulting image from the `Threshold` module is a binary image that can now be used as a mask by the `Mask` module. +Now, all voxels having a value lower than or equal to 60 are set to 0; all others are set to 1. The resulting image from the `Threshold` module is a binary image that can now be used as a mask by the `Mask` module. ![Output of the Threshold module](images/tutorials/image_processing/OutputInspector_Threshold.png "Output of the Threshold module") diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing3.md b/mevislab.github.io/content/tutorials/image_processing/image_processing3.md index e2c189c61..25e75d336 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing3.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing3.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Image Processing", "Segmentation", "Region Growi menu: main: identifier: "imageprocessing3" - title: "In this example, you segment parts of an image by using a simple region growing." + title: "Segmentation With Region Growing" weight: 615 parent: "imageprocessing" --- @@ -23,6 +23,7 @@ A very simple approach to segment parts of an image is the region growing method In this example, you will segment the brain of an image and show the segmentation results as an overlay on the original image. ## Steps to Do + ### Develop Your Network Add a `LocalImage` module to your workspace and select load *$(DemoDataPath)/BrainMultiModal/ProbandT1.dcm*. Add a `View2D` module and connect both as seen below. @@ -37,7 +38,7 @@ Add a `SoView2DMarkerEditor` to your network and connect it with your `RegionGro ![SoView2DMarkerEditor](images/tutorials/image_processing/SoView2DMarkerEditor.png "SoView2DMarkerEditor") -The region growing starts on manually clicking *Update* or automatically if *Update Mode* is set to *Auto-Update*. We recommend to set update mode to automatic update. Additionally, you should set the *Neighborhood Relation* to *3D-6-Neighborhood (x,y,z)*, because then your segmentation will also affect the z-axis. +The region growing starts on manually clicking *Update* or automatically if *Update Mode* is set to *Auto-Update*. We recommend using the automatic update mode.
Additionally, you should set the *Neighborhood Relation* to *3D-6-Neighborhood (x,y,z)*, because then your segmentation will also be performed in the z-direction. Set *Threshold Computation* to *Automatic* and define *Interval Size* as 1.600 % for relative, automatic threshold generation. diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing4.md b/mevislab.github.io/content/tutorials/image_processing/image_processing4.md index 155ca005f..3535fd6d7 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing4.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing4.md @@ -1,5 +1,5 @@ --- -title: "Example 4: Subtract 3D objects" +title: "Example 4: Subtracting 3D Surface Objects" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,12 +8,12 @@ tags: ["Advanced", "Tutorial", "Image Processing", "3D", "Subtraction"] menu: main: identifier: "imageprocessing4" - title: "In this example, we create two 3-dimensional and subtract them." + title: "Subtracting 3D Surface Objects" weight: 620 parent: "imageprocessing" --- -# Example 4: Subtract 3D Objects +# Example 4: Subtracting 3D Objects {{< youtube "VdvErVvoq2k" >}} @@ -21,6 +21,7 @@ menu: In this example, we load an image and render it as `WEMIsoSurface`. Then, we create a three-dimensional `SoSphere` and subtract the sphere from the initial WEM. ## Steps to Do + ### Develop Your Network Add a `LocalImage` module to your workspace and select load *$(DemoDataPath)/BrainMultiModal/ProbandT1.dcm*. Add a `WEMIsoSurface`, a `SoWEMRenderer`, a `SoBackground`, and a `SoExaminerViewer` module and connect them as seen below. Make sure to configure the `WEMIsoSurface` to use a *Iso Min. Value* of 420 and a *Voxel Sampling* 1. @@ -30,7 +31,7 @@ The `SoExaminerViewer` now shows the head as a three-dimensional rendering. ![SoExaminerViewer](images/tutorials/image_processing/SoExaminerViewer_initial.png "SoExaminerViewer") -### Add a 3D Sphere to your Scene +### Add a 3D Sphere to Your Scene We now want to add a three-dimensional sphere to our scene. Add a `SoMaterial` and a `SoSphere` to your network, connect them to a `SoSeparator` and then to the `SoExaminerViewer`. Set your material to use a *Diffuse Color* red and adapt the size of the sphere to *Radius* 50. ![Example Network](images/tutorials/image_processing/network_example4b.png "Example Network") @@ -39,12 +40,12 @@ The `SoExaminerViewer` now shows the head and the red sphere inside. ![SoExaminerViewer](images/tutorials/image_processing/SoExaminerViewer_sphere.png "SoExaminerViewer") -### Set Location of your Sphere +### Set Location of Your Sphere In order to define the best possible location of the sphere, we additionally add a `SoTranslation` module and connect it to the `SoSeparator` between the material and the sphere. Define a translation of x=0, y=20 and z=80. ![Example Network](images/tutorials/image_processing/network_example4c.png "Example Network") -### Subtract the Sphere from the Head +### Subtract the Sphere From the Head We now want to subtract the sphere from the head to get a hole. Add another `SoWEMRenderer`, a `WEMLevelSetBoolean`, and a `SoWEMConvertInventor` to the network and connect them to a `SoSwitch` as seen below. The `SoSwitch` also needs to be connected to the `SoWEMRenderer` of the head. Set your `WEMLevelSetBoolean` to use the *Mode* **Difference**. 
![Example Network](images/tutorials/image_processing/network_example4d.png "Example Network") diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing5.md b/mevislab.github.io/content/tutorials/image_processing/image_processing5.md index 4e94dd08e..d0495819d 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing5.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing5.md @@ -8,7 +8,7 @@ tags: ["Advanced", "Tutorial", "Image Processing", "3D", "Clip Planes"] menu: main: identifier: "imageprocessing5" - title: "In this example, show some options for integrating clip planes into your 3D views." + title: "Clip Planes" weight: 625 parent: "imageprocessing" --- @@ -21,8 +21,9 @@ menu: In this example, we are using the `SoGVRDrawOnPlane` module to define the currently visible slice from a 2D view as a clip plane in 3D. ## Steps to Do + ### Develop Your Network -First we need to develop the network to scroll through the slices. Add a `LocalImage` module to your workspace and select the file *ProbandT1* from MeVisLab demo data. +First, we need to develop the network to scroll through the slices. Add a `LocalImage` module to your workspace and select the file *ProbandT1* from MeVisLab demo data. Add the modules `OrthoReformat3`, `Switch`, `SoView2D`, `View2DExtensions`, and `SoRenderArea` and connect them as seen below. @@ -44,7 +45,7 @@ We now want to visualize the slice visible in the 2D images as a 3D plane. Add a ![Example Network](images/tutorials/image_processing/network_example5b.png "Example Network") -A three-dimensional plane of the image is shown. Adapt the LUT as seen below. +A three-dimensional plane of the image is shown. Adapt the LUT as seen below. ![SoLUTEditor](images/tutorials/image_processing/tutorial5_lut.png "SoLUTEditor") @@ -63,11 +64,11 @@ This slice shall now be used as a clip plane in 3D. In order to achieve this, yo ![Example Network](images/tutorials/image_processing/network_example5c.png "Example Network") -Now your 3D scene shows a three-dimensional volume cut by a plane in the middle. Once again, the clipping is not the same slice as your 2D view shows. +Now, your 3D scene shows a three-dimensional volume cut by a plane in the middle. Once again, the clipping is not the same slice as your 2D view shows. ![Clip plane in 3D](images/tutorials/image_processing/3D_ClipPlane.png "Clip plane in 3D") -Again create a parameter connection from the `SoView2D` position *Slice as plane*, but this time to the `SoClipPlane`. +Again, create a parameter connection from the `SoView2D` position *Slice as plane*, but this time to the `SoClipPlane`. ![SoClipPlane Plane](images/tutorials/image_processing/SoClipPlane_Plane.png "SoClipPlane Plane") @@ -76,7 +77,7 @@ If you now open all three viewers and scroll through the slices in 2D, the 3D vi ![Final 3 views](images/tutorials/image_processing/Final3Views.png "Final 3 views") ## Summary -* The module `OthoReformat3` transforms input images to the three viewing directions: coronal, axial, and sagittal. +* The module `OrthoReformat3` transforms input images to the three viewing directions: coronal, axial, and sagittal. * A `Switch` can be used to toggle through multiple input images. * The `SoGVRDrawOnPlane` module renders a single slice as a three-dimensional plane. * Three-dimensional clip planes on volumes can be created by using a `SoClipPlane` module. 
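As a side note on what a clip plane does geometrically: the sketch below is a plain NumPy illustration, not the `SoClipPlane` module. It keeps only the points on the positive side of the plane n·x + d ≥ 0, which is exactly how the rendered volume is cut by the plane.

```python
# Conceptual sketch: clipping against the plane n . x + d >= 0 keeps only the
# geometry on its positive side (NumPy, not Open Inventor / SoClipPlane).
import numpy as np

def clip_points(points, normal, d):
    normal = np.asarray(normal, dtype=float)
    keep = points @ normal + d >= 0.0
    return points[keep]

pts = np.array([[0.0, 0.0, -2.0], [0.0, 0.0, 0.5], [1.0, 2.0, 3.0]])
print(clip_points(pts, normal=(0.0, 0.0, 1.0), d=0.0))  # drops the point behind z = 0
```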
diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing6.md b/mevislab.github.io/content/tutorials/image_processing/image_processing6.md index 4e049e5b9..e15ad7dd8 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing6.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing6.md @@ -20,7 +20,7 @@ This tutorial explains how to load and visualize DICOM RT (Radiotherapy) data in * Load CT and related RTSTRUCT data. * Visualize RTSTRUCTs as colored CSOs. * Show labels next to each RTSTRUCT contour. -* Visualize RTDOSE as a semi-transparent colored overlay. +* Visualize RTDOSE as a semitransparent colored overlay. *DICOM RT* files are essential in radiotherapy treatment planning. @@ -31,7 +31,7 @@ They include: Additional objects not used in this tutorial are: * **RT Image**, specifying radiotherapy images that have been obtained on a conical imaging geometry, such as those found on conventional simulators and portal imaging devices. It can also be used for calculated images using the same geometry, such as digitally reconstructed radiographs (DRRs). -* **RT Beams Treatment Record**, **RT Brachy Treatment Record**, and **RT Treatment Summary Record**, containing data obtained from actual radiotherapy treatments. These objects are the historical record of treatment, and are linked with the other „planning” objects to form a complete picture of the treatment. +* **RT Beams Treatment Record**, **RT Brachy Treatment Record**, and **RT Treatment Summary Record**, containing data obtained from actual radiotherapy treatments. These objects are the historical record of the treatment, and are linked with the other „planning” objects to form a complete picture of the treatment. ## Precondition If you do not have DICOM RT data, you can download an example dataset at: @@ -41,13 +41,12 @@ https://medicalaffairs.varian.com/headandneckbilat-imrtsx2 This data is FOR EDUCATIONAL AND SCIENTIFIC EXCHANGE ONLY – NOT FOR SALES OR PROMOTIONAL USE. {{}} -Extract the ZIP file into a new folder named *DICOM_FILES*. +Extract the *.zip* file into a new folder named *DICOM_FILES*. ## Prepare Your Network - Add the module `DicomImport` to your workspace. -Then click {{< mousebutton "left" >}} Browse and select the new folder named *DICOM_FILES* where you copied the content of the ZIP file earlier. Click Import {{< mousebutton "left" >}}. You can see the result after import below: +Then, click {{< mousebutton "left" >}} *Browse* and select the new folder named *DICOM_FILES* where you copied the content of the ZIP file earlier. Click *Import* {{< mousebutton "left" >}}. You can see the result after import below: ![DICOM RT Data in DicomImport module](images/tutorials/image_processing/Example6_1.png "DICOM RT Data in DicomImport module") @@ -72,7 +71,6 @@ You have to select the correct index for the *RTSTRUCT*. In our example it is in ![RTSTRUCT in DicomImportExtraOutput](images/tutorials/image_processing/Example6_2.png "RTSTRUCT in DicomImportExtraOutput") ### Visualize RTSTRUCTs as Colored CSOs - Add an `ExtractRTStruct` module to the `DicomImportExtraOutput` to convert *RTSTRUCT* data into MeVisLab contours (CSOs). CSOs allow to visualize the contours on the CT scan and to interact with them in MeVisLab. A preview of the resulting CSOs can be seen in the *Output Inspector*. @@ -97,7 +95,7 @@ labelString = cso.getGroupAt(0).getLabel() ``` {{}} -Then press apply {{< mousebutton "left" >}}. 
The name of the structure is defined in the group of each CSO. We now show the label of the group next to the contour. Add a `CSOLabelPlacementGlobal` module to define a better readable location of these labels. +Then, press apply {{< mousebutton "left" >}}. The name of the structure is defined in the group of each CSO. We now show the label of the group next to the contour. Add a `CSOLabelPlacementGlobal` module to define a better readable location of these labels. The module `CSOLabelPlacementGlobal` implements an automatic label placement strategy that considers all CSOs on a slice. @@ -111,7 +109,7 @@ Add a `SoCSO3DRenderer` and a `SoExaminerViewer` module and connect them to the ![CSOs in 3D](images/tutorials/image_processing/Example6_8.png "CSOs in 3D") ### Visualizing RTDOSE as a Colored Overlay -We now want to show the *RTDOSE* data as provided for the patient as a semi-transparent, colored overlay. +We now want to show the *RTDOSE* data as provided for the patient as a semitransparent, colored overlay. Add another `DicomImportExtraOutput` module to get the *RTDOSE* object. Again, select the correct index. In this case, we select index 4. @@ -131,7 +129,7 @@ On tab *Editor*, define a lookup table as seen below. ![Lookup table](images/tutorials/image_processing/Example6_11.png "Lookup table") -The lookup table shall be used for showing the RT Dose data as a semi-transparent overlay on the CT image. Add a `SoView2DOverlay` and a `SoGroup` module to your network. Replace the input of the View2D module from the `SoView2DCSOExtensibleEditor` with the `SoGroup`. +The lookup table shall be used for showing the RT Dose data as a semitransparent overlay on the CT image. Add a `SoView2DOverlay` and a `SoGroup` module to your network. Replace the input of the View2D module from the `SoView2DCSOExtensibleEditor` with the `SoGroup`. ![RT Dose data using SoView2DOverlay](images/tutorials/image_processing/Example6_12.png "RT Dose data using SoView2DOverlay") @@ -143,6 +141,6 @@ If you want to visualize the RT Struct contours together with the RT Dose overla * DICOM RT data can be loaded and processed in MeVisLab. * RT Structure Sets can be converted to MeVisLab contours and visualized using `ExtractRTStruct` and `CSOLabelRenderer` modules. * Anatomical information can be shown using the module `CSOLabelRenderer`. -* RT Dose files can be shown as a semi-transparent colored overlay using `SoView2DOverlay`. +* RT Dose files can be shown as a semitransparent colored overlay using `SoView2DOverlay`. {{< networkfile "/examples/image_processing/example6/DICOMRT.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/openinventor.md b/mevislab.github.io/content/tutorials/openinventor.md index 5766f1234..14bb301e5 100644 --- a/mevislab.github.io/content/tutorials/openinventor.md +++ b/mevislab.github.io/content/tutorials/openinventor.md @@ -8,29 +8,28 @@ tags: ["Beginner", "Tutorial", "Open Inventor", "3D"] menu: main: identifier: "openinventor" - title: "Examples for handling Open Inventor Modules and Scene Graphs in MeVisLab." + title: "Examples for Handling Open Inventor Modules and Scene Graphs in MeVisLab." weight: 500 parent: "tutorials" --- # Open Inventor Modules {#TutorialOpenInventorModules} -## Introduction +## Introduction In total, there are three types of modules: * blue ML modules * brown macro modules * green Open Inventor modules -The names of Open Inventor modules start with the prefix `So\*` (for Scene Objects). 
Open Inventor modules process and render 3D scene objects and enable image interactions. Scene objects are transmitted using the semi-circle shaped input and output connectors. With the help of these modules, Open Inventor scenes can be implemented. +The names of Open Inventor modules start with the prefix `So\*` (for Scene Objects). Open Inventor modules process and render 3D scene objects and enable image interactions. Scene objects are transmitted using the semicircle-shaped input and output connectors. With the help of these modules, Open Inventor scenes can be implemented. An exemplary Open Inventor scene will be implemented in the following paragraph. ## Open Inventor Scenes and Execution of Scene Graphs{#sceneGraphs} - -Inventor scenes are organized in structures called scene graphs. A scene graph is made up of nodes, which represent 3D objects to be drawn, properties of the 3D objects, nodes that combine other nodes and are used for hierarchical grouping, and others (cameras, lights, etc.). These nodes are accordingly called shape nodes, property nodes, group nodes, and so on. Each node contains one or more pieces of information stored in fields. For example, the Sphere node contains only its radius, stored in its *radius* field. Open Inventor modules function as Inventor nodes, so they may have input connectors to add Inventor child nodes (modules) and output connectors to link themselves to Inventor parent nodes (modules). +Inventor scenes are organized in structures called scene graphs. A scene graph is made up of nodes, which represent 3D objects to be drawn, properties of the 3D objects, nodes that combine other nodes and are used for hierarchical grouping, and others (cameras, lights, etc.). These nodes are accordingly called shape nodes, property nodes, group nodes, and so on. Each node contains one or more pieces of information stored in fields. For example, the `SoSphere` node contains only its radius, stored in its *radius* field. Open Inventor modules function as Open Inventor nodes, so they may have input connectors to add Open Inventor child nodes (modules) and output connectors to link themselves to Open Inventor parent nodes (modules). {{}} -The model below depicts the order in which the modules are executed. The red arrow indicates the traversal order: from top to bottom and from left to right. The modules are numbered accordingly, from 1 to 8. Knowing about the traversal order can be crucial to achieve a certain ouput. +The model below depicts the order in which the modules are traversed. The red arrow indicates the traversal order: from top to bottom and from left to right. The modules are numbered accordingly from 1 to 8. Knowing about the traversal order can be crucial to achieving a certain output. ![Traversing in Open Inventor](images/tutorials/openinventor/OI1_13.png "Traversing through a network of Open Inventor modules") {{}} @@ -41,7 +40,7 @@ The `SoGroup` and `SoSeparator` modules can be used as containers for child node In the network above, we render four `SoCone` objects. The left side uses the `SoSeparator` modules, the right side uses the `SoGroup` ones. There is a `SoMaterial` module defining one of the left cone objects to be yellow. As you can see, the `SoMaterial` module is only applied to that cone, the other left cone remains in its default gray color, because the `SoSeparator` module isolates the separator's children from the rest of the scene graph.
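The traversal rule and the state isolation just described can be summarized with a tiny pseudo scene graph. This is a purely illustrative Python sketch, not the Open Inventor API: a separator saves the traversal state before visiting its children and restores it afterwards, so a material set inside it does not leak to later siblings.

```python
# Illustrative sketch of depth-first, left-to-right traversal with separator
# state isolation (plain Python, not the Open Inventor API).
def traverse(node, state):
    kind, payload, children = node
    if kind == "separator":
        saved = dict(state)            # save the current traversal state
        for child in children:
            traverse(child, state)
        state.clear()
        state.update(saved)            # restore it after the children
    elif kind == "material":
        state["color"] = payload       # property nodes change the state
    elif kind == "shape":
        print(payload, "is drawn in", state.get("color", "default gray"))

scene = ("separator", None, [
    ("separator", None, [("material", "yellow", []), ("shape", "cone 1", [])]),
    ("shape", "cone 2", []),           # unaffected: the separator isolated the material
])
traverse(scene, {})   # -> cone 1 in yellow, cone 2 in default gray
```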
-On the right side, we are using `SoGroup` ({{< docuLinks "/Standard/Documentation/Publish/ModuleReference/SoGroup.html" "SoGroup module reference" >}}). The material of the cone is set to red. As the `SoGroup` module does not alter the traversal state in any way, the second cone in this group is also red. +On the right side, we are using `SoGroup` ({{< docuLinks "/Standard/Documentation/Publish/ModuleReference/SoGroup.html" "SoGroup module reference" >}}). The material of the cone is set to be of red color. As the `SoGroup` module does not alter the traversal state in any way, the second cone in this group is also colored in red. {{}} Be aware of some Open Inventor modules altering the traversal order. If your scene turns out to differ from your expected result, check whether incorporated `SoSeparator` modules are the cause. diff --git a/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md b/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md index 48408307c..4d93ff42c 100644 --- a/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md +++ b/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Open Inventor", "3D", "Camera", "Perspective Cam menu: main: identifier: "camerainteractions" - title: "Examples for camera interactions in Open Inventor" + title: "Examples for Camera Interactions in Open Inventor" weight: 530 parent: "openinventor" --- @@ -20,20 +20,20 @@ menu: ## Introduction In this example, we are learning the basic principles of camera interactions in Open Inventor. We will show the difference between a `SoRenderArea` and a `SoExaminerViewer` and use different modules of the `SoCamera*` group. -## The `SoRenderArea` module +## The `SoRenderArea` Module The module `SoRenderArea` is a simple renderer for Open Inventor scenes. It offers functionality to record movies and to create snapshots, but does not include an own camera or light. Add a `SoBackground`, a `SoMaterial` and a `SoOrientationModel` module to your workspace and connect them to a `SoGroup`. Add a `SoRenderArea` to the `SoGroup` and open the viewer. ![SoRenderArea without camera and lights](images/tutorials/openinventor/Camera_1.png "SoRenderArea without camera and lights") -You can not interact with your scene and the rendered content is very dark. Open the `SoOrientationModel` and change *Model* to *Skeleton* to see that a little better. You can also change the material by using the panel of the `SoMaterial` module. +You cannot interact with your scene and the rendered content is very dark. Open the `SoOrientationModel` and change *Model* to *Skeleton* to see that a little bit better. You can also change the material by using the panel of the `SoMaterial` module. Add a `SoCameraInteraction` module and connect it between the `SoGroup` and the `SoRenderArea`. ![SoRenderArea with SoCameraInteraction](images/tutorials/openinventor/Camera_2.png "SoRenderArea with SoCameraInteraction") -The `SoCameraInteraction` does not only allow you to change the camera position in your scene but also adds light. The module automatically adds a headlight you can switch off in the fields of the module. +The `SoCameraInteraction` does not only allow you to change the camera position in your scene but also adds light. The module automatically adds a headlight that you can switch off with a field of the module. 
{{< imagegallery 2 "images/tutorials/openinventor" "Headlight_TRUE" "Headlight_FALSE" >}} @@ -41,7 +41,7 @@ The `SoCameraInteraction` can also be extended by a `SoPerspectiveCamera` or a ` ![SoPerspectiveCamera and SoOrthographicCamera](images/tutorials/openinventor/Camera_3.png "SoPerspectiveCamera and SoOrthographicCamera") -You can now switch between both cameras, but you can not interact with them in the viewer. Select the `SoCameraInteraction` and toggle *detectCamera*. Now the default camera of the `SoCameraInteraction` is replaced by the camera selected in the `SoSwitch`. +You can now switch between both cameras, but you cannot interact with them in the viewer. Select the `SoCameraInteraction` and toggle *detectCamera*. Now the default camera of the `SoCameraInteraction` is replaced by the camera selected in the `SoSwitch`. Whenever you change the camera in the switch, you need to detect the new camera in the `SoCameraInteraction`. @@ -49,13 +49,13 @@ Whenever you change the camera in the switch, you need to detect the new camera A `SoPerspectiveCamera` camera defines a perspective projection from a viewpoint. -The viewing volume for a perspective camera is a truncated pyramid. By default, the camera is located at (0,0,1) and looks along the negative z-axis; the Position and Orientation fields can be used to change these values. The Height Angle field defines the total vertical angle of the viewing volume; this and the Aspect Ratio field determine the horizontal angle. +The viewing volume for a perspective camera is a truncated pyramid. By default, the camera is located at (0, 0, 1) and looks along the negative z-axis; the *Position* and *Orientation* fields can be used to change these values. The *Height Angle* field defines the total vertical angle of the viewing volume; this and the *Aspect Ratio* field determine the horizontal angle. A `SoOrthographicCamera` camera defines a parallel projection from a viewpoint. -This camera does not diminish objects with distance, as an SoPerspectiveCamera does. The viewing volume for an orthographic camera is a cuboid (a box). +This camera does not diminish objects with distance as an SoPerspectiveCamera does. The viewing volume for an orthographic camera is a cuboid (a box). -By default, the camera is located at (0,0,1) and looks along the negative z-axis; the Position and Orientation fields can be used to change these values. The Height field defines the total height of the viewing volume; this and the Aspect Ratio field determine its width. +By default, the camera is located at (0, 0, 1) and looks along the negative z-axis; the *Position* and *Orientation* fields can be used to change these values. The *Height* field defines the total height of the viewing volume; this and the *Aspect Ratio* field determine its width. Add a `SoCameraWidget` and connect it to your `SoGroup`. @@ -65,7 +65,7 @@ This module shows a simple widget on an Inventor viewer that can be used to rota You can also add more than one widget to show multiple widgets in the same scene, see example network of the `SoCameraWidget` module. -## The `SoExaminerViewer` module +## The `SoExaminerViewer` Module The `SoExaminerViewer` makes some things much easier, because a camera and a light are already integrated. Add a `SoExaminerViewer` to your workspace and connect it to the `SoBackground`, the `SoMaterial` and the `SoOrientationModel` modules. 
@@ -81,8 +81,8 @@ The module also allows you to switch between perspective and orthographic camera The module also provides UI elements to interact. ## Summary -* MeVisLab provides multiple options for adding a camera to a scene +* MeVisLab provides multiple options for adding a camera to a scene. * The `SoExaminerViewer` already has an integrated camera and light, the `SoRenderArea` requires additional modules. -* You can use perspective and orthographic cameras +* You can use perspective and orthographic cameras. {{< networkfile "examples/open_inventor/example3/CameraInteractions.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md b/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md index 44bc3141f..79d433f70 100644 --- a/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md +++ b/mevislab.github.io/content/tutorials/openinventor/mouseinteractions.md @@ -1,5 +1,5 @@ --- -title: "Example 2: Mouse interactions in Open Inventor" +title: "Example 2: Mouse Interactions in Open Inventor" date: 2022-06-15T08:56:33+02:00 draft: false weight: 520 @@ -8,19 +8,21 @@ tags: ["Beginner", "Tutorial", "Open Inventor", "3D", "Mouse Interactions"] menu: main: identifier: "mouseinteractions" - title: "Implementation of mouse interactions in Open Inventor Scenes" + title: "Implementation of Mouse Interactions in Open Inventor Scenes" weight: 520 parent: "openinventor" --- -# Example 2: Mouse interactions in Open Inventor {#TutorialVisualizationExample5} + +# Example 2: Mouse Interactions in Open Inventor {#TutorialVisualizationExample5} {{< youtube "Ye5lOHDWcRo" >}} ## Introduction In this example, we implement some image or object interactions. We will create a 3D scene, in which we display a cube and change its size using the mouse. We also get to know another viewer, the module `SoExaminerViewer`. This viewer is important. It enables the rendering of Open Inventor scenes and allows interactions with the Open Inventor scenes. -## Steps to do -### Develop your network +## Steps to Do + +### Develop Your Network For implementing the example, build the following network. We already know the module `SoCube`, which builds a 3D scene object forming a cube. In addition to that, add the module `SoMouseGrabber`. Connect the modules as shown below. {{}} @@ -30,8 +32,8 @@ Additional information about the `SoMouseGrabber` can be found here: {{< docuLin [//]: <> (MVL-653) ![SoMouseGrabber](images/tutorials/openinventor/V5_01.png "SoMouseGrabber") -### Configure mouse interactions -Now, open the panels of the module `SoMouseGrabber` and the module `SoExaminerViewer`, which displays a cube. In the Viewer, press the right key of your mouse {{< mousebutton "right" >}} and move the mouse around. This action can be seen in the panel of the module SoMouseGrabber. +### Configure Mouse Interactions +Now, open the panels of the module `SoMouseGrabber` and the module `SoExaminerViewer`, which displays a cube. In the viewer, press the right button of your mouse {{< mousebutton "right" >}} and move the mouse around. This action can be seen in the panel of the module SoMouseGrabber. {{}} Make sure to configure `SoMouseGrabber` fields as seen below. @@ -41,21 +43,20 @@ Make sure to configure `SoMouseGrabber` fields as seen below. **You can see:** 1. *Button 3*, the right mouse button {{< mousebutton "right" >}}, is tagged as being pressed -2. Changes of the mouse coordinates are displayed in the box *Output*. +2. 
Changes of the mouse coordinates are displayed in the box *Output* ![Mouse Interactions](images/tutorials/openinventor/V5_03.png "Mouse Interactions") -### Resize cube via mouse interactions - -We like to use the detected mouse-movements to change the size of our cube. In order to that, open the panel of `SoCube`. Build parameter connections from the mouse coordinates to the width and depth of the cube. +### Resize Cube via Mouse Interactions +We like to use the detected mouse movements to change the size of our cube. In order to do that, open the panel of `SoCube`. Build parameter connections from the mouse coordinates to the width and depth of the cube. -![Change Cube size by Mouse Events](images/tutorials/openinventor/V5_04.png "Change Cube size by Mouse Events") +![Change Cube Size With Mouse Events](images/tutorials/openinventor/V5_04.png "Change Cube Size With Mouse Events") -If you now press the right mouse key {{< mousebutton "right" >}} inside the Viewer and move the mouse around, the size of the cube changes. +If you now press the right mouse button {{< mousebutton "right" >}} in the viewer and move the mouse around, the size of the cube changes. ## Exercises -1. Change location of the cube via Mouse Interactions by using the Module `SoTransform` -1. Add more objects to the scene and interact with them +1. Change the location of the cube via mouse interactions by using the module `SoTransform`. +1. Add more objects to the scene and interact with them. ## Summary * The module `SoExaminerViewer` enables the rendering of Open Inventor scenes and allows interactions with the Open Inventor scenes. diff --git a/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md b/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md index e28e9a871..7ae777ea6 100644 --- a/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md +++ b/mevislab.github.io/content/tutorials/openinventor/openinventorobjects.md @@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Open Inventor", "3D"] menu: main: identifier: "openinventorobjects" - title: "Create Open Inventor Objects, change Material, Translate location in 3D and general explanation about Scene Graphs." + title: "Create Open Inventor Objects, Change Material, Translate Location in 3D and General Explanation about Scene Graphs" weight: 510 parent: "openinventor" --- @@ -20,16 +20,16 @@ menu: ## Introduction In this example we like to construct an Open Inventor scene in which we display three 3D objects of different color and shape. -## Steps to do -### Generating Open Inventor Objects {#TutorialGenerateOpenInventorObjects} First, add the modules `SoExaminerViewer` and `SoCone` to the workspace and connect both modules as shown. The module `SoCone` creates a cone shaped object, which can be displayed in the Viewer `SoExaminerViewer`. +## Steps to Do +### Generating Open Inventor Objects {#TutorialGenerateOpenInventorObjects} +First, add the modules `SoExaminerViewer` and `SoCone` to the workspace and connect both modules as shown. The module `SoCone` creates a cone shaped object, which can be displayed in the viewer `SoExaminerViewer`. ![SoExaminerViewer](images/tutorials/openinventor/OI1_01.png "SoExaminerViewer") -We like to change the color of the cone. In order to do so, add the module `SoMaterial` to the workspace and connect the module as shown below.
When creating an Open Inventor scene (by creating networks of Open Inventor modules), the sequence of module connections, in this case the sequence of the inputs to the module `SoExaminerViewer` determines the functionality of the network. +We like to change the color of the cone. In order to do so, add the module `SoMaterial` to the workspace and connect the module as shown below. When creating an Open Inventor scene (by creating networks of Open Inventor modules), the sequence of module connections, in this case the sequence of the inputs to the module `SoExaminerViewer`, determines the functionality of the network. -Open Inventor modules are executed like scene graphs. This means, modules are executed from top to bottom and from left to right. Here, it is important to connect the module `SoMaterial` to an input on the left side of the connection between `SoCone` and `SoExaminerViewer`. With this, we first select features like a color and these features are then assigned to all objects, which were executed afterwards. Now, open the panel of the module `SoMaterial` and select any *Diffuse Color* you like. Here, we choose green. +Open Inventor modules are executed like scene graphs. This means modules are executed from top to bottom and from left to right. Here, it is important to connect the module `SoMaterial` to an input on the left side of the connection between `SoCone` and `SoExaminerViewer`. With this, we first select features like a color, and these features are then assigned to all objects that are executed afterward. Now, open the panel of the module `SoMaterial` and select any *Diffuse Color* you like. Here, we choose green. ![Colors and Material in Open Inventor](images/tutorials/openinventor/OI1_02.png "Colors and Material in Open Inventor") @@ -39,7 +39,7 @@ In order to do that, add the module `SoSphere` to the workspace. Connect this mo ![Adding a SoSphere](images/tutorials/openinventor/OI1_03.png "Adding a SoSphere") -They display both objects at different positions, add the modules `SoSeparator` and `SoTransform` to the scene and connect both modules shown on the following picture. Open the panel of `SoTransform` and implement a translation in x-direction to shift the object. Now you can examine two things: +To display both objects at different positions, add the modules `SoSeparator` and `SoTransform` to the scene and connect both modules as shown in the following picture. Open the panel of `SoTransform` and implement a translation in the x-direction to shift the object. Now you can examine two things: 1. The sphere loses its green color 2. The cone is shifted to the side @@ -48,7 +48,7 @@ They display both objects at different positions, add the modules `SoSeparator` The module `SoTransform` is responsible for shifting objects, in this case the cone, to the side. The module `SoSeparator` ensures that only the cone is shifted and also only the cone is colored in green. It separates this features from the rest of the scene. -We like to add a third object, a cube, and shift it to the other side of the sphere. Add the modules `SoCube` and `SoTransform` to the workspace and connect both modules as shown below. To shift the cube to the other side of the sphere, open the panel of `SoTransform` and adjust the Translation in x direction. The sphere is not affected by the translation, as the connection from `SoTransform1` to `SoExaminerViewer` is established on the right side of the connection between `SoSphere` and `SoExaminerViewer`.
+We like to add a third object, a cube, and shift it to the other side of the sphere. Add the modules `SoCube` and `SoTransform` to the workspace and connect both modules as shown below. To shift the cube to the other side of the sphere, open the panel of `SoTransform` and adjust the *Translation* in the x-direction. The sphere is not affected by the translation, as the connection from `SoTransform1` to `SoExaminerViewer` is established on the right side of the connection between `SoSphere` and `SoExaminerViewer`. ![Adding a SoCube](images/tutorials/openinventor/OI1_07.png "Adding a SoCube") @@ -56,11 +56,11 @@ Again, we use the module `SoMaterial` to select a color for the cone and the sph ![Multiple Materials](images/tutorials/openinventor/OI1_08.png "Multiple Materials") -For easier handling we group an object together with its features by using the module `SoGroup`. This does not separate features, which is the reason for the cube to be colorized. All modules that are derived from `SoGroup` offer a basically infinite number of input connectors (a new connector is added for every new connection). +For an easier handling, we group an object together with its features by using the module `SoGroup`. This does not separate features, which is the reason for the cube to be colorized. All modules that are derived from `SoGroup` offer a basically infinite number of input connectors (a new connector is added for every new connection). ![SoGroup](images/tutorials/openinventor/OI1_09.png "SoGroup") -If we do not want to colorize the cube, we have to exchange the module `SoGroup` by another `SoSeparator` module. +If we do not want to colorize the cube, we have to exchange the module `SoGroup` for another `SoSeparator` module. ![SoSeparator](images/tutorials/openinventor/OI1_10.png "SoSeparator") diff --git a/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md b/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md index 9028da984..324f080ff 100644 --- a/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md +++ b/mevislab.github.io/content/tutorials/openinventor/posteffectsinopeninventor.md @@ -8,42 +8,42 @@ tags: ["Advanced", "Tutorial", "Open Inventor", "Post Effects"] menu: main: identifier: "posteffectsinopeninventor" - title: "Learn how to use Post Effects in Open Inventor" + title: "Learn How to Use Post Effects in Open Inventor" weight: 540 parent: "openinventor" --- + # Example 4: Post Effects in Open Inventor -## Introduction +## Introduction Up to this point, we practiced constructing Open Inventor scenes and placed three-dimensional Open Inventor objects of different colors and shapes within them. -In this tutorial, we will go over the steps to add shadows to our 3D-objects, make them glow and vary their opacity to make them transparent. We will also incorporate WEMs from multi-frame DICOMs and render them as scene objects to see how different post effects can be used on them. -## Steps to follow +In this tutorial, we will go over the steps to add shadows to our 3D objects, make them glow, and vary their opacity to make them transparent. We will also incorporate WEMs from multiframe DICOMs and render them as scene objects to see how different post effects can be used on them. 
-### From DICOM to scene object +## Steps to Follow -To incorporate DICOMs into your Open Inventor Scene, they have to be rendered as Open Inventor objects, which can be done by converting them into [WEMs](glossary/#winged-edge-meshes) first. Begin by adding the modules `LocalImage`, `WEMIsoSurface` and `SoWEMRenderer` to your workspace. Open the panel of the `LocalImage` module, browse your files and choose a DICOM with multiple frames as input data. Connect the `LocalImage` module's output connector to `WEMIsoSurface` module's input connector to create a WEM of the study's surface. Then connect the `WEMIsoSurface` module's output connector to the `SoWEMRenderer` module's input connector to render a scene object, that can be displayed by adding a `SoExaminerViewer` module to the workspace and connecting the `SoWEMRenderer` module's output connector to its input connector. +### From DICOM to Scene Object + +To incorporate DICOMs into your Open Inventor Scene, they have to be rendered as Open Inventor objects, which can be done by converting them into [WEMs](glossary/#winged-edge-meshes) first. Begin by adding the modules `LocalImage`, `WEMIsoSurface`, and `SoWEMRenderer` to your workspace. Open the panel of the `LocalImage` module, browse your files, and choose a DICOM with multiple frames as input data. Connect the `LocalImage` module's output connector to `WEMIsoSurface` module's input connector to create a WEM of the study's surface. Then, connect the `WEMIsoSurface` module's output connector to the `SoWEMRenderer` module's input connector to render a scene object that can be displayed by adding a `SoExaminerViewer` module to the workspace and connecting the `SoWEMRenderer` module's output connector to its input connector. {{}} -We don't recommend using single frame DICOMs for this example as a certain depth is required to interact with the scene objects as intended. Also make sure that the pixel data of the DICOM file you choose contains all slices of the study, as it might be difficult to arrange scene objects of individual slices to resemble the originally captured study. +We don't recommend using single-frame DICOMs for this example as a certain depth is required to interact with the scene objects as intended. Also make sure that the pixel data of the DICOM file you choose contains all slices of the study, as it might be difficult to arrange scene objects of individual slices to resemble the originally captured study. {{}} -![From DICOM to SO](images/tutorials/openinventor/multiframetoso.PNG "How to create a scene object out of a multi-frame DICOM") +![From DICOM to SO](images/tutorials/openinventor/multiframetoso.PNG "How to create a scene object out of a multiframe DICOM") {{}} -Consider adding a `View2D` and an `Info` module to your `LocalImage` module's output connector to be able to compare the rendered object with the original image and adapt the ISO values to minimize noise. +Consider adding a `View2D` and an `Info` module to your `LocalImage` module's output connector to be able to compare the rendered object with the original image and adapt the isovalues to minimize noise. {{}} ### PostEffectShader - -To apply shading to our DICOM scene object, add a `SoShaderPipeline` and a `SoShaderPipelineCellShading` module to our network and connect their output connectors to a `SoToggle` module's input connector. Then connect the `SoToggle` module's output connector to the `SoExaminerViewer`, but on the left side of the connection to the `SoWEMRenderer` module. 
This way, shading can be toggled and is applied to all scene objects connected to the right of the `SoToggle` module's connection. +To apply shading to our DICOM scene object, add a `SoShaderPipeline` and a `SoShaderPipelineCellShading` module to our network and connect their output connectors to a `SoToggle` module's input connector. Then, connect the `SoToggle` module's output connector to the `SoExaminerViewer`, but on the left side of the connection to the `SoWEMRenderer` module. This way, shading can be toggled and is applied to all scene objects connected to the right of the `SoToggle` module's connection. ![Shading toggled off](images/tutorials/openinventor/shadingtoggled1.PNG "Shading toggled off") ![Shading toggled on](images/tutorials/openinventor/shadingtoggledon1.PNG "Shading toggled on") -### Tidying your workspace and preparing the next steps - -Now add a `SoPostEffectBackground` module to your workspace and connect its output connector to the `SoExaminerViewer` module's input connector. Group the modules `SoToggle`, `SoShaderPipeline` and `SoShaderPipelineCellShading` together and name the group "Toggle Shading". Then, group the modules `SoWEMRenderer`, `WEMIsoSurface` and `LocalImage` together and name the group "DICOM Object". +### Tidying Your Workspace and Preparing the Next Steps +Now, add a `SoPostEffectBackground` module to your workspace and connect its output connector to the `SoExaminerViewer` module's input connector. Group the modules `SoToggle`, `SoShaderPipeline`, and `SoShaderPipelineCellShading` together and name the group "Toggle Shading". Then, group the modules `SoWEMRenderer`, `WEMIsoSurface`, and `LocalImage` together and name the group "DICOM Object". {{}} Structuring the workspace by grouping modules based on their functionality helps to stay focused and keeps everything tidy. @@ -56,17 +56,16 @@ Use a `SoPostEffectMainGeometry` module to connect both of the groups you just c You can now change your Open Inventor scene's background color. ### PostEffectEdges - -Add the module `SoPostEffectEdges` to your workspace and connect its output connector with the `SoExaminerViewer` module's input connector. -Then open its panel and choose a color. You can try different modes, sampling distances and thresholds: +Add the module `SoPostEffectEdges` to your workspace and connect its output connector with the `SoExaminerViewer` module's input connector. + +Then, open its panel and choose a color. You can try different modes, sampling distances and thresholds: ![Colored Edges](images/tutorials/openinventor/Edges1.PNG "Colored edges") ![Colored Edges 2](images/tutorials/openinventor/Edges2.PNG "Varying settings of colored edges") ![Colored Edges 3](images/tutorials/openinventor/Edges3.PNG "Varying settings of colored edges") ### PostEffectGeometry - -To include geometrical objects in your Open Inventor scene, add two `SoSeparator` modules to the workspace and connect them to the input connector of `SoPostEffectMainGeometry`. Then add a `SoMaterial`, `SoTransform` and `SoSphere` or `SoCube` module to each `SoSeparator` and adjust their size (using the panel of the `SoSphere` or `SoCube` module) and placement within the scene (using the panel of the `SoTransform` module) as you like. +To include geometrical objects in your Open Inventor scene, add two `SoSeparator` modules to the workspace and connect them to the input connector of `SoPostEffectMainGeometry`. 
Then, add a `SoMaterial`, `SoTransform`, and `SoSphere` or `SoCube` module to each `SoSeparator` and adjust their size (using the panel of the `SoSphere` or `SoCube` module) and placement within the scene (using the panel of the `SoTransform` module) as you like. {{}} You'll observe that the transparency setting in the `SoMaterial` module does not apply to the geometrical objects. Add a `SoPostEffectTransparentGeometry` module to your workspace, connect its output connector to the `SoExaminerViewer` module's input connector and its input connectors to the `SoSeparator` module's output connector to create transparent geometrical objects in your scene. @@ -75,13 +74,12 @@ You'll observe that the transparency setting in the `SoMaterial` module does not ![Workspace](images/tutorials/openinventor/WorkspaceAndNetwork.PNG "Workspace") ### PostEffectGlow - To put a soft glow on the geometrical scene objects, the module `SoPostEffectGlow` can be added to the workspace. ![Glow](images/tutorials/openinventor/WorkspaceWithGlow.PNG "Applied SoPostEffectGlow") ## Summary -* Multi-frame DICOM images can be rendered to be scene objects by converting them into WEMs first -* Open Inventor scenes can be augmented by adding PostEffects to scene objects +* Multiframe DICOM images can be rendered to be scene objects by converting them into WEMs first. +* Open Inventor scenes can be augmented by adding PostEffects to scene objects. {{< networkfile "examples/open_inventor/PostEffectTutorial.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/shorts.md b/mevislab.github.io/content/tutorials/shorts.md index fc4fbbdbc..aa98fcf38 100644 --- a/mevislab.github.io/content/tutorials/shorts.md +++ b/mevislab.github.io/content/tutorials/shorts.md @@ -14,7 +14,6 @@ menu: --- # MeVisLab Tips and Tricks - This chapter shows some features and functionalities that are helpful but do not provide its own tutorial. * [Keyboard Shortcuts](tutorials/shorts#shortcuts) @@ -36,7 +35,7 @@ This is a collection of useful keyboard shortcuts in MeVisLab. {{< keyboard "CTRL" "1" >}} - Automatically arrange selection of modules / in the current network + Automatically arrange selection of modules in the current network {{< keyboard "CTRL" "2" >}} @@ -52,15 +51,15 @@ This is a collection of useful keyboard shortcuts in MeVisLab. {{< keyboard "CTRL" "A" >}} then {{< keyboard "TAB" >}} - Layout .script file (in MATE) + Layout *.script* file (in MATE) {{< keyboard "CTRL" "D" >}} Duplicate currently selected module (including all field values) - {{< keyboard "CTRL" >}} and Left Mouse {{< mousebutton "left" >}} or Middle Mouse Button {{< mousebutton "middle" >}} - Show Internal Network + {{< keyboard "CTRL" >}} and Left Mouse Button {{< mousebutton "left" >}} or Middle Mouse Button {{< mousebutton "middle" >}} + Show internal network {{< keyboard "SPACE" >}} @@ -85,10 +84,7 @@ This is a collection of useful keyboard shortcuts in MeVisLab. - - ## Using Snippets {#snippets} - {{< youtube "xX7wJiyfxhA" >}} Sometimes you have to create the same network over and over again -- for example, to quickly preview DICOM files. Generally, you will at least add one module to load and another module to display your images. Sometimes you may also want to view the DICOM header data. A network you possibly generate whenever opening DICOM files will be the following: @@ -99,12 +95,11 @@ Create a snippet of your commonly used networks by adding the snippets list from Enter a name for your snippet like *DICOM Viewer* and click *Add*. 
-A new snippet will be shown in your Snippets List. You can drag and drop the snippet to your workspace and the modules are re-used, including all defined field values. +A new snippet will be shown in your Snippets List. You can drag and drop the snippet to your workspace and the modules are reused, including all defined field values. ![Snippets List](images/tutorials/Snippets_Panel.png "Snippets List") ## Scripting Assistant {#scriptingassistant} - {{< youtube "y6110PW5N_w" >}} If you are new to Python or don't have experiences in accessing fields in MeVisLab via Python scripting, the Scripting Assistant might help you. @@ -116,7 +111,6 @@ If you now interact with a network, module, or macro module, your user interacti ![Scripting Assistant](images/tutorials/ScriptingAssistant_Panel.png "Scripting Assistant") ## User Scripts {#user_scripts} - User scripts allow you to call any Python code from the main menu entry {{< menuitem "Scripting">}}. MeVisLab already comes with some user scripts you can try. You can also view the sources for example code via right-click {{< mousebutton "right" >}} on the menu entry under {{< menuitem "Scripting">}}. This example shows you how to change the color of the MeVisLab IDE to a dark mode. @@ -132,7 +126,7 @@ UserIDEActions { Action "Set Dark Theme" { name = changeTheme userScript = $(LOCAL)/changeTheme.py - statusTip = "Change Theme to dark mode." + statusTip = "Change theme to dark mode" accel = "ctrl+F9" } @@ -173,10 +167,9 @@ QApplication.setPalette(palette) This script defines the color of the MeVisLab user interface elements. You can define other colors and more items; this is just an example of what you can do with user scripts. -Switch back to the MeVisLab IDE and select the menu item {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}} again. The colors of the MeVisLab IDE change as defined in our Python script. This change persists until you restart MeVisLab and can always be repeated by selecting the menu entry or the keyboard shortcut {{< keyboard "ctrl+F9" >}}. +Switch back to the MeVisLab IDE and select the menu item {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}} again. The colors of the MeVisLab IDE change as defined in our Python script. This change persists until you restart MeVisLab and can always be repeated by selecting the menu entry or pressing the keyboard shortcut {{< keyboard "ctrl+F9" >}}. ## Show Status of Module Input and Output {#mlimagestate} - Especially in large networks it is useful to see the state of the input and output connectors of a module. By default, the module connectors do not show if data is available. Below image shows a `DicomImport` module and a `View2D` module where no data is loaded. ![No status on connector](images/tutorials/LMIMageState_Off.png "No status on connector") @@ -185,7 +178,7 @@ In the MeVisLab preferences dialog, you can see a checkbox *Show ML image state* ![Show ML image state](images/tutorials/LMIMageState.png "Show ML image state") -After enabling *Show ML image state*, your network changes and the input and output connectors appear red in case no data is available at the output. +After enabling *Show ML image state*, your network changes and the input and output connectors appear red in the case no data is available at the output. 
![No data on connector](images/tutorials/LMIMageState_On_1.png "No data on connector") @@ -194,7 +187,6 @@ After loading a valid DICOM directory, the connectors providing a valid ML image ![No data on connector](images/tutorials/LMIMageState_On_2.png "No data on connector") ## Module Suggestion of Module Input and Output {#modulesuggest} - {{< youtube "q_cw583EE_s" >}} MeVisLab provides a functionality to suggest frequently used modules for the selected output in your network. diff --git a/mevislab.github.io/content/tutorials/summary.md b/mevislab.github.io/content/tutorials/summary.md index 0e73e7bd2..16c13647c 100644 --- a/mevislab.github.io/content/tutorials/summary.md +++ b/mevislab.github.io/content/tutorials/summary.md @@ -14,15 +14,16 @@ menu: --- # MeVisLab Tutorial Chapter VII {#TutorialChapter7} + ## Summary -This chapter will summarize all previous chapters and you will develop a whole application in MeVisLab. The complete workflow from developing a prototype to delivering your final application to your customer is explained step-by-step. +This chapter will summarize all previous chapters and you will develop an entire application in MeVisLab. The complete workflow from developing a prototype to delivering your final application to your customer is explained step-by-step. ![Prototype to Product](images/tutorials/summary/Prototyping.png "Prototype to Product") {{}} -Some of the features described here will require a separate license. Building an installable executable requires the **MeVisLab ApplicationBuilder** license. It extends the **MeVisLab SDK** so that you can generate an installer of your developed macro module. +Some of the features described here will require a separate license. Building an installable executable requires the **MeVisLab ApplicationBuilder** license. It extends the **MeVisLab SDK**, so that you can generate an installer of your developed macro module. -Free evaluation licenses of the **MeVisLab ApplicationBuilder**, time-limited to 3 months, can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). +Free evaluation licenses of the **MeVisLab ApplicationBuilder**, time-limited to three months, can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). 
{{}} ## Prototype @@ -31,13 +32,13 @@ In the first step, you are developing an application based on the following requ * **Requirement 1**: The application shall be able to load DICOM data * **Requirement 2**: The application shall provide a 2D and a 3D viewer * **Requirement 3**: The 2D viewer shall display the loaded images -* **Requirement 4**: The 2D viewer shall provide the possibility to segment parts of the image based on a RegionGrowing algorithm - * **Requirement 4.1**: It shall be possible to click into the image for defining a marker position for starting the RegionGrowing - * **Requirement 4.2**: It shall be possible to define a threshold for the RegionGrowing algorithm -* **Requirement 5**: The 2D viewer shall display the segmentation results as a semi-transparent overlay +* **Requirement 4**: The 2D viewer shall provide the possibility to segment parts of the image based on a region growing algorithm + * **Requirement 4.1**: It shall be possible to click into the image for defining a marker position for starting the region growing algorithm + * **Requirement 4.2**: It shall be possible to define a threshold for the region growing algorithm +* **Requirement 5**: The 2D viewer shall display the segmentation results as a semitransparent overlay * **Requirement 5.1**: It shall be possible to define the color of the overlay -* **Requirement 6**: The 3D viewer shall visualize the loaded data in a 3-dimensional volume rendering -* **Requirement 7**: The 3D viewer shall additionally show the segmentation result as a 3-dimensional mesh +* **Requirement 6**: The 3D viewer shall visualize the loaded data in a three-dimensional volume rendering +* **Requirement 7**: The 3D viewer shall additionally show the segmentation result as a three-dimensional mesh * **Requirement 8**: The total volume of the segmented area shall be calculated and shown (in ml) * **Requirement 9**: It shall be possible to toggle the visible 3D objects * **Requirement 9.1**: Original data @@ -48,7 +49,7 @@ In the first step, you are developing an application based on the following requ Your network will be encapsulated in a macro module for later application development. For details about macro modules, see [Example 2.2: Global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules/). ### Step 3: Develop a User Interface and Add Python Scripting {#UIDesign} -Develop the UI and Python Scripts based on your requirements from Step 1. The resulting UI will look like below mockup: +Develop the UI and Python Scripts based on your requirements from Step 1. The resulting UI will look like the below mockup: ![User Interface Design](images/tutorials/summary/UIMockUp.png "User Interface Design") @@ -64,6 +65,6 @@ Create a standalone application by using the **MeVisLab ApplicationBuilder** and Integrate feedback from customers having installed your executable and adapt your test cases from Step 4. ### Step 7: Update Your Installable Executable -Re-build your executable and release a new version of your application. +Rebuild your executable and release a new version of your application. The above loop can easily be repeated until your product completely fulfills your defined requirements. 
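One note on **Requirement 8** before moving on to Step 1: the value shown in milliliters is a plain unit conversion from the number of segmented voxels and the voxel size of the loaded image (1 ml = 1 cm³ = 1000 mm³). The `CalculateVolume` module used in Step 1 reports this value directly; the sketch below only illustrates the underlying arithmetic, and the function and variable names are ours, not part of any MeVisLab API.

```Python
# Illustration only: the CalculateVolume module reports the segmented volume itself.
# 1 ml = 1 cm^3 = 1000 mm^3, so the volume follows from the voxel count and the
# voxel size (in mm) of the loaded image.
def segmented_volume_ml(num_voxels, voxel_size_mm=(1.0, 1.0, 1.0)):
    dx, dy, dz = voxel_size_mm
    return num_voxels * dx * dy * dz / 1000.0

# Example: 25000 segmented voxels of size 1 x 1 x 2 mm correspond to 50 ml.
print(segmented_volume_ml(25000, (1.0, 1.0, 2.0)))
```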
diff --git a/mevislab.github.io/content/tutorials/summary/summary1.md b/mevislab.github.io/content/tutorials/summary/summary1.md index 3e6167db0..95f4a3bc3 100644 --- a/mevislab.github.io/content/tutorials/summary/summary1.md +++ b/mevislab.github.io/content/tutorials/summary/summary1.md @@ -1,5 +1,5 @@ --- -title: "Step 1: Prototyping - Develop your Network" +title: "Step 1: Prototyping - Develop Your Network" date: "2023-01-15" status: "open" draft: false @@ -8,48 +8,50 @@ tags: ["Advanced", "Tutorial", "Prototyping"] menu: main: identifier: "summaryexample1" - title: "Develop a prototype of your application in MeVisLab SDK." + title: "Develop a Prototype of Your Application in MeVisLab SDK" weight: 805 parent: "summary" --- -# Step 1: Prototyping - Develop your Network + +# Step 1: Prototyping - Develop Your Network {{< youtube "-hbddg0bXcA" >}} ## Introduction -In this example, we will develop a network which fulfills the requirements mentioned on the [overview page](tutorials/summary#DevelopNetwork). The network will be developed by re-using existing modules and defining basic field values. +In this example, we will develop a network that fulfills the requirements mentioned on the [overview page](tutorials/summary#DevelopNetwork). The network will be developed by reusing existing modules and defining basic field values. + +## Steps to Do -## Steps to do -### 2D viewer -The 2D viewer shall visualize the loaded images. In addition to that, it shall be possible to click into the image to trigger a RegionGrowing algorithm to segment parts of the loaded image based on a threshold. +### 2D Viewer +The 2D viewer shall visualize the loaded images. In addition to that, it shall be possible to click into the image to trigger a region growing algorithm to segment parts of the loaded image based on a position and a threshold. The following requirements from the [overview](tutorials/summary#DevelopNetwork) will be implemented: -* **Requirement 1**: The application shall be able to load DICOM data. +* **Requirement 1**: The application shall be able to load DICOM data * **Requirement 3**: The 2D viewer shall display the loaded images -* **Requirement 4**: The 2D viewer shall provide the possibility to segment parts of the image based on a RegionGrowing algorithm - * **Requirement 4.1**: It shall be possible to click into the image to set a marker position to start the RegionGrowing - * **Requirement 4.2**: It shall be possible to define a threshold for the RegionGrowing algorithm -* **Requirement 5**: The 2D viewer shall display the segmentation results as a semi-transparent overlay +* **Requirement 4**: The 2D viewer shall provide the possibility to segment parts of the image based on a region growing algorithm + * **Requirement 4.1**: It shall be possible to click into the image to set a marker position to start the region growing algorithm + * **Requirement 4.2**: It shall be possible to define a threshold for the region growing algorithm +* **Requirement 5**: The 2D viewer shall display the segmentation results as a semitransparent overlay * **Requirement 5.1**: It shall be possible to define the color of the overlay Add a `LocalImage` and a `View2D` module to your workspace. You are now able to load an image and view the slices. ![Loading an image](images/tutorials/summary/Example1_1.png "Loading an image") -RegionGrowing requires a `SoView2DMarkerEditor`, a `SoView2DOverlay` and a `RegionGrowing` module. Add them to your network and connect them as seen below. 
Configure the `RegionGrowing` module to use a *3D-6-Neighborhood (x,y,z)* relation and an automatic threshold value of *1.500*. Also select *Auto-Update*. +Region growing requires a `SoView2DMarkerEditor`, a `SoView2DOverlay`, and a `RegionGrowing` module. Add them to your network and connect them as seen below. Configure the `RegionGrowing` module to use a *3D-6-Neighborhood (x,y,z)* relation and an automatic threshold value of *1.500*. Also select *Auto-Update*. Set `SoView2DMarkerEditor` to allow only one marker by defining *Max Size = 1* and *Overflow Mode = Remove All*. For our application we only want one marker to be set for defining the `RegionGrowing`. -If you now click into your loaded image via left mouse button {{< mousebutton "left" >}}, the `RegionGrowing` module segments all neighborhood pixels with a mean intensity value plus/minus defined percentage value from your click position. +If you now click into your loaded image via left mouse button {{< mousebutton "left" >}}, the `RegionGrowing` module segments all neighborhood voxels with a mean intensity value plus/minus the defined percentage value from your click position. The overlay is shown in white. ![RegionGrowing via marker editor](images/tutorials/summary/Example1_2.png "RegionGrowing via marker editor") -Open the `SoView2DOverlay` module, change Blend Mode to *Blend* and select any color and *Alpha Factor* for your overlay. The applied changes are immediately visible. +Open the `SoView2DOverlay` module, change *Blend Mode* to *Blend*, and select any color and *Alpha Factor* for your overlay. The applied changes are immediately visible. ![Overlay color and transparency](images/tutorials/summary/Example1_3.png "Overlay color and transparency") -The segmented results from the `RegionGrowing` module might contain gaps because of differences in the intensity value of neighboring pixels. You can close these gaps by adding a `CloseGap` module. Connect it to the `RegionGrowing` and the `SoView2DOverlay` module and configure Filter Mode as *Binary Dilatation*, Border Handling as *Pad Dst Fill* and set KernelZ to *3*. +The segmented results from the `RegionGrowing` module might contain gaps because of differences in the intensity value of neighboring voxels. You can close these gaps by adding a `CloseGap` module. Connect it to the `RegionGrowing` and the `SoView2DOverlay` module and configure *Filter Mode* as *Binary Dilatation*, *Border Handling* as *Pad Dst Fill*, and set *KernelZ* to *3*. Lastly, we want to calculate the volume of the segmented parts. Add a `CalculateVolume` module to the `CloseGap` module. The 2D viewer now provides the basic functionalities. @@ -61,20 +63,20 @@ You can group the modules in your network for an improved overview by selecting The 3D viewer shall visualize your loaded image in 3D and additionally provide the possibility to render your segmentation results. You will be able to decide for different views, displaying the image and the segmentation, only the image or only the segmentation. The volume (in ml) of your segmentation results shall be calculated. The following requirements from [overview](tutorials/summary#DevelopNetwork) will be implemented: -* **Requirement 2**: The application shall provide a 2D and a 3D viewer. -* **Requirement 6**: The 3D viewer shall visualize the loaded data in a 3-dimensional volume rendering. -* **Requirement 7**: The 3D viewer shall additionally show the segmentation result as a 3-dimensional mesh. 
-* **Requirement 8**: The total volume of the segmented area shall be calculated and shown (in ml). -* **Requirement 9**: It shall be possible to toggle the visible 3D objects. +* **Requirement 2**: The application shall provide a 2D and a 3D viewer +* **Requirement 6**: The 3D viewer shall visualize the loaded data in a three-dimensional volume rendering +* **Requirement 7**: The 3D viewer shall additionally show the segmentation result as a three-dimensional mesh +* **Requirement 8**: The total volume of the segmented area shall be calculated and shown (in ml) +* **Requirement 9**: It shall be possible to toggle the visible 3D objects * **Requirement 9.1**: Original data * **Requirement 9.2**: Segmentation results * **Requirement 9.3**: All -Add a `SoExaminerViewer`, a `SoWEMRenderer` and an `IsoSurface` module to your existing network and connect them to the `LocalImage` module. Configure the `IsoSurface` to use an IsoValue of *200*, a Resolution of *1* and check *Auto-Update* and *Auto-Apply*. +Add a `SoExaminerViewer`, a `SoWEMRenderer`, and an `IsoSurface` module to your existing network and connect them to the `LocalImage` module. Configure the `IsoSurface` to use an *IsoValue* of *200*, a *Resolution* of *1* and check *Auto-Update* and *Auto-Apply*. ![3D Viewer](images/tutorials/summary/Example1_5.png "3D Viewer") -The result should be a 3-dimensional rendering of your image. +The result should be a three-dimensional rendering of your image. ![SoExaminerViewer](images/tutorials/summary/Example1_6.png "SoExaminerViewer") @@ -82,7 +84,7 @@ The result should be a 3-dimensional rendering of your image. If the rendering is not immediately applied, click *Apply* in your `IsoSurface` module. {{}} -Define the field instanceName of your `IsoSurface` module as *IsoSurfaceImage* and add another `IsoSurface` module to your network. Set the instanceName to *IsoSurfaceSegmentation* and connect the module to the output of the `CloseGap` module from the image segmentation. Set IsoValue to *420*, Resolution to *1* and check *Auto-Update* and *Auto-Apply*. +Define the field instanceName of your `IsoSurface` module as *IsoSurfaceImage* and add another `IsoSurface` module to your network. Set the instanceName to *IsoSurfaceSegmentation* and connect the module to the output of the `CloseGap` module from the image segmentation. Set *IsoValue* to *420*, *Resolution* to *1*, and check *Auto-Update* and *Auto-Apply*. Set instanceName of the `SoWEMRenderer` module to *SoWEMRendererImage* and add another `SoWEMRenderer` module. Set this instanceName to *SoWEMRendererSegmentation* and connect it to the *IsoSurfaceSegmentation* module. Selecting the output of the new `SoWEMRenderer` shows the segmented parts as a 3D object in the output inspector. @@ -109,18 +111,18 @@ Add a `SoGroup` module and connect both `SoWEMRenderer` modules as input. The ou ![SoGroup](images/tutorials/summary/Example1_10.png "SoGroup") -You can now also toggle input 2 of the switch showing both 3D objects. The only problem is: You cannot see the brain because it is located inside the head. Open the `SoWEMRendererImage` module panel and set faceAlphaValue to *0.5*. The viewer now shows the head in a semi transparent manner so that you can see the brain. Certain levels of opacity are difficult to render. Add a `SoDepthPeelRenderer` module and connect it to the semi transparent `SoWEMRendererImage` module. Set Layers of the renderer to *1*. +You can now also toggle input 2 of the switch showing both 3D objects. 
The only problem is: You cannot see the brain because it is located inside the head. Open the `SoWEMRendererImage` module panel and set *faceAlphaValue* to *0.5*. The viewer now shows the head in a semitransparent manner, so that you can see the brain. Certain levels of opacity are difficult to render. Add a `SoDepthPeelRenderer` module and connect it to the semitransparent `SoWEMRendererImage` module. Set *Layers* of the renderer to *1*. ![SoDepthPeelRenderer](images/tutorials/summary/Example1_Both.png "SoDepthPeelRenderer") -You have a 2D and a 3D viewer now. Let's define the colors of the overlay to be re-used for the 3D segmentation. +You have a 2D and a 3D viewer now. Let's define the colors of the overlay to be reused for the 3D segmentation. -### Parameter connections for visualization +### Parameter Connections for Visualization Open the panels of the `SoView2DOverlay` and the `SoWEMRendererSegmentation` module. Draw a parameter connection from *SoView2DOverlay.baseColor* to *SoWEMRendererSegmentation.faceDiffuseColor*. ![Synchronized segmentation colors](images/tutorials/summary/Example1_11.png "Synchronized segmentation colors") -Now the 3D visualization uses the same color as the 2D overlay. +Now, the 3D visualization uses the same color as the 2D overlay. ## Summary * You built a network providing the basic functionalities of your application. diff --git a/mevislab.github.io/content/tutorials/summary/summary2.md b/mevislab.github.io/content/tutorials/summary/summary2.md index 319bdef8a..4cf58b80d 100644 --- a/mevislab.github.io/content/tutorials/summary/summary2.md +++ b/mevislab.github.io/content/tutorials/summary/summary2.md @@ -1,5 +1,5 @@ --- -title: "Step 2: Prototyping - Create a macro module" +title: "Step 2: Prototyping - Create a Macro Module" date: "2023-01-16" status: "open" draft: false @@ -8,31 +8,32 @@ tags: ["Advanced", "Tutorial", "Prototyping", "Macro modules"] menu: main: identifier: "summaryexample2" - title: "Create a macro module from your network." + title: "Create a Macro Module From Your Network" weight: 810 parent: "summary" --- -# Step 2: Prototyping - Create a macro module + +# Step 2: Prototyping - Create a Macro Module {{< youtube "gNlOTiEOJgU" >}} ## Introduction In this example, we encapsulate the previously developed prototype network into a macro module for future application development and automated testing. -## Steps to do -Make sure to have your *.mlab file from the previous [tutorial](tutorials/summary/summary1/) available. +## Steps to Do +Make sure to have your *.mlab* file from the previous [tutorial](tutorials/summary/summary1/) available. -### Package creation +### Package Creation Packages are described in detail in [Example 2.1: Package creation](tutorials/basicmechanisms/macromodules/package/). If you already have your own package, you can skip this part and continue creating a macro module. -Open Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *New Package*. Run the Wizard and enter details of your new package and click *Create*. +Open the Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *New Package*. Run the Wizard and enter details of your new package and click *Create*. ![Package wizard](images/tutorials/summary/Example2_1.png "Package wizard") MeVisLab reloads and you can start creating your macro module. -### Create a macro module -Open Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *macro module*. 
Run the Wizard and enter details of your new macro module. +### Create a Macro Module +Open the Project Wizard via {{< menuitem "File" "Run Project Wizard..." >}} and select *macro module*. Run the Wizard and enter details of your new macro module. ![Macro module wizard](images/tutorials/summary/Example2_2.png "Macro module wizard") @@ -40,13 +41,13 @@ Select the created package and click *Next*. ![Macro module wizard](images/tutorials/summary/Example2_3.png "Macro module wizard") -Select your \*.mlab file from [Step 1](tutorials/summary/summary1/) and check *Add Python file*. Click *Next*. +Select your *.mlab* file from [Step 1](tutorials/summary/summary1/) and check *Add Python file*. Click *Next*. ![Macro module wizard](images/tutorials/summary/Example2_4.png "Macro module wizard") You do not have to define fields of your macro module now, we will do that later. Click *Create*. The Windows Explorer opens showing the directory of your macro module. It should be the same directory you selected for your Package. -### Directory Structure of a macro module +### Directory Structure of a Macro Module The directory structure for a macro module is as follows: * From Package Wizard: * Package target directory is the root directory of the module @@ -61,15 +62,15 @@ The directory structure for a macro module is as follows: ![Directory Structure](images/tutorials/summary/Example2_6.png "Directory Structure") -#### Definition (\*.def) file -The initial \*.def file contains information you entered into the Wizard for the macro module. +#### Definition (*.def*) File +The initial *.def* file contains information you entered into the Wizard for the macro module. {{< highlight filename=".def" >}} ```Stan Macro module TutorialSummary { genre = "VisualizationMain" author = "MeVis Medical Solutions AG" - comment = "Macro module for MeVisLab Tutorials" + comment = "Macro module for MeVisLab tutorials" keywords = "2D 3D RegionGrowing" seeAlso = "" @@ -78,13 +79,13 @@ Macro module TutorialSummary { ``` {{}} -An *externalDefinition* to a script file is also added (see below for the \*.script file). +An *externalDefinition* to a script file is also added (see below for the *.script* file). -#### MeVisLab Network (\*.mlab) file -The \*.mlab file is a copy of the \*.mlab file you developed in [Step 1](tutorials/summary/summary1/) and re-used in the wizard. In the next chapters, this file will be used as *internal network*. +#### MeVisLab Network (*.mlab*) File +The *.mlab* file is a copy of the *.mlab* file you developed in [Step 1](tutorials/summary/summary1/) and reused in the wizard. In the next chapters, this file will be used as *internal network*. -#### Python (\*.py) file -The initial \*.py file only contains the import of MeVisLab specific objects and functions. In the future steps, we will add functionalities to our application in Python. +#### Python (*.py*) File +The initial *.py* file only contains the import of MeVisLab-specific objects and functions. In the future steps, we will add functionalities to our application in Python. {{< highlight filename=".py" >}} ```Python @@ -92,8 +93,8 @@ from mevis import * ``` {{}} -#### Script (\*.script) file -The script (\*.script) file defines fields accessible from outside the macro module, inputs and outputs and allows you to develop a User Interface for your prototype and your final application. 
+#### Script (*.script*) File +The script (*.script*) file defines fields accessible from outside the macro module, inputs and outputs, and allows you to develop a user interface for your prototype and your final application. {{< highlight filename=".script" >}} ```Stan @@ -110,14 +111,14 @@ Commands { ``` {{}} -The source also defines your Python file to be used when calling functions and events from the User Interface. +The source also defines your Python file to be used when calling functions and events from the user interface. -### Using your macro module +### Using Your Macro Module As you created a global macro module, you can search for it in the MeVisLab *Module Search*. ![Module Search](images/tutorials/summary/Example2_7.png "Module Search") -We did not define inputs or outputs. You cannot connect your module to others. In addition to that, we did not develop a User Interface. Double-clicking your module {{< mousebutton "left" >}} only opens the Automatic Panel showing the *instanceName*. +We did not define inputs or outputs. You cannot connect your module to others. In addition to that, we did not develop a user interface. Double-clicking your module {{< mousebutton "left" >}} only opens the automatic panel showing the *instanceName*. ![Automatic Panel](images/tutorials/summary/Example2_8.png "Automatic Panel") @@ -126,6 +127,6 @@ Right-click on your module allows you to open the internal network as developed ## Summary * Macro modules encapsulate an entire MeVisLab network including all modules. * The internal network can be shown (and edited) via right-click {{< mousebutton "right" >}} {{< menuitem "Show Internal Network" >}} -* The Wizard already creates the necessary folder structure and generates files for User Interface and Python development. +* The Wizard already creates the necessary folder structure and generates files for user interface and Python development. {{< networkfile "examples/summary/TutorialSummary.zip" >}} diff --git a/mevislab.github.io/content/tutorials/summary/summary3.md b/mevislab.github.io/content/tutorials/summary/summary3.md index 828379dfa..6ae7ebb9b 100644 --- a/mevislab.github.io/content/tutorials/summary/summary3.md +++ b/mevislab.github.io/content/tutorials/summary/summary3.md @@ -1,5 +1,5 @@ --- -title: "Step 3: Prototyping - User Interface and Python scripting" +title: "Step 3: Prototyping - User Interface and Python Scripting" date: "2023-01-17" status: "open" draft: false @@ -8,27 +8,29 @@ tags: ["Advanced", "Tutorial", "Prototyping", "User Interface", "Python", "GUI E menu: main: identifier: "summaryexample3" - title: "Develop your User Interface and add Python functions." + title: "Develop Your User Interface and Add Python Functions" weight: 815 parent: "summary" --- -# Step 3: Prototyping - User Interface and Python scripting + +# Step 3: Prototyping - User Interface and Python Scripting {{< youtube "dOyncLUpclU" >}} ## Introduction In this step, we will develop a user interface and add Python scripting to the macro module you created in [Step 2](tutorials/summary/summary2). -## Steps to do +## Steps to Do + ### Develop the User Interface A mockup of the user interface you are going to develop is available [here](tutorials/summary#UIDesign). The interface provides the possibility to load files and shows a 2D and a 3D viewer. In addition to that, some settings and information for our final application are available. Search for your macro module and add it to your workspace. 
Right-click {{< mousebutton "right">}} and select {{< menuitem "Related Files" ".script" >}}. -The MeVisLab text editor MATE opens showing the \*.script file of your module. +The MeVisLab text editor MATE opens showing the *.script* file of your module. #### Layout -You can see that the interface is divided into 4 parts in vertical direction: +You can see that the interface is divided into four parts in vertical direction: * Source or file/directory selection * Viewing (2D and 3D) * Settings @@ -36,7 +38,7 @@ You can see that the interface is divided into 4 parts in vertical direction: Inside the vertical parts, the elements are placed next to each other horizontally. -Add a *Window* section to your \*.script file. Inside the *Window*, we need a *Vertical* for the 4 parts and a *Box* for each part. Name the Boxes Source, Viewing, Settings and Info. The layout inside each *Box* shall be *Horizontal*. +Add a *Window* section to your *.script* file. Inside the *Window*, we need a *Vertical* for the four parts and a *Box* for each part. Name the boxes "Source", "Viewing", "Settings", and "Info". The layout inside each *Box* shall be *Horizontal*. In addition to that, we define the minimal size of the Window as 400 x 300 pixels. @@ -69,19 +71,20 @@ You can preview your initial layout in MeVisLab by double-clicking your module { ![Initial Window Layout](images/tutorials/summary/Example3_1.png "Initial Window Layout") -You can see the 4 vertical aligned parts as defined in the \*.script file. Now we are going to add the content of the Boxes. +You can see the four vertical aligned parts as defined in the *.script* file. Now, we are going to add the content of the boxes. {{}} An overview over the existing layout elements in MeVisLab Definition Language (MDL) can be found {{< docuLinks "/Resources/Documentation/Publish/SDK/MDLReference/index.html#N11695" "here" >}} {{}} -#### Adding the UI elements +#### Adding the UI Elements + ##### Source -The *Source Box* shall provide the possibility to select a file for loading into the viewers. You have many options to achieve that in MeVisLab and Python. The easiest way is to re-use the existing field of the `LocalImage` module in your internal network. +The *Source Box* shall provide the possibility to select a file for loading into the viewers. You have many options to achieve that in MeVisLab and Python. The easiest way is to reuse the existing field of the `LocalImage` module in your internal network. -Add a field to the *Parameters* section of your \*.script file. Name the field *openFile* and set type to *String* and internalName to *LocalImage.name*. +Add a field to the *Parameters* section of your *.script* file. Name the field *openFile* and set type to *String* and internalName to *LocalImage.name*. -Then add another field to your *Box* for the *Source* and use the field name from *Parameters* section, in this case *openFile*. Set *browseButton = True* and *browseMode = open* and save your script. +Then, add another field to your *Box* for the *Source* and use the field name from *Parameters* section, in this case *openFile*. Set *browseButton = True* and *browseMode = open* and save your script. {{< highlight filename=".script" >}} ```Stan @@ -125,14 +128,14 @@ Window { ``` {{}} -Again, you can preview your user interface in MeVisLab directly. You can already select a file to open. The image is available at the output of the `LocalImage` module in your internal network but the Viewers are missing in our interface. 
+Again, you can preview your user interface in MeVisLab directly. You can already select a file to open. The image is available at the output of the `LocalImage` module in your internal network but the viewers are missing in our interface. ![Source Box](images/tutorials/summary/Example3_2.png "Source Box") ##### Viewing -Add the 2 viewer modules to the *Viewing* section of your \*.script file and define their field as *View2D.self* and *SoExaminerViewer.self*. Set *expandX = Yes* and *expandY = Yes* for both viewing modules. We want them to resize in case the size of the Window changes. +Add the two viewer modules to the *Viewing* section of your *.script* file and define their field as *View2D.self* and *SoExaminerViewer.self*. Set *expandX = Yes* and *expandY = Yes* for both viewing modules. We want them to resize in the case the size of the Window changes. -Set the 2D Viewer type to *SoRenderArea* and the 3D Viewer type to *SoExaminerViewer* and inspect your new user interface in MeVisLab. +Set the 2D viewer's type to *SoRenderArea* and the 3D viewer's type to *SoExaminerViewer* and inspect your new user interface in MeVisLab. {{< highlight filename=".script" >}} ```Stan @@ -158,22 +161,22 @@ Set the 2D Viewer type to *SoRenderArea* and the 3D Viewer type to *SoExaminerVi ![2D and 3D Viewer](images/tutorials/summary/Example3_3.png "2D and 3D Viewer") -The images selected in the *Source* section are shown in 2D and 3D. We simply re-used the existing fields and viewers from your internal network and are already able to interact with the images. As the `View2D` of your internal network itself provides the possibility to accept markers and starts the `RegionGrowing`, this is also already possible and the segmentations are shown in 2D and 3D. +The images selected in the *Source* section are shown in 2D and 3D. We simply reused the existing fields and viewers from your internal network and are already able to interact with the images. As the `View2D` of your internal network itself provides the possibility to accept markers and starts the `RegionGrowing`, this is also already possible and the segmentations are shown in 2D and 3D. ##### Settings -Let's define the Settings section. Once again we first define the necessary fields. For automated tests which we are going to develop later, it makes sense to make some of the fields of the internal network available from outside. +Let's define the *Settings* section. Once again, we first define the necessary fields. For automated tests that we are going to develop later, it makes sense to make some of the fields of the internal network available from outside. -The following shall be accessible as Field for our macro module: +The following shall be accessible as *Field* for our macro module: * Filename to be opened * Color of the 2D overlay and 3D segmentation * Transparency of the 3D image * Threshold to be used for RegionGrowing -* Iso value of the 3D surface to use for rendering -* Position of the Marker to use for RegionGrowing -* Selection for 3D visualization (image, segmentation or both) +* Isovalue of the 3D surface to use for rendering +* Position of the marker to use for RegionGrowing +* Selection for 3D visualization (image, segmentation, or both) * Trigger to reset the application to its initial state -We already defined the filename as a field. Next we want to change the color of the overlay. Add another field to your *Parameters* section as *selectOverlayColor*. Define *internalName = SoView2DOverlay.baseColor* and *type = Color*. 
You may also define a title for the field, for example *Color*. +We already defined the filename as a field. Next we want to change the color of the overlay. Add another field to your *Parameters* section as *selectOverlayColor*. Define *internalName = SoView2DOverlay.baseColor* and *type = Color*. You may also define a title for the field, for example, *Color*. The *baseColor* field of the `SoView2DOverlay` already has a parameter connection to the color of the `SoWEMRendererSegmentation`. This has been done in the internal network. The defined color is used for 2D and 3D automatically. @@ -205,17 +208,17 @@ Interface { ``` {{}} -The next elements follow the same rules, therefore the final script will be available at the end for completeness. +The next elements follow the same rules; therefore, the final script will be available at the end for completeness. -In order to set the transparency of the 3D image, we need another field re-using the *SoWEMRendererImage.faceAlphaValue*. Add a field *imageAlpha* to the *Parameters* section. Define *internalName = SoWEMRendererImage.faceAlphaValue*, *type = Integer*, *min = 0* and *max = 1*. +In order to set the transparency of the 3D image, we need another field reusing the *SoWEMRendererImage.faceAlphaValue*. Add a field *imageAlpha* to the *Parameters* section. Define *internalName = SoWEMRendererImage.faceAlphaValue*, *type = Integer*, *min = 0*, and *max = 1*. Add the field to the *Settings Box* and set *step = 0.1* and *slider = True*. -For the `RegionGrowing` threshold, add the field *thresholdInterval* to *Parameters* section and set *type = Integer*, *min = 1*, *max = 100* and *internalName = RegionGrowing.autoThresholdIntervalSizeInPercent*. +For the `RegionGrowing` threshold, add the field *thresholdInterval* to *Parameters* section and set *type = Integer*, *min = 1*, *max = 100*, and *internalName = RegionGrowing.autoThresholdIntervalSizeInPercent*. -Add the field to the *Settings* UI and define *step = 0.1* and *slider = True*. +Add the field to the *Settings* UI, and define *step = 0.1* and *slider = True*. -Define a field *isoValueImage* in the *Parameters* section and set *internalName = IsoSurfaceImage.isoValue*, *type = Integer*, *min = 1* and *max = 1000*. +Define a field *isoValueImage* in the *Parameters* section and set *internalName = IsoSurfaceImage.isoValue*, *type = Integer*, *min = 1*, and *max = 1000*. In the *Settings* section of the UI, set *step = 2* and *slider = True*. @@ -317,21 +320,21 @@ Your user interface of the macro module should now look similar to this: For the next elements, we require Python scripting. Nevertheless, you are already able to use your application and perform the basic functionalities without writing any line of code. -### Python scripting -Python scripting is always necessary in case you do not want to re-use an existing field for your user interface but implement functions to define what happens in case of any event. +### Python Scripting +Python scripting is always necessary in the case you do not want to reuse an existing field for your user interface but implement functions to define what happens in the case of any event. -Events can be raised by the user (i.e. by clicking a button) or by the application itself (i.e. when the window is opened). +Events can be raised by the user (e.g., by clicking a button) or by the application itself (e.g., when the window is opened). -#### 3D visualization selection -You will now add a selection possibility for the 3D viewer. 
This allows you to define the visibility of the 3D objects File, Segmented or Both. +#### 3D Visualization Selection +You will now add a selection possibility for the 3D viewer. This allows you to define the visibility of the 3D objects File, Segmented, or Both. Add another field to your *Parameters* section. Define the field as *selected3DView* and set *type = Enum* and *values =Segmented,File,Both*. Add a *ComboBox* to your *Settings* and use the field name defined above. Set *alignX = Left* and *editable = False* and open the *Window* of the macro module in MeVisLab. -The values of the field can be selected, but nothing happens in our viewers. We need to implement a *FieldListener* in Python which reacts on any value changes of the field *selected3DView*. +The values of the field can be selected, but nothing happens in our viewers. We need to implement a *FieldListener* in Python that reacts on any value changes of the field *selected3DView*. -Open your script file and go to the *Commands* section. Add a *FieldListener* and re-use the name of our internal field *selected3DView*. Add a *Command* to the *FieldListener* calling a Python function *viewSelectionChanged*. +Open your script file and go to the *Commands* section. Add a *FieldListener* and reuse the name of our internal field *selected3DView*. Add a *Command* to the *FieldListener* calling a Python function *viewSelectionChanged*. {{< highlight filename=".script" >}} ```Stan @@ -364,7 +367,7 @@ def viewSelectionChanged(field): The function sets the `SoSwitch` to the child value depending on the selected field value from the *ComboBox* and you should now be able to switch the 3D rendering by selecting an entry in the user interface. #### Setting the Marker -The Marker for the `RegionGrowing` is defined by the click position as Vector3. Add another field *markerPosition* to the *Parameters* section and define *type = Vector3*. +The marker for the `RegionGrowing` is defined by the clicked position as Vector3. Add another field *markerPosition* to the *Parameters* section and define *type = Vector3*. Then, add a trigger field *applyMarker* to your *Parameters* section. Set *type = Trigger* and *title = Add*. @@ -420,7 +423,7 @@ def applyPosition(): ``` {{}} -Whenever the field *markerPosition* changes its value, the value is automatically applied to the *SoView2DMarkerEditor.newPosXYZ*. Clicking *SoView2DMarkerEditor.add* adds the new Vector to the `SoView2DMarkerEditor` and the region growing starts. +Whenever the field *markerPosition* changes its value, the value is automatically applied to the *SoView2DMarkerEditor.newPosXYZ*. Clicking *SoView2DMarkerEditor.add* adds the new position to the `SoView2DMarkerEditor` and the region growing starts. {{}} The *Field* *SoView2DMarkerEditor.useInsertTemplate* needs to be set to *True* in order to allow adding markers via Python. @@ -463,7 +466,7 @@ Commands { {{}} What shall happen when we reset the application? -* The loaded image shall be unloaded, the Viewer shall be empty +* The loaded image shall be unloaded, the viewer shall be empty * The marker shall be reset if available Add the Python function *resetApplication* and implement the following: @@ -478,7 +481,7 @@ def resetApplication(): ``` {{}} -You can also reset the application to initial state by adding a *initCommand* to your *Window*. Call the resetApplication function here, too and whenever the window is opened, the application is reset to its initial state. 
+You can also reset the application to initial state by adding a *initCommand* to your *Window*. Call the *resetApplication* function here, too, and whenever the window is opened, the application is reset to its initial state. {{< highlight filename=".script" >}} ```Stan @@ -493,7 +496,7 @@ Window { ``` {{}} -This can also be used for setting/resetting to default values of the application. For example update your Python function *resetApplication* the following way: +This can also be used for setting/resetting to default values of the application. For example, update your Python function *resetApplication* the following way: {{< highlight filename=".py" >}} ```Python @@ -513,26 +516,26 @@ def resetApplication(): ### Information In the end, we want to provide some information about the volume of the segmented area (in ml). -Add one more field to your *Parameters* section and re-use the internal network fields *CalculateVolume.totalVolume*. Set field to *editable = False* +Add one more field to your *Parameters* section and reuse the internal network fields *CalculateVolume.totalVolume*. Set field to *editable = False* -Add the field to the Info section of your window. +Add the field to the *Info* section of your window. -Opening the window of your macro module in MeVisLab now provides all functionalities we wanted to achieve. You can also play around in the window and define some additional Boxes or MDL controls but the basic application prototype is now done. +Opening the window of your macro module in MeVisLab now provides all functionalities we wanted to achieve. You can also play around in the window and define some additional boxes or other MDL controls but the basic application prototype is now finished. ![Final Macro module](images/tutorials/summary/Example3_5.png "Final Macro module") ### MeVisLab GUI Editor -MATE provides a powerful GUI Editor showing a preview of your current user interface and allowing to re-order elements in the UI via drag and drop. In MATE open {{< menuitem "Extras" "Enable GUI Editor" >}}. +MATE provides a powerful GUI editor showing a preview of your current user interface and allowing to reorder elements in the UI via drag and drop. In MATE, open {{< menuitem "Extras" "Enable GUI Editor" >}}. ![MeVisLab GUI Editor](images/tutorials/summary/Example3_4b.png "MeVisLab GUI Editor") -Changing the layout via drag and drop automatically adapts your *\*.script* file. Save and Reload the script and your changes are applied. +Changing the layout via drag and drop automatically adapts your *.script* file. Save and reload the script and your changes are applied. {{}} -If the GUI Editor is not shown in MATE, make sure to check *[View → Preview]*. +If the GUI editor is not shown in MATE, make sure to check *[View → Preview]*. {{}} -## Final Script and Python files +## Final Script and Python Files {{< highlight filename=".script" >}} ```Stan Interface { @@ -697,8 +700,8 @@ def applyPosition(): ## Summary * You now added a user interface to your macro module. -* The window opens automatically on double-click {{< mousebutton "right" >}} -* Fields defined in the *Parameters* section can be modified in the MeVisLab Module Inspector +* The window opens automatically on double-click {{< mousebutton "right" >}}. +* Fields defined in the *Parameters* section can be modified in the MeVisLab Module Inspector. * Python allows to implement functions executed on events raised by the user or by the application itself. 
{{< networkfile "examples/summary/TutorialSummary_UI.zip" >}} diff --git a/mevislab.github.io/content/tutorials/summary/summary4.md b/mevislab.github.io/content/tutorials/summary/summary4.md index c8cb172ba..7a4bab21a 100644 --- a/mevislab.github.io/content/tutorials/summary/summary4.md +++ b/mevislab.github.io/content/tutorials/summary/summary4.md @@ -8,28 +8,30 @@ tags: ["Advanced", "Tutorial", "Prototyping", "Automated Tests", "Python"] menu: main: identifier: "summaryexample4" - title: "Test your macro module in MeVisLab. Your requirements are translated into test cases written in Python." + title: "Automated Tests" weight: 820 parent: "summary" --- + # Step 4: Review - Automated Tests {{< youtube "_wheDC8TBJQ" >}} ## Introduction -In the previous chapters you developed a macro module with User Interface and Python scripting. In this step you will see how to implement an automated test to verify and validate the Requirements defined in [Overview](tutorials/summary). +In the previous chapters you developed a macro module with a user interface and Python scripting. In this step you will see how to implement an automated test to verify and validate the requirements defined in [Overview](tutorials/summary). + +## Steps to Do -## Steps to do -### Create a test network using your macro module -Create a new and empty network and save it as \*.mlab file. Remember the location. +### Create a Test Network Using Your Macro Module +Create a new and empty network and save it as *.mlab* file. Remember the location. -Use Module Search and add your macro module developed in previous steps to your Workspace. +Use *Module Search* and add your macro module developed in previous steps to your workspace. ![Macro module](images/tutorials/summary/Example4_1.png "Macro module") -You can see that the module does not have any inputs or outputs. You cannot connect it to other modules. For testing purposes it makes sense to provide the viewers and images as outputs so that you can use them for generating screenshots. +You can see that the module does not have any inputs or outputs. You cannot connect it to other modules. For testing purposes it makes sense to provide the viewers and images as outputs, so that you can use them for generating screenshots. -Open the \*.script file in MATE as already explained in [Step 3](tutorials/summary/summary3). In the *Outputs* section, add the following: +Open the *.script* file in MATE as already explained in [Step 3](tutorials/summary/summary3). In the *Outputs* section, add the following: {{< highlight filename=".script" >}} ```Stan @@ -51,8 +53,8 @@ You can now add a viewer or any other module to your macro module and use them f ![Test Network](images/tutorials/summary/Example4_3.png "Test Network") -### Create test case -Open MeVisLab TestCaseManager via {{< menuitem "File" "Run TestCaseManager..." >}}. On tab *Test Creation* define a name of your test case, for example *TutorialSummaryTest*. Select Type as *Macros*, define the package and use the same as for your macro module, select *Import Network* and Select your saved \*.mlab file from the step above. Click *Create*. +### Create Test Case +Open MeVisLab TestCaseManager via {{< menuitem "File" "Run TestCaseManager..." >}}. On the tab *Test Creation*, define a name of your test case, for example, *TutorialSummaryTest*. Select "Type" as *Macros*, define the package and use the same as for your macro module, select *Import Network*, and select your saved *.mlab* file from the step above. Click *Create*. 
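Once the wizard has created the test case, the generated Python file is where the test functions of the following sections live. As a first orientation, here is a minimal, hedged sketch of such a function; the macro field names (*resetApplication*, *totalVolume*) are assumptions based on the fields defined in the previous steps, and the assert helper is the same one used by the snippets below.

```Python
# Sketch only: a first smoke test against the imported test network.
# Field names of the TutorialSummary macro are assumptions.

def TEST_InitialStateIsEmpty():
    # Reset the application and let MeVisLab process pending events.
    ctx.field("TutorialSummary.resetApplication").touch()
    MLAB.processEvents()
    # Without a loaded image, the reported segmentation volume is expected to be 0.
    ASSERT_FLOAT_EQ(0.0, ctx.field("TutorialSummary.totalVolume").value)
```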
![Test Creation](images/tutorials/summary/Example4_4.png "Test Creation") @@ -60,7 +62,8 @@ MATE automatically opens the Python file of your test case and it appears in MeV ![Test Creation](images/tutorials/summary/Example4_5.png "Test Creation") -### Write test functions in Python +### Write Test Functions in Python + #### Preparations Before writing a test case, we need some helper functions in Python, which we will use in our test cases. The first thing we need is a function to load images. @@ -85,7 +88,7 @@ We define the path to a file to be loaded. The function *loadImage* sets the *op The arrays for the marker location and color will be used later. -Next we need a function to check if the loaded image available at the first output of our macro module (*out2D*) is valid. +Next, we need a function to check if the loaded image available at the first output of our macro module (*out2D*) is valid. {{< highlight filename=".py" >}} ```Python @@ -119,7 +122,7 @@ def setMarkerPosition(vector): ``` {{}} -The *setMarkerPosition* function gets a 3-dimensional vector and sets the *markerPosition* field of our module. Then the *applyMarker* trigger is touched. As the region growing algorithm might need some time to segment, we need to wait until the *outSegmentationMask* output field is valid, meaning that there is a valid segmentation mask at the segmentation mask output of our macro module. +The *setMarkerPosition* function gets a three-dimensional vector and sets the *markerPosition* field of our module. Then, the *applyMarker* trigger is touched. As the region growing algorithm might need some time to segment, we need to wait until the *outSegmentationMask* output field is valid, meaning that there is a valid segmentation mask at the segmentation mask output of our macro module. Finally, we need to reset the application to its initial state, so that each test case has the initial start conditions of the application. A test case should never depend on another test case so that they all can be executed exclusively. @@ -138,7 +141,7 @@ def reset(): For a reset, we just touch the *resetApplication* field of our macro module `TutorialSummary`. -#### Requirement 1: The application shall be able to load DICOM data +#### Requirement 1: The Application Shall be Able to Load DICOM Data The first requirement we want to test is the possibility to load DICOM data. After setting the file to be loaded, the output provides a valid image. Resetting the application shall unload the image. {{< highlight filename=".py" >}} @@ -156,9 +159,9 @@ def TEST_LoadDICOMData(): ``` {{}} -#### Requirement 4: The 2D viewer shall provide the possibility to segment parts of the image based on a RegionGrowing algorithm -##### Requirement 4.1: It shall be possible to click into the image for defining a marker position for starting the RegionGrowing -This test case shall make sure the `RegionGrowing` module calculates the total volume and number of voxels to be larger than 0 in case a marker has been set. Without loading an image or after resetting the application, the values shall be 0. +#### Requirement 4: The 2D Viewer Shall Provide the Possibility to Segment Parts of the Image Based on a RegionGrowing Algorithm +##### Requirement 4.1: It Shall be Possible to Click Into the Image for Defining a Marker Position for Starting the RegionGrowing +This test case shall make sure the `RegionGrowing` module calculates the total volume and number of voxels to be larger than 0 in the case a marker has been set. 
Without loading an image or after resetting the application, the values shall be 0. {{< highlight filename=".py" >}} ```Python @@ -189,7 +192,7 @@ def TEST_RegionGrowing(): ``` {{}} -##### Requirement 4.2: It shall be possible to define a threshold for the RegionGrowing algorithm +##### Requirement 4.2: It Shall be Possible to Define a Threshold for the RegionGrowing Algorithm For the threshold of the region growing it makes sense to extend the previous test case instead of writing a new one. We already have a segmentation based on the default threshold value and can just change the threshold and compare the resulting volumes. Increasing the threshold shall result in larger volumes, decreasing shall result in smaller values. @@ -237,9 +240,9 @@ def TEST_RegionGrowing(): ``` {{}} -#### Requirement 5: The 2D viewer shall display the segmentation results as a semi-transparent overlay -##### Requirement 5.1: It shall be possible to define the color of the overlay -The requirement 5 can not be tested automatically. Transparencies should be tested by a human being. +#### Requirement 5: The 2D Viewer Shall Display the Segmentation Results as a Semitransparent Overlay +##### Requirement 5.1: It Shall be Possible to Define the Color of the Overlay +The requirement 5 cannot be tested automatically. Transparencies should be tested by a human being. Nevertheless, we can write an automated test checking the possibility to define the color of the overlay and the 3D segmentation. @@ -270,13 +273,13 @@ def TEST_OverlayColor(): ``` {{}} -Again, we reset the application to an initial state, load the image and set a marker. We remember the initial color and set a new color for our macro module. Then we check if the new color differs from the old color and if the colors used by the internal modules `SoWEMRendererSegmentation` and `SoView2DOverlay` changed to our new color. +Again, we reset the application to an initial state, load the image, and set a marker. We remember the initial color and set a new color for our macro module. Then, we check if the new color differs from the old color and if the colors used by the internal modules `SoWEMRendererSegmentation` and `SoView2DOverlay` changed to our new color. -Finally an image comparison is done for the 3D rendering using the old and the new color. The images shall differ. +Finally, an image comparison is done for the 3D rendering using the old and the new color. The images shall differ. -The call *MLAB.processInventorQueue()* is sometimes necessary if an inventor scene changed via Python scripting, because the viewers might not update immediately after changing the field. MeVisLab is now forced to process the queue in inventor and to update the renderings. +The call *MLAB.processInventorQueue()* is sometimes necessary if an Open Inventor scene changed via Python scripting, because the viewers might not update immediately after changing the field. MeVisLab is now forced to process the queue in Open Inventor and to update the renderings. -#### Requirement 8: The total volume of the segmented area shall be calculated and shown (in ml) +#### Requirement 8: The Total Volume of the Segmented Area Shall be Calculated and Shown (in ml) For the correctness of the volume calculation, we added the `CalculateVolume` module to our test network. The volume given by our macro is compared to the volume of the segmentation from output *outSegmentationMask* calculated by the `CalculateVolume` module. 
{{< highlight filename=".py" >}} @@ -309,11 +312,14 @@ def TEST_VolumeCalculation(): ``` {{}} -#### Requirement 9: It shall be possible to toggle the visible 3D objects -##### Requirement 9.1: Original data -##### Requirement 9.2: Segmentation results +#### Requirement 9: It Shall be Possible to Toggle the Visible 3D Objects + +##### Requirement 9.1: Original Data + +##### Requirement 9.2: Segmentation Results + ##### Requirement 9.3: All -In the end, we want to develop a testcase for the 3D toggling of the view. We can not exactly test if the rendering is correct, therefore we will check if the 3D rendering image changes when toggling the 3D view. We will use the modules `OffscreenRenderer`, `ImageCompare` and `SoCameraInteraction` which we added to our test network. +In the end, we want to develop a testcase for the 3D toggling of the view. We cannot exactly test if the rendering is correct; therefore, we will check if the 3D rendering image changes when toggling the 3D view. We will use the modules `OffscreenRenderer`, `ImageCompare`, and `SoCameraInteraction`, which we added to our test network. Initially, without any marker and segmentation, the views *Both* and *Head* show the same result. After adding a marker, we are going to test if different views result in different images. @@ -365,24 +371,24 @@ def TEST_Toggle3DVolumes(): ``` {{}} -### Sorting order in TestCaseManager +### Sorting Order in TestCaseManager The MeVisLab TestCaseManager sorts your test cases alphabetically. Your test cases should look like this now: ![TestCaseManager Sorting](images/tutorials/summary/Example4_6.png "TestCaseManager Sorting") -Generally, test cases should not depend on each other and the order of their execution does not matter. Sometimes it makes sense though to execute tests in a certain order, for example for performance reasons. In this case you can add numeric prefixes to your test cases. This might look like this then: +Generally, test cases should not depend on each other and the order of their execution should not matter. Sometimes it makes sense though to execute tests in a certain order, for example, for performance reasons. In this case, you can add numeric prefixes to your test cases. This might look like this then: ![TestCaseManager Custom Sorting](images/tutorials/summary/Example4_7.png "TestCaseManager Custom Sorting") -### Not testable requirements -As already mentioned, some requirements can not be tested in an automated environment. Human eyesight cannot be replaced completely. +### Not Testable Requirements +As already mentioned, some requirements cannot be tested in an automated environment. Human inspection cannot be replaced completely. In our application, the following tests have not been tested automatically: -* Requirement 2: The application shall provide a 2D and a 3D viewer. 
+* Requirement 2: The application shall provide a 2D and a 3D viewer * Requirement 3: The 2D viewer shall display the loaded images -* Requirement 5: The 2D viewer shall display the segmentation results as a semi-transparent overlay -* Requirement 6: The 3D viewer shall visualize the loaded data in a 3-dimensional volume rendering -* Requirement 7: The 3D viewer shall additionally show the segmentation result as a 3-dimensional mesh +* Requirement 5: The 2D viewer shall display the segmentation results as a semitransparent overlay +* Requirement 6: The 3D viewer shall visualize the loaded data in a three-dimensional volume rendering +* Requirement 7: The 3D viewer shall additionally show the segmentation result as a three-dimensional mesh ### Test Reports The results of your tests are shown in a Report Viewer. You can also export the results to JUnit for usage in build environments like [Jenkins](https://www.jenkins.io/). @@ -390,7 +396,7 @@ The results of your tests are shown in a Report Viewer. You can also export the ![ReportViewer](images/tutorials/summary/Example4_8.png "ReportViewer") ### Screenshots -You can also add screenshots of your inventor scene to the report. Add the following to your Python script wherever you want to capture the content of the `SoCameraInteraction` module and a Snapshot of your 3D scene is attached to your test report: +You can also add screenshots of your Open Inventor scene to the report. Add the following to your Python script wherever you want to capture the content of the `SoCameraInteraction` module and a snapshot of your 3D scene is attached to your test report: {{< highlight filename=".py" >}} ```Python @@ -403,9 +409,9 @@ Logging.showFile("Link to screenshot file", result) {{}} ## Summary -* Define accessible fields for macro modules so that they can be set in Python tests -* Add outputs to your macro modules for automated testing and connecting testing modules -* Testcase numbering allows you to sort them and define execution order +* Define accessible fields for macro modules, so that they can be set in Python tests. +* Add outputs to your macro modules for automated testing and connecting testing modules. +* Testcase numbering allows you to sort them and define execution order. {{}} Additional information about MeVisLab TestCenter can be found in {{< docuLinks "/Resources/Documentation/Publish/SDK/TestCenterManual/index.html" "TestCenter Manual" >}} diff --git a/mevislab.github.io/content/tutorials/summary/summary5.md b/mevislab.github.io/content/tutorials/summary/summary5.md index 13c9d4b22..428f648b3 100644 --- a/mevislab.github.io/content/tutorials/summary/summary5.md +++ b/mevislab.github.io/content/tutorials/summary/summary5.md @@ -8,25 +8,28 @@ tags: ["Advanced", "Tutorial", "Prototyping", "Application Builder", "Installer" menu: main: identifier: "summaryexample5" - title: "Create a standalone application by using the MeVisLab ApplicationBuilder and install the application on another system." + title: "Installer creation" weight: 825 parent: "summary" --- + # Step 5: Review - Installer creation {{< youtube "64l3igSmeWY" >}} ## Introduction -Your macro module has been tested manually and/or automatically? Then you should create your first installable executable and deliver it to your customer(s) for final evaluation. +Your macro module has been tested manually and/or automatically? Then, you should create your first installable executable and deliver it to your customer(s) for final evaluation. 
{{}} -This step requires a valid **MeVisLab ApplicationBuilder** license. It extends the **MeVisLab SDK** so that you can generate an installer of your developed macro module. -Free evaluation licenses of the **MeVisLab ApplicationBuilder**, time-limited to 3 months, can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). +This step requires a valid **MeVisLab ApplicationBuilder** license. It extends the **MeVisLab SDK**, so that you can generate an installer of your developed macro module. + +Free evaluation licenses of the **MeVisLab ApplicationBuilder**, time-limited to three months, can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). {{}} -## Steps to do -### Install tools necessary for installer generation -The MeVisLab Project Wizard for Standalone Applications {{}} provides a check for all necessary tools you need to install before generating an installer. +## Steps to Do + +### Install Tools Necessary for Installer Generation +The MeVisLab Project Wizard for standalone applications {{}} provides a check for all necessary tools you need to install before generating an installer. ![MeVisLab Project Wizard](images/tutorials/summary/Example5_1.png "MeVisLab Project Wizard") @@ -36,24 +39,24 @@ Click on *Check if required tools are installed*. The following dialog opens: You can see that [NSIS](https://nsis.sourceforge.io/Download) and either [Dependency Walker](http://www.dependencywalker.com/) or [Dependencies](https://github.com/lucasg/Dependencies) are necessary to create an installable executable. MeVisLab provides information about the necessary version(s). -Download and install/extract *NSIS* and *Dependency Walker* or *Dependencies*. Add both executables to your *PATH* environment variable, for example *C:\Program Files\depends* and *C:\Program Files (x86)\NSIS*. +Download and install/extract *NSIS* and *Dependency Walker* or *Dependencies*. Add both executables to your *PATH* environment variable, for example, *C:\Program Files\depends* and *C:\Program Files (x86)\NSIS*. Restart MeVisLab and open Project Wizard again. All required tools should now be available. -### Use MeVisLab Project Wizard to generate the installer +### Use MeVisLab Project Wizard to Generate the Installer Select your macro module and the package and click *Next*. ![Welcome](images/tutorials/summary/Example5_3.png "Welcome") -The general settings dialog allows you to define a name for your application. You can also define a version, in our case we decide not to be finished and have a version *0.5*. You can include debug files and decide to build a desktop or web application. We want to build an *Application Installer* for a desktop system. You can decide to precompile your Python files and you have to select your MeVisLab **MeVisLab ApplicationBuilder** license. +The general settings dialog allows you to define a name for your application. You can also define a version; in our case, the application is not finished yet, so we use version *0.5*. You can include debug files and decide to build a desktop or web application. We want to build an *Application Installer* for a desktop system. You can decide to precompile your Python files, and you have to select your **MeVisLab ApplicationBuilder** license. ![General Settings](images/tutorials/summary/Example5_4.png "General Settings") -Define your license text which is shown during installation of your executable. You can decide to use our pre-defined text, select a custom file or do not include any license text.
+Define your license text that is shown during installation of your executable. You can decide to use our predefined text, select a custom file, or not include any license text. ![License Text](images/tutorials/summary/Example5_5.png "License Text") -The next dialog can be skipped for now, you can include additional files into your installer which are not automatically added by MeVisLab from the dependency analysis. +The next dialog can be skipped for now; here you can include additional files into your installer that are not automatically added by MeVisLab from the dependency analysis. ![Manual File Lists](images/tutorials/summary/Example5_6.png "Manual File Lists") @@ -70,26 +73,30 @@ The MeVisLab ToolRunner starts generating your installer. After finishing instal ![MeVisLab ToolRunner](images/tutorials/summary/Example5_9.png "MeVisLab ToolRunner") The directory contains the following files (and some more maybe): -* Batch (\*.bat) file -* Installer (\*.exe) file -* MeVisLab Install (\*.mlinstall) file -* Shell (\*.sh) script -* ThirdParty list (\*.csv) +* Batch (*.bat*) file +* Installer (*.exe*) file +* MeVisLab Install (*.mlinstall*) file +* Shell (*.sh*) script +* Third-party list (*.csv*) -#### Batch file +#### Batch File The batch file allows you to generate the executable again via a Windows batch file. You do not need the Project Wizard anymore now. -#### Installer file -The resulting installer file for your application is an executable -#### MeVisLab Install file -The \*.mlinstall file provides all information you just entered into the wizard. We will need this in [Step 7: Refine - Re-Build Installer](tutorials/summary/summary7/) again. + +#### Installer File +The resulting installer file for your application is an executable. + +#### MeVisLab Install File +The *.mlinstall* file provides all information you just entered into the wizard. We will need this in [Step 7: Refine - Rebuild Installer](tutorials/summary/summary7/) again. The file is initially generated by the Project Wizard. Having a valid file already, you can create new versions by using the MeVisLab ToolRunner. + +#### Shell Script The shell script allows you to generate the executable again via a Unix shell like bash. You do not need the Project Wizard anymore now. -#### ThirdParty file -The third party file contains all third party software tools MeVisLab integrated into your installer from dependency analysis. The file contains the tool name, version, license and general information about the tool. -### Install your executable +#### Third-party File +The third-party file contains all third-party software tools MeVisLab integrated into your installer from dependency analysis. The file contains the tool name, version, license, and general information about the tool. + +### Install Your Executable You can now execute the installer of your application. The installer initially shows a welcome screen showing the name and version of your application.
{{}} @@ -128,6 +136,6 @@ By default, your user interface uses a standard stylesheet for colors and appear {{}} ## Summary -* The **MeVisLab ApplicationBuilder** allows you to create installable executables from your MeVisLab networks -* The resulting application can be customized to your needs via Project Wizard -* Your application will be licensed separately so that you can completely control the usage +* The **MeVisLab ApplicationBuilder** allows you to create installable executables from your MeVisLab networks. +* The resulting application can be customized to your needs via the Project Wizard. +* Your application will be licensed separately, so that you can completely control the usage. diff --git a/mevislab.github.io/content/tutorials/summary/summary6.md b/mevislab.github.io/content/tutorials/summary/summary6.md index 5fe4aa77a..8b0d7532e 100644 --- a/mevislab.github.io/content/tutorials/summary/summary6.md +++ b/mevislab.github.io/content/tutorials/summary/summary6.md @@ -8,29 +8,31 @@ tags: ["Advanced", "Tutorial", "Prototyping"] menu: main: identifier: "summaryexample6" - title: "Integrate feedback from customers having installed your executable and adapt your test cases from Example 4." + title: "Update Application" weight: 830 parent: "summary" --- + # Step 6: Refine - Update Application {{< youtube "1v_UyGs8W1g" >}} ## Introduction -In previous step you developed an application which can be installed on your customers systems for usage. In this step we are going to integrate simple feedback into our executable and re-create the installer. +In the previous step you developed an application that can be installed on your customers systems for usage. In this step we are going to integrate simple feedback into our executable and recreate the installer. We want to show you how easy it is to update your application using MeVisLab. Your customer requests an additional requirement to define the transparency of your 2D overlay in addition to defining the color. * **Requirement 5.2**: It shall be possible to define the alpha value of the overlay -## Steps to do -### Adapt your macro module +## Steps to Do + +### Adapt Your Macro Module Use the module search to add your macro module to your workspace. We need an additional UI element for setting the alpha value of the overlay. Right-click {{< mousebutton "right" >}} your module and select {{< menuitem "Related Files" ".script" >}}. -In MATE, add another field to your *Parameters* section and re-use the field by setting the *internalName*. Add the field to the *Settings* section of your *Window*, maybe directly after the color selection. +In MATE, add another field to your *Parameters* section and reuse the field by setting the *internalName*. Add the field to the *Settings* section of your *Window*, maybe directly after the color selection. {{< highlight filename=".script" >}} ```Stan @@ -59,16 +61,16 @@ Window { ``` {{}} -Back in MeVisLab IDE, your user interface should now provide the possibility to define an alpha value of the overlay. Changes are applied automatically because you re-used the field of the `SoView2DOverlay` module directly. +Back in MeVisLab IDE, your user interface should now provide the possibility to define an alpha value of the overlay. Changes are applied automatically because you reused the field of the `SoView2DOverlay` module directly. ![Updated User Interface](images/tutorials/summary/Example6_1.png "Updated User Interface") -You can also update your Python files for new or updated requirements. 
In this example we just want to show the basic principles, therefore we only add this new element to the Script file. +You can also update your Python files for new or updated requirements. In this example we just want to show the basic principles; therefore, we only add this new element to the *.script* file. If you want to write an additional Python test case, you can also do that. ## Summary -* Your application can be updated by modifying the macro module and/or network file of your application -* Any changes will be applied to your installable executable in the next step +* Your application can be updated by modifying the macro module and/or internal network of your application. +* Any changes will be applied to your installable executable in the next step. {{< networkfile "examples/summary/TutorialSummaryUpdated.zip" >}} diff --git a/mevislab.github.io/content/tutorials/summary/summary7.md b/mevislab.github.io/content/tutorials/summary/summary7.md index b3f6bd787..41f229a02 100644 --- a/mevislab.github.io/content/tutorials/summary/summary7.md +++ b/mevislab.github.io/content/tutorials/summary/summary7.md @@ -1,5 +1,5 @@ --- -title: "Step 7: Refine - Re-Build Installer" +title: "Step 7: Refine - Rebuild Installer" date: "2023-01-21" status: "open" draft: false @@ -8,23 +8,25 @@ tags: ["Advanced", "Tutorial", "Prototyping", "Tool Runner", "Installer"] menu: main: identifier: "summaryexample7" - title: "Re-build your executable and release a new version of your application." + title: "Rebuild Installer" weight: 835 parent: "summary" --- -# Step 7: Refine - Re-Build Installer + +# Step 7: Refine - Rebuild Installer {{< youtube "E0GnWPXT8Og" >}} ## Introduction -In this step you are re-creating your application installer after changing the UI in previous [Step 6: Refine - Update Application](tutorials/summary/summary6/). +In this step you are recreating your application installer after changing the UI in previous [Step 6: Refine - Update Application](tutorials/summary/summary6/). + +## Steps to Do -## Steps to do -### Update the \*.mlinstall file -You do not need to use the Project Wizard now, because you already have a valid \*.mlinstall file. The location should be in your package, under *.\Configuration\Installers\TutorialSummary*. Open the file in any text editor and search for the *$VERSION 0.5*. Change the version to something else, in our case we now have our first major release 1.0. +### Update the *.mlinstall* File +You do not need to use the Project Wizard now, because you already have a valid *.mlinstall* file. The location should be in your package under *.\Configuration\Installers\TutorialSummary*. Open the file in any text editor and search for the *$VERSION 0.5*. Change the version to something else, in our case, we now have our first major release 1.0. {{}} -You can also run the Project Wizard again but keep in mind that manual changes on your \*.mlinstall file might be overwritten. The wizard re-creates your \*.mlinstall file whereas the ToolRunner just uses it. +You can also run the Project Wizard again but keep in mind that manual changes on your *.mlinstall* file might be overwritten. The wizard recreates your *.mlinstall* file whereas the ToolRunner just uses it. {{}} ### Use MeVisLab ToolRunner @@ -32,23 +34,23 @@ Save the file and open *MeVisLab ToolRunner*. ![MeVisLab ToolRunner](images/tutorials/summary/Example7_1.png "MeVisLab ToolRunner") -Open the \*.mlinstall file in ToolRunner and select the file. Click *Run on Selection*. 
+Open the *.mlinstall* file in ToolRunner and select the file. Click *Run on Selection*. ![Run on Selection](images/tutorials/summary/Example7_2.png "Run on Selection") The ToolRunner automatically builds your new installer using version 1.0. -### Install application again +### Install Application Again Execute your installable executable again. You do not have to uninstall previous version(s) of your application first. Already existing applications will be replaced by new installation - at least if you select the same target directory. ![Install new version](images/tutorials/summary/Example7_3.png "Install new version") -The installer already shows your updated version 1.0. It is not necessary to select your Runtime license again because it has not been touched during update. +The installer already shows your updated version 1.0. It is not necessary to select your runtime license again because it has not been touched during update. ![Application version 1.0](images/tutorials/summary/Example7_4.png "Application version 1.0") The new installed application now provides your new UI element for defining the alpha value of the overlay. ## Summary -* Updates of your application installer can be applied by using the MeVisLab ToolRunner -* The executable can be updated on your customers system(s) and your changes on the macro module and network(s) are applied +* Updates of your application installer can be applied by using the MeVisLab ToolRunner. +* The executable can be updated on your customers system(s) and your changes on the macro module and network(s) are applied. diff --git a/mevislab.github.io/content/tutorials/summary/summary8.md b/mevislab.github.io/content/tutorials/summary/summary8.md index 62cf05455..c23533770 100644 --- a/mevislab.github.io/content/tutorials/summary/summary8.md +++ b/mevislab.github.io/content/tutorials/summary/summary8.md @@ -1,5 +1,5 @@ --- -title: "Extra: Run your application in Browser" +title: "Extra: Run Your Application in a Browser" date: "2023-02-24" status: "open" draft: false @@ -8,11 +8,12 @@ tags: ["Advanced", "Tutorial", "Prototyping", "Browser", "Web"] menu: main: identifier: "summaryexample8" - title: "Adapt existing application to run in a browser window." + title: "Adapt an Existing Application to Run in a Browser" weight: 840 parent: "summary" --- -# Extra: Run your application in Browser + +# Extra: Run Your Application in a Browser {{< youtube "XgOyeu65f7Q" >}} @@ -20,19 +21,20 @@ menu: This step explains how to run your developed application in a browser. The MeVisLab network remains the same, only some adaptations are necessary for running any macro module in a browser window. {{}} -This step requires a valid **MeVisLab Webtoolkit** license. It extends the **MeVisLab SDK** so that you can develop web macro modules. -Free evaluation licenses of the **MeVisLab Webtoolkit**, time-limited to 3 months, can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). +This step requires a valid **MeVisLab Webtoolkit** license. It extends the **MeVisLab SDK**, so that you can develop web macro modules. + +Free evaluation licenses of the **MeVisLab Webtoolkit**, time-limited to three months, can be requested at [sales(at)mevislab.de](mailto://sales@mevislab.de). {{}} -## Steps to do +## Steps to Do Make sure to have your macro module from previous [Step 2](tutorials/summary/summary2/) available. -### Create a Web macro module +### Create a Web Macro Module Open Project Wizard via {{< menuitem "File" "Run Project Wizard..." 
>}} and select *Web Macro module*. Run the Wizard and enter details of your new macro module. ![Web macro module wizard](images/tutorials/summary/Example8_1.png "Web macro module wizard") -Run the wizard and enter details of your web macro module. +Run the Wizard and enter details of your web macro module. ![Web macro module properties](images/tutorials/summary/Example8_2.png "Web macro module properties") @@ -40,21 +42,21 @@ Click *Next* and select optional web plugin features. Click *Create*. ![Web macro module](images/tutorials/summary/Example8_3.png "Web macro module") -The folder of your project automatically opens in explorer window. +The folder of your project automatically opens in an Explorer window. -### Using your web macro module -As you created a global web macro module, you can search for it in the MeVisLab *Module Search*. In case the module cannot be found, select {{< menuitem "Extras" "Reload Module Database (Clear Cache)" >}}. +### Using Your Web Macro Module +As you created a global web macro module, you can search for it in the MeVisLab *Module Search*. In the case the module cannot be found, select {{< menuitem "Extras" "Reload Module Database (Clear Cache)" >}}. ![Web macro module](images/tutorials/summary/Example8_4.png "Web macro module") -The internal network of your module is empty. We will re-use the internal network of your macro module developed in [Step 2](tutorials/summary/summary2/). +The internal network of your module is empty. We will reuse the internal network of your macro module developed in [Step 2](tutorials/summary/summary2/). -#### Add internal network of your application +#### Add the Internal Network of Your Application Open the internal network of your previously created macro module from [Step 2](tutorials/summary/summary2/). Select all and copy to your internal network of the *TutorialSummaryBrowser* module. Save the internal network and close the tab in MeVisLab. ![Internal network](images/tutorials/summary/Example8_5a.png "Internal network") -We are going to develop a web application, therefore we need special `RemoteRendering` modules for the viewer. Add 2 `RemoteRendering` modules and a `SoCameraInteraction` to your workspace and connect them to your existing modules as seen below. +We are going to develop a web application; therefore, we need special `RemoteRendering` modules for the viewer. Add two `RemoteRendering` modules and a `SoCameraInteraction` to your workspace and connect them to your existing modules as seen below. ![Remote Rendering](images/tutorials/summary/Example8_5b.png "Remote Rendering") @@ -62,14 +64,14 @@ We are going to develop a web application, therefore we need special `RemoteRend We are using the hidden outputs of the `View2D` and the `SoExaminerViewer`. You can show them by pressing the *SPACE* key. {{}} -#### Develop the user interface -Make sure to have both macro modules visible in MeVisLab SDK, we are re-using the *\*.script* and *\*.py* files developed in [Step 3](tutorials/summary/summary3/). +#### Develop the User Interface +Make sure to have both macro modules visible in MeVisLab SDK, we are reusing the *.script* and *.py* files developed in [Step 3](tutorials/summary/summary3/). ![Macro modules](images/tutorials/summary/Example8_6.png "Macro modules") Right-click {{< mousebutton "right" >}} the module *TutorialSummaryBrowser* and select {{< menuitem "Related Files" "TutorialSummaryBrowser.script" >}}. 
-The file opens in MATE and you will see that it looks similar to the *\*.script* file of a normal macro module. The only difference is an additional *Web* section at the end of the file. It defines the locations of some *javascript* libraries and the *url* to be used for a preview of your website. +The file opens in MATE and you will see that it looks similar to the *.script* file of a normal macro module. The only difference is an additional *Web* section at the end of the file. It defines the locations of some *JavaScript* libraries and the *URL* to be used for a preview of your website. {{< highlight filename="TutorialSummaryBrowser.script" >}} ```Stan @@ -77,7 +79,7 @@ Web { plugin = "$(MLAB_MeVisLab_Private)/Sources/Web/application/js/jquery/Plugin.js" plugin = "$(MLAB_MeVisLab_Private)/Sources/Web/application/js/yui/Plugin.js" - // Specify web plugins here. If you have additional Javascript files, you can load them from + // Specify web plugins here. If you have additional JavaScript files, you can load them from // the plugin. It is also possible to load other plugins here. plugin = "$(LOCAL)/www/js/Plugin.js" @@ -86,7 +88,7 @@ Web { directory = "$(LOCAL)/www" } - // The developer url is used by the startWorkerService.py user script. + // The developer URL is used by the startWorkerService.py user script. developerUrl = "MeVis/TutorialSummary/Projects/TutorialSummaryBrowser/Modules/www/TutorialSummaryBrowser.html" } ``` @@ -159,7 +161,7 @@ Interface { ``` {{}} -Reloading your web macro in MeVisLab SDK now shows the same outputs as the original macro module. The only difference is the type of your output. It changed from MLImage and Inventor Scene to MLBase from your `RemoteRendering` modules. +Reloading your web macro in MeVisLab SDK now shows the same outputs as the original macro module. The only difference is the type of your output. It changed from MLImage and Open Inventor scene to MLBase from your `RemoteRendering` modules. ![Macro modules](images/tutorials/summary/Example8_7.png "Macro modules") @@ -171,7 +173,7 @@ You can emulate the final viewer by adding a `RemoteRenderingClient` module to t ![RemoteRenderingClient](images/tutorials/summary/Example8_9.png "RemoteRenderingClient") -Open the *\*.script* files of your macro modules and copy the *FieldListeners* from *Commands* section of your *TutorialSummary.script* to *TutorialSummaryBrowser.script*. +Open the *.script* files of your macro modules and copy the *FieldListeners* from the *Commands* section of your *TutorialSummary.script* to *TutorialSummaryBrowser.script*. {{< highlight filename="TutorialSummaryBrowser.script" >}} ```Stan @@ -260,8 +262,8 @@ Window "MainPanel" { ``` {{}} -#### Python functions -After we re-used the scripts, we now need to copy the Python functions from *TutorialSummary.py* to *TutorialSummaryBrowser.py*. Open the Python file of your web macro. You will see an additional import from *MLABRemote*, which is required for remote rendering calls. The *MLABRemote* context is already setup automatically and can be used. +#### Python Functions +After we reused the scripts, we now need to copy the Python functions from *TutorialSummary.py* to *TutorialSummaryBrowser.py*. Open the Python file of your web macro. You will see an additional import from *MLABRemote*, which is required for remote rendering calls. The *MLABRemote* context is already setup automatically and can be used. 
{{< highlight filename="TutorialSummaryBrowser.py" >}} ```Python @@ -311,20 +313,20 @@ def applyPosition(): ``` {{}} -### Run your application in browser +### Run Your Application in a Browser MeVisLab provides a local webserver and you can preview your application in a browser by selecting the module and open {{< menuitem "Scripting" "Web" "Start Module Through Webservice" >}}. The integrated webserver starts and your default browser opens the local website showing your application. ![Webserver preview](images/tutorials/summary/Example8_10.png "Webserver preview") Select your web macro *TutorialSummaryBrowser* and right-click {{< mousebutton "right" >}} to select {{< menuitem "Related Files" "Show Definition Folder" >}}. You can see the folder structure of your web macro and modify the stylesheet depending on your needs. -### Open current web instance in MeVisLab SDK +### Open the Current Web Instance in MeVisLab SDK If you want to inspect the internal state of the modules and your internal network, open the console of your browser and enter *MLAB.GUI.Application.module('TutorialSummaryBrowser').showIDE()*. MeVisLab opens and you can change your internal network while all modifications are applied on the website on-the-fly. ![MeVisLab SDK](images/tutorials/summary/Example8_11.png "MeVisLab SDK") ## Summary -* MeVisLab macro modules can easily be adapted to run in a browser window +* MeVisLab macro modules can easily be adapted to run in a browser window. * MeVisLab `RemoteRendering` allows to run in a browser or embedded into other application user interfaces. It does so by sending updated images to a client and receiving input events from this client. * Clients can be emulated by using a `RemoteRenderingClient` module. diff --git a/mevislab.github.io/content/tutorials/testing.md b/mevislab.github.io/content/tutorials/testing.md index 3b61c767e..ca149c67b 100644 --- a/mevislab.github.io/content/tutorials/testing.md +++ b/mevislab.github.io/content/tutorials/testing.md @@ -12,6 +12,7 @@ menu: weight: 785 parent: "tutorials" --- + # MeVisLab Tutorial Chapter VI {#TutorialChapter6} ## Testing, Profiling, and Debugging in MeVisLab {#TutorialTesting} @@ -42,7 +43,7 @@ If you have multiple versions installed, make sure to check and, if needed, alte {{}} ### Profiling -Profiling allows you to get detailed information on the behavior of your modules and networks. You can add the profiling view via {{}}. The Profiling will be displayed in the Views area of the MeVisLab IDE. +Profiling allows you to get detailed information on the behavior of your modules and networks. You can add the Profiling view via {{}}. The Profiling will be displayed in the Views area of the MeVisLab IDE. 
![MeVisLab Profiling](images/tutorials/testing/Profiling.png "MeVisLab Profiling") diff --git a/mevislab.github.io/content/tutorials/testing/testingexample1.md b/mevislab.github.io/content/tutorials/testing/testingexample1.md index 2cf629a3a..edb5b736e 100644 --- a/mevislab.github.io/content/tutorials/testing/testingexample1.md +++ b/mevislab.github.io/content/tutorials/testing/testingexample1.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Writing a simple test case in MeVisLab" +title: "Example 1: Writing a Simple Test Case in MeVisLab" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,26 +8,28 @@ tags: ["Beginner", "Tutorial", "Testing", "Python", "Automated Tests"] menu: main: identifier: "testingexample1" - title: "Writing a simple test case for the module DicomImport in MeVisLab using Python and MeVisLab TestCenter." + title: "Writing a Simple Test Case for the Module DicomImport in MeVisLab Using Python and MeVisLab TestCenter" weight: 790 parent: "testing" --- -# Example 1: Writing a simple test case in MeVisLab + +# Example 1: Writing a Simple Test Case in MeVisLab {{< youtube "DqpVaKai_00" >}} ## Introduction -In this example, you will learn how to write an automated test for a simple network using the `DicomImport`, `MinMaxScan` and `View3D` modules. Afterwards, you will be able to write test cases for any other module and network yourself. +In this example you will learn how to write an automated test for a simple network using the `DicomImport`, `MinMaxScan`, and `View3D` modules. Afterward, you will be able to write test cases for any other module and network yourself. + +## Steps to Do -## Steps to do -### Creating the network to be used for testing +### Creating the Network to be Used for Testing Add the following modules to your workspace and connect them as seen below: ![Testcase network ](images/tutorials/testing/testNetwork1.png "Testcase network ") Save your network as *NetworkTestCase.mlab*. -## Test creation +## Test Creation Open the MeVisLab TestCaseManager via menu {{}}. The following window will appear. ![TestCaseManager window ](images/tutorials/testing/testCaseManagerWindow.png "TestCaseManager window ") @@ -73,7 +75,7 @@ def TEST_DicomImport(): The *filePath* variable defines the absolute path to the DICOM files that will be given to *source* field of the `DicomImport` module in the second step of the *OpenFiles* function. -The *OpenFiles* function first defines the `DicomImport` field *inputMode* to be a *Directory*. If you want to open single files, set this field's value to *Files*. Then the *source* field is set to your previously defined *filePath*. After clicking *triggerImport*, the `DicomImport` module needs some time to load all images in the directory and process the DICOM tree. We have to wait until the field *ready* is *TRUE*. While the import is not ready yet, we wait for 1 millisecond at a time and check again. *MLAB.processEvents()* lets MeVisLab continue execution while waiting for the `DicomImport` to be ready. +The *OpenFiles* function first defines the `DicomImport` field *inputMode* to be a *Directory*. If you want to open single files, set this field's value to *Files*. Then, the *source* field is set to your previously defined *filePath*. After clicking *triggerImport*, the `DicomImport` module needs some time to load all images in the directory and process the DICOM tree. We have to wait until the field *ready* is *True*. While the import is not ready yet, we wait for 1 millisecond at a time and check again. 
*MLAB.processEvents()* lets MeVisLab continue execution while waiting for the `DicomImport` to be ready. When calling the function *TEST_DicomImport*, an expected value of *1.0* is defined. Then, the DICOM files are opened. @@ -85,8 +87,7 @@ When *ready* is true, the test touches the *selectNextItem* trigger, so that the The value of our `DicomImport`'s *progress* field is saved as the *currentValue* variable and compared to the *expectedValue* variable by calling *ASSERT_FLOAT_EQ(expectedValue,currentValue)* to determine if the DICOM import has finished (*currentValue* and *expectedValue* are equal) or not. -### Run your test case - +### Run Your Test Case Open the TestCase Manager and run your test by selecting your test case and clicking on the *Play* button in the bottom right corner. ![Run Test Case](images/tutorials/testing/runTestCase.png "Run Test Case") @@ -96,7 +97,7 @@ After execution, the ReportViewer will open automatically displaying your test's ![ReportViewer](images/tutorials/testing/successTestCase.png "ReportViewer") -### Writing a test for global macro modules +### Writing a Test for Global Macro Modules Please observe that field access through Python scripting works differently for global macros. Instead of accessing a field directly by calling their respective module, the module itself needs to be accessed as part of the global macro first. {{< highlight filename="NetworkTestCase.py" >}} ```Python @@ -109,7 +110,7 @@ Please observe that field access through Python scripting works differently for ``` {{}} -*Imagine unpeeled nuts in a bag as a concept - the field as a nut, their module as their nutshell and the bag as the global macro.* +*Imagine unpeeled nuts in a bag as a concept - the field as a nut, their module as their nutshell, and the bag as the global macro.* {{}} [Example 2.2: Global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules/) provides additional info on global macro modules and their creation. @@ -117,9 +118,9 @@ Please observe that field access through Python scripting works differently for ## Exercise Create a global macro module and implement the following test objectives for both (network and macro module): -* Check, if the file exists. -* Check, if the max value of file is greater than zero. -* Check, if the `View3D`-Input and `DicomImport`-output have the same data. +* Check if the file exists. +* Check if the max value of the file is greater than zero. +* Check if the `View3D` input and `DicomImport` output have the same data. ## Summary * MeVisLab provides a TestCenter for writing automated tests in Python. diff --git a/mevislab.github.io/content/tutorials/testing/testingexample2.md b/mevislab.github.io/content/tutorials/testing/testingexample2.md index 36b265fe5..f84e1d824 100644 --- a/mevislab.github.io/content/tutorials/testing/testingexample2.md +++ b/mevislab.github.io/content/tutorials/testing/testingexample2.md @@ -8,31 +8,33 @@ tags: ["Beginner", "Tutorial", "Profiling"] menu: main: identifier: "testingexample2" - title: "Enabling the MeVisLab Profiler and inspecting the behaviour of your network" + title: "Profiling in MeVisLab" weight: 792 parent: "testing" --- + # Example 2: Profiling in MeVisLab {{< youtube "DZ4BcAne4hM" >}} ## Introduction -In this example, we are using the MeVisLab Profiler to inspect the memory and CPU consumption of the modules in an example network. +In this example we are using the MeVisLab Profiler to inspect the memory and CPU consumption of the modules in an example network.
+ +## Steps to Do -## Steps to do -### Creating the network to be used for profiling +### Creating the Network to be Used for Profiling You can open any network you like, here we are using the example network of the module `MinMaxScan` for profiling. Add the module `MinMaxScan` to your workspace, open the example network via right-click {{}} and select {{}}. ![MinMaxScan Example Network](images/tutorials/testing/profiling_network.png "MinMaxScan Example Network") ### Enable Profiling -Next, enable the MeVisLab Profiler via menu item {{}}. The Profiler is opened in your Views Area but can be detached and dragged over the workspace holding the left mouse button {{}}. +Next, enable the MeVisLab Profiler via menu item {{}}. The Profiler is opened in your views area but can be detached and dragged over the workspace holding the left mouse button {{}}. ![MeVisLab Profiling](images/tutorials/testing/Profiling.png "MeVisLab Profiling") Enable profiling by checking *Enable* in the top left corner of the Profiling window. -### Inspect your network +### Inspect Your Network Now open the `View2D` module's panel via double-click and scroll through the slices. Inspect the Profiler. ![MeVisLab Profiling Network](images/tutorials/testing/Profiling_Network1.png "MeVisLab Profiling Network") diff --git a/mevislab.github.io/content/tutorials/testing/testingexample3.md b/mevislab.github.io/content/tutorials/testing/testingexample3.md index ab516007a..300bc178d 100644 --- a/mevislab.github.io/content/tutorials/testing/testingexample3.md +++ b/mevislab.github.io/content/tutorials/testing/testingexample3.md @@ -1,5 +1,5 @@ --- -title: "Example 3: Iterative tests in MeVisLab with Screenshots" +title: "Example 3: Iterative Tests in MeVisLab With Screenshots" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,24 +8,26 @@ tags: ["Advanced", "Tutorial", "Testing", "Python", "Automated Tests", "Iterativ menu: main: identifier: "testingexample3" - title: "Writing an iterative test in MeVisLab" + title: "Writing an Iterative Test in MeVisLab" weight: 795 parent: "testing" --- -# Example 3: Iterative tests in MeVisLab + +# Example 3: Iterative Tests in MeVisLab {{}} ## Introduction -In this example, you are writing an iterative test. Iterative test functions run a function for every specified input. They return a tuple consisting of the function object called and the inputs iterated over. The iterative test functions are useful if the same function should be applied to different input data. These could be input values, names of input images, etc. +In this example you are writing an iterative test. Iterative test functions run a function for every specified input. They return a tuple consisting of the function object called and the inputs iterated over. The iterative test functions are useful if the same function should be applied to different input data. These could be input values, names of input images, etc. + +## Steps to Do -## Steps to do -### Creating the network to be used for testing +### Creating the Network to be Used for Testing Add a `LocalImage` and a `DicomTagViewer` module to your workspace and connect them. ![Example Network](images/tutorials/testing/network_test3.png "Example Network") -### Test case creation +### Test Case Creation Open the panel of the `DicomTagViewer` and set *Tag Name* to *WindowCenter*. The value of the DICOM tag from the current input image is automatically set as value. Save the network. 
@@ -34,7 +36,7 @@ Start MeVisLab TestCaseManager and create a new test case called *IterativeTestC ![DicomTagViewer](images/tutorials/testing/DicomTagViewer.png "DicomTagViewer") -### Defining the test data +### Defining the Test Data In TestCaseManager open the test case Python file via *Edit File*. Add a list for test data to be used as input and a prefix for the path of the test data as seen below. @@ -52,10 +54,10 @@ testData = { "ProbandT1":("ProbandT1.dcm", "439.9624938965"), ``` {{}} -The above list contains an identifier for the test case (*ProbandT1/2*), the file names and a number value. The number value is the value of the DICOM tag (0028,1050) WindowCenter for each file. +The above list contains an identifier for the test case (*ProbandT1/2*), the file names, and a number value. The number value is the value of the DICOM tag (0028,1050) *WindowCenter* for each file. -### Create your iterative test function -Add the python function to your script file: +### Create Your Iterative Test Function +Add the Python function to your test case's Python file: {{< highlight filename="IterativeTestCase.py" >}} ```Python def ITERATIVETEST_TestWindowCenter(): @@ -83,17 +85,17 @@ def testPatient(path, windowCenter): 4. The final test functions *ASSERT_EQ* evaluate if the given values are equal. {{}} -You can use many other *ASSERT** possibilities, just try using the MATE auto completion and play around with them. +You can use many other *ASSERT** functions; just try using the MATE autocompletion and play around with them. {{}} -### Run your iterative test -Open MeVisLab TestCase Manager and select your package and test case. You will see 2 test functions on the right side. +### Run Your Iterative Test +Open MeVisLab TestCase Manager and select your package and test case. You will see two test functions on the right side. ![Iterative Test](images/tutorials/testing/TestCaseManager_TestWindowCenter.png "Iterative Test") The identifiers of your test functions are shown as defined in the list (*ProbandT1/2*). The *TestWindowCenter* now runs for each entry in the list and calls the function *testPatient* for each entry using the given values. -### Adding screenshots to your TestReport +### Adding Screenshots to Your TestReport Now, extend your network by adding a `View2D` module and connect it with the `LocalImage` module. Add the following lines to the end of your function *testPatient*: {{< highlight filename="IterativeTestCase.py" >}} ```Python @@ -112,4 +114,4 @@ Your ReportViewer now shows a screenshot of the image in the `View2D`. ## Summary * Iterative tests allow you to run the same test function on multiple input entries. -* It is possible to add screenshots to test cases +* It is possible to add screenshots to test cases.
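As a compact recap of the iterative-test pattern used in this example, here is a condensed, hedged sketch. It is not the full tutorial code: the `LocalImage` and `DicomTagViewer` field names are assumptions, the path prefix is omitted, and only one entry of the test data is shown.

```Python
# Condensed sketch of an iterative test (field names are assumptions).

testData = {
    "ProbandT1": ("ProbandT1.dcm", "439.9624938965"),
    # further entries run the same test over more files
}

def ITERATIVETEST_TestWindowCenter():
    # Return the function to be called and the inputs to iterate over;
    # the TestCenter then calls testPatient once per entry of testData.
    return testPatient, testData

def testPatient(path, windowCenter):
    # Load the file (the real test prepends a data path prefix to 'path')
    # and compare the WindowCenter tag value reported by DicomTagViewer.
    ctx.field("LocalImage.name").value = path
    ASSERT_EQ(windowCenter, ctx.field("DicomTagViewer.tagValue0").value)
```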
diff --git a/mevislab.github.io/content/tutorials/thirdparty.md b/mevislab.github.io/content/tutorials/thirdparty.md index 6e07eb433..59e254d44 100644 --- a/mevislab.github.io/content/tutorials/thirdparty.md +++ b/mevislab.github.io/content/tutorials/thirdparty.md @@ -1,22 +1,23 @@ --- -title: "Chapter VIII: ThirdParty components" +title: "Chapter VIII: Third-party Components" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false weight: 850 -tags: ["Advanced", "Tutorial", "ThirdParty"] +tags: ["Advanced", "Tutorial", "Third-party"] menu: main: identifier: "thirdparty" - title: "Usage of ThirdParty software integrated into MeVisLab" + title: "Usage of Third-party Software Integrated into MeVisLab" weight: 850 parent: "tutorials" --- # MeVisLab Tutorial Chapter VIII {#TutorialChapter8} -## Using ThirdParty Software Integrated into MeVisLab {#TutorialThirdParty} -MeVisLab is equipped with a lot of useful software right out of the box, like the Insight Segmentation and Registration Toolkit (ITK) or the Visualization Toolkit (VTK). This chapter works as a guide on how to use some of the third party components integrated in MeVisLab for your projects via Python scripting. +## Using Third-party Software Integrated into MeVisLab {#TutorialThirdParty} +MeVisLab is equipped with a lot of useful software right out of the box, like the Insight Segmentation and Registration Toolkit (ITK) or the Visualization Toolkit (VTK). This chapter works as a guide on how to use some of the third-party components integrated in MeVisLab for your projects via Python scripting. + {{}} You will also find instructions to install and use any Python package (e.g., PyTorch) in MeVisLab using the `PythonPip` module. {{}} @@ -48,7 +49,7 @@ A list of supported formats can be found [here](https://assimp-docs.readthedocs. The tutorials available here shall provide examples on how to integrate AI into MeVisLab. You can also integrate other Python AI packages the same way. -### matplotlib +### Matplotlib [Matplotlib](https://matplotlib.org/) is a library for creating static, animated, and interactive visualizations in Python. * create publication quality plots diff --git a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md index 2de393a50..193c474d3 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md +++ b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample1.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Installing MONAI using the PythonPip module" +title: "Example 1: Installing MONAI Using the PythonPip Module" date: 2025-11-13 status: "OK" draft: false @@ -8,18 +8,19 @@ tags: ["Advanced", "Tutorial", "MONAI", "Python", "PythonPip", "AI"] menu: main: identifier: "monaiexample1" - title: "Installing MONAI using the PythonPip module." + title: "Installing MONAI Using the PythonPip Module" weight: 878 parent: "monai" --- -# Example 1: Installing MONAI using the PythonPip module + +# Example 1: Installing MONAI Using the PythonPip Module ## Introduction With the `PythonPip` module, you can import additional Python libraries into MeVisLab. -### Steps to do -#### Install PyTorch +### Steps to Do +#### Install PyTorch As *MONAI* requires *PyTorch*, install it by using the `PythonPip` module as described [here](tutorials/thirdparty/pytorch/pytorchexample1/). 
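Before moving on to MONAI, you can optionally confirm from MeVisLab's scripting console that PyTorch is importable and whether CUDA is available. This quick check is not a required tutorial step:

```Python
import torch

# Optional sanity check after installing PyTorch via the PythonPip module.
print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
```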
#### Install MONAI @@ -34,12 +35,12 @@ After clicking *Install*, the pip console output opens and you can follow the pr {{}} If you are behind a proxy server, you may have to set the **HTTP_PROXY** and **HTTPS_PROXY** environment variables to the hostname and port of your proxy. These are used by pip when accessing the internet. -Alternatively you can also add a parameter to *pip install* command: *--proxy https://proxy:port* +Alternatively, you can also add a parameter to *pip install* command: *--proxy https://proxy:port* {{}} ![PythonPip MONAI](images/tutorials/thirdparty/monai_example1_2.png "PythonPip MONAI") -After the installation was finished with exit code 0, you should see the new packages in the `PythonPip` module. +After the installation has finished with exit code 0, you should see the new packages in the `PythonPip` module. ![MONAI installed](images/tutorials/thirdparty/monai_example1_3.png "MONAI installed") diff --git a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md index a810ad6de..7ee3790b7 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md +++ b/mevislab.github.io/content/tutorials/thirdparty/MONAI/monaiexample2.md @@ -1,5 +1,5 @@ --- -title: "Example 2: Applying a spleen segmentation model from MONAI in MeVisLab" +title: "Example 2: Applying a Spleen Segmentation Model from MONAI in MeVisLab" date: 2025-11-13 status: "OK" draft: false @@ -8,21 +8,23 @@ tags: ["Advanced", "Tutorial", "MONAI", "Python", "PythonPip", "AI"] menu: main: identifier: "monaiexample2" - title: "Applying a spleen segmentation model from MONAI in MeVisLab." + title: "Applying a Spleen Segmentation Model from MONAI in MeVisLab" weight: 879 parent: "monai" --- -# Example 2: Applying a spleen segmentation model from MONAI in MeVisLab + +# Example 2: Applying a Spleen Segmentation Model from MONAI in MeVisLab ## Introduction -In the following, we will perform a spleen segmentation using a model from the *MONAI Model Zoo*. The MONAI Model Zoo is a collection of pre-trained models for medical imaging, offering standardized bundles for tasks like segmentation, classification, and detection across MRI, CT, and pathology data, all built for easy use and reproducibility within the MONAI framework. Further information and the required files can be found [here](https://github.com/Project-MONAI/model-zoo/tree/dev "here"). +In the following, we will perform a spleen segmentation using a model from the *MONAI Model Zoo*. The MONAI Model Zoo is a collection of pretrained models for medical imaging, offering standardized bundles for tasks like segmentation, classification, and detection across MRI, CT, and pathology data, all built for easy use and reproducibility within the MONAI framework. Further information and the required files can be found [here](https://github.com/Project-MONAI/model-zoo/tree/dev "here"). This example shows how to use the model for **Spleen CT Segmentation** directly in MeVisLab. -## Steps to do -### Download necessary files +## Steps to Do + +### Download Necessary Files Create a folder named *spleen_ct_segmentation* somewhere on your system. -Inside this folder, create two subfolders, one named *configs* and another named *models* and remember their paths. +Inside this folder, create two subfolders, one named *configs* and another one named *models*, and remember their paths. 
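If you prefer to create this folder layout programmatically, a short sketch like the following does the same job; the base path is only an example, any location you can remember works:

```Python
from pathlib import Path

# Example base path; adjust it to wherever you want to keep the model bundle.
base = Path(r"C:\tmp\spleen_ct_segmentation")
for subfolder in ("configs", "models"):
    (base / subfolder).mkdir(parents=True, exist_ok=True)
    print("created", base / subfolder)
```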
![Directory Structure](images/tutorials/thirdparty/monai_example2_1.png "Directory Structure"). @@ -32,13 +34,13 @@ Download all *config* files from [MONAI-Model-Zoo](https://github.com/Project-MO Download *model* files from [NVIDIA Download Server](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_spleen_ct_segmentation_v1.pt "NVIDIA Download Server") and save it in your local *models* directory. {{}} -The path to the latest model \**.pt*-file can be found in [large_files.yml](https://github.com/Project-MONAI/model-zoo/blob/dev/models/spleen_ct_segmentation/large_files.yml "large_files.yml"). +The path to the latest model *.pt* file can be found in [large_files.yml](https://github.com/Project-MONAI/model-zoo/blob/dev/models/spleen_ct_segmentation/large_files.yml "large_files.yml"). {{}} -### Download example images -The recommended CT images used for training the algorithm, can be found [here](https://msd-for-monai.s3-us-west-2.amazonaws.com/Task09_Spleen.tar "here"). +### Download Example Images +The recommended CT images used for training the algorithm can be found [here](https://msd-for-monai.s3-us-west-2.amazonaws.com/Task09_Spleen.tar "here"). -### Create a macro module and add inputs and outputs +### Create a Macro Module and Add Inputs and Outputs Add a `PythonImage` module and save the network as *MONAISpleenSegmentation.mlab*. ![PythonImage module](images/tutorials/thirdparty/monai_example2_1a.png "PythonImage module"). @@ -47,13 +49,13 @@ Now, right-click {{< mousebutton "right" >}} on the `PythonImage` module, select Right-click {{< mousebutton "right" >}} on the group's name and choose *Convert to Local Macro* using the same name. -Our new module does not provide an input or output. +Our new module does not provide any input or output. ![Local Macro Module MONAIDemo](images/tutorials/thirdparty/monai_example2_2.png "Local Macro Module MONAIDemo") -Right-click {{< mousebutton "right" >}} on the Macro Module and select {{< menuitem "Related Files" "MONAIDemo.script">}}. +Right-click {{< mousebutton "right" >}} on the macro module and select {{< menuitem "Related Files" "MONAIDemo.script">}}. -Add the following code into the \**.script*-file and save. +Add the following code into the *.script* file and save. {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -75,7 +77,7 @@ If you now reload your module in MeVisLab, you can see the new input and output. ![MONAIDemo with input and output](images/tutorials/thirdparty/monai_example2_3.png "MONAIDemo with input and output") -Add a *Commands* section to your \**.script*-file. +Add a *Commands* section to your *.script* file. {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -87,20 +89,20 @@ Commands { ``` {{}} -Right-click {{< mousebutton "right" >}} on the MONAIDemo.py and select {{< menuitem "Open File $(LOCAL)/MONAIDemo.py">}}. An empty Python file is created and opens automatically. Save the empty Python file. +Right-click {{< mousebutton "right" >}} on the *MONAIDemo.py* and select {{< menuitem "Open File $(LOCAL)/MONAIDemo.py">}}. An empty Python file is created and opens automatically. Save the empty Python file. -### Create the network for the segmentation -Right-click {{< mousebutton "right" >}} on the Macro Module and select {{< menuitem "Related Files" "MONAIDemo.mlab">}}. Create the network seen below. +### Create the Network for the Segmentation +Right-click {{< mousebutton "right" >}} on the macro module and select {{< menuitem "Related Files" "MONAIDemo.mlab">}}. 
Create the network seen below. ![MONAIDemo Network](images/tutorials/thirdparty/monai_example2_3a.png "MonaiDemo Network") -Fields of the internal network can be left with default values, we will change them later. +Fields of the internal network can be left with default values; we will change them later. The left part defines actions executed on the input image, the right part defines what shall happen on the output after the *MONAI* segmentation has been done. A detailed description will be provided later. -Open your *\*.script* file via right-click {{< mousebutton "right" >}} on the Macro Module and select {{< menuitem "Related Files" "MONAIDemo.script">}}. +Open your *.script* file via right-click {{< mousebutton "right" >}} on the macro module and select {{< menuitem "Related Files" "MONAIDemo.script">}}. -Define your input image field to re-use the internal name of the left input of the `Resample3D` module. +Define your input image field to reuse the internal name of the left input of the `Resample3D` module. {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -117,7 +119,7 @@ If you now open the internal network of your macro module, you can see that the ![MONAIDemo Internal Network](images/tutorials/thirdparty/monai_example2_3b.png "MonaiDemo Internal Network") -Again open the *\*.script* file and change the internal name of your *outImage* field to re-use the field *Resample3D1.output0*. +Again, open the *.script* file and change the internal name of your *outImage* field to reuse the field *Resample3D1.output0*. {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -136,7 +138,7 @@ If you now open the internal network of your macro module, you can see that the ![MONAIDemo Internal Network](images/tutorials/thirdparty/monai_example2_3c.png "MonaiDemo Internal Network") -### Adapt input image to *MONAI* parameters from training +### Adapt Input Image to *MONAI* Parameters from Training The model has been trained for strictly defined assumptions for the input image. All values can normally be found in the *inference.json* file in your *configs* directory. Use the `itkImageFileReader` module to load the file *Task09_Spleen/Task09_SpleenimagesTr/spleen_7.nii.gz* from dowloaded example patients. The *Output Inspector* shows the image and additional information about the size. @@ -145,15 +147,15 @@ We can see that the image size is 512 x 512 x 114 and the voxel size is 0.9766 x ![Output Inspector](images/tutorials/thirdparty/monai_example2_3d.png "Output Inspector") -Connect the module to your local macro module `MonaiDemo`. The result of the segmentation shall be visualized as a semi-transparent overlay on your original image. +Connect the module to your local macro module `MonaiDemo`. The result of the segmentation shall be visualized as a semitransparent overlay on your original image. Add a `SoView2DOverlay` and a `View2D` module and connect them to your local macro module `MonaiDemo`. ![Final network](images/tutorials/thirdparty/monai_example2_4.png "Final network") -The **Spleen CT Segmentation** network expects images having a defined voxel size of 1.5 x 1.5 x 2. We want to define these values via fields in the Module inspector. +The **Spleen CT Segmentation** network expects images having a defined voxel size of 1.5 x 1.5 x 2. We want to define these values via fields in the Module Inspector. 
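Resampling to a fixed voxel size changes the image extent while the physical size stays the same, so the new extent is roughly the old extent times the old voxel size divided by the new voxel size. Here is a small sketch of that arithmetic for the example image, assuming a z spacing of about 2.5 mm as shown in the *Output Inspector*:

```Python
# Rough estimate of the extent after resampling to the voxel size the model expects.
old_extent = (512, 512, 114)         # spleen_7.nii.gz as shown in the Output Inspector
old_voxel  = (0.9766, 0.9766, 2.5)   # z spacing assumed to be roughly 2.5 mm
new_voxel  = (1.5, 1.5, 2.0)         # required by the Spleen CT Segmentation model

for extent, old, new in zip(old_extent, old_voxel, new_voxel):
    print(f"{extent} * {old} / {new} = {extent * old / new:.1f}")
# Prints roughly 333.3, 333.3 and 142.5, matching the 333 x 333 x 143 reported by Resample3D.
```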
-Open the *\*.script* file and add the fields *start* and *voxelSize* to your local macro module `MonaiDemo`: +Open the *.script* file and add the fields *start* and *voxelSize* to your local macro module `MonaiDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -168,17 +170,17 @@ Interface { ``` {{}} -If you reload your module now, we can set the voxel size to use for the segmentation directly in our macro module `MonaiDemo`. Additionally we can trigger a start function for running the segmentation. This is implemented later. +If you reload your module now, we can set the voxel size to use for the segmentation directly in our macro module `MonaiDemo`. Additionally, we can trigger a start function for running the segmentation. This is implemented later. ![Voxel Size](images/tutorials/thirdparty/monai_example2_4a.png "Voxel Size") -If you select the output field of the `Resample3D` module in the internal network, you can see the dimensions of the currently opened image after changing the voxel size to 1.5 x 1.5 x 2. It shows 333 x 333 x 143. +If you select the output field of the `Resample3D` module in the internal network, you can see the extent of the currently opened image after changing the voxel size to 1.5 x 1.5 x 2. It shows 333 x 333 x 143. ![Original Image Size](images/tutorials/thirdparty/monai_example2_5.png "Original Image Size") The algorithm expects image sizes of 160 x 160 x 160. We add this expected size of the image to our macro module in the same way. -Open the *\*.script* file and add the following fields to your local macro module `MonaiDemo`: +Open the *.script* file and add the following fields to your local macro module `MonaiDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -203,7 +205,7 @@ Reload your macro module and enter the following values for your new fields: Next, we change the gray values of the image, because the algorithm has been trained on values between -57 and 164. Again, the values can be found in the *inference.json* file in your *configs* directory. -Open the *\*.script* file and add the following fields to your local macro module `MonaiDemo`: +Open the *.script* file and add the following fields to your local macro module `MonaiDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -226,7 +228,7 @@ As already done before, we can now defined the threshold values for our module v As defined in the *inference.json* file in your *configs* directory, the gray values in the image must be between 0 and 1. -Open the *\*.script* file and add the following fields to your local macro module `MonaiDemo`: +Open the *.script* file and add the following fields to your local macro module `MonaiDemo`: {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -253,9 +255,9 @@ Open the panel of the `SwapFlipDimensions` module and select X as *Axis 1* and Z ![SwapFlipDimensions](images/tutorials/thirdparty/monai_example2_11.png "SwapFlipDimensions") -After the algorithm has been executed, we have to flip the images back to the original order. Open the panel of the `SwapFlipDimensions1` module and select X as *Axis 1* and Z as *Axis 2*. +After the algorithm has finished, we have to flip the images back to the original order. Open the panel of the `SwapFlipDimensions1` module and select X as *Axis 1* and Z as *Axis 2*. -Finally we want to show the results of the algorithm as a semi-transparent overlay on the image. 
Open tha panel of the `View2DOverlay` and define the following settings: +Finally, we want to show the results of the algorithm as a semitransparent overlay on the image. Open the panel of the `View2DOverlay` and define the following settings: * Blend Mode: Blend * Alpha Factor: 0.5 * Base Color: red @@ -263,7 +265,7 @@ Finally we want to show the results of the algorithm as a semi-transparent overl ![View2DOverlay](images/tutorials/thirdparty/monai_example2_12.png "View2DOverlay") ### Field Listeners -We add some Field Listeners to our Commands section of the *\*.script* file. They are necessary to react on changes the user makes on the fields of our module. +We add some Field Listeners to our *Commands* section of the *.script* file. They are necessary to react on changes the user makes on the fields of our module. {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -297,10 +299,10 @@ Commands { If the user touches the trigger *start*, a Python function *onStart* will be executed. Whenever the size of our image is changed, we call a function called *_sizeChanged* and if the input image changes, we want to reset the module to its default values. -### Python scripting +### Python Scripting The next step is to write our Python code. -Right-click {{< mousebutton "right" >}} *MONAIDemo.py* in *Commands* section line *source*. MATE opens showing the *\*.py* file of our module. +Right-click {{< mousebutton "right" >}} *MONAIDemo.py* in *Commands* section line *source*. MATE opens showing the *.py* file of our module. Insert the following code: @@ -316,7 +318,7 @@ MODEL_DIR = r"C:\tmp\spleen_ct_segmentation" MODEL_PATH = MODEL_DIR + r"\models\model_spleen_ct_segmentation_v1.pt" TRAIN_JSON = MODEL_DIR + r"\configs\train.json" -# using cpu or cude +# using CPU or Cuda DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") def onStart(): @@ -334,9 +336,9 @@ def _sizeChanged(): ``` {{}} -These functions should be enough to run the module. You can try them by changing the input image of our module, by changing any of the size values in *Module Inspector* or by clicking *start*. +These functions should be enough to run the module. You can try them by changing the input image of our module, by changing any of the size values in *Module Inspector*, or by clicking *start*. -Lets implement the *_getImage* function first: +Let's implement the *_getImage* function first: {{< highlight filename="MONAIDemo.py" >}} ```Python @@ -353,7 +355,7 @@ Lets implement the *_getImage* function first: ``` {{}} -We want to use the image that has been modified according to our pre-trained network requirements discussed above. We use the output image of the `SwapFlipDimensions` module when clicking *start*. +We want to use the image that has been modified according to our pretrained network requirements discussed above. We use the output image of the `SwapFlipDimensions` module when clicking *start*. {{< highlight filename="MONAIDemo.py" >}} ```Python @@ -409,7 +411,7 @@ We want to use the image that has been modified according to our pre-trained net ``` {{}} -This function now already calculates the segmentation using the *MONAI* model. The problem is, that it may happen that our subimage with the size 160 x 160 x 160 is located somewhere in our original image, where no spleen is visible. +This function now already calculates the segmentation using the *MONAI* model. 
The problem is that it may happen that our subimage with the size 160 x 160 x 160 is located somewhere in our original image where no spleen is visible. We have to calculate a bounding box in our `ROISelect` module and need to be able to move this bounding box to the correct location. @@ -431,7 +433,7 @@ We have to calculate a bounding box in our `ROISelect` module and need to be abl ctx.field("ROISelect.startVoxelY").value = roiStartY ctx.field("ROISelect.startVoxelZ").value = roiStartZ - # Subtract 1 because the pixel values start with 0 + # Subtract 1 because the voxel values start with 0 ctx.field("ROISelect.endVoxelX").value = voxelSizeImageExtent[0] - 1 ctx.field("ROISelect.endVoxelY").value = voxelSizeImageExtent[1] - 1 ctx.field("ROISelect.endVoxelZ").value = voxelSizeImageExtent[2] - 1 @@ -439,9 +441,9 @@ We have to calculate a bounding box in our `ROISelect` module and need to be abl ``` {{}} -Whenever our size fields are modified, the bounding box is re-calculated using the size of the given image and the values of the sizes defined by the user. The calculated bounding box is not positioned. This needs to be done manually, if necessary. +Whenever our size fields are modified, the bounding box is recalculated using the size of the given image and the values of the sizes defined by the user. The calculated bounding box is not positioned. This needs to be done manually, if necessary. -Open the *\*.script* file and add a *Window* section. In this window, we re-use the panel of the `ROISelect` module to manually correct the location of our calculated bounding box. +Open the *.script* file and add a *Window* section. In this window, we reuse the panel of the `ROISelect` module to manually correct the location of our calculated bounding box. {{< highlight filename="MONAIDemo.script" >}} ```Stan @@ -465,7 +467,7 @@ If you now open the panel of our `MONAIDemo` module, we can manually move the bo ![MONAIDemo panel](images/tutorials/thirdparty/monai_example2_5.png "MONAIDemo panel"). -Back to Python, we now need to reset our module to default, in case the input image changes. This also removes previous segmentations from the `PythonImage` module. +Back to Python, we now need to reset our module to default in the case the input image changes. This also removes previous segmentations from the `PythonImage` module. {{< highlight filename="MONAIDemo.py" >}} ```Python @@ -486,18 +488,18 @@ Back to Python, we now need to reset our module to default, in case the input im ``` {{}} -## Execute the segmentation +## Execute the Segmentation If you now load an image using the `itkImageFileReader` module, you can manually adapt your bounding box to include the spleen and start segmentation. -The results are shown as a semi-transparent overlay. +The results are shown as a semitransparent overlay. ![Segmentation result](images/tutorials/thirdparty/monai_example2_6.png "Segmentation result"). -You can also use the other examples from *MONAI Model Zoo* the same way, just make sure to apply the necessary changes on the input images like size, voxel size and other parameters defined in the *inference.json* file of the model. +You can also use the other examples from *MONAI Model Zoo* the same way, just make sure to apply the necessary changes on the input images like size, voxel size, and other parameters defined in the *inference.json* file of the model. 
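If you want to look up these parameters for another bundle, you can parse its *inference.json* with a few lines of Python. The sketch below assumes the usual MONAI bundle layout with a "preprocessing" section that lists the transforms; the key names are an assumption, so adapt them to your file:

```Python
import json

# Path follows the directory layout used in this example.
CONFIG_PATH = r"C:\tmp\spleen_ct_segmentation\configs\inference.json"

with open(CONFIG_PATH) as config_file:
    config = json.load(config_file)

# Print every preprocessing transform with its arguments so values such as the
# target spacing or the intensity range can be read off directly.
for transform in config.get("preprocessing", {}).get("transforms", []):
    name = transform.get("_target_", "<unknown>")
    arguments = {key: value for key, value in transform.items() if key != "_target_"}
    print(name, arguments)
```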
## Summary -* Pre-trained *MONAI* networks can be used directly in MeVisLab via `PythonImage` module -* The general principles are always the same for all models +* Pretrained *MONAI* networks can be used directly in MeVisLab via `PythonImage` module. +* The general principles are always the same for all models. {{< networkfile "examples/thirdparty/monai/MONAIDemo.zip" >}} diff --git a/mevislab.github.io/content/tutorials/thirdparty/assimp.md b/mevislab.github.io/content/tutorials/thirdparty/assimp.md index e61519cc0..20b30378e 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/assimp.md +++ b/mevislab.github.io/content/tutorials/thirdparty/assimp.md @@ -8,11 +8,13 @@ tags: ["Beginner", "Tutorial", "assimp", "3D"] menu: main: identifier: "assimp" - title: "Asset-Importer-Lib (assimp)" + title: "Asset Importer Library (assimp)" weight: 860 parent: "thirdparty" --- + # Asset-Importer-Lib (assimp) {#assimp} + ## Introduction [Assimp](http://www.assimp.org "assimp") (Asset-Importer-Lib) is a library to load and process geometric scenes from various 3D data formats. @@ -23,7 +25,7 @@ This chapter provides some examples of how 3D formats can be imported into MeVis You can also use the `SoSceneWriter` module to export your 3D scenes from MeVisLab in a number of output formats. ## File Formats -The Assimp-Lib currently supports the following [file formats](https://assimp-docs.readthedocs.io/en/v5.1.0/about/introduction.html): +The assimp library currently supports the following [file formats](https://assimp-docs.readthedocs.io/en/v5.1.0/about/introduction.html): * 3D Manufacturing Format (.3mf) * Collada (.dae, .xml) diff --git a/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md b/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md index c952ad633..dd683adb8 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md +++ b/mevislab.github.io/content/tutorials/thirdparty/assimp/assimpexample1.md @@ -8,28 +8,30 @@ tags: ["Beginner", "Tutorial", "assimp", "3D", "3D Printing", "stl"] menu: main: identifier: "assimpexample1" - title: "Open a 3D file and save the file or 3D scene as *.stl file for 3D printing." + title: "3D Printing in MeVisLab" weight: 862 parent: "assimp" --- + # Example 1: 3D Printing in MeVisLab {{< youtube "82ysCYNTyso">}} ## Introduction -This example uses the assimp library to load a 3D file and save the file as \*.stl for 3D printing. +This example uses the *assimp* library to load a 3D file and save the file as *.stl* for 3D printing. + +## Steps to Do -## Steps to do -### Develop your network -Add the modules `SoSceneLoader`, `SoBackground` and `SoExaminerViewer` to your workspace and connect them as seen below. +### Develop Your Network +Add the modules `SoSceneLoader`, `SoBackground`, and `SoExaminerViewer` to your workspace and connect them as seen below. ![Example Network](images/tutorials/thirdparty/assimp_example1.png "Example Network") -### Open the 3D file +### Open the 3D File Select the file *vtkCow.obj* from MeVisLab demo data directory. Open `SoExaminerViewer` and inspect the scene. You will see a 3D cow. {{}} -In case you cannot see the cow, it might be located outside your current camera location. Trigger the field *rescanScene* in case the cow is not visible. +In the case you cannot see the cow, it might be located outside your current camera location. Trigger the field *rescanScene* in the case the cow is not visible. 
{{}} ![Cow in SoExaminerViewer](images/tutorials/thirdparty/vtkCow.png "Cow in SoExaminerViewer") @@ -38,36 +40,36 @@ Add a `SoSphere` to the workspace and connect it to your viewer. Define the *Rad ![Cow and Sphere in SoExaminerViewer](images/tutorials/thirdparty/CowAndSphere.png "Cow and Sphere in SoExaminerViewer") -You can also define a material for your sphere but what we wanted to show is: You can use the loaded 3D files in MeVisLab Open Inventor Scenes. +You can also define a material for your sphere but what we wanted to show is: You can use the loaded 3D files in MeVisLab Open Inventor scenes. ![Cow and red Sphere in SoExaminerViewer](images/tutorials/thirdparty/CowAndSphere_red.png "Cow and red Sphere in SoExaminerViewer") -### Save your scene as \*.stl file for 3D Printing -Add a `SoSceneWriter` module to your workspace. The `SoExaminerViewer` has a hidden output which can be shown on pressing {{}}. Connect the `SoSceneWriter` to the output. +### Save Your Scene as *.stl* File for 3D Printing +Add a `SoSceneWriter` module to your workspace. The `SoExaminerViewer` has a hidden output that can be shown on pressing {{}}. Connect the `SoSceneWriter` to the output. -Name your output \*.stl file and select *Stl Ascii* as output format so that we can inspect the result afterwards. +Name your output *.stl* file and select *Stl Ascii* as output format, so that we can inspect the result afterward. ![SoSceneWriter](images/tutorials/thirdparty/SoSceneWriter.png "SoSceneWriter") {{}} -The `SoSceneWriter` can save node color information when saving in Inventor (ASCII or binary) or in VRML format. The `SoSceneWriter` needs to be attached to a `SoWEMRenderer` that renders in *ColorMode:NodeColor*. +The `SoSceneWriter` can save node color information when saving in Open Inventor (ASCII or binary) or in VRML format. The `SoSceneWriter` needs to be attached to a `SoWEMRenderer` that renders in *ColorMode:NodeColor*. There are [tools](https://www.patrickmin.com/meshconv/) to convert from at least VRML to STL available for free. {{}} -Write your Scene and open the resulting file in your preferred editor. As an alternative, you can also open the file in an \*.stl file reader like Microsoft 3D-Viewer. +Write your scene and open the resulting file in your preferred editor. As an alternative, you can also open the file in an *.stl* file reader like Microsoft 3D Viewer. ![Microsoft 3D-Viewer](images/tutorials/thirdparty/Microsoft_3D_Viewer.png "Microsoft 3D-Viewer") -### Load the file again -For loading your \*.stl file, you can use a `SoSceneLoader` and a `SoExaminerViewer`. +### Load the File Again +For loading your *.stl* file, you can use a `SoSceneLoader` and a `SoExaminerViewer`. {{}} -More information about the \*.stl format can be found [here](https://en.wikipedia.org/wiki/STL_(file_format)) +More information about the *.stl* format can be found [here](https://en.wikipedia.org/wiki/STL_(file_format)) {{}} ![SoSceneLoader](images/tutorials/thirdparty/SoSceneLoader_2.png "SoSceneLoader") ## Summary -* MeVisLab is able to load and write many different 3D file formats including *.stl format for 3D Printing. -* Inventor Scenes can be saved by using a `SoExaminerViewer` together with a `SoSceneWriter` +* MeVisLab is able to load and write many different 3D file formats including *.stl* format for 3D printing. +* Open Inventor scenes can be saved by using a `SoExaminerViewer` together with a `SoSceneWriter`. 
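Because the scene was exported as *Stl Ascii*, the file is plain text and easy to inspect programmatically as well. A small sketch that counts the triangles in the exported file (the filename is a placeholder):

```Python
# Count the triangles in the exported ASCII STL file; the filename is a placeholder.
stl_path = "cow_and_sphere.stl"

with open(stl_path) as stl_file:
    facet_count = sum(1 for line in stl_file if line.lstrip().startswith("facet normal"))

print(f"{stl_path} contains {facet_count} triangles")
```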
diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md index 532e159c1..7b236647a 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib.md @@ -12,9 +12,9 @@ menu: weight: 880 parent: "thirdparty" --- -# Matplotlib -Matplotlib, introduced by John Hunter in 2002 and initially released in 2003, is a comprehensive data visualization library in Python. It is widely used among the scientific world as it is easy to grasp for beginners and provides high quality plots and images that are widely customizable. +# Matplotlib +Matplotlib, introduced by John Hunter in 2002 and initially released in 2003, is a comprehensive data visualization library in Python. It is widely used in the scientific world as it is easy to grasp for beginners and provides high quality plots and images that are widely customizable. {{}} The documentation on Matplotlib along with general examples, cheat sheets, and a starting guide can be found [here](https://matplotlib.org/). @@ -24,10 +24,10 @@ As MeVisLab supports the integration of Python scripts, e.g., for test automatio In the following tutorial pages on Matplotlib, you will be shown how to create a module in MeVisLab that helps you plot greyscale distributions of single slices or defined sequences of slices of a DICOM image and layer the grayscale distributions of two chosen slices for comparison. -+ The module that is adapted during the tutorials is set up in the [Example 1: Module Setup](tutorials/thirdparty/matplotlib/modulesetup) tutorial. -+ The panel and two dimensional plotting functionality is added in [Example 2: 2D Plotting](tutorials/thirdparty/matplotlib/2dplotting). -+ In [Example 3: Slice Comparison](tutorials/thirdparty/matplotlib/slicecomparison), the comparison between two chosen slices is enabled by overlaying their grayscale distributions. -+ [Example 4: 3D Plotting](tutorials/thirdparty/matplotlib/3dplotting) adds an additional three-dimensional plotting functionality to the panel. +* The module that is adapted during the tutorials is set up in the [Example 1: Module Setup](tutorials/thirdparty/matplotlib/modulesetup) tutorial. +* The panel and two-dimensional plotting functionality is added in [Example 2: 2D Plotting](tutorials/thirdparty/matplotlib/2dplotting). +* In [Example 3: Slice Comparison](tutorials/thirdparty/matplotlib/slicecomparison), the comparison between two chosen slices is enabled by overlaying their grayscale distributions. +* [Example 4: 3D Plotting](tutorials/thirdparty/matplotlib/3dplotting) adds an additional three-dimensional plotting functionality to the panel. {{}} Notice that for the Matplotlib tutorials, the previous tutorial always works as a foundation for the following one. 
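The kind of plot these tutorials produce, a grayscale distribution, can also be illustrated standalone with a few lines of Matplotlib. In this minimal sketch, random data stands in for a DICOM slice; inside MeVisLab the values will come from the `Histogram` module instead:

```Python
import numpy as np
import matplotlib.pyplot as plt

# Random data standing in for one slice of a DICOM volume.
slice_values = np.random.randint(0, 4096, size=(512, 512)).ravel()

plt.hist(slice_values, bins=256)
plt.xlabel("Gray value")
plt.ylabel("Number of voxels")
plt.title("Grayscale distribution of a single slice")
plt.show()
```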
diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md index 80a58c604..98f806042 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/2Dplotting.md @@ -8,25 +8,24 @@ tags: ["Advanced", "Tutorial", "Matplotlib", "Visualization"] menu: main: identifier: "matplotlibexample2" - title: "Example 2: 2D Plotting" + title: "2D Plotting" weight: 882 parent: "matplotlib" --- + # Example 2: 2D Plotting ## Introduction - In this tutorial, we will equip the macro module we created in the [previous tutorial](tutorials/thirdparty/matplotlib/modulesetup) with a responsive and interactable panel to plot grayscale distributions of single slices as well as defined sequences of slices in 2D. -## Steps to do - -Open the module definition folder of your macro module and the related .script file in MATE. Then activate the Preview as shown below: +## Steps to Do +Open the module definition folder of your macro module and the related *.script* file in MATE. Then, activate the preview as shown below: ![MATE Preview](images/tutorials/thirdparty/Matplotlib7.png) -Drag the small Preview window to the bottom right corner of your window where it does not bother you. We will now be adding contents to be displayed there. +Drag the small preview window to the bottom right corner of your window where it does not bother you. We will now be adding contents to be displayed there. -Adding the following code to your .script file will open a panel window if the macro module is clicked. +Adding the following code to your *.script* file will open a panel window if the macro module is clicked. This new panel window contains a Matplotlib canvas where the plots will be displayed later on as well as two prepared boxes that we will add functions to in the next step. {{< highlight filename = "BaseNetwork.script">}} @@ -62,28 +61,28 @@ Window { } ``` {{}} -Letting a box expand on the x- or y-axis or adding an empty object do so contributes to the panel looking a certain way and helps the positioning of the elements. You can also try to vary the positioning by adding or removing expand-statements or moving boxes from a vertical to a horizontal alignment. Hover over the boxes in the preview to explore the concept. +Letting a box expand on the x- or y-axis or adding an empty object do so contributes to the panel looking a certain way and helps the positioning of the elements. You can also try to vary the positioning by adding or removing "expand" statements or moving boxes from a vertical to a horizontal alignment. Hover over the boxes in the preview to explore the concept. {{}} -You can click and hold onto a box to move it within the Preview. Your code will automatically be changed according to the new positioning. +You can click and hold onto a box to move it within the preview. Your code will automatically be changed according to the new positioning. {{}} **Now, we need to identify which module parameters we want to be able to access from the panel of our macro:** To plot a slice or a defined sequence of slices, we need to be able to set a start and an end. -Go back into your MeVisLab workspace, right click your `BaseNetwork` module and choose "Show Internal Network". +Go back into your MeVisLab workspace, right-click your `BaseNetwork` module and choose "Show Internal Network". 
![SubImage module info](images/tutorials/thirdparty/Matplotlib8.png "The `SubImage` module provides the option to set sequences of slices.") ![SubImage module panel](images/tutorials/thirdparty/Matplotlib9.PNG "The starting and ending slices of the sequence can be set in the module panel.") {{}} -To find out what the parameters are called, what type of values they contain and receive and what they refer to, you can right-click on them within the panel. +To find out what the parameters are called, what type of values they contain and receive, and what they refer to, you can right-click on them within the panel. {{}} We now know that we will need `SubImage.z` and `SubImage.sz` to define the start and end of a sequence. But there are a few other module parameters that must be set beforehand to make sure the data we extract to plot later is compareable and correct. -To do so, we will be defining a "setDefaults" function for our module. Open the .py file and add the code below. +To do so, we will be defining a "setDefaults" function for our module. Open the *.py* file and add the code below. {{< highlight filename = "BaseNetwork.py">}} ```Stan @@ -100,7 +99,7 @@ def setDefaults(): ctx.field("Histogram.curveStyle").value = 7 ``` {{}} -As it is also incredibly important, that the values of the parameters we are referencing are regularly updated, we will be setting some global values containing those values. +As it is also incredibly important that the values of the parameters we are referencing are regularly updated, we will be setting some global values containing those values. {{< highlight filename = "BaseNetwork.py">}} ```Stan @@ -116,7 +115,7 @@ def updateSlices(): bins = ctx.field("Histogram.binSize").value ``` {{}} -Make sure that the variable declarations as none are put above the "setDefaults" function and add the execution of the "updateSlices()" function into the "setDefaults" function, like so: +Make sure that the variable declarations as "None" are put above the "setDefaults" function and add the execution of the "updateSlices()" function into the "setDefaults" function, like so: {{< highlight filename = "BaseNetwork.py">}} ```Stan @@ -134,7 +133,7 @@ def setDefaults(): updateSlices() ``` {{}} -Now we are ensuring, that the "setDefaults" function and therefore also the "updateSlices" function are executed everytime the panel is opened by setting "setDefaults" as a wake up command. +Now we are ensuring that the "setDefaults" function and therefore also the "updateSlices" function are executed every time the panel is opened by setting "setDefaults" as a wakeup command. {{< highlight filename = "BaseNetwork.script">}} ```Stan @@ -145,7 +144,7 @@ Commands { } ``` {{}} -And we add field listeners, so that the field values that we are working with are updated everytime they are changed. +And we add field listeners, so that the field values that we are working with are updated every time they are changed. {{< highlight filename = "BaseNetwork.script">}} ```Stan @@ -215,8 +214,8 @@ If you followed all of the listed steps, your panel preview should look like thi ![Adapted macro panel](images/tutorials/thirdparty/Matplotlib10.PNG) We can now work on the functions that visualize the data as plots on the Matplotlib canvas. -You will have noticed how all of the buttons in the .script file have a command. Whenever that button is clicked, its designated command is executed. 
-However, for any of the functions referenced via command to work, we need one that ensures, that the plots are shown on the integrated Matplotlib canvas. We will start with that one. +You will have noticed how all of the buttons in the *.script* file have a command. Whenever that button is clicked, its designated command is executed. +However, for any of the functions referenced via "command" to work, we need one that ensures that the plots are shown on the integrated Matplotlib canvas. We will start with that one. {{< highlight filename = "BaseNetwork.py">}} ```Stan @@ -242,12 +241,11 @@ def getY(): return [float(s) for s in yValues] ``` {{}} -And lastly enable the plotting of a single slice as well as a sequence in 2D through our panel by adding the code below. +And lastly, enable the plotting of a single slice as well as a sequence in 2D through our panel by adding the code below. {{< highlight filename = "BaseNetwork.py">}} ```Stan def singleSlice2D(): - global endSlice lastSlice = endSlice ctx.field("SubImage.z").value = endSlice click2D() @@ -304,7 +302,7 @@ Notice how the bin size affects the plots appearance. You can download the .py file below if you want. {{< networkfile "/tutorials/thirdparty/matplotlib/BaseNetwork.py" >}} -### Summary -+ Functions are connected to fields of the panel via commands -+ The panel preview in MATE can be used to alter positioning of panel components without touching the code -+ An "expand" statement can help the positioning of components in the panel +## Summary +* Functions are connected to fields of the panel via commands. +* The panel preview in MATE can be used to change positioning of panel components without touching the code. +* An "expand" statement can help the positioning of components in the panel. diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md index 9b93cfe07..747a3e17e 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md +++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/3Dplotting.md @@ -8,19 +8,18 @@ tags: ["Advanced", "Tutorial", "Matplotlib", "Visualization"] menu: main: identifier: "matplotlibexample4" - title: "Example 4: 3D Plotting" + title: "3D Plotting" weight: 884 parent: "matplotlib" --- + # Example 4: 3D Plotting ## Introduction - In this tutorial, we will equip the macro module we created in the [Example 1: Module Setup](tutorials/thirdparty/matplotlib/modulesetup) and later on adapted by enabling it to plot grayscale distributions of single slices and sequences in 2D in [Example 2: 2D Plotting](tutorials/thirdparty/matplotlib/2dplotting) with a three-dimensional plotting functionality. -## Steps to do - -The fields and commands needed have already been prepared in the second tutorial. We will just have to modify our .py file a little to make them usable. Integrate the following code into your .py file and import numpy. +## Steps to Do +The fields and commands needed have already been prepared in the second tutorial. We will just have to modify our *.py* file a little bit to make them usable. Integrate the following code into your *.py* file and import numpy. {{< highlight filename = "BaseNetwork.py">}} ```Stan @@ -59,5 +58,5 @@ You cannot zoom into 3D plots on a Matplotlib canvas. 
Try changing the viewing a
 ![Single Slice 3D](images/tutorials/thirdparty/Matplotlib27.PNG)
 ![Single Slice 3D](images/tutorials/thirdparty/Matplotlib29.PNG)
 
-You can download the .py file below if you want.
+You can download the *.py* file below if you want.
 {{< networkfile "/tutorials/thirdparty/matplotlib/BaseNetwork3D.py" >}}
diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md
index 07a8e8f05..daef8cefd 100644
--- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md
+++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/modulesetup.md
@@ -8,7 +8,7 @@ tags: ["Beginner", "Tutorial", "Matplotlib", "Visualization"]
 menu:
   main:
     identifier: "matplotlibexample1"
-    title: "Example 1: Module Setup"
+    title: "Module Setup"
     weight: 881
     parent: "matplotlib"
 ---
@@ -16,13 +16,11 @@ menu:
 # Example 1: Module Setup
 
 ## Introduction
+To be able to access the data needed for our grayscale distribution plots, we need a network consisting of a module that imports DICOM data, a module that differentiates between slices, and another module that outputs histogram data.
 
-To be able to access the data needed for our grayscale distribution plots, we need a network consisting of a module that imports DICOM data, a module that differentiates between slices and another that ouputs histogram data.
-
-## Steps to do
-
-Open up your MeVisLab workspace and add the modules `LocalImage`, `SubImage` and `Histogram` to it.
-Connect the output of `LocalImage` to the input of `SubImage` and the output of `SubImage` with the input of `Histogram`.
+## Steps to Do
+Open up your MeVisLab workspace and add the modules `LocalImage`, `SubImage`, and `Histogram` to it.
+Connect the output of `LocalImage` to the input of `SubImage`, and the output of `SubImage` to the input of `Histogram`.
 If you feel like using a shortcut, you can also download the base network below and open it in your MeVisLab.
 
 Your finished network should look like this:
@@ -31,24 +29,24 @@ Your finished network should look like this:
 
 {{< networkfile "/tutorials/thirdparty/matplotlib/MatplotlibBaseNetwork.mlab" >}}
 
-### Excursion on the concept behind modules
-
+### Excursion on the Concept Behind Modules
 To be able to build on the foundation we just set, it can be useful to understand how modules are conceptualized:
-You will have noticed how, for every module, a panel will pop up if you double-click it. The modules panel contains all of its functional parameters and enables you, as the user, to change them within a graphical user interface (GUI). We will do something similar later on.
+You will have noticed that for every module, a panel pops up if you double-click it. The module's panel contains all of its functional parameters and enables you, as the user, to change them within a graphical user interface (GUI). We will do something similar later on.
+
 But where and how is a module panel created? To answer this question, please close the module panel and right-click on the module. A context menu will open, click on "Related Files".
 
 ![Context menu of the "SubImage" module](images/tutorials/thirdparty/Matplotlib2.png)
 
-As you can see, each module has a .script and a .py file, named like the module itself:
-+ The .script file is, where the appearance and structure of the module panel as well as their commands are declared.
-+ The .py file contains Python functions and methods, which are triggered by their referenced commands within the .script file. +As you can see, each module has a *.script* and a *.py* file named like the module itself: +* The *.script* file is where the appearance and structure of the module panel as well as their commands are declared. +* The *.py* file contains Python functions and methods, which are triggered by their referenced commands within the *.script* file. -Some modules also reference a .mlab file which usually contains their internal network as the module is a macro. +Some modules also reference an *.mlab* file, which usually contains their internal network as the module is a macro. **Let's continue with our module setup now:** If your network is ready, group it by right-clicking on your group's title and select "Grouping", then "Add To A New Group". -After, convert your grouped network into a macro module. +Afterward, convert your grouped network into a macro module. ![Converting to a macro](images/tutorials/thirdparty/Matplotlib3.png) {{}} @@ -57,7 +55,7 @@ Information on how to convert groups into macros can be found [here](tutorials/b Depending on whether you like to reuse your projects in other workspaces, it can make sense to convert them. We'd recommend to do so. -Now open the .script file of your newly created macro through the context menu. The file will be opened within MATE (MeVisLab Advanced Text Editor). Add this short piece of code into your .script file and make sure that the .script and the .py are named exactly the same as the module they are created for. +Now open the *.script* file of your newly created macro through the context menu. The file will be opened within MATE (MeVisLab Advanced Text Editor). Add this short piece of code into your *.script* file and make sure that the *.script* and the *.py* are named exactly the same as the module they are created for. {{< highlight filename = "BaseNetwork.script">}} ```Stan @@ -67,30 +65,17 @@ Now open the .script file of your newly created macro through the context menu. ``` {{}} -Click the "Reload" button that is located above the script for the .py file to be added into the module definition folder, then open it using the "Files" button on the same bar as demonstrated below: +Click the "Reload" button, which is located above the script for the *.py* file to be added into the module definition folder, then open it using the "Files" button on the same bar as demonstrated below: ![MATE](images/tutorials/thirdparty/Matplotlib5.png) {{}} The [MDL Reference](https://mevislabdownloads.mevis.de/docs/current/MeVisLab/Resources/Documentation/Publish/SDK/MDLReference/index.html) is a very handy tool for this and certainly also for following projects. {{}} -You have now created your own module and enabled the .script file (hence the GUI or panel later on) to access functions and methods written in the .py file. - -### Summary -+ Modules are defined by the contents within their definition folder. -+ A module consists of of a .script file that contains the panel configuration and a .py file containing methods that are accessed via the panel and provide functionalities (Interacting with the parameters of modules in the macros internal network). -+ A macro module's panel can access parameters of its internal modules. -+ The panel is layouted using MDL. 
-
-
-
-
-
-
-
-
-
-
-
-
+You have now created your own module and enabled the *.script* file (hence the GUI or panel later on) to access functions and methods written in the *.py* file.
+## Summary
+* Modules are defined by the contents within their definition folder.
+* A module consists of a *.script* file containing the panel configuration and a *.py* file containing functions that are accessed via the panel and provide its functionality (interacting with the parameters of modules in the macro's internal network).
+* A macro module's panel can access parameters of its internal modules.
+* The panel is laid out using MDL.
diff --git a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md
index 3e94558f2..96c1a6415 100644
--- a/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md
+++ b/mevislab.github.io/content/tutorials/thirdparty/matplotlib/slicecomparison.md
@@ -8,23 +8,23 @@ tags: ["Beginner", "Tutorial", "Matplotlib", "Visualization"]
 menu:
   main:
     identifier: "matplotlibexample3"
-    title: "Example 3: Slice Comparison"
+    title: "Slice Comparison"
     weight: 883
     parent: "matplotlib"
 ---
+
 # Example 3: Slice Comparison
 
 ## Introduction
-
 We will adapt the previously created macro module to be able to overlay two defined slices to compare their grayscale distributions.
-+ The module we are adapting has been set up in the [Example 1: Module Setup](tutorials/thirdparty/matplotlib/modulesetup) tutorial.
-+ The panel and two-dimensional plotting functionality has been added in [Example 2: 2D Plotting]
+* The module we are adapting has been set up in the [Example 1: Module Setup](tutorials/thirdparty/matplotlib/modulesetup) tutorial.
+* The panel and two-dimensional plotting functionality has been added in [Example 2: 2D Plotting]
 (tutorials/thirdparty/matplotlib/2dplotting).
-## Steps to do
-At first, we will extend the panel: Open your `BaseNetwork` macro module within an empty MeVisLab workspace and select the .script file from its related files.
+## Steps to Do
+First, we will extend the panel: Open your `BaseNetwork` macro module within an empty MeVisLab workspace and select the *.script* file from its related files.
 
-Add the following code into your .script file, between the "Single Slice" and the "Sequence" box.
+Add the following code into your *.script* file between the "Single Slice" and the "Sequence" box.
 {{< highlight filename = "BaseNetwork.script">}}
 ```Stan
@@ -43,11 +43,11 @@ Add the following code into your .script file, between the "Single Slice" and th
 }
 ```
 {{}}
-Your panel should now be altered to look like this:
+Your panel should now look like this:
 
 ![MATE Preview](images/tutorials/thirdparty/Matplotlib14.PNG)
 
-We will now add the "comparison" function, to give the "Plot" button in our "Comparison" box a purpose. To do so, change into your modules .py file and choose a cosy place for the following piece of code:
+We will now add the "comparison" function to give the "Plot" button in our "Comparison" box a purpose.
To do so, switch to your module's *.py* file and choose a cosy place for the following piece of code: {{< highlight filename = "BaseNetwork.py">}} ```Stan @@ -83,6 +83,6 @@ You should now be able to reproduce results like these: ![Comparison](images/tutorials/thirdparty/Matplotlib16.PNG) ![Comparison](images/tutorials/thirdparty/Matplotlib17.PNG) -### Summary -+ Grayscale distributions of two slices can be layered to compare them and make deviations noticeable +## Summary +* Grayscale distributions of two slices can be layered to compare them and make deviations noticeable. diff --git a/mevislab.github.io/content/tutorials/thirdparty/monai.md b/mevislab.github.io/content/tutorials/thirdparty/monai.md index 7d18bf734..e655ff3e9 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/monai.md +++ b/mevislab.github.io/content/tutorials/thirdparty/monai.md @@ -12,9 +12,11 @@ menu: weight: 877 parent: "thirdparty" --- + # MONAI {#monai} + ## Introduction -[MONAI](https://github.com/Project-MONAI "monai") (**M**edical **O**pen **N**etwork for **AI**) is an open-source framework built on [PyTorch](http://www.pytorch.org "pytorch"), designed for developing and deploying AI models in medical imaging. +[MONAI](https://github.com/Project-MONAI "monai") (**M**edical **O**pen **N**etwork for **AI**) is an open-source framework built on [PyTorch](http://www.pytorch.org "pytorch") designed for developing and deploying AI models in medical imaging. Created by [NVIDIA](https://docs.nvidia.com/monai/index.html "NVIDIA") and the Linux Foundation, it provides specialized tools for handling medical data formats like DICOM and NIfTI, along with advanced preprocessing, augmentation, and 3D image analysis capabilities. diff --git a/mevislab.github.io/content/tutorials/thirdparty/opencv.md b/mevislab.github.io/content/tutorials/thirdparty/opencv.md index 426011a34..1d5397ccf 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/opencv.md +++ b/mevislab.github.io/content/tutorials/thirdparty/opencv.md @@ -12,11 +12,13 @@ menu: weight: 852 parent: "thirdparty" --- + # Open Source Computer Vision Library (OpenCV) {#OpenCV} + ## Introduction [OpenCV](https://opencv.org/ "OpenCV") (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. This chapter provides some examples how to use OpenCV in MeVisLab. -## Other resources +## Other Resources You can find a lot of OpenCV examples and tutorials on their [website](https://docs.opencv.org/4.x/d9/df8/tutorial_root.html). \ No newline at end of file diff --git a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md index 615ae8148..8546a7e94 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md +++ b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample1.md @@ -1,24 +1,26 @@ --- -title: "Example 1: WebCam access with OpenCV" +title: "Example 1: Webcam Access with OpenCV" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false weight: 855 -tags: ["Advanced", "Tutorial", "OpenCV", "Python", "WebCam", "Macro", "Macro modules", "Global Macro"] +tags: ["Advanced", "Tutorial", "OpenCV", "Python", "Webcam", "Macro", "Macro modules", "Global Macro"] menu: main: identifier: "thirdpartyexample1" - title: "Access your webcam and use the live video in MeVisLab via OpenCV." + title: "Access Your Webcam and Use the Live Video in MeVisLab Via OpenCV." 
weight: 855 parent: "opencv" --- -# Example 1: WebCam access with OpenCV + +# Example 1: Webcam Access with OpenCV ## Introduction -In this example, we are using the `PythonImage` module and access your WebCam to show the video in a `View2D`. +In this example, we are using the `PythonImage` module and access your webcam to show the video in a `View2D`. + +## Steps to Do -## Steps to do -### Creating the network to be used for testing +### Creating the Network to be Used for Testing Add the modules to your workspace and connect them as seen below. ![Example Network ](images/tutorials/thirdparty/network_example1.png "Example Network ") @@ -29,15 +31,15 @@ The viewer is empty because the image needs to be set via Python scripting. More information about the `PythonImage` module can be found {{< docuLinks "/Standard/Documentation/Publish/ModuleReference/PythonImage.html" "here" >}} {{}} -### Create a macro module -Now you need to create a macro module from your network. You can either group your modules, create a local macro and convert it to a global macro module, or you use the Project Wizard and load your \*.mlab file. +### Create a Macro Module +Now you need to create a macro module from your network. You can either group your modules, create a local macro, and convert it to a global macro module, or you use the Project Wizard and load your *.mlab* file. {{}} -A tutorial how to create your own macro modules can be found in [Example 2.2: Global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules "Example 2.2: Global macro modules"). Make sure to add a Python file to your macro module. +A tutorial on how to create your own macro modules can be found in [Example 2.2: Global macro modules](tutorials/basicmechanisms/macromodules/globalmacromodules "Example 2.2: Global macro modules"). Make sure to add a Python file to your macro module. {{}} -### Add the View2D to your UI -Next, we need to add the `View2D` to a Window of your macro module. Right click on your module {{< mousebutton "right" >}}, open the context menu and select {{< menuitem "Related Files" ".script" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the \*.script file of your module. +### Add the View2D to Your UI +Next, we need to add the `View2D` to a Window of your macro module. Right-click on your module {{< mousebutton "right" >}}, open the context menu and select {{< menuitem "Related Files" ".script" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the *.script* file of your module. Add the following to your file: {{< highlight filename=".script" >}} @@ -82,12 +84,12 @@ Window { ``` {{}} -Now open the Python file of your module and define the commands to be called from the \*.script file: +Now open the Python file of your module and define the commands to be called from the *.script* file: {{< highlight filename=".py" >}} ```Python # from mevis import * -# Setup the interface for PythonImage module +# Set up the interface for PythonImage module def setupInterface(): pass @@ -95,11 +97,11 @@ def setupInterface(): def releaseCamera(_): pass -# Start capturing WebCam +# Start capturing webcam def startCapture(): pass -# Stop capturing WebCam +# Stop capturing webcam def stopCapture(): pass @@ -107,7 +109,7 @@ def stopCapture(): {{}} ### Use OpenCV -Your `View2D` is still empty, lets get access to the WebCam and show the video in your module. 
Open the Python file of your network again and enter the following code: +Your `View2D` is still empty, let's get access to the webcam and show the video in your module. Open the Python file of your network again and enter the following code: {{< highlight filename=".py" >}} ```Python # from mevis import * @@ -117,7 +119,7 @@ import OpenCVUtils _interfaces = [] camera = None -# Setup the interface for PythonImage module +# Set up the interface for PythonImage module def setupInterface(): global _interfaces _interfaces = [] @@ -128,25 +130,25 @@ def setupInterface(): def releaseCamera(_): pass -# Start capturing WebCam +# Start capturing webcam def startCapture(): pass -# Stop capturing WebCam +# Stop capturing webcam def stopCapture(): pass ``` {{}} -We now imported *cv2* and *OpenCVUtils* so that we can use them in Python. Additionally we defined a list of *_interfaces* and a *camera*. The import of *mevis* is not necessary for this example. +We now imported *cv2* and *OpenCVUtils*, so that we can use them in Python. Additionally, we defined a list of *_interfaces* and a *camera*. The import of *mevis* is not necessary for this example. The *setupInterfaces* function is called whenever the *Window* of your module is opened. Here we are getting the interface of the `PythonImage` module and append it to our global list. -### Access the WebCam +### Accessing the Webcam Now we want to start capturing the camera. {{< highlight filename=".py" >}} ```Python -# Start capturing WebCam +# Start capturing webcam def startCapture(): global camera if not camera: @@ -164,22 +166,22 @@ def updateImage(image): ``` {{}} -The *startCapture* function gets the camera from the *cv2* object if not already available. Then it calls the current MeVisLab network context and creates a timer which calls a *grabImage* function every 0.1 seconds. +The *startCapture* function gets the camera from the *cv2* object if not already available. Then, it calls the current MeVisLab network context and creates a timer that calls a *grabImage* function every 0.1 seconds. -The *grabImage* function reads an image from the *camera* and calls *updateImage*. The interface from the `PythonImage` module is used to set the image from the WebCam. The MeVisLab *OpenCVUtils* convert the OpenCV image to the MeVisLab image format *MLImage*. +The *grabImage* function reads an image from the *camera* and calls *updateImage*. The interface from the `PythonImage` module is used to set the image from the webcam. The MeVisLab *OpenCVUtils* converts the OpenCV image to the MeVisLab image format *MLImage*. Next, we define what happens if you click the *Pause* button. {{< highlight filename=".py" >}} ```Python ... -# Stop capturing WebCam +# Stop capturing webcam def stopCapture(): ctx.removeTimers() ... ``` {{}} -As we started a timer in our network context which updates the image every 0.1 seconds, we just stop this timer and the camera is paused. +As we started a timer in our network context that updates the image every 0.1 seconds, we just stop this timer and the camera is paused. In the end, we need to release the camera whenever you close the Window of your macro module. {{< highlight filename=".py" >}} @@ -197,13 +199,13 @@ def releaseCamera(_): ``` {{}} -Again, the timers are removed, all interfaces are reset and the camera is released. The light indicating WebCam usage should turn off. +Again, the timers are removed, all interfaces are reset, and the camera is released. The light indicating webcam usage should turn off. 
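The body of *releaseCamera* is only hinted at by the hunk above. A minimal sketch of the cleanup it describes could look like the following; it assumes the tutorial's globals *camera* and *_interfaces* and the network context *ctx*, and the way the interfaces are actually reset in the tutorial's file may differ.

```Python
# Hedged sketch of the cleanup described above, not the tutorial's exact listing.
def releaseCamera(_):
    global camera, _interfaces
    ctx.removeTimers()    # stop the periodic grabImage timer
    _interfaces = []      # drop our references to the PythonImage interface
    if camera:
        camera.release()  # free the webcam; its usage light should turn off
        camera = None
```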
-Opening your macro module via double-click {{< mousebutton "left" >}} should now allow to start and pause your WebCam video in MeVisLab. You can modify your internal network using a `Convolution` filter module or any other module available in MeVisLab for modifying the stream on the fly. +Opening your macro module via double-click {{< mousebutton "left" >}} should now allow to start and pause your webcam video in MeVisLab. You can modify your internal network using a `Convolution` filter module or any other module available in MeVisLab for modifying the stream on-the-fly. ## Summary -* The `PythonImage` module allows to use Python for defining the image output -* OpenCV can be used in MeVisLab via Python scripting -* Images and videos from OpenCV can be used in MeVisLab networks +* The `PythonImage` module allows to use Python for defining the image output. +* OpenCV can be used in MeVisLab via Python scripting. +* Images and videos from OpenCV can be used in MeVisLab networks. {{< networkfile "examples/thirdparty/example1/OpenCVExample.zip" >}} diff --git a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md index 835ef010d..9e988fd34 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md +++ b/mevislab.github.io/content/tutorials/thirdparty/opencv/thirdpartyexample2.md @@ -4,32 +4,34 @@ date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false weight: 857 -tags: ["Advanced", "Tutorial", "OpenCV", "Python", "WebCam", "Face Detection"] +tags: ["Advanced", "Tutorial", "OpenCV", "Python", "Webcam", "Face Detection"] menu: main: identifier: "thirdpartyexample2" - title: "Enhance OpenCV WebCam example and build a face detection using MeVisLab, OpenCV and Python." + title: "Enhance OpenCV Webcam Example and Build a Face Detection Using MeVisLab, OpenCV, and Python" weight: 857 parent: "opencv" --- + # Example 2: Face Detection with OpenCV ## Introduction -This example uses the OpenCV WebCam Python script and adds a basic face detection. +This example uses the OpenCV webcam Python script and adds a basic face detection. {{}} The Python code used in this example has been taken from [Towards Data Science](https://towardsdatascience.com/face-detection-in-2-minutes-using-opencv-python-90f89d7c0f81). {{}} -## Steps to do +## Steps to Do + ### Open Example 1 Add the macro module developed in [Example 1](tutorials/thirdparty/opencv/thirdpartyexample1) to your workspace. -### Download trained classifier XML file -Initially you need to download the trained classifier XML file. It is available in the [OpenCV GitHub repository](https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml). Save the file somewhere and remember the path for later usage in Python. +### Download Trained Classifier XML File +Initially, you need to download the trained classifier XML file. It is available in the [OpenCV GitHub repository](https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml). Save the file somewhere and remember the path for later usage in Python. -### Extend Python file -Right click on your module {{< mousebutton "right" >}}, open the context menu and select {{< menuitem "Related Files" ".py" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the Python file of your module. 
+### Extend Python File +Right-click on your module {{< mousebutton "right" >}}, open the context menu, and select {{< menuitem "Related Files" ".py" >}}. The text editor {{< docuLinks "/Resources/Documentation/Publish/SDK/MeVisLabManual/ch26.html" "MATE">}} opens. You can see the Python file of your module. You have to load the previously downloaded XML file first. {{< highlight filename=".py" >}} @@ -73,12 +75,12 @@ def releaseCamera(_): ``` {{}} -Opening your macro module and pressing *Start* should now open your WebCam stream and an additional OpenCV window which shows a blue rectangle around a detected face. +Opening your macro module and pressing *Start* should now open your webcam stream and an additional OpenCV window, which shows a blue rectangle around a detected face. ![Face Detection in MeVisLab using OpenCV](images/tutorials/thirdparty/bigbang.png "Face Detection in MeVisLab using OpenCV") ## Summary -This is just one example for using OpenCV in MeVisLab. You will find lots of other examples and tutorials online, we just wanted to show one possibility. +* This is just one example for using OpenCV in MeVisLab. You will find lots of other examples and tutorials online, we just wanted to show one possibility. {{}} You can download the Python file [here](examples/thirdparty/example2/FaceDetection.py) diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch.md index 8cbe07768..ffa308ebe 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch.md @@ -12,7 +12,9 @@ menu: weight: 870 parent: "thirdparty" --- + # PyTorch {#pytorch} + ## Introduction [PyTorch](http://www.pytorch.org "pytorch") is a machine learning framework based on the Torch library, used for applications such as Computer Vision and Natural Language Processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. @@ -28,11 +30,12 @@ We are not explaining PyTorch itself. These tutorials are examples for how to in {{}} ## Available Tutorials + ### Install PyTorch by Using the PythonPip Module The first example shows how to install *torch* and *torchvision* by using the MeVisLab module `PythonPip`. This module can be used to install Python packages not integrated into MeVisLab. -### Use Trained PyTorch Networks in MeVisLab -In this example, we are using a pre-trained network from [torch.hub](https://pytorch.org/hub/) to generate an AI based image overlay of a brain parcellation map. +### Use Pretrained PyTorch Networks in MeVisLab +In this example, we are using a pretrained network from [torch.hub](https://pytorch.org/hub/) to generate an AI based image overlay of a brain parcellation map. ### Segment Persons in Webcam Videos The second tutorial adapts the [Example 2: Face Detection with OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2/ "Example 2: Face Detection with OpenCV") to segment a person shown in a webcam stream. The network has been taken from [torchvision](https://pytorch.org/vision/stable/index.html). 
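Before moving on to the PyTorch examples, a note on the face detection example above: the detection step itself is largely elided by the hunks. A hedged sketch of the classic Haar-cascade loop it is based on (following the linked Towards Data Science article; the classifier path matches the one used later in this patch, everything else is illustrative) might be:

```Python
import cv2

# Path to the previously downloaded classifier file (example location)
face_cascade = cv2.CascadeClassifier("C:/tmp/haarcascade_frontalface_default.xml")

def detect_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in faces:
        # draw a blue rectangle around each detected face (OpenCV uses BGR)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    return frame
```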
diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md index 46de8467a..9a4ce6908 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample1.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Installing PyTorch using the PythonPip module" +title: "Example 1: Installing PyTorch Using the PythonPip Module" date: 2023-05-16 status: "OK" draft: false @@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "PyTorch", "Python", "PythonPip", "AI"] menu: main: identifier: "pytorchexample1" - title: "Installing PyTorch using the PythonPip module." + title: "Installing PyTorch Using the PythonPip Module" weight: 871 parent: "pytorch" --- + # Example 1: Installing PyTorch using the PythonPip module ## Introduction @@ -27,8 +28,9 @@ The module either allows to install packages into the global MeVisLab installati Installing additional Python packages into MeVisLab by using the `PythonPip` module requires administrative rights if you do not install into a user package. In addition to that, the installed packages are removed when uninstalling MeVisLab. {{}} -### Steps to do -#### The PythonPip module +### Steps to Do + +#### The PythonPip Module Add a `PythonPip` module to your workspace. ![PythonPip module](images/tutorials/thirdparty/pytorch_example1_1.png "PythonPip module") @@ -37,19 +39,19 @@ Double-click {{< mousebutton "left" >}} the module and inspect the panel. ![PythonPip panel](images/tutorials/thirdparty/pytorch_example1_2.png "PythonPip panel") -The panel shows all currently installed Python packages including their version and the MeVisLab package they are saved in. You can see a warning that the target package is set to read-only in case you are selecting a MeVisLab package. Changing to one of your user packages (see [Example 2.1: Package creation](tutorials/basicmechanisms/macromodules/package/) for details) makes the warning disappear. +The panel shows all currently installed Python packages including their version and the MeVisLab package they are saved in. You can see a warning that the target package is set to read-only in the case you are selecting a MeVisLab package. Changing to one of your user packages (see [Example 2.1: Package creation](tutorials/basicmechanisms/macromodules/package/) for details) makes the warning disappear. ![Select user package](images/tutorials/thirdparty/pytorch_example1_3.png "Select user package") {{}} -Additional Information on the `PythonPip` module can be found in [Example 4: Install additional Python packages via PythonPip module](tutorials/basicmechanisms/macromodules/pythonpip "PythonPip module"). +Additional information on the `PythonPip` module can be found in [Example 4: Install additional Python packages via PythonPip module](tutorials/basicmechanisms/macromodules/pythonpip "PythonPip module"). {{}} -#### Install torch and torchvision +#### Install Torch and Torchvision For our tutorials, we need to install *torch* and *torchvision*. Enter *torch torchvision* into the *Command* textbox and press *Install*. {{}} -We are using the CPU version of PyTorch for our tutorials as we want them to be as accessible as possible. If you happen to have a large GPU capacity (and CUDA support) you can also use the GPU version. 
You can install the necessary packages by using the PyTorch documentation available [here](https://pytorch.org/get-started/locally "PyTorch documentation"). +We are using the CPU version of PyTorch for our tutorials as we want them to be as accessible as possible. If you happen to have a large GPU capacity (and CUDA support), you can also use the GPU version. You can install the necessary packages by using the PyTorch documentation available [here](https://pytorch.org/get-started/locally "PyTorch documentation"). {{}} Continuing with CUDA support: @@ -61,23 +63,23 @@ torch torchvision --index-url https://download.pytorch.org/whl/cu117 {{}} {{}} -If you are behind a proxy server, you may have to set the **HTTP_PROXY** and **HTTPS_PROXY** environment variables to the hostname and port of your proxy. These are used by pip when accessing the internet. +If you are behind a proxy server, you may have to set the **HTTP_PROXY** and **HTTPS_PROXY** environment variables to the hostname and port of your proxy. These are used by *pip* when accessing the internet. -Alternatively you can also add a parameter to *pip install* command: *--proxy https://proxy:port* +Alternatively, you can also add a parameter to *pip install* command: *--proxy https://proxy:port* {{}} ![Install torch and torchvision](images/tutorials/thirdparty/pytorch_example1_4.png "Install torch and torchvision") -After clicking *Install*, the pip console output opens and you can follow the process of the installation. +After clicking *Install*, the *pip* console output opens and you can follow the process of the installation. ![Python pip output](images/tutorials/thirdparty/pytorch_example1_5.png "Python pip output") -After the installation was finished with exit code 0, you should see the new packages in the `PythonPip` module. +After the installation has finished with exit code 0, you should see the new packages in the `PythonPip` module. ![PyTorch installed](images/tutorials/thirdparty/pytorch_example1_6.png "PyTorch installed") ## Summary * *PyTorch* can be installed using the `PythonPip` module. -* There are different versions available (CPU and GPU) depending on the hardware that is used -* Additional steps have to be taken depending on the version one wishes to install -* The module displays newly installed packages as soon as the installation was successful +* There are different versions available (CPU and GPU) depending on the hardware that is used. +* Additional steps have to be taken depending on the version one wishes to install. +* The module displays newly installed packages as soon as the installation was successful. 
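If you want to double-check the installation outside of the `PythonPip` panel, a quick sanity check from a MeVisLab Python console could look like this (illustrative only, not part of the tutorial):

```Python
import torch
import torchvision

print("torch", torch.__version__, "torchvision", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())  # expected to be False for the CPU-only wheels
```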
diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md index 5a15afe01..c817bed18 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md @@ -1,5 +1,5 @@ --- -title: "Example 2: Brain Parcellation using PyTorch" +title: "Example 2: Brain Parcellation Using PyTorch" date: 2023-06-30 status: "OK" draft: false @@ -8,21 +8,24 @@ tags: ["Advanced", "Tutorial", "PyTorch", "Python", "PythonPip", "AI"] menu: main: identifier: "pytorchexample2" - title: "Brain Parcellation using PyTorch" + title: "Brain Parcellation Using PyTorch" weight: 873 parent: "pytorch" --- -# Example 2: Brain Parcellation using PyTorch + +# Example 2: Brain Parcellation Using PyTorch ## Introduction -In this example, you are using a pre-trained PyTorch deep learning model (HighRes3DNet) to perform a full brain parcellation. HighRes3DNet is a 3D residual network presented by Li et al. in [On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task](https://link.springer.com/chapter/10.1007/978-3-319-59050-9_28). +In this example, you are using a pretrained PyTorch deep learning model (HighRes3DNet) to perform a full brain parcellation. + +HighRes3DNet is a 3D residual network presented by Li et al. in [On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task](https://link.springer.com/chapter/10.1007/978-3-319-59050-9_28). -## Steps to do +## Steps to Do Add a `LocalImage` module to your workspace and select the file *MRI_Head.dcm*. For PyTorch it is necessary to resample the data to a defined size. Add a `Resample3D` module to the `LocalImage` and open the panel. Change *Keep Constant* to *Voxel Size* and define *Image Size* as 176, 217, 160. ![Resample3D module](images/tutorials/thirdparty/pytorch_example2_1.png "Resample3D module"). -The coordinates in PyTorch are also a little different than in MeVisLab, therefore you have to rotate the image. Add an `OrthoSwapFlip` module and connect it to the `Resample3D` module. Change *View* to *Other* and set *Orientation* to *YXZ*. Also check *Flip horizontal*, *Flip vertical* and *Flip depth*. *Apply* your changes. +The coordinates in PyTorch are also a little bit different than in MeVisLab; therefore, you have to rotate the image. Add an `OrthoSwapFlip` module and connect it to the `Resample3D` module. Change *View* to *Other* and set *Orientation* to *YXZ*. Also check *Flip horizontal*, *Flip vertical*, and *Flip depth*. *Apply* your changes. ![OrthoSwapFlip module](images/tutorials/thirdparty/pytorch_example2_2.png "OrthoSwapFlip module"). @@ -30,19 +33,19 @@ You can use the Output Inspector to see the changes on the images after applying {{< imagegallery 3 "images/tutorials/thirdparty/" "Original" "Resample3D" "OrthoSwapFlip">}} -Add an `OrthoView2D` module to your network and save the *\*.mlab* file. +Add an `OrthoView2D` module to your network and save the *.mlab* file. ![OrthoView2D module](images/tutorials/thirdparty/pytorch_example2_3.png "OrthoView2D module"). -## Integrate PyTorch and scripting -For integrating PyTorch and Python scripting, we need a `PythonImage` module. Add it to your workspace. Right-click {{< mousebutton "right" >}} on the `PythonImage` module and select {{< menuitem "Grouping" "Add to new Group...">}}. 
Right-click {{< mousebutton "right" >}} your new group and select {{< menuitem "Grouping" "Add to new Group...">}}. Name your new local macro *DemoAI*, select a directory for your project and leave all settings as default. +## Integrate PyTorch and Scripting +For integrating PyTorch and Python scripting, we need a `PythonImage` module. Add it to your workspace. Right-click {{< mousebutton "right" >}} on the `PythonImage` module and select {{< menuitem "Grouping" "Add to new Group...">}}. Right-click {{< mousebutton "right" >}} your new group and select {{< menuitem "Grouping" "Add to new Group...">}}. Name your new local macro *DemoAI*, select a directory for your project, and leave all settings as default. -Our new module does not provide an input or output. +Our new module does not provide any input or output. ![DemoAI local macro](images/tutorials/thirdparty/pytorch_example2_4.png "DemoAI local macro"). -### Adding an interface to the local macro -Right-click {{< mousebutton "right" >}} the local macro and select {{< menuitem "Related Files" "DemoAI.script">}}. MATE opens showing the *\*.script* file of our module. Add an input *Field* of type *Image*, an output *Field* using the *internalName* of the output of our `PythonImage` and a *Trigger* to start the segmentation. +### Adding an Interface to the Local Macro +Right-click {{< mousebutton "right" >}} the local macro and select {{< menuitem "Related Files" "DemoAI.script">}}. MATE opens showing the *.script* file of our module. Add an input *Field* of type *Image*, an output *Field* using the *internalName* of the output of our `PythonImage`, and a *Trigger* to start the segmentation. You should also already add a Python file in the *Commands* section. @@ -70,20 +73,19 @@ In MATE, right-click {{< mousebutton "right" >}} the Project Workspace and add a ![Project Workspace](images/tutorials/thirdparty/pytorch_example2_5.png "Project Workspace"). -Change to MeVisLab IDE, right-click {{< mousebutton "right" >}} the local macro and select {{< menuitem "Reload Definition">}}. Your new input and output interface are now available and you can connect images to your module. +Switch back to MeVisLab IDE, right-click {{< mousebutton "right" >}} the local macro, and select {{< menuitem "Reload Definition">}}. Your new input and output interface is now available and you can connect images to your module. ![DemoAI local macro with interfaces](images/tutorials/thirdparty/pytorch_example2_6.png "DemoAI local macro with interfaces"). -### Extend your network - -We want to show the segmentation results as an overlay on the original image. Add a `SoView2DOverlayMPR` module and connect it to your `DemoAI` macro. Connect the output of the `SoView2DOverlayMPR` to a `SoGroup`. We also need a lookup table for the colors to be used for the overlay. We already prepared a *\*.xml* file you can simply use. Download the [lut.xml](examples/thirdparty/pytorch2/lut.xml) file and save it in your current working directory of the project. +### Extend Your Network +We want to show the segmentation results as an overlay on the original image. Add a `SoView2DOverlayMPR` module and connect it to your `DemoAI` macro. Connect the output of the `SoView2DOverlayMPR` to a `SoGroup`. We also need a lookup table for the colors to be used for the overlay. We already prepared an *.xml* file you can simply use. Download the [lut.xml](examples/thirdparty/pytorch2/lut.xml) file and save it in your current working directory of the project. 
-Add a `LoadBase` module and connect it to a `SoMLLUT` module. The `SoMLLUT` needs to be connected to the `SoGroup` so that it is applied to our segmentation results. +Add a `LoadBase` module and connect it to a `SoMLLUT` module. The `SoMLLUT` needs to be connected to the `SoGroup`, so that it is applied to our segmentation results. ![Final network](images/tutorials/thirdparty/pytorch_example2_7.png "Final network"). {{}} -If your PC is equipped with less than 16GBs of RAM/working memory we recommend to add a `SubImage` module between the `OrthoSwapFlip` and the `Resample3D` module. You should configure less slices in z-direction to prevent your system from running out of memory. +If your PC is equipped with less than 16 GB of RAM, we recommend adding a `SubImage` module between the `OrthoSwapFlip` and the `Resample3D` module. You should configure fewer slices in the z-direction to prevent your system from running out of memory. ![SubImage module](images/tutorials/thirdparty/pytorch_example2_7b.png "SubImage module"). {{}} @@ -92,10 +94,10 @@ Inspect the output of the `LoadBase` module in the Output Inspector to see if th ![LUT in LoadBase](images/tutorials/thirdparty/pytorch_example2_8.png "LUT in LoadBase"). -### Write Python script -You can now execute the pre-trained PyTorch network on your image. Right-click {{< mousebutton "right" >}} the local macro and select {{< menuitem "Related Files" "DemoAI.script">}}. The Python function is supposed to be called whenever the *Trigger* is touched. +### Write Python Script +You can now execute the pretrained PyTorch network on your image. Right-click {{< mousebutton "right" >}} the local macro and select {{< menuitem "Related Files" "DemoAI.script">}}. The Python function is supposed to be called whenever the *Trigger* is touched. -Add the following code to your Commands section: +Add the following code to your *Commands* section: {{< highlight filename="DemoAI.script" >}} ```Stan @@ -145,11 +147,11 @@ def onStart(): {{}} {{}} -When executing your script for the first time, you will get a ScriptError message in MeVisLab console. This only happens because the file of the trained network is missing and downloaded initially. You can ignore the message. +When executing your Python script for the first time, you will get a ScriptError message in the MeVisLab console. This only happens because the trained network file is missing and is downloaded on the first run. You can ignore the message. {{}} {{}} -The script uses the CPU, in case you want to use CUDA, you can replace the line *device = torch.device("cpu")* with: *device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')* +The script uses the CPU; if you want to use CUDA, you can replace the line *device = torch.device("cpu")* with: *device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')* {{}} The function does the following: @@ -159,19 +161,19 @@ The function does the following: * Load and prepare AI model * Set output image to module output -## Execute the segmentation -Change alpha value of your `SoView2DOverlayMPR` to have a better visualization of the results. +## Execute the Segmentation +Change the alpha value of your `SoView2DOverlayMPR` to have a better visualization of the results. -Change to MeVisLab IDE and select your module `DemoAI`. In *Module Inspector* click *Trigger* for *start* and wait a little until you can see the results. +Switch back to the MeVisLab IDE and select your module `DemoAI`.
In *Module Inspector*, click *Trigger* for *start* and wait a little bit until you can see the results. ![Final result](images/tutorials/thirdparty/pytorch_example2_9.png "Final result"). -Without adding a `SubImage` the segmentation results should look like this: +Without adding a `SubImage`, the segmentation results should look like this: ![Results](images/tutorials/thirdparty/pytorch_example2_10.png "Results"). ## Summary -* Pre-trained PyTorch networks can be used directly in MeVisLab via `PythonImage` module +* Pretrained PyTorch networks can be used directly in MeVisLab via `PythonImage` module. {{< networkfile "examples/thirdparty/pytorch2/DemoAI.zip" >}} diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md index 790f2d9d8..ea57c7492 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample3.md @@ -1,5 +1,5 @@ --- -title: "Example 3: Segment persons in webcam videos" +title: "Example 3: Segment Persons in Webcam Videos" date: 2023-05-16 status: "OK" draft: false @@ -8,43 +8,44 @@ tags: ["Advanced", "Tutorial", "PyTorch", "Python", "PythonPip", "AI", "Segmenta menu: main: identifier: "pytorchexample3" - title: "Segment persons in webcam videos." + title: "Segment Persons in Webcam Videos" weight: 875 parent: "pytorch" --- -# Example 3: Segment persons in webcam videos + +# Example 3: Segment Persons in Webcam Videos ## Introduction -This tutorial is based on [Example 2: Face Detection with OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2 "Example 2: Face Detection with OpenCV"). You can re-use some of the scripts already developed in the other tutorial. +This tutorial is based on [Example 2: Face Detection with OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2 "Example 2: Face Detection with OpenCV"). You can reuse some of the scripts already developed in the other tutorial. -## Steps to do +## Steps to Do Add the macro module developed in the previous example to your workspace. -![WebCamTest module](images/tutorials/thirdparty/pytorch_example3_1.png "WebCamTest module") +![WebcamTest module](images/tutorials/thirdparty/pytorch_example3_1.png "WebcamTest module") -Open the internal network of the module via middle mouse button {{< mousebutton "middle" >}} and right click {{< mousebutton "right" >}} on the tab of the workspace showing the internal network. Select *Show Enclosing Folder*. +Open the internal network of the module via middle mouse button {{< mousebutton "middle" >}} and right-click {{< mousebutton "right" >}} on the tab of the workspace showing the internal network. Select *Show Enclosing Folder*. ![Show Enclosing Folder](images/tutorials/thirdparty/pytorch_example3_2.png "Show Enclosing Folder") -The file browser opens showing the files of your macro module. Copy the *\*.mlab* file somewhere you can remember. +The file browser opens showing the files of your macro module. Copy the *.mlab* file somewhere you can remember. -### Create the macro module +### Create the Macro Module Open the the Project Wizard via {{< menuitem "File" "Run Project Wizard">}} and select *Macro Module*. Click *Run Wizard*. ![Project Wizard](images/tutorials/thirdparty/pytorch_example3_3.png "Project Wizard") -Define the module properties as shown below, though you can chose your own name. Click *Next*. 
+Define the module properties as shown below, although you can choose your own name. Click *Next*. ![Module Properties](images/tutorials/thirdparty/pytorch_example3_4.png "Module Properties") -Define the module properties and select the copied *\*.mlab* file. Make sure to add a Python file and click *Next*. +Define the module properties and select the copied *.mlab* file. Make sure to add a Python file and click *Next*. ![Macro Module Properties](images/tutorials/thirdparty/pytorch_example3_5.png "Macro Module Properties") Leave the module field reference as is and click *Create*. Close Project Wizard and select {{< menuitem "Extras" "Reload Module Database (Clear Cache)">}}. -### Re-use script and Python code -Open the script file of the `WebCamTest` module and copy the contents to your new PyTorch module. The result should be something like this: +### Script and Python Code +Open the script file of the `WebcamTest` module and copy the contents to your new PyTorch module. The result should be something like this: {{< highlight filename="PyTorchSegmentationExample.script" >}} ```Stan @@ -101,7 +102,7 @@ _interfaces = [] camera = None face_cascade = cv2.CascadeClassifier('C:/tmp/haarcascade_frontalface_default.xml') -# Setup the interface for PythonImage module +# Set up the interface for PythonImage module def setupInterface(): global _interfaces _interfaces = [] @@ -123,14 +124,14 @@ def grabImage(): def updateImage(image): _interfaces[0].setImage(OpenCVUtils.convertImageToML(image), minMaxValues = [0,255]) -# Start capturing WebCam +# Start capturing webcam def startCapture(): global camera if not camera: camera = cv2.VideoCapture(0) ctx.callWithInterval(0.1, grabImage) -# Stop capturing WebCam +# Stop capturing webcam def stopCapture(): ctx.removeTimers() @@ -148,8 +149,8 @@ def releaseCamera(_): You should now have the complete functionality of the [Example 2: Face Detection with OpenCV](tutorials/thirdparty/opencv/thirdpartyexample2 "Example 2: Face Detection with OpenCV"). -### Adapt the network -For *PyTorch*, we require some additional modules in our network. Open the network file via right-click {{< mousebutton "right" >}} and selecting {{< menuitem "Related Files" "PyTorchSegmentationExample.mlab" >}} of your module and add another `PythonImage` module. Connect a `Resample3D` and an `ImagePropertyConvert` module. +### Adapt the Network +For *PyTorch*, we require some additional modules in our network. Open the internal network of your module and add another `PythonImage` module. Connect a `Resample3D` and an `ImagePropertyConvert` module. In `Resample3D` module, define the *Image Size* 693, 520, 1. Change *VoxelSize* for all dimensions to 1. @@ -159,14 +160,14 @@ Open the Panel of the `ImagePropertyConvert` module and check *World Matrix*. ![ImagePropertyConvert](images/tutorials/thirdparty/pytorch_example3_9.png "ImagePropertyConvert") -Then add a `SoView2DOverlayMPR` module and connect it to the `ImagePropertyConvert` and the `View2D`. Change *Blend Mode* to *Blend*, *Alpha* to something between 0 and 1 and define a color for the overlay. +Then, add a `SoView2DOverlayMPR` module and connect it to the `ImagePropertyConvert` and the `View2D`. Change *Blend Mode* to *Blend*, *Alpha* to something between 0 and 1, and define a color for the overlay. ![SoView2DOverlayMPR](images/tutorials/thirdparty/pytorch_example3_8.png "SoView2DOverlayMPR") -Save the network file. +Save the internal network. 
-### Remove OpenCV specific code -We want to use PyTorch for segmentation, therefore you need to add all necessary imports. +### Remove OpenCV-specific Code +We want to use PyTorch for segmentation; therefore, you need to add all necessary imports. {{< highlight filename="PyTorchSegmentationExample.py" >}} ```Python @@ -179,7 +180,7 @@ import torch ``` {{}} -Additionally remove the *face_cascade* parameter from your Python code. This was necessary for detecting a face in OpenCV and is not necessary anymore in PyTorch. The only parameters you need here are: +Additionally, remove the *face_cascade* parameter from your Python code. This was necessary for detecting a face in OpenCV and is not necessary anymore in PyTorch. The only parameters you need here are: {{< highlight filename="PyTorchSegmentationExample.py" >}} ```Python @@ -188,7 +189,7 @@ camera = None ``` {{}} -You can also remove the OpenCV specific lines in *grabImage*. The function should look like this now: +You can also remove the OpenCV-specific lines in *grabImage*. The function should look like this now: {{< highlight filename="PyTorchSegmentationExample.py" >}} ```Python @@ -214,7 +215,7 @@ def releaseCamera(_): ``` {{}} -### Implement PyTorch segmentation +### Implement PyTorch Segmentation The first thing we need is a function for starting the camera. It closes the previous segmentation and calls the existing function *startCapture*. {{< highlight filename="PyTorchSegmentationExample.py" >}} @@ -227,7 +228,7 @@ def startWebcam(): ``` {{}} -As this function is not called in our User Interface, we need to update the \*.*script* file. Change the first Button to below script: +As this function is not called in our user interface, we need to update the *.script* file. Change the first *Button* to below script: {{< highlight filename="PyTorchSegmentationExample.script" >}} ```Stan @@ -238,7 +239,7 @@ Button { ``` {{}} -Now your new function *startWebcam* is called whenever touching the left button. As a next step, define a Python function *segmentSnapshot*. We are using a pre-trained network from torchvision. In case you want to use other PyTorch possibilities, you can find lots of examples on their [website](https://pytorch.org/tutorials/). +Now, your new function *startWebcam* is called whenever touching the left button. As a next step, define a Python function *segmentSnapshot*. We are using a pretrained network from Torchvision. In the case you want to use other PyTorch possibilities, you can find lots of examples on their [website](https://pytorch.org/tutorials/). {{< highlight filename="PyTorchSegmentationExample.py" >}} ```Python @@ -278,7 +279,7 @@ def segmentSnapshot(): ``` {{}} -In order to call this function, we have to change the command of the right button by adapting the *\*.script* file. +In order to call this function, we have to change the command of the right button by adapting the *.script* file. {{< highlight filename="PyTorchSegmentationExample.script" >}} ```Stan @@ -289,7 +290,7 @@ Button { ``` {{}} -In step 5 we selected the class *person*. Whenever you click *Segment Snapshot*, PyTorch will try to segment all persons in the video. +In step 5, we selected the class *person*. Whenever you click *Segment Snapshot*, PyTorch will try to segment all persons in the video. {{}} The following classes are available: @@ -315,7 +316,7 @@ The following classes are available: * tvmonitor {{}} -The final result of the segmentation should be a semi-transparent red overlay of the persons segmented in your webcam stream. 
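For orientation, the kind of torchvision call that *segmentSnapshot* builds on could look like the hedged sketch below. The concrete model and preprocessing used by the tutorial are not shown in the hunks and may differ; class index 15 is *person* in the Pascal VOC ordering used by these models.

```Python
import torch
import torchvision
from torchvision import transforms

# Assumed model choice; the tutorial only states that a pretrained
# torchvision segmentation network is used.
model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def person_mask(rgb_frame):
    # rgb_frame: HxWx3 uint8 numpy array, e.g. a grabbed webcam snapshot
    batch = preprocess(rgb_frame).unsqueeze(0)
    with torch.no_grad():
        out = model(batch)["out"][0]       # shape: [21 classes, H, W]
    return (out.argmax(0) == 15).numpy()   # True where the model sees a person
```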
+The final result of the segmentation should be a semitransparent red overlay of the persons segmented in your webcam stream. ![Final Segmentation result](images/tutorials/thirdparty/pytorch_example3_10.png "Final Segmentation result") diff --git a/mevislab.github.io/content/tutorials/visualization.md b/mevislab.github.io/content/tutorials/visualization.md index 7b34d346f..39398caa3 100644 --- a/mevislab.github.io/content/tutorials/visualization.md +++ b/mevislab.github.io/content/tutorials/visualization.md @@ -8,13 +8,14 @@ tags: ["Beginner", "Tutorial", "Visualization", "2D", "3D"] menu: main: identifier: "visualization" - title: "Examples for different possibilities of visualizations in MeVisLab." + title: "Examples for Different Possibilities of Visualizations in MeVisLab" weight: 550 parent: "tutorials" --- + # Visualization in MeVisLab {#TutorialVisualization} -## Introduction +## Introduction Images and data objects can be rendered in 2D and 3D and interacted with in several ways using a set of tools available through MeVisLab. In this chapter in particular, we will focus on simple image interaction with two- and three-dimensional visualizations. @@ -23,13 +24,11 @@ Not only pixel- and voxel-based data, but also scene objects and 3D scenes can b {{}} ## View2D and View3D - -An easy way to display data and images in 2D and 3D is by using the Modules `View2D` and `View3D`. What can be done with these viewers? +An easy way to display data and images in 2D and 3D is by using the modules `View2D` and `View3D`. What can be done with these viewers? ![View2D and View3D](images/tutorials/visualization/V0.png "View2D and View3D") ### View2D - 1. Scroll through the slices using the mouse wheel {{< mousebutton "middle" >}} and/or middle mouse button {{< mousebutton "middle" >}}. 2. Change the contrast of the image by clicking the right mouse button {{< mousebutton "right" >}} and moving the mouse. @@ -45,7 +44,6 @@ The `View2DExtensions` module provides additional ways to interact with an image {{}} ### View3D - 1. Zoom in and out using the mouse wheel {{< mousebutton "middle" >}}. 2. Drag the 3D objects using the middle mouse button {{< mousebutton "middle" >}}. diff --git a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md index 8b1646148..3ef5c38bf 100644 --- a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md +++ b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md @@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "Visualization", "3D", "Volume Rendering", "Path menu: main: identifier: "pathtracer_example1" - title: "Comparison between Volume Rendering and MeVisLab Path Tracer" + title: "Comparison Between Volume Rendering and MeVisLab Path Tracer" weight: 578 parent: "visualization_example6" --- + # Example 6.1: Volume Rendering vs. Path Tracer {{< youtube "E0H87Cimu_M">}} @@ -30,8 +31,9 @@ The MeVis Path Tracer requires an NVIDIA graphics card with CUDA support. In ord As a first step for comparison, you are creating a 3D scene with two spheres using the already known volume rendering. ### Volume Rendering -#### Create 3D objects -Add three `WEMInitialize` modules for one *Cube* and two *Icosphere* to your workspace and connect each of them to a `SoWEMRenderer`. Set *instanceName* of the `WEMInitialize` modules to *Ccube*, *Sphere1*, and *Sphere2*.
Set *instanceName* of the `SoWEMRenderer` modules to and *RenderCube*, *RenderSphere1*, and *RenderSphere2*. +#### Create 3D Objects +Add three `WEMInitialize` modules for one *Cube* and two *Icosphere* to your workspace and connect each of them to a `SoWEMRenderer`. Set *instanceName* of the `WEMInitialize` modules to *Cube*, *Sphere1*, and *Sphere2*. Set *instanceName* of the `SoWEMRenderer` modules to *RenderCube*, *RenderSphere1*, and *RenderSphere2*. For *RenderSphere1*, define a *Diffuse Color* *yellow* and set *Face Alpha* to *0.5*. The *RenderCube* remains as is and the *RenderSphere2* is defined as *Diffuse Color* *red* and *Face Alpha* *0.5*. @@ -79,10 +81,10 @@ Finally, you should group all modules belonging to your volume rendering. ![Volume Rendering Network](images/tutorials/visualization/pathtracer/Example1_8.png "Volume Rendering Network") ### Path Tracing -For the Path Tracer, you can just re-use our 3D objects from volume rendering. This helps us to compare the rendering results. +For the Path Tracer, you can just reuse your 3D objects from the volume rendering. This helps to compare the rendering results. #### Rendering -Path Tracer modules fully integrate into MeVisLab Open Inventor, therefore the general principles and the necessary modules are not completely different. Add a `SoGroup` module to your workspace and connect it to your 3D objects from `SoWEMRenderer`. A `SoBackground` as in volume rendering network is not necessary but you add a `SoPathTracerMaterial` and connect it to the `SoGroup`. You can leave all settings as default for now. +Path Tracer modules fully integrate into MeVisLab Open Inventor; therefore, the general principles and the necessary modules are not completely different. Add a `SoGroup` module to your workspace and connect it to your 3D objects from `SoWEMRenderer`. A `SoBackground` as in the volume rendering network is not necessary, but you add a `SoPathTracerMaterial` and connect it to the `SoGroup`. You can leave all settings as default for now. ![Path Tracer Material](images/tutorials/visualization/pathtracer/Example1_9.png "Path Tracer Material") @@ -111,7 +113,7 @@ Finally, group your Path Tracer modules to another group named *Path Tracing*. ![Side by Side](images/tutorials/visualization/pathtracer/Example1_15.png "Side by Side") ### Share the Same Camera -Finally, you want to have the same camera perspective in both viewers so that you can see the differences. Add a `SoPerspectiveCamera` module to your workspace and connect it to the volume rendering and the Path Tracer network. The Path Tracer network additionally needs a SoGroup, see below for connection details. You have to toggle *detectCamera* in both of your `SoCameraInteraction` modules in order to synchronize the view for both `SoRenderArea` viewers. +Finally, you want to have the same camera perspective in both viewers, so that you can see the differences. Add a `SoPerspectiveCamera` module to your workspace and connect it to the volume rendering and the Path Tracer network. The Path Tracer network additionally needs a SoGroup, see below for connection details. You have to toggle *detectCamera* in both of your `SoCameraInteraction` modules in order to synchronize the view for both `SoRenderArea` viewers.
![Camera Synchronization](images/tutorials/visualization/pathtracer/Example1_16.png "Camera Synchronization") diff --git a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md index def5d9119..0208395b8 100644 --- a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md +++ b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample2.md @@ -1,5 +1,5 @@ --- -title: "Example 6.2: Visualization using Path Tracer" +title: "Example 6.2: Visualization Using Path Tracer" date: "2024-01-02" status: "open" draft: false @@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "Visualization", "3D", "Path Tracer"] menu: main: identifier: "pathtracer_example2" - title: "Comparison between Volume Rendering and MeVisLab Path Tracer" + title: "Comparison Between Volume Rendering and MeVisLab Path Tracer" weight: 579 parent: "visualization_example6" --- + # Example 6.2: Visualization Using SoPathTracer ## Introduction @@ -25,6 +26,7 @@ The MeVis Path Tracer requires an NVIDIA graphics card with CUDA support. In ord {{}} ## Steps to Do + ### Develop Your Network Download and open the [images](examples/visualization/example6/Volume_1.mlimage) by using a `LocalImage` module. Connect it to a `View2D` to visually inspect its contents. @@ -40,15 +42,15 @@ It's essential to consistently position the `SoPathTracer `module on the right s ![SoPathTracerVolume & SoPathTracer](images/tutorials/visualization/pathtracer/V6.2_2.png "SoPathTracerVolume & SoPathTracer") -If you check your `SoExaminerViewer` you will see a black box. We need to define a LUT for the gray values first. +If you check your `SoExaminerViewer`, you will see a black box. We need to define a LUT for the gray values first. ![SoExaminerViewer](images/tutorials/visualization/pathtracer/V6.2_3.png "SoExaminerViewer") -Now connect the `SoLUTEditor` module to your `SoPathTracerVolume` as illustrated down below and you will be able to see the knee. +Now, connect the `SoLUTEditor` module to your `SoPathTracerVolume` as illustrated down below and you will be able to see the knee. ![SoLUTEditor](images/tutorials/visualization/pathtracer/SoLUTEditor1.png "SoLUTEditor") -Add a `MinMaxScan` module to the `LocalImage` module and open the panel. The module shows the minimal and maximal gray values of the volume. +Add a `MinMaxScan` module to the `LocalImage` module and open the panel. The module shows the actual minimal and maximal gray values of the volume. Open the panel of the `SoLUTEditor` module and define Range between *0* and *2047* as calculated by the `MinMaxScan`. @@ -56,9 +58,9 @@ Open the panel of the `SoLUTEditor` module and define Range between *0* and *204 Next, add lights to your scene. Connect a `SoPathTracerAreaLight` and a `SoPathTracerBackgroundLight` module to your `SoExaminerViewer` to improve scene lighting. -The `SoPathTracerAreaLight` module provides a physically based area light that illuminates the scene of a `SoPathTracer`. The lights can be rectangular or discs and have an area, color, and intensity. They can be positioned with spherical coordinates around the bounding box of the renderer, or they can be position in world or camera space. +The `SoPathTracerAreaLight` module provides a physically-based area light that illuminates the scene of a `SoPathTracer`. The lights can be rectangular or discs and have an area, color, and intensity. 
They can be positioned with spherical coordinates around the bounding box of the renderer, or they can be positioned in world or camera space. -The `SoPathTracerBackgroundLight` module provides a background light for the `SoPathTracer`. It supports setting a top, middle, and bottom color or alternatively, it support image based lighting (IBL) using a sphere or cube map. Only one background light can be active for a given `SoPathTracer`. +The `SoPathTracerBackgroundLight` module provides a background light for the `SoPathTracer`. It supports setting a top, middle, and bottom color or, alternatively, image-based lighting (IBL) using a sphere or cube map. Only one background light can be active for a given `SoPathTracer`. ![Lights](images/tutorials/visualization/pathtracer/Lights2.png "Lights") @@ -72,13 +74,13 @@ Either define your desired colors for your LUT in the *Editor* tab manually as s ![LUT](images/tutorials/visualization/pathtracer/V6.2_LUTandSoExaminerViewer.png "LUT") #### Load Example LUT from File -As an alternative, you can replace the `SoLUTEditor` with a `LUTLoad` and load this [XML file](examples/visualization/example6/LUT_Original.xml) to use a pre-defined LUT. +As an alternative, you can replace the `SoLUTEditor` with a `LUTLoad` and load this [XML file](examples/visualization/example6/LUT_Original.xml) to use a predefined LUT. ![LUTLoad](images/tutorials/visualization/pathtracer/LUTLoad.png "LUTLoad") Now, let's enhance your rendering further by using the `SoPathTracerMaterial` module. This module provides essential material properties for geometry and volumes within the `SoPathTracer` scene. -Add a `SoPathTracerMaterial` module to your `SoPathTracerVolume`. Open it's panel and navigate to the tab *Surface Brdf*. Change the *Diffuse* color for altering the visual appearance of surfaces. The *Diffuse* color determines how light interacts with the surface, influencing its overall color and brightness. Set *Specular* to *0.5*, *Shininess* to *1.0*, and *Specular Intensity* to *0.5*. +Add a `SoPathTracerMaterial` module to your `SoPathTracerVolume`. Open its panel and navigate to the tab *Surface Brdf*. Change the *Diffuse* color for altering the visual appearance of surfaces. The *Diffuse* color determines how light interacts with the surface, influencing its overall color and brightness. Set *Specular* to *0.5*, *Shininess* to *1.0*, and *Specular Intensity* to *0.5*. ![SoPathTracerMaterial](images/tutorials/visualization/pathtracer/SoPathTracerMaterial_Knee.png "SoPathTracerMaterial") @@ -97,13 +99,13 @@ Load the [Bones mask](examples/visualization/example6/edited_Bones.mlimage) by u ![Bones mask](images/tutorials/visualization/pathtracer/View2D_Bones.png "Bones mask") -Start by disabling the visibility of your first volume by toggeling `SoPathTracerVolume` *Enabled* field to off. This helps improve the rendering of the bones itself and makes it easier to define colors for your LUT. +Start by disabling the visibility of your first volume by toggling the `SoPathTracerVolume` *Enabled* field to off. This helps to improve the rendering of the bones itself and makes it easier to define colors for your LUT. #### Load Example LUT from File Once again, you can decide to define the LUT yourself in `SoLUTEditor` module, or load a prepared XML File in a `LUTLoad` module as provided [here](examples/visualization/example6/LUT_Bones.xml).
#### Manually Define LUT -If you want to define your own LUT, connect a `MinMaxScan` module to your `LocalImage1` and define Range for the `SoLUTEditor` as already done before. +If you want to define your own LUT, connect a `MinMaxScan` module to your `LocalImage1` and define the *Range* for the `SoLUTEditor` as already done before. ![MinMaxScan of Bones mask](images/tutorials/visualization/pathtracer/MinMaxScan_Bones.png "MinMaxScan of Bones mask") @@ -143,7 +145,7 @@ The resulting rendering in `SoExaminerViewer` might look different depending on ![Final Resul](images/tutorials/visualization/pathtracer/FinalResult2.png "Final Result with Enhanced Visualization") ## Summary: -* You can achieve photorealistic renderings using `SoPathTracer` and associated modules. +* You can generate photorealistic renderings using `SoPathTracer` and associated modules. * Render volumes efficiently in `SoPathTracer` scenes with `SoPathTracerVolume`, enabling diverse rendering options, LUT adjustments, lights and material enhancements. * Enhance your scene's look by adjusting materials and colors interactively using `SoPathTracerMaterial` and `SoLUTEditor`. * Use lighting modules such as `SoPathTracerAreaLight` and `SoPathTracerBackgroundLight` to optimize the illumination of your rendered scenes. diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md index fe42918e9..cfec591a1 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample1.md @@ -1,5 +1,5 @@ --- -title: "Example 1: Synchronous view of two images" +title: "Example 1: Synchronous View of Two Images" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,17 +8,19 @@ tags: ["Beginner", "Tutorial", "Visualization", "2D"] menu: main: identifier: "visualization_example1" - title: "Use the SynchroView2D module for visualizing the same slice(s) of two images" + title: "Use the SynchroView2D Module for Visualizing the Same Slice(s) of Two Images" weight: 555 parent: "visualization" --- + # Example 1: Synchronous View of Two Images {#VisualizationExample1} + ## Introduction In this example we like to use the module `SynchroView2D` to be able to inspect two different images simultaneously. The module `SynchroView2D` provides two 2D viewers that are synchronized. -As in Tutorial [Chapter 1 - Basic Mechanics of MeVisLab](tutorials/basicmechanisms/#TutorialParameterConnection), the processed and the unprocessed image can be displayed simultaneously. Scrolling through one image automatically changes the slices of both viewers, so slices with the same slice number are shown in both images. +As in tutorial [Chapter 1 - Basic Mechanics of MeVisLab](tutorials/basicmechanisms/#TutorialParameterConnection), the processed and the unprocessed image can be displayed simultaneously. Scrolling through one image automatically changes the slices of both viewers, so slices with the same slice number are shown in both images. The difference is that we are now using an already existing module named `SynchroView2D`. @@ -29,12 +31,13 @@ The `SynchroView2D` module is explained {{< docuLinks "/Standard/Documentation/P {{}} ## Steps to Do + ### Develop Your Network Start the example by adding the module `LocalImage` to your workspace to load the example image *Tumor1_Head_t1.small.tif*. Next, add and connect the following modules as shown. 
![SynchroView2D](images/tutorials/visualization/V1_01.png "SynchroView2D Viewer") ## Summary -* Multiple images can be synchronized by the `SynchroView2D` module +* Multiple images can be synchronized by the `SynchroView2D` module. {{< networkfile "examples/visualization/example1/VisualizationExample1.mlab" >}} diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md index e27b5ceaa..fb5448747 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample2.md @@ -1,5 +1,5 @@ --- -title: "Example 2: Creating a magnifier" +title: "Example 2: Creating a Magnifier" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,10 +8,11 @@ tags: ["Beginner", "Tutorial", "Visualization", "2D", "Magnifier"] menu: main: identifier: "visualization_example2" - title: "Display an image in different viewing directions and mark locations in the image for creating a Magnifier from a rectangle" + title: "Display an Image in Different Viewing Directions and Mark Locations in the Image for Creating a Magnifier From a Rectangle" weight: 560 parent: "visualization" --- + # Example 2: Creating a Magnifier {#TutorialVisualizationExample2} {{< youtube "lfq_TkWOuCo" >}} ## Introduction Medical images are typically displayed in three different viewing directions (see image): coronal, axial, and sagittal. -Using the viewer `OrthoView2D` you are able to decide which viewing direction you like to use. In addition to that, you have the opportunity to display all three orthogonal viewing directions simultaneously. Here, we like to display an image of the head in all three viewing directions and mark positions in the image. +Using the viewer `OrthoView2D`, you are able to decide which viewing direction you like to use. In addition to that, you have the opportunity to display all three orthogonal viewing directions simultaneously. Here, we like to display an image of the head in all three viewing directions and mark positions in the image. ![Body Planes](images/tutorials/visualization/V2_00.png "Body Planes") ## Steps to Do + ### Develop Your Network -In this example, use the module `LocalImage` to load the example *image MRI_Head.tif*. Now, connect the module `OrthoView2D` with the loaded image. The image is displayed in three orthogonal viewing directions. The yellow marker displays the same voxel in all three images. You can scroll through the slices in all three viewing directions. +In this example, use the module `LocalImage` to load the example image *MRI_Head.tif*. Now, connect the module `OrthoView2D` to the loaded image. The image is displayed in three orthogonal viewing directions. The yellow marker displays the same voxel in all three images. You can scroll through the slices in all three viewing directions. {{}} @@ -37,7 +39,7 @@ In the case your image is black, change the *Window* and *Center* values by movi ### SoView2DPosition Next, we add the module `SoView2DPosition` (an Open Inventor module). -The module enables the selection of an image position via mouse click {{< mousebutton "left" >}}. The last clicked location in the viewer is marked in white. If you now scroll through the slices, both, the last clicked location and the current image location are shown. +The module enables the selection of an image position via mouse click {{< mousebutton "left" >}}.
The last clicked location in the viewer is marked in white. If you now scroll through the slices, both the last clicked location and the current image location are shown. ![SoView2DPosition](images/tutorials/visualization/V2_02.png "SoView2DPosition") @@ -46,19 +48,18 @@ Instead of points, we like to mark areas. In order to do that, replace the modul ![SoView2DRectangle](images/tutorials/visualization/V2_03.png "SoView2DRectangle") -### Using a rectangle to build a magnifier - -We like to use the module `SoView2DRectangle` to create a magnifier. In order to do that add the following modules to your workspace and connect them as shown below. We need to connect the module `SoView2DRectangle` to a hidden input connector of the module `SynchroView2D`. To be able to do this, click on your workspace and afterwards press {{< keyboard "SPACE" >}}. You can see that `SynchroView2D` possesses Open Inventor input connectors. You can connect your module `SoView2DRectangle` to one of these connectors. +### Using a Rectangle to Build a Magnifier +We like to use the module `SoView2DRectangle` to create a magnifier. In order to do that, add the following modules to your workspace and connect them as shown below. We need to connect the module `SoView2DRectangle` to a hidden input connector of the module `SynchroView2D`. To be able to do this, click on your workspace and afterward press {{< keyboard "SPACE" >}}. You can see that `SynchroView2D` possesses Open Inventor input connectors. You can connect your module `SoView2DRectangle` to one of these connectors. ![Hidden Inputs of SynchroView2D](images/tutorials/visualization/V2_05.png "Hidden Inputs of SynchroView2D") ![Connect Hidden Inputs of SynchroView2D](images/tutorials/visualization/V2_06.png "Connect Hidden Inputs of SynchroView2D") -In addition to that, add two types of the module `DecomposeVector3` to your network. In MeVisLab, different data types exist, for example, vectors, or single variables, which contain the data type float or integer. This module can be used to convert field values of type vector (in this case a vector consisting of three entries) into three single coordinates. You will see in the next step why this module can be useful. +In addition to that, add two instances of the module `DecomposeVector3` to your network. In MeVisLab, different data types exist, for example, vectors or single variables containing a float or integer value. This module can be used to convert field values of type vector (in this case, a vector consisting of three entries) into three single coordinates. You will see in the next step why this module can be useful. ![DecomposeVector3](images/tutorials/visualization/V2_07.png "DecomposeVector3") -We like to use the module `SubImage` to select a section of a slice, which is then displayed in the viewer. The idea is to display a magnified section of one slice next to the whole slice in the module `SynchroView2D`. In order to do that, we need to tell the module `SubImage` which section to display in the viewer. The section is selected using the module `SoView2DRectangle`. As a last step, we need to transmit the coordinates of the chosen rectangle to the module `SubImage`. To do that, we will build some parameter connections. +We like to use the module `SubImage` to select a section of a slice, which is then displayed in the viewer. The idea is to display a magnified section of one slice next to the whole slice in the module `SynchroView2D`.
In order to do that, we need to tell the module `SubImage` which section to display in the viewer. The section is selected by using the module `SoView2DRectangle`. As a last step, we need to transmit the coordinates of the chosen rectangle to the module `SubImage`. To do that, we will build some parameter connections. ![SubImage](images/tutorials/visualization/V2_08.png "SubImage") @@ -66,7 +67,7 @@ Now, open the panels of the modules `SoView2DRectangle`, `DecomposeVector3`, and We rename the `DecomposeVector3` modules (press {{< keyboard "F2" >}} to do that) here for a better overview. -In the panel of the module `Rectangle` in the box Position you can see the position of the rectangle given in two 3D vectors. +In the panel of the module `Rectangle` in the box *Position*, you can see the position of the rectangle given in two 3D vectors. We like to use the modules `DecomposeVector3` to extract the single x, y, and z values of the vector. For that, create a parameter connection from the field *Start World Pos* to the vector of the module we named `StartWorldPos_Rectangle` and create a connection from the field *End World Pos* to the vector of the module `EndWorldPos_Rectangle`. The decomposed coordinates can now be used for further parameter connections. @@ -75,7 +76,7 @@ Open the panel of the module `SubImage`. Select the *Mode World Start & End* (*Image Axis Aligned*). Enable the function *Auto apply*. {{}} -Make sure to also check *Auto-correct for negative subimage extents* so that you can draw rectangles from left to right and from right to left. +Make sure to also check *Auto-correct for negative subimage extents*, so that you can draw rectangles from left to right and from right to left. ![World Coordinates](images/tutorials/visualization/V2_10.png "World Coordinates") diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md index 1280e7d99..2dd12c3d3 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample3.md @@ -8,10 +8,11 @@ tags: ["Beginner", "Tutorial", "Visualization", "2D", "Overlays", "Masks"] menu: main: identifier: "visualization_example3" - title: "How to blend images and masks over each other " + title: "How to Blend Images and Masks Over Each Other" weight: 565 parent: "visualization" --- + # Example 3: How to Blend Images Over Each Other {#TutorialVisualizationExample3} {{< youtube "e8iFGp-St0c" >}} ## Introduction In this example we will show you how to blend a 2D image over another one. With the help of the module `SoView2DOverlay` we will create an overlay, which allows us to highlight all bones in the scan. ## Steps to Do + ### Develop Your Network Start this example by adding the shown modules, connecting the modules to form a network and loading the example image *Bone.tiff*. @@ -33,7 +35,7 @@ The `Threshold` module is explained {{< docuLinks "/Standard/Documentation/Publi [//]: <> (MVL-653) -The module `Threshold` compares the value of each voxel of the image with a customized threshold. In this case: If the value of the chosen voxel is lower than the threshold, the voxel value is replaced by the minimum value of the image. If the value of the chosen voxel is higher than the threshold, the voxel value is replaced by the maximum value of the image.
With this, we can construct a binary image that divides the image into bone (white) and no bone (black). +The module `Threshold` compares the value of each voxel of the image with a customizable threshold. In this case: If the value of the chosen voxel is lower than the threshold, the voxel value is replaced by the minimum value of the image. If the value of the chosen voxel is higher than the threshold, the voxel value is replaced by the maximum value of the image. With this, we can construct a binary image that divides the image into bone (white) and no bone (black). Select the output of the `Threshold` module to see the binary image in the Output Inspector. diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md index c1bb03cfc..25cd442b3 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample4.md @@ -1,5 +1,5 @@ --- -title: "Example 4: Display 2D images in Open Inventor SoRenderArea" +title: "Example 4: Display 2D Images in Open Inventor SoRenderArea" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,10 +8,11 @@ tags: ["Beginner", "Tutorial", "Visualization", "2D", "3D", "Open Inventor", "Sn menu: main: identifier: "visualization_example4" - title: "Example for displaying images in Open Inventor SoRenderArea" + title: "Example for Displaying Images in Open Inventor SoRenderArea" weight: 570 parent: "visualization" --- + # Example 4: Display Images Converted to Open Inventor Scene Objects {#TutorialVisualizationExample4} {{< youtube "WaD6zuvVNek" >}} @@ -28,6 +29,7 @@ More information about the SoView2D family can be found {{< docuLinks "/Resource [//]: <> (MVL-653) ## Steps to Do + ### Develop Your Network We will start the example by creating an overlay again. Add the following modules and connect them as shown. Select a *Threshold* and a *Comparison Operator* for the module `Threshold` as in the previous example. The module `SoView2D` converts the image into a scene object. The image as well as the overlay is rendered and displayed by the module `SoRenderArea`. @@ -38,13 +40,13 @@ You may have noticed that you are not able to scroll through the slices. This fu ![View2DExtensions](images/tutorials/visualization/V4_02.png "View2DExtensions") -### Add Screenshot Gallery to Viewing Area -With the help of the module `SoRenderArea` you can record screenshots and movies. Before we do that, open {{< menuitem "View" "Views" "Screenshot Gallery" >}}, to add the Screenshot Gallery to your viewing area. +### Add Screenshot Gallery to Views Area +With the help of the module `SoRenderArea` you can record screenshots and movies. Before we do that, open {{< menuitem "View" "Views" "Screenshot Gallery" >}} to add the Screenshot Gallery to your views area. ![Screenshot Gallery](images/tutorials/visualization/V4_03.png "Screenshot Gallery") ### Create Screenshots and Movies -If you now select your favorite slice of the bone in the Viewer `SoRenderArea` and press {{< keyboard "F11" >}}, a screenshot is taken and displayed in the Screenshot Gallery. For recording a movie, press {{< keyboard "F9" >}} to start the movie and {{< keyboard "F10" >}} to stop recording. You can find the movie in the Screenshot Gallery.
+If you now select your favorite slice of the bone in the viewer `SoRenderArea` and press {{< keyboard "F11" >}}, a screenshot is taken and displayed in the Screenshot Gallery. For recording a movie, press {{< keyboard "F9" >}} to start the movie and {{< keyboard "F10" >}} to stop recording. You can find the movie in the Screenshot Gallery. ![Record Movies and Snapshots](images/tutorials/visualization/V4_05.png "Record Movies and Snapshots") diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md index 8e37af46b..507d0dfe0 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample5.md @@ -1,5 +1,5 @@ --- -title: "Example 5: Volume rendering and interactions" +title: "Example 5: Volume Rendering and Interactions" date: 2022-06-15T08:56:33+02:00 status: "OK" draft: false @@ -8,10 +8,11 @@ tags: ["Beginner", "Tutorial", "Visualization", "3D", "Volume Rendering", "GVR", menu: main: identifier: "visualization_example5" - title: "Volume rendering with lookup table (LUT) rotating automatically." + title: "Volume Rendering With Lookup Table (LUT) Rotating Automatically" weight: 575 parent: "visualization" --- + # Example 5: Volume Rendering and Interactions {#TutorialVisualizationExample6} {{< youtube "QViPqXs2LHc" >}} ## Introduction In this example we like to convert a scan of a head into a 3D scene object. The scene object allows you to add some textures, interactions, and animations. ## Steps to Do + ### Develop Your Network Implement the following network and open the image *$(DemoDataPath)/BrainMultiModal/ProbandT1.tif*. @@ -34,7 +36,7 @@ Additional information about Volume Rendering can be found here: {{< docuLinks " [//]: <> (MVL-653) ### Change LUT -We like to add a surface color to the head. In order to do that, we add the module `SoLUTEditor`, which adds an RGBA lookup table (LUT) to the scene. Connecting this module to `SoExaminerViewer` left to the connection between `SoGVRRenderer` and `SoExaminerViewer` (remember the order in which Open Inventor modules are executed) allows you to set the surface color of the head. +We like to add a surface color to the head. In order to do that, we add the module `SoLUTEditor`, which adds an RGBA lookup table (LUT) to the scene. Connecting this module to `SoExaminerViewer` to the left of the connection between `SoGVRRenderer` and `SoExaminerViewer` (remember the order in which Open Inventor modules are executed) allows you to set the surface color of the head. ![SoLUTEditor](images/tutorials/visualization/V6_02.png "SoLUTEditor") @@ -56,9 +58,9 @@ Open the panels of both modules and select the axis the image should rotate arou ![Time and Angle](images/tutorials/visualization/V6_06.png "Time and Angle") ## Exercises -1. Change rotation speed -2. change rotation angle -3. Pause rotation on pressing {{< keyboard "SPACE" >}} +1. Change the rotation speed. +2. Change the rotation angle. +3. Pause the rotation on pressing {{< keyboard "SPACE" >}} (see the scripting sketch below). ## Summary * The module `SoGVRVolumeRenderer` renders paged images like DICOM files in a GVR.
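+As a starting point for the exercises, the animation can also be controlled from Python scripting. The sketch below is hedged: it assumes the rotation is driven by a `SoElapsedTime` module (named *SoElapsedTime*) whose *speed* and *on* fields control the animation; if your network uses different modules or field names, adapt the sketch accordingly.
+
+```python
+# Hedged sketch for the exercises: control the rotation from the scripting console.
+# The module name "SoElapsedTime" and its "speed"/"on" fields are assumptions;
+# adapt them to whatever actually drives the rotation in your network.
+
+# Exercise 1: change the rotation speed.
+ctx.field("SoElapsedTime.speed").value = 2.0  # e.g. rotate twice as fast
+
+# Exercise 3: pause/resume the rotation, e.g. bound to SPACE in a macro module panel.
+def toggle_rotation():
+    on_field = ctx.field("SoElapsedTime.on")
+    on_field.value = not on_field.value
+```
+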
diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md index bdf359e3a..ac212a979 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md @@ -8,10 +8,11 @@ tags: ["Advanced", "Tutorial", "Visualization", "3D", "Volume Rendering", "Path menu: main: identifier: "visualization_example6" - title: "Example usage of the MeVis Path Tracer" + title: "Example Usage of the MeVis Path Tracer" weight: 577 parent: "visualization" --- + # Example 6: MeVis Path Tracer