⚡ Improve Engine Performance and Implementation #578
Draft
shaneahmed wants to merge 195 commits into `develop`
Conversation
- Use `pyproject.toml` for `bdist_wheel` configuration
- Improve `Engines` performance and implementation
Codecov Report

❌ Patch coverage is incomplete. Additional details and impacted files:

**Coverage Diff**

|          | develop | #578   | +/-    |
|----------|---------|--------|--------|
| Coverage | 99.41%  | 95.50% | -3.92% |
| Files    | 72      | 80     | +8     |
| Lines    | 9540    | 10361  | +821   |
| Branches | 1267    | 1360   | +93    |
| Hits     | 9484    | 9895   | +411   |
| Misses   | 29      | 429    | +400   |
| Partials | 27      | 37     | +10    |

☔ View full report in Codecov by Sentry.
- Refactor `engines_abc.py`
for more information, see https://pre-commit.ci
## Summary of Changes

### Major Additions

- **Dask Integration:**
  - Added `dask` as a dependency and integrated Dask arrays and lazy computation throughout the engine and patch predictor code.
  - Added Dask-based merging, chunking, and memory-aware processing for large images and WSIs.
- **Zarr Output Support:**
  - Added support for saving model predictions and intermediate results directly to Zarr format.
  - New CLI options and internal logic for Zarr output, including memory thresholding and chunked writes (see the sketch after this summary).
- **SemanticSegmentor Engine:**
  - Added a new `SemanticSegmentor` engine with Dask/Zarr support and new test coverage (`test_semantic_segmentor.py`).
  - Added a CLI entrypoint for `semantic_segmentor` and removed the old `semantic_segment` CLI.
- **Enhanced CLI and Config:**
  - Added CLI options for memory threshold, unified worker options, and improved mask handling.
  - Updated YAML configs and sample data for new models and test images.
- **Utilities and Validation:**
  - Added utility functions for minimal dtype casting, patch/stride validation, and improved error handling (e.g., `DimensionMismatchError`).
  - Improved annotation store conversion for Dask arrays and Zarr-backed outputs.
- **Changes to kwargs:**
  - Added `memory-threshold`.
  - Unified `num-loader-workers` and `num-postproc-workers` into `num-workers`.
  - Removed `cache_mode`, as cache mode is now handled automatically.

---

### Major Removals/Refactors

- **Removed Old CLI and Redundant Code:**
  - Deleted the old `semantic_segment.py` CLI and replaced it with `semantic_segmentor.py`.
  - Removed legacy cache mode and patch prediction Zarr store tests.
- **Refactored Model and Dataset APIs:**
  - Unified and simplified model inference APIs to always return arrays (not dicts) for batch outputs.
  - Refactored dataset classes to enforce patch shape validation and remove legacy "mode" logic.
- **Test Cleanup:**
  - Removed or updated tests that relied on old APIs or cache mode.
  - Refactored test assertions for new output types and Dask array handling.
- **API Consistency:**
  - Standardized function and argument names across engines, CLI, and utility modules.
  - Updated docstrings and type hints for clarity and consistency.

---

### Notable File Changes

- **New:**
  - `tiatoolbox/cli/semantic_segmentor.py`
  - `tests/engines/test_semantic_segmentor.py`
- **Removed:**
  - `tiatoolbox/cli/semantic_segment.py`
  - Old cache mode and patch Zarr store tests
- **Heavily Modified:**
  - `engine_abc.py`, `patch_predictor.py`, `semantic_segmentor.py`
  - CLI modules and test suites
  - Dataset and utility modules for Dask/Zarr compatibility

---

### Impact

- Enables scalable, parallel, and memory-efficient inference and output saving for large images.
- Simplifies downstream analysis by supporting Zarr as a native output format.
- Lays the groundwork for further Dask-based optimizations in TIAToolbox.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
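The chunked Zarr writes mentioned above can be pictured with a short sketch. This is only an illustration of the general Dask-to-Zarr pattern, not the engine's actual code; the array shape, chunk size, and output path are made up for the example.

```python
import dask.array as da

# Hypothetical merged prediction map; in the engine this would come from
# patch inference and Dask-based merging rather than from random data.
predictions = da.random.random((16384, 16384, 3), chunks=(1024, 1024, 3))

# to_zarr streams the array to disk chunk by chunk, so peak memory stays
# bounded by the chunk size rather than the full array size.
predictions.to_zarr("sample_output.zarr", overwrite=True)

# The store can later be re-opened lazily for downstream analysis.
lazy = da.from_zarr("sample_output.zarr")
print(lazy.shape, lazy.chunks)
```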
## 🚀 Summary

This PR introduces a new **[GrandQC Tissue Detection Model](https://github.com/cpath-ukk/grandqc/tree/main)** for digital pathology quality control and integrates an **EfficientNet-based encoder architecture** into the TIAToolbox framework.

---

## ✨ Key Changes

- **New Model Architecture**
  - Added `grandqc.py`, implementing a UNet++ decoder with an EfficientNet encoder for tissue segmentation.
  - Includes preprocessing (JPEG compression + ImageNet normalization), postprocessing (argmin-based mask generation), and batch inference utilities.
- **EfficientNet Encoder**
  - Added `timm_efficientnet.py`, providing configurable EfficientNet encoders with dilation support and custom input channels.
- **Pretrained Model Config**
  - Updated `pretrained_model.yaml` to register `grandqc_tissue_detection_mpp10` with its associated IO configuration.
  - Corrected `IOSegmentorConfig` references and adjusted resolutions for SCCNN models.
- **Testing**
  - Added comprehensive unit tests for:
    - `GrandQCModel` functionality, preprocessing/postprocessing, and decoder blocks.
    - EfficientNet encoder utilities and scaling logic.

## Impact

- Enables high-resolution tissue detection for WSI quality control using state-of-the-art architectures.
- Improves flexibility for segmentation tasks with EfficientNet encoders.
- Enhances code quality and consistency through updated linting and formatting tools.

## Tasks

- [x] Re-host GrandQC model weights on TIA Hugging Face
- [x] Update `pretrained_model.yaml`
- [x] Update `requirements.txt`
- [x] Define GrandQC model architecture
- [x] Add example usage
- [x] Remove segmentation-models-pytorch dependency
- [x] Wait for response from GrandQC authors
- [x] Add tests
- [x] Tidy up

---------

Co-authored-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
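Since `grandqc_tissue_detection_mpp10` is registered as a pretrained model, a tissue-detection run could look roughly like the sketch below. This assumes the model is exposed through the `SemanticSegmentor` engine described elsewhere in this PR; the slide path, save directory, and exact `run()` arguments are illustrative only.

```python
from pathlib import Path

from tiatoolbox.models.engine.semantic_segmentor import SemanticSegmentor

# Hypothetical example: the engine and registered model name come from this PR,
# but the argument values shown here are placeholders.
segmentor = SemanticSegmentor(model="grandqc_tissue_detection_mpp10")
output = segmentor.run(
    images=[Path("slide.svs")],     # placeholder WSI path
    patch_mode=False,               # process the whole slide, not patches
    device="cuda",
    save_dir=Path("grandqc_output"),
    overwrite=True,
)
print(output)
```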
# 🚀 Summary

This PR introduces a new **`DeepFeatureExtractor` engine** to the TIAToolbox framework, enabling extraction of intermediate CNN feature representations from whole slide images (WSIs) or image patches. These features can be used for downstream tasks such as clustering, visualization, or training other models.

The update also includes:

- A **command-line interface (CLI)** for the new engine.
- Extended **CLI utilities** for flexible input/output configurations.
- Comprehensive **unit tests** covering patch-based and WSI-based workflows, multi-GPU support, and CLI functionality.
- Integration with TIAToolbox's model registry and CLI ecosystem.

---

## ✨ Key Features

### New Engine: `DeepFeatureExtractor`

- Extracts intermediate CNN features from WSIs or patches.
- Outputs feature embeddings and spatial coordinates in **Zarr** or **dict** format.
- Implements **memory-aware caching** for large-scale WSI processing.
- Compatible with:
  - TIAToolbox pretrained models.
  - Torchvision CNN backbones (e.g., ResNet, DenseNet, MobileNet).
  - **All timm architectures via `timm.list_models()`**, including Hugging Face-hosted models.
- Supports both **patch-mode** and **WSI-mode** workflows.

### CLI Integration

- Adds a `deep-feature-extractor` command to the TIAToolbox CLI.
- Supports options for:
  - Input/output paths and file types.
  - Model selection (`resnet18`, `efficientnet_b0`, timm-based backbones, etc.).
  - Patch extraction parameters (`patch_input_shape`, `stride_shape`, `input_resolutions`).
  - Batch size, device selection, memory threshold, and overwrite behavior.
  - Flexible JSON-based CLI options for resolutions and class mappings.

### Extended CLI Utilities

- New reusable options:
  - `--input-resolutions`, `--output-resolutions` (JSON list of dicts).
  - `--patch-input-shape`, `--stride-shape`, `--scale-factor`.
  - `--class-dict` for mapping class indices to names.
  - `--overwrite` and `--output-file` for fine-grained control.

### Unit Tests

- **Engine tests:**
  - Patch-based and WSI-based feature extraction.
  - Validation of Zarr outputs (features and coordinates).
  - Multi-GPU functionality.
- **Model compatibility:**
  - Tests with `CNNBackbone` and `TimmBackbone` models.
- **CLI tests:**
  - Single-file and parameterized runs.
  - Validation of JSON parsing for CLI options.

### Codebase Integration

- Registers `DeepFeatureExtractor` in `tiatoolbox.models` and the engine registry.
- Adds the CLI command in `tiatoolbox.cli.__init__.py`.
- Updates architecture utilities to support timm-based backbones and Hugging Face models.
- Introduces dictionaries for Torch and timm backbones (`torch_cnn_backbone_dict`, `timm_arch_dict`).
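A feature-extraction run with the new engine might look like the following sketch. The engine name and the Zarr output with feature/coordinate arrays come from this PR; the backbone choice, `run()` arguments, and dataset key names as written here are assumptions for illustration.

```python
from pathlib import Path

import zarr

from tiatoolbox.models.engine.deep_feature_extractor import DeepFeatureExtractor

# Hypothetical usage: resnet18 is one of the backbones listed for the CLI;
# paths and parameter values below are illustrative only.
extractor = DeepFeatureExtractor(model="resnet18", batch_size=32)
out_path = extractor.run(
    images=[Path("slide.svs")],        # placeholder WSI path
    patch_mode=False,
    device="cuda",
    save_dir=Path("features_output"),
    output_type="zarr",
)

# Inspect the stored embeddings and their patch coordinates
# (dataset names assumed for the example).
store = zarr.open(str(out_path), mode="r")
print(store["features"].shape, store["coordinates"].shape)
```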
# 🚀 Summary

This PR introduces a new **`NucleusDetector` engine** to the TIAToolbox framework, enabling detection of nuclei from whole slide images (WSIs) or image patches using models such as **`MapDe`** and **`SCCNN`**. It supersedes PR #538 by leveraging **`dask`** for efficient, parallelized post-processing and result merging.

---

## ✨ Key Features

### New Engine: `NucleusDetector`

- Detects nuclei centroids and probabilities from WSIs or patches.
- Produces a **detection map** aligned with the segmentation dimensions.
- Serializes detections into **detection arrays** (`[[x], [y], [type], [probs]]`).
- Supports multiple output backends:
  - **SQLiteStore** (chunked storage for WSI/patch).
  - **Dictionary** (flat or patch-indexed).
  - **Zarr** (arrays for coordinates, classes, probabilities).
- Compatible with nucleus detection models:
  - **MapDe** (implemented).
  - **SCCNN** (integration in progress/debugging).
- Supports both **patch-mode** and **WSI-mode** workflows.

### Technical Implementation

The detection pipeline operates as follows:

1. **Segmentation:** A WSI-level segmentation map (Dask array) is generated using `SemanticSegmentor.infer_wsi()`.
2. **Parallel post-processing:** For WSI inference, `dask.array.map_overlap` applies the model's post-processing function across the entire segmentation map. The function executes in parallel on chunks, after which the results are automatically merged back into a unified **"detection_map"**, which is saved as Zarr in a cache directory for further processing (see the sketch after this summary).
3. **Detection map:**
   - Maintains the same dimensions as the segmentation map.
   - Nuclei centroids contain the detection probability values (defaulting to `1` if the model does not produce probabilities).
4. **Serialization:** The "detection_map" is converted into **"detection_arrays"** (format: `[[x], [y], [type], [probs]]`) representing the detected nuclei. These records are then saved into `SQLiteStore` (chunk by chunk), `zarr`, or a `dict` (patch mode only).

### Output Formats

#### SQLiteStore

- **WSI mode:** Returns a single `SQLiteStore`.
- **Patch mode:** Returns one `SQLiteStore` per patch.
- **Format:**

```python
Annotation(Point(x, y), properties={'type': 'nuclei', 'probs': 0.9})
```

#### Dictionary

- **WSI mode:**

```python
{'x': [...], 'y': [...], 'classes': [...], 'probs': [...]}
```

- **Patch mode** (one sub-dictionary per patch index):

```python
{
    0: {'x': [...], 'y': [...], 'classes': [...], 'probs': [...]},
    1: {...},
}
```

#### Zarr

- **WSI mode:**

```python
{'x': [...], 'y': [...], 'classes': [...], 'probs': [...]}
```

- **Patch mode:** Each key maps to a list of `da.array` objects, where each array corresponds to a patch.

```python
{
    'x': [[...], ...],
    'y': [[...], ...],
    'classes': [[...], ...],
    'probs': [[...], ...],
}
```

### Codebase Integration

- Registers `NucleusDetector` in `tiatoolbox.models` and the engine registry.
- Refactors detection logic from PR #538 into modular components.
- Updates the `MapDe` implementation to use the new engine.
- Begins integration of `SCCNN` with `NucleusDetector`.
- Adds utilities for serialization into SQLite, dict, and zarr formats.
- Introduces unit tests for detection workflows.
- Removes the unused parameters `prediction_shape` and `prediction_dtype` from the `post_process_patches()` and `post_process_wsi()` functions in all engines.
- `post_process_patches()` and `post_process_wsi()` now take `raw_predictions` instead of `raw_predictions["probabilities"]`.
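The parallel post-processing step above relies on `dask.array.map_overlap`. Below is a minimal, self-contained sketch of that pattern on a dummy segmentation map; the post-processing function, chunk sizes, and overlap depth are placeholders rather than the engine's actual implementation.

```python
import dask.array as da
import numpy as np


def fake_post_process(block: np.ndarray) -> np.ndarray:
    """Placeholder post-processing: keep values above a threshold.

    The real engine applies the detection model's own post-processing here.
    """
    return np.where(block > 0.5, block, 0.0)


# Dummy WSI-level segmentation map, chunked so each block fits in memory.
segmentation_map = da.random.random((8192, 8192), chunks=(2048, 2048))

# map_overlap runs the function on each chunk (plus a margin of `depth`
# pixels borrowed from neighbouring chunks) and stitches the results back
# together, so detections near chunk borders are not lost.
detection_map = da.map_overlap(
    fake_post_process,
    segmentation_map,
    depth=64,
    boundary="none",
)

# Persist the merged detection map to a cache location as Zarr.
detection_map.to_zarr("cache_detection_map.zarr", overwrite=True)
```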
### Tasks

- [x] Port code from PR #538 to supersede and close it.
- [x] Add `NucleusDetector` engine.
- [x] Update existing detection models (`MapDe` implementation complete; `SCCNN` implementation in progress/debugging).
- [x] Add unit tests.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
Co-authored-by: Jiaqi Lv <jiaqilv@Jiaqis-MacBook-Pro.local>
## Summary

This PR updates the patch-prediction example to align with the new `PatchPredictor` engine and fixes a long-standing issue in `EngineABC` related to model-attribute retrieval when using `DataParallel`.

---

## What's Changed

### 🔧 Example Notebook Updates

- Updated **`examples/05-patch-prediction.ipynb`** to use the new `PatchPredictor` engine API.
- Added a new **"Visualize in TIAViz"** section, allowing readers to directly inspect prediction results inside **TIAViz** for a smoother, more interactive workflow.

### 🐛 EngineABC Bug Fix

- Fixed a bug in **`EngineABC`** where model attributes were incorrectly retrieved from a `DataParallel` wrapper.
- Introduced `_get_model_attr()` to safely unwrap the underlying model when needed (see the sketch after this summary).
- This resolves multi-GPU crashes caused by attributes living on the wrapped module instead of the actual model.

---

## Why This Matters

- Ensures the patch-prediction example stays up to date with the latest engine design.
- Improves multi-GPU stability and prevents confusing attribute-access errors.
- Enhances the user experience by integrating TIAViz visualization directly into the example workflow.

---

## Testing

- Verified that the updated notebook runs end-to-end with the new engine.
- Confirmed that multi-GPU training and inference no longer crash when accessing model attributes.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
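The `DataParallel` unwrapping described above follows a common PyTorch pattern. The helper below is a minimal sketch of that idea, not the actual `_get_model_attr()` implementation from `EngineABC`.

```python
from torch import nn


def get_model_attr(model: nn.Module, attr: str):
    """Fetch an attribute from a model, unwrapping DataParallel if needed.

    nn.DataParallel stores the original network under `.module`, so custom
    attributes such as a class dictionary live there rather than on the
    wrapper itself.
    """
    if isinstance(model, nn.DataParallel):
        return getattr(model.module, attr)
    return getattr(model, attr)


# Example: a toy model with a custom attribute, wrapped for multi-GPU use.
class ToyModel(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = nn.Linear(4, 2)
        self.class_dict = {0: "background", 1: "tissue"}


wrapped = nn.DataParallel(ToyModel())
print(get_model_attr(wrapped, "class_dict"))  # {0: 'background', 1: 'tissue'}
```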
# Conflicts:
#	tests/models/test_dataset.py
#	tests/models/test_patch_predictor.py
#	tiatoolbox/data/remote_samples.yaml
This pull request updates sample data URLs in some of the notebooks and all of the tests to use the Hugging Face dataset repository instead of the previous TIA server.

**Migration of sample data URLs to Hugging Face:**

* `examples/02-stain-normalization.ipynb`
* `examples/03-tissue-masking.ipynb`
* `examples/04-patch-extraction.ipynb`
* `tests/test_utils.py`, `tests/test_wsireader.py`
* Updated the example usage in the docstring of `tiatoolbox/utils/tiff_to_fsspec.py` to use the new Hugging Face sample WSI URL.
## 🐛 Fix Division by Count in `SemanticSegmentor`

This PR fixes a core issue in patch merging that caused incorrect normalization of segmentation outputs, and includes several related improvements for consistency and correctness.

### Key Fixes

### 1. Correct patch-merge count accumulation

- The `count` array was indexed incorrectly, so only the first patch row was normalized properly.
- Updated indexing ensures accurate per-pixel accumulation across all rows (see the sketch after this section).

### 2. Add `class_dict` to all models

- Introduces `class_dict` in `ModelABC`.
- Ensures `SemanticSegmentor` can reliably use class dictionaries when none are passed explicitly.

### 3. Improve output type handling

- `SemanticSegmentor.run()` now correctly indicates that it may return a `list[Path]` when multiple `.db` files are produced.

### 4. More flexible probability storage

- `store_probabilities()` now accepts a `name` parameter, enabling multiple probability datasets in Zarr outputs.

### 5. Test updates

- Adjusted expected nucleus counts and probability means to reflect corrected normalization.
- Updated CLI tests to use remote samples and ensure proper cleanup.
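The divide-by-count normalization being fixed here can be summarised with a small sketch: overlapping patch predictions are summed onto a canvas, a parallel `count` array records how many patches covered each pixel, and the merged map is the element-wise ratio. The array sizes and strides below are illustrative, not the engine's actual values.

```python
import numpy as np

# Toy canvas and per-pixel coverage counter for merging overlapping patches.
canvas = np.zeros((64, 64), dtype=np.float32)
count = np.zeros((64, 64), dtype=np.float32)

patch_size, stride = 32, 16  # illustrative values only

for top in range(0, 64 - patch_size + 1, stride):
    for left in range(0, 64 - patch_size + 1, stride):
        patch_pred = np.random.rand(patch_size, patch_size).astype(np.float32)
        # Accumulate predictions and coverage with the SAME index slices;
        # the bug fixed in this PR was an indexing mismatch that updated
        # `count` correctly only for the first row of patches.
        canvas[top:top + patch_size, left:left + patch_size] += patch_pred
        count[top:top + patch_size, left:left + patch_size] += 1.0

# Normalise by how many patches contributed to each pixel.
merged = canvas / np.maximum(count, 1.0)
print(merged.shape, float(count.max()))
```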
This pull request introduces significant improvements to memory management and efficiency in the semantic segmentation engine, especially for large whole-slide image (WSI) processing. The main changes focus on incremental processing and disk-backed storage to avoid excessive RAM usage, as well as more robust cleanup of temporary files. There are also adjustments to test tolerances and some bug fixes in array handling.

**Memory management and efficiency improvements:**

* The `prepare_full_batch` function now dynamically decides whether to use in-memory NumPy arrays or disk-backed Zarr arrays for large batch outputs, based on available system memory and a configurable threshold. This prevents memory spikes when processing large WSIs.
* The `save_to_cache` function has been refactored to incrementally write Dask array blocks to Zarr on disk, avoiding materializing large arrays in memory and reducing peak RAM usage.
* Memory usage checks in `infer_wsi` now use up-to-date available memory rather than an initial snapshot, and intermediate results are spilled to disk when thresholds are exceeded.

**Robustness and cleanup:**

* Temporary Zarr directories and files created during processing are now properly cleaned up after use, preventing disk space leaks.

**Bug fixes and test adjustments:**

* Fixed array type handling in `merge_batch_to_canvas` to ensure compatibility with both NumPy and Dask arrays.
* Corrected logic in `merge_horizontal` to compute spans and concatenate outputs only for the current row, improving correctness and efficiency.
* Relaxed the upper bound on mean prediction values in several tests to account for increased variability with the new memory management approach.

---

**Previous Problem:** I encountered out-of-memory issues, and Python kept crashing when processing a relatively large WSI.
The example slide I was trying to run was: `https://huggingface.co/datasets/TIACentre/TIAToolBox_Remote_Samples/blob/main/sample_wsis/D_P000019_PAS_CPG.tif`.

The code I was trying to run was:

```python
segmentor = SemanticSegmentor(model="fcn_resnet50_unet-bcss")
out = segmentor.run(
    images=[Path(wsi_path)],
    patch_mode=False,
    device="cuda",
    save_dir=output_path,
    overwrite=True,
    output_type="annotationstore",
    auto_get_mask=True,
    memory_threshold=25,
    num_workers=0,
    batch_size=8,
)
```

Before this PR, this code kept crashing on my workstation, which has 32 GB of RAM; memory usage spiked to 100% just before the crash.
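The memory-threshold behaviour described in this PR, choosing between an in-memory NumPy array and a disk-backed Zarr array based on currently available memory, can be sketched roughly as below. The use of `psutil`, the helper name, and the size estimate are assumptions for illustration and do not mirror `prepare_full_batch` exactly.

```python
import numpy as np
import psutil
import zarr


def allocate_batch_output(shape, dtype=np.float32, memory_threshold_gb=25.0):
    """Return an output buffer, spilling to a Zarr store if memory is tight.

    Hypothetical helper: the real engine makes a similar decision inside
    `prepare_full_batch`, re-checking currently available memory rather
    than relying on a snapshot taken at start-up.
    """
    required_gb = np.prod(shape) * np.dtype(dtype).itemsize / 1e9
    available_gb = psutil.virtual_memory().available / 1e9

    if required_gb < min(memory_threshold_gb, available_gb):
        # Small enough: keep the batch output in RAM.
        return np.zeros(shape, dtype=dtype)

    # Otherwise back the output by a chunked Zarr array on disk.
    return zarr.open(
        "batch_output.zarr",
        mode="w",
        shape=shape,
        chunks=tuple(min(1024, s) for s in shape),
        dtype=dtype,
    )


out = allocate_batch_output((4096, 4096, 3))
print(type(out))
```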
# Summary

This PR modernizes the *Semantic Segmentation* example notebook to align with the current TIAToolbox APIs and recommended workflows. The update enhances data handling, model execution, output formats, and documentation to provide a clearer and more robust end-to-end segmentation example.

---

## ✅ Updated Segmentation Pipeline & Workflow

The notebook now uses the latest `SemanticSegmentor.run()` API, replacing the deprecated `predict()` method. The updated workflow:

- Defines segmentation settings through `model=`, `num_workers=`, and explicit `input_resolutions`.
- Introduces `patch_mode=False` and `auto_get_mask=False` for clearer behavior around WSI/tile processing.
- Returns richer output, including probability maps, and writes results to the efficient **Zarr** format by default.
- Provides a cleaner, more modular setup for patch shapes, strides, and device configuration (see the sketch after this description).

---

## 📁 Data Handling Improvements

All sample images, WSIs, and pretrained weights are now downloaded from the **Hugging Face Hub**, replacing outdated direct-URL downloads. The new method:

- Ensures long-term dataset stability
- Supports resumable and cache-efficient downloads
- Stores all assets in a dedicated `./tmp` directory for reproducibility

---

## 📊 Enhanced Output & Logging

The notebook now produces:

- **Zarr-based outputs** for scalable reading/writing
- Detailed dimension reporting for raw predictions, processed predictions, and probability maps
- Improved info/warning messages, including GPU compatibility notices

Interactive Jupyter widgets have been added to visualize progress during:

- Image and WSI downloads
- Patch inference
- Row merging and result assembly

---

## 📘 Documentation & Narrative Updates

The explanatory text has been expanded and modernized to reflect the new APIs:

- Rewritten descriptions of key parameters (`model`, `num_workers`, `input_resolutions`, `patch_mode`, `auto_get_mask`, `return_probabilities`)
- Updated examples and rationale for patch size and stride selection
- Clarified output structure and how to use results in post-processing
- Revised statements to emphasize the notebook's ability to process **thousands of WSIs** efficiently

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
Co-authored-by: Jiaqi-Lv <60471431+Jiaqi-Lv@users.noreply.github.com>
Co-authored-by: Jiaqi Lv <jiaqilv@Jiaqis-MacBook-Pro.local>
Co-authored-by: Jiaqi Lv <lvjiaqi9@gmail.com>
Co-authored-by: adamshephard <39619155+adamshephard@users.noreply.github.com>
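To make the parameter list above concrete, a run configured in the style this notebook describes might look like the sketch below. The model name, resolution values, paths, and patch settings are placeholders; only the argument names mentioned in this PR are taken from the source.

```python
from pathlib import Path

from tiatoolbox.models.engine.semantic_segmentor import SemanticSegmentor

# Hypothetical notebook-style configuration; values are illustrative only.
segmentor = SemanticSegmentor(model="fcn_resnet50_unet-bcss")
output = segmentor.run(
    images=[Path("./tmp/sample_wsi.svs")],
    patch_mode=False,        # treat inputs as WSIs/tiles rather than patches
    auto_get_mask=False,     # do not derive a tissue mask automatically
    input_resolutions=[{"units": "mpp", "resolution": 0.5}],
    device="cuda",
    num_workers=2,
    batch_size=8,
    save_dir=Path("./tmp/segmentation_output"),
    overwrite=True,
)
print(output)  # by default the results are written to a Zarr store
```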
# 📘 Summary

This PR updates the `11-import-foundation-models.ipynb` example to align with the latest TIAToolbox APIs and modern best practices. The notebook now uses Hugging Face Hub for dataset retrieval, Zarr for feature storage, the updated `DeepFeatureExtractor` pipeline, and improved IO configuration. Several cells and metadata entries were cleaned up for a more consistent, reproducible, and user-friendly demo.

---

# 🔧 What's Changed

### Data Access

- Replaced deprecated `download_data()` usage with `hf_hub_download()` for fetching sample WSIs.
- Automatically creates a local `./tmp` directory if missing.

### Imports & Dependencies

- Added `dask.array` and `zarr` to support lazy, chunked feature loading.
- Updated TIAToolbox imports:
  - `DeepFeatureExtractor` from `engine.deep_feature_extractor`
  - `IOPatchPredictorConfig` from `engine.io_config`

### Model & Pipeline Updates

- Removed explicit `TimmBackbone` usage; the notebook now uses `model="UNI"` directly in `DeepFeatureExtractor`.
- Replaced `IOSegmentorConfig` with the updated `IOPatchPredictorConfig`.
- Migrated the inference call from `predict()` to `run()` using the new API.

### Output Format Changes

- Switched from storing features as `.npy` files to a consolidated Zarr store.
- Updated later steps to read features and coordinates using `zarr.open()` + `dask.array.from_zarr()` (see the sketch after this summary).

### Notebook Cleanup

- Changed several cell tags from `remove-cell` to `hide-output` for cleaner diffs and readability.
- Cleared execution counts and removed unnecessary output noise.
- Added a cleanup cell to delete temporary directories.
- Updated kernel metadata to Python 3.10 and renamed the environment to `tiatoolbox-dev`.

### Documentation Edits

- Corrected model-selection guidance for UNI, Prov-GigaPath, and H-optimus-0.
- Updated text to reference the newer `model` argument and correct IO config naming.

---------

Co-authored-by: Shan E Ahmed Raza <13048456+shaneahmed@users.noreply.github.com>
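The data-access and output-reading changes listed above combine into a short pattern like the following. The output path and Zarr dataset names are illustrative placeholders; only the functions named in this PR (`hf_hub_download`, `zarr.open`, `dask.array.from_zarr`) are taken from the description.

```python
from pathlib import Path

import dask.array as da
import zarr
from huggingface_hub import hf_hub_download

# Fetch a sample WSI from the Hugging Face dataset repository into ./tmp.
Path("./tmp").mkdir(exist_ok=True)
wsi_path = hf_hub_download(
    repo_id="TIACentre/TIAToolBox_Remote_Samples",
    filename="sample_wsis/D_P000019_PAS_CPG.tif",
    repo_type="dataset",
    local_dir="./tmp",
)

# After running DeepFeatureExtractor, read its Zarr output lazily so the
# feature matrix never has to fit in memory all at once.
store = zarr.open("./tmp/features_output.zarr", mode="r")  # placeholder path
features = da.from_zarr(store["features"])        # assumed dataset name
coordinates = da.from_zarr(store["coordinates"])  # assumed dataset name
print(features.shape, coordinates.shape)
```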
- Engines performance and implementation
- mypy type checks for `cli/common.py`
- `PatchPredictor` engine based on `EngineABC`
- `return_probabilities` option to Params
- `merge_predictions` option in `PatchPredictor` engine
- `post_process_cache_mode`, which allows running the algorithm on WSI
- `infer_wsi` for WSI inference
- `save_wsi_output`, as this is not required after post processing
- `merge_predictions` and fixes docstring in `EngineABCRunParams`
- `compile_model` is now moved to `EngineABC` init
- `_calculate_scale_factor`
- `class_dict` definition
- `_get_zarr_array` is now a public function `get_zarr_array` in `misc`
- `patch_predictions_as_annotations` runs the loop on `patch_coords` instead of `class_probs`