The MHub.ai project at Harvard has developed methods to execute machine learning models on medical images in an easy-to-use, standardized way, and a Slicer plugin for running MHub.ai-format models already exists. For this project, we propose to add two models of different types to the MHub library.
Objective A. Test a MONAI-based deep learning model in MHub and validate the instructions for new developers to follow.
Objective B. Evaluate how well the MHub approach works for supporting pathology models in addition to radiology models.
Step 1. Port one of the pre-trained MONAIAuto3DSeg radiology models developed at Queen's University (by Andras Lasso et al.) for execution in the MHub framework as a Docker container. Test the MHub I/O converters: read a DICOM image from the input, reformat it as needed, and write out a DICOM Segmentation object as the result.
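MHub pipelines are typically declared as a workflow config that chains mhubio modules. A sketch of what the Step 1 pipeline could look like is below; the module names follow mhubio's standard converters, but the runner module name and the organizer target pattern are illustrative assumptions and should be checked against the MHub contribution docs:

```yaml
general:
  data_base_dir: /app/data
  version: 1.0
  description: MONAIAuto3DSeg segmentation pipeline (sketch)

execute:
- DicomImporter            # scan the input directory for DICOM series
- NiftiConverter           # convert the series to NIfTI for the model
- MonaiAuto3DSegRunner     # hypothetical module wrapping the ported model
- DsegConverter            # encode the output labelmap as a DICOM SEG object
- DataOrganizer:           # copy results to the output directory
    targets:
    - dicomseg-->[i:sid]/seg.dcm   # illustrative target pattern
```

The same structure (importer, converter, runner, converter, organizer) is what the pathology experiment in Step 2 would need to reproduce or adapt.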
Step 2. Start converting a published pathology DNN model (Rhabdomyosarcoma segmentation) to the MHub framework. This will evaluate how well the MHub approach supports pathology models in addition to radiology models. For example, can the same base Docker image work for pathology?
We selected two of the MONAIAuto3DSeg models from the Slicer extension and wrapped them using the MHub.ai framework as an exercise to learn the MHub approach. As part of this process, we wrote a converter that produces the class descriptions MHub uses to describe model outputs from the original model descriptions. This converter could be reused to port other models later.
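The converter's job can be sketched as a small mapping function. Both the input label table and the output class-description schema below are simplified assumptions for illustration, not the exact formats used by the Slicer extension or by MHub:

```python
# Sketch: turn a model's label table (label value -> anatomical name) into a
# list of MHub-style output-class descriptions. The field names used here
# ("classId", "label", "name") are illustrative, not MHub's actual schema.

def convert_labels(labels):
    """Map {label_value: name} entries to a list of class-description dicts."""
    classes = []
    for value, name in sorted(labels.items()):
        classes.append({
            "classId": value,                          # integer labelmap value
            "label": name.lower().replace(" ", "_"),   # machine-friendly id
            "name": name,                              # human-readable name
        })
    return classes

if __name__ == "__main__":
    # Example label table, as a segmentation model description might define it
    labels = {1: "Liver", 2: "Right Kidney", 3: "Left Kidney"}
    for c in convert_labels(labels):
        print(c)
```

Writing this as a standalone function (rather than hand-editing each model's metadata) is what makes it reusable for porting further models.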
We started adapting a trained Rhabdomyosarcoma pathology model for MHub. The first part of the MHub pipeline works in our prototype, but we are not yet processing the model outputs correctly.
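One reason pathology output handling differs from radiology is that whole-slide images are too large for single-pass inference, so per-tile predictions must be stitched back into one label map. A minimal sketch of that pattern, with a stand-in for the actual DNN (the tile size and the lack of overlap blending are simplifying assumptions):

```python
import numpy as np

def fake_model(tile):
    """Stand-in for the pathology DNN: per-pixel class map for one RGB tile."""
    # Real inference would run the trained network here; this just thresholds
    # the channel mean so the sketch is self-contained and runnable.
    return (tile.mean(axis=-1) > 0.5).astype(np.uint8)

def segment_wsi(image, tile=256):
    """Run the model tile by tile and stitch results into one label map."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # NumPy slicing clamps at the image border, so edge tiles
            # are simply smaller rather than padded.
            out[y:y + tile, x:x + tile] = fake_model(image[y:y + tile, x:x + tile])
    return out
```

A production version would also need overlap between tiles and blending at tile seams, which is part of the output processing the prototype does not do correctly yet.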
MONAI Auto3DSeg: https://github.com/Project-MONAI/tutorials/tree/main/auto3dseg
Slicer Extension: https://github.com/lassoan/SlicerMONAIAuto3DSeg
pathology model: https://github.com/knowledgevis/rms-infer-code-standalone