katielink committed on
Commit
613cc39
1 Parent(s): 95b4f91

Update evaluate doc, GPU usage details, and dataset preparation instructions

Files changed (3)
  1. README.md +30 -1
  2. configs/metadata.json +2 -1
  3. docs/README.md +30 -1
README.md CHANGED
@@ -31,7 +31,21 @@ The training set is the 104 whole-body structures from the TotalSegmentator release

### Preprocessing

- To use the bundle, users need to download the data and merge all annotated labels into one NIFTI file. Each file contains 0-104 values, each value represents one anatomy class. A sample set is provided with this [link](https://drive.google.com/file/d/1DtDmERVMjks1HooUhggOKAuDm0YIEunG/view?usp=share_link).
+ To use the bundle, users need to download the data and merge all annotated labels into one NIFTI file. Each file contains values 0-104, where each value represents one anatomy class. We provide a sample dataset and step-by-step preparation instructions:
+
+ To start with the prepared sample dataset:
+
+ 1. Download the sample set from this [link](https://drive.google.com/file/d/1DtDmERVMjks1HooUhggOKAuDm0YIEunG/view?usp=share_link).
+ 2. Unzip the dataset into a workspace folder.
+ 3. There will be three sub-folders, each containing several preprocessed CT volumes:
+ - imagesTr: 20 training and validation scans.
+ - labelsTr: 20 pre-processed label files.
+ - imagesTs: 5 testing scans.
+ 4. Usage: users can add `--dataset_dir <totalSegmentator_mergedLabel_samples>` to the bundle run command to specify the data path.
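For illustration only, here is a minimal sketch of passing the sample data path when launching the bundle's training config. The entry point (`python -m monai.bundle run training`), the config file names, and the unzipped folder name are assumptions based on typical MONAI bundles, not taken from this commit:

```python
# Hedged sketch: invoke the bundle run command with --dataset_dir pointing at the
# unzipped sample folder. Entry point, config names, and folder name are assumptions.
import subprocess

subprocess.run(
    [
        "python", "-m", "monai.bundle", "run", "training",
        "--meta_file", "configs/metadata.json",
        "--config_file", "configs/train.json",
        "--logging_file", "configs/logging.conf",
        "--dataset_dir", "./totalSegmentator_mergedLabel_samples",  # path to the sample dataset
    ],
    check=True,
)
```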
+
+ To merge labels from the raw dataset:
+
+ - Each CT scan has 104 associated binary masks, one per anatomy. These pixel-level labels are class-exclusive, so users can assign each anatomy a class number and merge the masks into a single NIFTI file to serve as the ground-truth label file. The order of anatomies can be found [here](https://github.com/Project-MONAI/model-zoo/blob/dev/models/wholeBody_ct_segmentation/configs/metadata.json).
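As a rough illustration of that merge step (it is not part of the bundle itself), a minimal sketch assuming one binary mask file per anatomy; the file names below are placeholders, and the real class order must be taken from `configs/metadata.json`:

```python
# Minimal sketch: merge per-anatomy binary masks into a single label NIFTI.
# Mask file names and their class order are illustrative placeholders only.
import numpy as np
import nibabel as nib

# class index (1..104) -> binary mask path; 0 is reserved for background
class_to_mask = {
    1: "segmentations/spleen.nii.gz",
    2: "segmentations/kidney_right.nii.gz",
    # ... continue through class 104 in the order given by configs/metadata.json
}

reference = nib.load(next(iter(class_to_mask.values())))
merged = np.zeros(reference.shape, dtype=np.uint8)

for class_idx, mask_path in class_to_mask.items():
    mask = nib.load(mask_path).get_fdata() > 0.5  # binarize the per-anatomy mask
    merged[mask] = class_idx                      # labels are class-exclusive, so overlaps are not expected

nib.save(nib.Nifti1Image(merged, reference.affine), "label_merged.nii.gz")
```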

## Training Configuration

@@ -46,6 +60,21 @@ The training was performed with the following:
- Learning Rate: 1e-4
- Loss: DiceCELoss

+ ## Evaluation Configuration
+
+ The model predicts all 105 output channels at once using softmax and argmax, which requires more GPU memory when calculating
+ metrics between the predicted masks and the ground truth. Hardware requirements, such as GPU memory, also depend on the input CT volume size.
+
+ The recommended evaluation configuration and the reported metrics were acquired with the following hardware:
+
+ - GPU: 48 GB of GPU memory or more
+ - Model: the high-resolution model, pre-trained at a slice thickness of 1.5 mm
+
+ Note: two pre-trained models are provided. The default is the high-resolution model, whose evaluation pipeline runs at a slice thickness of **1.5 mm**;
+ if out-of-memory (OOM) errors occur, users can switch to the lower-resolution model, which is pre-trained on CT scans at a slice thickness of **3.0 mm**.
+
+ Users can also use the inference pipeline to obtain predicted masks; detailed GPU memory consumption is provided in the following sections.
+
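For a sense of scale only, a back-of-the-envelope estimate of the 105-channel softmax tensor; the volume shape below is an assumed example of a whole-body CT resampled to 1.5 mm spacing, not a measured figure from this bundle:

```python
# Rough, illustrative estimate of the memory held by one 105-channel float32 prediction tensor.
num_classes = 105                # 104 anatomies + background
volume_shape = (300, 300, 600)   # assumed (H, W, D) voxel counts, illustrative only
bytes_per_float32 = 4

voxels = 1
for dim in volume_shape:
    voxels *= dim

softmax_bytes = num_classes * voxels * bytes_per_float32
print(f"~{softmax_bytes / 1024**3:.1f} GiB for one softmax output tensor")
# Metric computation that also one-hot encodes the ground truth holds a second tensor of
# similar size, which is roughly why a 48 GB GPU (or the 3.0 mm low-resolution model) is recommended.
```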
### Memory Consumption

- Dataset Manager: CacheDataset
configs/metadata.json CHANGED
@@ -1,7 +1,8 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
- "version": "0.1.7",
+ "version": "0.1.8",
"changelog": {
+ "0.1.8": "Update evaluate doc, GPU usage details, and dataset preparation instructions",
"0.1.7": "remove error dollar symbol in readme",
"0.1.6": "add RAM usage with CacheDataset and GPU consumtion warning",
"0.1.5": "fix mgpu finalize issue",
docs/README.md CHANGED
@@ -24,7 +24,21 @@ The training set is the 104 whole-body structures from the TotalSegmentator release

### Preprocessing

- To use the bundle, users need to download the data and merge all annotated labels into one NIFTI file. Each file contains 0-104 values, each value represents one anatomy class. A sample set is provided with this [link](https://drive.google.com/file/d/1DtDmERVMjks1HooUhggOKAuDm0YIEunG/view?usp=share_link).
+ To use the bundle, users need to download the data and merge all annotated labels into one NIFTI file. Each file contains values 0-104, where each value represents one anatomy class. We provide a sample dataset and step-by-step preparation instructions:
+
+ To start with the prepared sample dataset:
+
+ 1. Download the sample set from this [link](https://drive.google.com/file/d/1DtDmERVMjks1HooUhggOKAuDm0YIEunG/view?usp=share_link).
+ 2. Unzip the dataset into a workspace folder.
+ 3. There will be three sub-folders, each containing several preprocessed CT volumes:
+ - imagesTr: 20 training and validation scans.
+ - labelsTr: 20 pre-processed label files.
+ - imagesTs: 5 testing scans.
+ 4. Usage: users can add `--dataset_dir <totalSegmentator_mergedLabel_samples>` to the bundle run command to specify the data path.
+
+ To merge labels from the raw dataset:
+
+ - Each CT scan has 104 associated binary masks, one per anatomy. These pixel-level labels are class-exclusive, so users can assign each anatomy a class number and merge the masks into a single NIFTI file to serve as the ground-truth label file. The order of anatomies can be found [here](https://github.com/Project-MONAI/model-zoo/blob/dev/models/wholeBody_ct_segmentation/configs/metadata.json).

## Training Configuration

@@ -39,6 +53,21 @@ The training was performed with the following:
- Learning Rate: 1e-4
- Loss: DiceCELoss

+ ## Evaluation Configuration
+
+ The model predicts all 105 output channels at once using softmax and argmax, which requires more GPU memory when calculating
+ metrics between the predicted masks and the ground truth. Hardware requirements, such as GPU memory, also depend on the input CT volume size.
+
+ The recommended evaluation configuration and the reported metrics were acquired with the following hardware:
+
+ - GPU: 48 GB of GPU memory or more
+ - Model: the high-resolution model, pre-trained at a slice thickness of 1.5 mm
+
+ Note: two pre-trained models are provided. The default is the high-resolution model, whose evaluation pipeline runs at a slice thickness of **1.5 mm**;
+ if out-of-memory (OOM) errors occur, users can switch to the lower-resolution model, which is pre-trained on CT scans at a slice thickness of **3.0 mm**.
+
+ Users can also use the inference pipeline to obtain predicted masks; detailed GPU memory consumption is provided in the following sections.
+
### Memory Consumption

- Dataset Manager: CacheDataset