MICCAI workshop - team abstracts
14.55: Team Aksell (speaker: Shujun He; 19th place QWK 0.9274)
Prostate cancer grade assessment of whole-slide images using tile
segmentation, self-attention, and multi-task learning
Developing accurate models to automatically diagnose prostate cancer is of crucial
importance, as prostate cancer is the second most common cancer among
males worldwide. Here we describe a deep learning approach for automatic
prostate cancer diagnosis that incorporates tile segmentation on
low-resolution images, self-attention, and multi-task learning. Our
methods are conceptually simple but effective, leading to accurate
diagnosis of prostate cancer (private test quadratic weighted kappa =
0.927) despite noisy and imbalanced training data. In addition to
accurate diagnosis, our multi-task deep learning model is
interpretable thanks to its use of self-attention, and will only improve as
more multi-labeled data accumulates.
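As a rough illustration of this kind of architecture (a shared per-tile CNN, self-attention across tile embeddings, and separate task heads), a minimal PyTorch sketch follows; the backbone, attention size, and head dimensions are our assumptions, not the team's exact configuration:

import torch
import torch.nn as nn
import torchvision

class TileAttentionMultiTask(nn.Module):
    def __init__(self, n_isup=6, n_gleason=6, d_model=512):
        super().__init__()
        backbone = torchvision.models.resnet34(weights=None)
        backbone.fc = nn.Identity()                          # per-tile encoder -> 512-d feature
        self.encoder = backbone
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.isup_head = nn.Linear(d_model, n_isup)          # task 1: ISUP grade group
        self.gleason_head = nn.Linear(d_model, n_gleason)    # task 2: auxiliary Gleason label (size illustrative)

    def forward(self, tiles):                                # tiles: (B, N, 3, H, W)
        b, n = tiles.shape[:2]
        feats = self.encoder(tiles.flatten(0, 1)).view(b, n, -1)
        attended, weights = self.attn(feats, feats, feats)   # self-attention over the bag of tiles
        pooled = attended.mean(dim=1)                        # slide-level representation
        return self.isup_head(pooled), self.gleason_head(pooled), weights

model = TileAttentionMultiTask()
isup_logits, gleason_logits, attn_w = model(torch.randn(2, 16, 3, 128, 128))

The returned attention weights indicate which tiles drive the prediction, which is presumably what makes such a model interpretable.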
15.05: Team Dmitry A. Grechka (speaker: Dmitry Grechka; 17th place
QWK 0.9283)
How to predict the ISUP grade of prostate biopsy scans with a
combination of CNN and RNN
We present an approach to building an automated system for grading
microscopy scans of biopsied tissue. The system consists of a
convolutional neural network (CNN) and a recurrent neural network (RNN)
chained together to predict the ISUP grade group of a tissue sample.
The high-resolution microscopy scan is split into a grid of smaller square
tiles. Each tile containing tissue is mapped to a feature vector
by the CNN (DenseNet121). The feature vectors, presented as a
sequence, are then passed to the RNN (GRU units) to evaluate the presence of
cancerous tissue and assign the corresponding grade group.
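A minimal sketch of such a CNN-to-RNN chain in PyTorch; the GRU size, bidirectionality, and pooling choices are illustrative assumptions rather than the authors' exact configuration:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class TileCNNtoGRU(nn.Module):
    def __init__(self, n_classes=6, hidden=256):
        super().__init__()
        self.cnn = torchvision.models.densenet121(weights=None).features  # per-tile conv features (1024 ch)
        self.gru = nn.GRU(input_size=1024, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)         # ISUP grade groups 0-5

    def forward(self, tiles):                                # tiles: (B, N, 3, H, W)
        b, n = tiles.shape[:2]
        x = F.adaptive_avg_pool2d(F.relu(self.cnn(tiles.flatten(0, 1))), 1).flatten(1)
        x = x.view(b, n, -1)                                 # sequence of tile feature vectors
        _, h_n = self.gru(x)                                 # final hidden states of both directions
        return self.head(torch.cat([h_n[-2], h_n[-1]], dim=1))

model = TileCNNtoGRU()
logits = model(torch.randn(2, 12, 3, 128, 128))              # -> (2, 6)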
15.15: Team UCLA Computational Diagnostics Lab (speaker: Wenyuan Li; QWK 0.9286)
Gleason grading of biopsies using an attention-based multi-resolution
model ensembled with LGBM and XGBoost
We developed an automated prostate Gleason grading algorithm based
on an attention-based multi-resolution model ensembled with LGBM and
XGBoost. Our model is trained on patch-based tissue samples extracted
from whole slide images (WSI). A two-stage attention-based multiple
instance learning (MIL) model using weakly supervised region of interest
(ROI) detection was developed for ISUP-grade prediction. It is trained
at multiple resolutions, with the lower resolution used to identify
suspicious regions that are then examined at higher resolution. To
make the model more robust, we ensemble the MIL model with LGBM and
XGBoost models, whose feature extractors are trained to predict the
primary and secondary Gleason scores.
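The distinctive ingredient here is attention-based MIL pooling. Below is a compact sketch of the standard gated-attention formulation (after Ilse et al.), which we assume the two-stage model resembles; the dimensions are illustrative:

import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    """Aggregates a bag of tile embeddings into one slide embedding with learned weights."""
    def __init__(self, d_in=512, d_attn=128):
        super().__init__()
        self.V = nn.Linear(d_in, d_attn)
        self.U = nn.Linear(d_in, d_attn)
        self.w = nn.Linear(d_attn, 1)

    def forward(self, h):                                    # h: (B, N, d_in) tile features
        a = self.w(torch.tanh(self.V(h)) * torch.sigmoid(self.U(h)))  # (B, N, 1) gated attention
        a = torch.softmax(a, dim=1)                          # normalize over the instances
        return (a * h).sum(dim=1), a                         # slide embedding and per-tile scores

pool = AttentionMILPool()
slide_emb, tile_scores = pool(torch.randn(2, 64, 512))

In a two-stage setup, the per-tile scores from the low-resolution pass can presumably act as the weakly supervised ROI signal, while slide-level features feed the gradient-boosting models of the ensemble.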
15.25: Team lafoss (speaker: Maxim Shugaev; 11th place QWK
0.9301)
Concatenate tile pooling approach for end-to-end Gleason grading of
biopsies
We built an end-to-end system for automatic assessment of the ISUP grade
from biopsy images. A novel Concatenate Tile pooling approach was
developed, allowing training with labels assigned to entire images while
using a tile-based approach. The computational cost of training is
reduced by more than 10 times compared with training on full images. The
training efficiency is further improved by a newly proposed tile cutout
method. Progressive label self-distillation and removal of noisy labels
are applied to handle the training data, which has a substantial level of
label noise (0.853 QWK).
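A rough PyTorch sketch of what concatenate tile pooling can look like: each tile passes through the convolutional trunk independently, the per-tile feature maps are concatenated spatially, and a single head is supervised with the slide-level label. The backbone and grid layout are assumptions for illustration:

import torch
import torch.nn as nn
import torchvision

class ConcatTilePool(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        backbone = torchvision.models.resnet34(weights=None)
        self.trunk = nn.Sequential(*list(backbone.children())[:-2])   # conv trunk, no pooling/fc
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512, n_classes)

    def forward(self, tiles):                                # tiles: (B, N, 3, H, W)
        b, n = tiles.shape[:2]
        maps = self.trunk(tiles.flatten(0, 1))               # (B*N, 512, h, w) per-tile feature maps
        c, h, w = maps.shape[1:]
        # concatenate the tile feature maps along one spatial axis before pooling,
        # so the slide-level label supervises all tiles jointly
        maps = maps.view(b, n, c, h, w).permute(0, 2, 1, 3, 4).reshape(b, c, n * h, w)
        return self.head(self.pool(maps).flatten(1))

model = ConcatTilePool()
logits = model(torch.randn(2, 12, 3, 128, 128))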
16.10: Team ChienYiChi (speaker: Jianyi Ji; 8th place QWK 0.9323)
Gleason grading of biopsies using attention and NetVLAD
Whole Slide Images (WSI) of biopsies contain billions of pixels. The
common way to deal with such images is to extract a set of informative
tiles. To efficiently select candidate tiles from the WSI, we train
a model with an attention layer over the tile candidates. The top-k
tiles are then processed by another CNN model. Finally, all features are
aggregated with a NetVLAD layer before the final output.
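A compact sketch of a NetVLAD pooling layer of the kind mentioned above; the cluster count and feature size are illustrative assumptions, and in the described pipeline the inputs would be the features of the selected top-k tiles:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    """Soft-assigns tile descriptors to K clusters and aggregates residuals to the centroids."""
    def __init__(self, d=512, k=8):
        super().__init__()
        self.assign = nn.Linear(d, k)                        # soft-assignment logits
        self.centroids = nn.Parameter(0.01 * torch.randn(k, d))

    def forward(self, x):                                    # x: (B, N, d) tile features
        a = F.softmax(self.assign(x), dim=-1)                # (B, N, K) soft assignments
        resid = x.unsqueeze(2) - self.centroids              # (B, N, K, d) residuals to each centroid
        vlad = (a.unsqueeze(-1) * resid).sum(dim=1)          # (B, K, d) weighted aggregation
        vlad = F.normalize(vlad, dim=-1)                     # intra-cluster normalization
        return F.normalize(vlad.flatten(1), dim=-1)          # (B, K*d) slide descriptor

vlad = NetVLAD()
slide_vec = vlad(torch.randn(2, 32, 512))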
16.20: Team BarelyBears (speaker: Hiroshi Yoshihara; 6th place QWK
0.9326)
A label-noise-robust ISUP grading system using multi-instance
learning
We developed an automated ISUP grading system, an ensemble of four
multi-instance learning (MIL) networks. Each MIL network consists of a
feature extractor, which extracts features from patches obtained from a
whole slide image, and a head, which concatenates and pools all the
features and predicts the ISUP grade. Various backbones were used as
feature extractors. The networks were trained with Online Uncertainty
Sample Mining or with Mixup in order to improve robustness to label
noise. The ensemble trained with these noise-robust techniques showed
better performance than the model trained ordinarily.
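A minimal sketch of an Online Uncertainty Sample Mining style loss: the highest-loss samples in each batch are treated as likely mislabeled and excluded from the update. The number of dropped samples and the base loss are illustrative assumptions:

import torch
import torch.nn as nn

class OUSMLoss(nn.Module):
    def __init__(self, n_drop=2):
        super().__init__()
        self.n_drop = n_drop
        self.ce = nn.CrossEntropyLoss(reduction="none")

    def forward(self, logits, targets):
        per_sample = self.ce(logits, targets)                 # per-sample losses, shape (B,)
        if per_sample.numel() > self.n_drop:
            keep = torch.topk(per_sample, per_sample.numel() - self.n_drop,
                              largest=False).values           # keep only the lowest-loss samples
            return keep.mean()
        return per_sample.mean()

criterion = OUSMLoss(n_drop=2)
loss = criterion(torch.randn(8, 6), torch.randint(0, 6, (8,)))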
16.30: Team Save The Prostate (speaker: Rui Guo; 2nd place QWK
0.9377)
Gleason grading of biopsies with tile-based inputs
Deep learning methods have shown promising results in diagnosing
prostate cancer (PCa). However, due to the varying input shapes and
gigapixel resolution of patient biopsies, traditional training methods
require extensive and costly computational resources. Our team used a
tile-based method to address these issues, which requires significantly
lower computational resources while maintaining state-of-the-art
performance. Our approach combines three different methods: (1) a
multi-instance learning (MIL) based CNN with attention-selected
high-resolution input; (2) a MIL-based CNN with a Squeeze-and-Excite
module across all tiles; (3) a deep CNN on single rectangular images
re-composed from tiles. Additionally, we successfully tackled the high
level of label noise in the training data by using robust loss functions
and pseudo-labels.
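As an illustration of method (3), a NumPy sketch of re-composing one rectangular image from the most tissue-dense tiles of a slide; the tile size, tile count, and the darkness heuristic for tissue content are assumptions:

import numpy as np

def recompose_tiles(slide, tile=128, n=36, cols=6):
    """Cut slide (H, W, 3) into tiles, keep the n darkest (most tissue), stitch them into a grid."""
    h, w = slide.shape[:2]
    slide = np.pad(slide, ((0, (-h) % tile), (0, (-w) % tile), (0, 0)), constant_values=255)
    tiles = (slide.reshape(slide.shape[0] // tile, tile, slide.shape[1] // tile, tile, 3)
                  .swapaxes(1, 2).reshape(-1, tile, tile, 3))
    order = np.argsort(tiles.reshape(tiles.shape[0], -1).sum(axis=1))   # darker tiles hold more tissue
    chosen = tiles[order[:n]]
    if len(chosen) < n:                                                 # pad with white tiles if needed
        pad = np.full((n - len(chosen), tile, tile, 3), 255, dtype=slide.dtype)
        chosen = np.concatenate([chosen, pad])
    rows = n // cols
    return (chosen.reshape(rows, cols, tile, tile, 3)
                  .swapaxes(1, 2).reshape(rows * tile, cols * tile, 3))

grid = recompose_tiles(np.random.randint(0, 255, (1000, 700, 3), dtype=np.uint8))   # -> (768, 768, 3)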
16.40: Team PND (speaker: Yusuke Fujimoto; 1st place QWK 0.9409)
Gleason grading of biopsies with a simple label noise reduction
technique
We propose a simple label noise reduction technique: we predict the
Gleason score with hold-out training and remove noisily labeled data
where there is a large gap between the prediction and the ground truth.
Our method is very simple yet useful, and it can easily be applied to
other tasks and models. We were able to win the competition by applying
this method to simple EfficientNet-based networks.
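A sketch of the hold-out filtering idea: produce out-of-fold predictions for every training sample and drop samples whose prediction disagrees strongly with their label. The gap threshold and the scikit-learn regressor below are illustrative stand-ins for the EfficientNet-based networks and slide inputs actually used:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def filter_noisy_labels(X, y, model_factory, gap=2.0, n_splits=5):
    """Return indices of samples whose out-of-fold prediction stays within `gap` of the label."""
    oof = np.zeros(len(y), dtype=float)
    for tr, va in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = model_factory()
        model.fit(X[tr], y[tr])
        oof[va] = model.predict(X[va])                        # hold-out prediction for fold `va`
    keep = np.abs(oof - y) < gap                              # large gap -> likely label noise
    return np.where(keep)[0]

X = np.random.rand(200, 16)                                   # toy slide-level features
y = np.random.randint(0, 6, 200)                              # toy ISUP labels
clean_idx = filter_noisy_labels(X, y, lambda: RandomForestRegressor(n_estimators=50))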