What’s going on?
We just got a homework assignment from the CSE 190 class, which asks us to solve the Camelyon16 contest. To be honest, I think it is a difficult task since I have no prior knowledge about this. So I'm gonna check things out to see what we can do!
Detecting Cancer Metastases on Gigapixel Pathology Images, by Google
They present a framework to detect and localize tumors as small as 100 x 100 pixels in gigapixel microscopy images sized 100,000 x 100,000 pixels, leveraging a CNN architecture and obtaining state-of-the-art results on the Camelyon16 dataset. At 8 false positives per image, they detect 92.4% of the tumors, compared to 82.7% by the previous best automated approach, while a human expert achieved only 73.2%. Their slide-level AUC is above 97%, and they even discovered 2 slides that were erroneously labeled normal. In a word, I think their approach could considerably reduce false-negative rates in metastasis detection.
About the Model!!
Some background (which may seem not so useful :-), but seriously, it is useful!)
A central component of breast cancer staging involves the microscopic examination of lymph nodes adjacent to the breast for evidence that the cancer has spread, or metastasized. This process requires real expertise and is really time-consuming.
CNN is GOOD! (bla bla)
Related work (the Camelyon16 winner)
- Achieved a sensitivity of 75% at 8 FP per slide and a slide-level classification AUC of 92.5%.
- Trained an Inception (V1, GoogLeNet) model on a pre-sampled set of patches, then trained a random forest on 28 features.
- A second Inception model was then trained on harder examples.
- The team later improved these metrics to 82.7% and 99.4% respectively, using color normalization, additional data augmentation, and lowering the inference stride from 64 to 4.
Methods
Overview
Given a gigapixel image (a slide), the goal is to classify the image and localize the tumors. Because of the large size of each slide and the limited number of slides, the model is trained on small patches extracted from the slides. Inference is then performed over patches in a sliding window across the slide, generating a tumor-probability heatmap.
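To make the sliding-window idea concrete, here is a minimal sketch (my own illustration, not the paper's code) of building such a heatmap. It assumes a hypothetical `model` callable that maps a 299x299 patch to a single tumor probability; the patch size and stride follow the paper.

```python
import numpy as np

def heatmap_from_slide(slide, model, patch=299, center=128):
    """Slide a window across an (H, W, 3) image and collect one
    tumor probability per 128x128 center region."""
    H, W, _ = slide.shape
    rows = (H - patch) // center + 1
    cols = (W - patch) // center + 1
    heat = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            y, x = i * center, j * center
            window = slide[y:y + patch, x:x + patch]
            # `model` is assumed to return P(tumor) for the center region
            heat[i, j] = model(window)
    return heat
```

In practice you would batch the windows and read the slide through a library such as OpenSlide, rather than holding a 100,000 x 100,000 image in memory.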
Inside
They utilize the Inception (V3) architecture with inputs sized 299x299, and assess the value of initializing from existing models pre-trained on another domain (ImageNet). For each input patch, the model predicts the label of the 128x128 center region.
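For reference, a rough modern-Keras equivalent of that setup (an assumption on my part; the paper used Google's internal TensorFlow tooling) could look like:

```python
import tensorflow as tf

def build_patch_classifier(pretrained=True):
    """InceptionV3 backbone with a 2-way head (normal vs. tumor)."""
    base = tf.keras.applications.InceptionV3(
        include_top=False,
        weights="imagenet" if pretrained else None,  # pre-trained init vs. scratch
        input_shape=(299, 299, 3),
        pooling="avg",
    )
    # The predicted label applies to the patch's 128x128 center region.
    out = tf.keras.layers.Dense(2, activation="softmax")(base.output)
    return tf.keras.Model(base.input, out)
```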
Because each gigapixel slide is so large and tumors occupy only a tiny fraction of it, tumor patch percentages range from 0.01% to 70% (median 2%). To keep training balanced, they first select "normal" or "tumor" with equal probability, then select a slide containing that class uniformly at random, and then sample patches from that slide, unlike other methods that pre-sample a fixed set of patches from each slide.
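This "label first, then slide, then patch" hierarchy keeps a few tumor-heavy slides from dominating training. A sketch of it, where `slide.random_patch` is a hypothetical helper that extracts a fresh patch of the requested class:

```python
import random

def sample_training_patch(slides_by_label):
    """slides_by_label: {'normal': [...], 'tumor': [...]}."""
    label = random.choice(["normal", "tumor"])     # both classes equally likely
    slide = random.choice(slides_by_label[label])  # slide uniform at random
    return slide.random_patch(label), label        # fresh patch, not pre-sampled
```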
Data augmentations
- Rotate the input patch by the 4 multiples of 90°, apply a left-right flip, and repeat the rotations, giving 8 orientations in total.
- Use TensorFlow's image library (tensorflow.image.randomX) to perturb color: brightness with a maximum delta of 64/255, saturation with a maximum delta of 0.25, hue with a maximum delta of 0.04, and contrast with a maximum delta of 0.75.
- Add jitter to the patch extraction process such that each patch has a small random x, y offset of up to 8 pixels.
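With the public tf.image API, those perturbations might look like the sketch below. Note that tf.image expresses saturation and contrast as lower/upper factor ranges rather than a single delta, so translating the paper's "maximum delta" values into symmetric ranges is my assumption:

```python
import tensorflow as tf

def augment(patch):
    # 8 orientations: random 90-degree rotation plus random left-right flip
    patch = tf.image.rot90(patch, k=tf.random.uniform([], 0, 4, dtype=tf.int32))
    patch = tf.image.random_flip_left_right(patch)
    # color perturbations with the paper's maximum deltas
    patch = tf.image.random_brightness(patch, max_delta=64 / 255)
    patch = tf.image.random_saturation(patch, lower=0.75, upper=1.25)  # delta 0.25
    patch = tf.image.random_hue(patch, max_delta=0.04)
    patch = tf.image.random_contrast(patch, lower=0.25, upper=1.75)    # delta 0.75
    return patch
```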
They run inference across the slide in a sliding window with a stride of 128 to match the center region's size. For each patch, they apply the rotations and the left-right flip to obtain predictions for each of the 8 orientations, and average the 8 predictions.
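Averaging over the 8 orientations is a standard test-time-augmentation trick; a minimal sketch, again assuming a hypothetical `model` that returns per-patch probabilities:

```python
import tensorflow as tf

def predict_8_orientations(model, patch):
    """Average predictions over 4 rotations x optional left-right flip."""
    views = []
    for flipped in (patch, tf.image.flip_left_right(patch)):
        for k in range(4):
            views.append(tf.image.rot90(flipped, k))
    probs = model(tf.stack(views))        # batch of 8 views
    return tf.reduce_mean(probs, axis=0)  # averaged prediction
```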
Parameters and Details
They trained their networks with stochastic gradient descent in TensorFlow, with 8 replicas each running on an NVIDIA Pascal GPU with asynchronous gradient updates and a batch size of 32 per replica. They used RMSProp with momentum of 0.9, decay of 0.9, and epsilon = 1.0. The initial learning rate was 0.05, with a decay of 0.5 every 2 million examples. For refining a model pretrained on ImageNet, they used an initial learning rate of 0.002.
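In today's tf.keras API, those optimizer settings translate roughly as below (my mapping, not the paper's code; the 8 asynchronous replicas are not reproduced here). The decay of 0.5 every 2 million examples becomes a step schedule given the batch size of 32 per replica:

```python
import tensorflow as tf

# learning-rate decay: x0.5 every 2M examples; at batch size 32 that is 62,500 steps
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.05,  # 0.002 when refining an ImageNet-pretrained model
    decay_steps=2_000_000 // 32,
    decay_rate=0.5,
    staircase=True,
)
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=schedule, rho=0.9, momentum=0.9, epsilon=1.0,
)
```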