ExB Research and Development, Germany

Authors:

Christian Hass, Urko Sanchez, Ivan Vasilev, Tony Mey, and Elia Bruni

Abstract:

Our method consists of tile-wise binary classification based on deep residual networks (ResNets). We split each whole-slide image into non-overlapping tiles and assign the class tumor to a tile if it contains tumor pixels, and non-tumor otherwise. We use this setting to train a deep neural network on the corresponding binary classification task. The images from the Camelyon16 dataset contain large areas of tissue-free background, which make training inefficient and time-consuming, so we apply a simple preprocessing algorithm to remove them. Additionally, we normalize our images to zero mean and unit variance, which has been shown to aid the learning process. The architecture of our model is a 34-layer deep ResNet. Our experiments show that ResNets consistently outperformed more traditional convolutional neural networks on this task by large margins of 6-9% classification accuracy; the ROC and FROC scores of these baseline networks were also significantly worse. Within the context of residual networks, we experimented with depths ranging from 18 to 101 layers. Results show that performance tends to decrease for networks deeper than 34 layers. We ensemble the results of two different networks and apply simple postprocessing to further increase the performance of our algorithm.
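
As a rough illustration of the pipeline described above (tiling, background removal, zero-mean/unit-variance normalization, and a 34-layer ResNet classifier), the following Python sketch uses PyTorch and torchvision. It is a minimal sketch, not the authors' implementation: the tile size, the brightness-based background filter, and the tissue-fraction threshold are assumptions chosen for illustration.

```python
import numpy as np
import torch
import torchvision

TILE_SIZE = 256          # assumed tile edge length in pixels
TISSUE_THRESHOLD = 0.1   # assumed minimum tissue fraction required to keep a tile


def extract_tiles(slide_image: np.ndarray):
    """Split an RGB whole-slide image array into non-overlapping tiles,
    discarding tiles that are mostly tissue-free background."""
    h, w, _ = slide_image.shape
    for y in range(0, h - TILE_SIZE + 1, TILE_SIZE):
        for x in range(0, w - TILE_SIZE + 1, TILE_SIZE):
            tile = slide_image[y:y + TILE_SIZE, x:x + TILE_SIZE]
            # Simple background filter (assumption): near-white pixels count as background.
            tissue_fraction = np.mean(tile.mean(axis=-1) < 220)
            if tissue_fraction >= TISSUE_THRESHOLD:
                yield (y, x), tile


def normalize(tile: np.ndarray) -> torch.Tensor:
    """Standardize a tile to zero mean and unit variance, as described in the abstract."""
    t = torch.from_numpy(np.ascontiguousarray(tile)).permute(2, 0, 1).float()
    return (t - t.mean()) / (t.std() + 1e-8)


# 34-layer ResNet with a 2-way output head (tumor vs. non-tumor).
model = torchvision.models.resnet34(num_classes=2)
```

A normalized tile can then be classified with `model(normalize(tile).unsqueeze(0))`, which yields two logits (non-tumor, tumor) per tile.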

Results:

The following figure shows the receiver operating characteristic (ROC) curve of the method.

The following figure shows the free-response receiver operating characteristic (FROC) curve of the method.

The table below presents the sensitivity of the developed system at six predefined false positive (FP) rates, 1/4, 1/2, 1, 2, 4, and 8 FPs per whole-slide image, from which the average sensitivity is computed.

FPs/WSI       1/4     1/2     1       2       4       8
Sensitivity   0.458   0.507   0.516   0.520   0.533   0.533
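
For reference, the summary FROC score behind this table is the mean sensitivity over the six predefined FP rates. The sketch below shows one way to read those values off a FROC curve; the curve data are hypothetical, and the use of linear interpolation between measured points is an assumption, not necessarily the challenge's exact convention.

```python
import numpy as np

# Hypothetical FROC curve: false positives per whole-slide image (ascending)
# and the corresponding sensitivities.
fps_per_wsi = np.array([0.1, 0.3, 0.7, 1.5, 3.0, 6.0, 10.0])
sensitivity = np.array([0.40, 0.47, 0.51, 0.52, 0.53, 0.533, 0.54])

# Sensitivity at the six predefined FP rates (linear interpolation between
# measured points), and their mean, i.e. the summary score.
targets = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
sens_at_targets = np.interp(targets, fps_per_wsi, sensitivity)
froc_score = sens_at_targets.mean()

print(dict(zip(targets.tolist(), sens_at_targets.round(3).tolist())))
print(f"Average sensitivity: {froc_score:.3f}")
```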