SEMANTIC SEGMENTATION OF MICROSCOPIC BLOOD IMAGE DATA USING SELF-TRAINING TO AUGMENT SMALL TRAINING SETS AND ITS APPLICATION FOR COUNTING CELLS
Semantic segmentation is a computer vision task that assigns a content label to each pixel of an image. Considerable progress has been made in this area using deep neural networks with an encoder-decoder structure. However, these methods usually rely on large amounts of training data. In this thesis, we study the performance of semantic segmentation networks trained on small labeled training sets. We study this in the context of detecting and counting red blood cells in images from a recently developed lensless microscope. In addition, we use a specifically designed generic dataset to investigate performance more systematically. We first study how performance degrades as the training set shrinks, using a synthetic 2D-Gaussian dataset. Then, for our microscopic blood images, we evaluate a method similar to an Expectation-Maximization approach that aims to improve performance with limited labeled training data through a self-training procedure: we add unlabeled data to the training set, using the model's own predictions as pseudo-labels for the unlabeled data. We compare several methods of producing pseudo-labels and show that only one of them slightly improves segmentation performance; most of the methods degrade accuracy. However, we also observed that the pseudo-labels that lowered the IoU accuracy led to rapid intensity changes in the per-pixel prediction map at locations associated with edges. Based on this finding, we propose a new counting algorithm and show that it achieves a testing error rate of 6-9% when counting red blood cells.
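The self-training loop described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: a toy nearest-centroid pixel classifier stands in for the encoder-decoder segmentation network, and all function names and the number of rounds are illustrative assumptions.

```python
import numpy as np

def fit_centroids(images, labels):
    """Fit the toy model: mean intensity per class (0 = background, 1 = cell)."""
    pixels = np.concatenate([im.ravel() for im in images])
    labs = np.concatenate([l.ravel() for l in labels])
    return np.array([pixels[labs == c].mean() for c in (0, 1)])

def predict(centroids, image):
    """Per-pixel prediction: assign each pixel to the nearest class centroid."""
    dist = np.abs(image.ravel()[:, None] - centroids[None, :])
    return dist.argmin(axis=1).reshape(image.shape)

def self_train(labeled, labels, unlabeled, rounds=3):
    """Self-training: pseudo-label unlabeled images with the model's own
    predictions, then retrain on the combined labeled + pseudo-labeled set."""
    centroids = fit_centroids(labeled, labels)
    for _ in range(rounds):
        pseudo = [predict(centroids, im) for im in unlabeled]
        centroids = fit_centroids(labeled + unlabeled, labels + pseudo)
    return centroids
```

With a real network, `fit_centroids` would be replaced by training the encoder-decoder on the augmented set, and the choice of how pseudo-labels are produced (e.g. thresholding or post-processing the prediction map) is exactly the design dimension the abstract compares.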