EVOLVING OPTIMAL AUGMENTATION POLICIES FOR SELF-SUPERVISED LEARNING ALGORITHMS

Date

2023-06-28

Authors

Barrett, Noah

Abstract

In recent years, self-supervised learning has shown remarkable promise for expanding the capabilities of deep-learning-based computer vision models. In many self-supervised learning approaches, specifically those that employ a Siamese network, data augmentation is a core component of the algorithm. However, a standard set of augmentations is typically employed without further investigation into improving the augmentation strategy. This thesis addresses this issue by taking a step toward better understanding the impact of data augmentation on cutting-edge computer-vision-based self-supervised learning algorithms. Inspired by supervised augmentation-optimization approaches, this thesis explores the possibility of further optimizing four state-of-the-art (SOTA) self-supervised learning algorithms, BYOL, SwAV, NNCLR, and SimSiam, by improving the augmentation operators used in the pretext task. Using a genetic algorithm, it was possible to learn augmentation policies that yielded higher performance than the original augmentation policies for all four self-supervised learning algorithms on two datasets, SVHN and CIFAR-10. This thesis shows that improving the augmentation policies used in computer-vision-based self-supervised learning algorithms is a fruitful direction for further improving on the cutting-edge performance yielded by this family of algorithms.
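The genetic-algorithm approach summarized above can be sketched in miniature. The following is an illustrative assumption of how a policy search might look, not the thesis's implementation: a policy is encoded as a short list of (operator, magnitude) pairs, and selection, crossover, and mutation evolve the population. The operator names and the toy fitness function are hypothetical; in the actual work, fitness would be the downstream performance of a self-supervised model pretrained with the policy.

```python
import random

# Hypothetical augmentation operator names (a real system would map these to
# image transforms, e.g. from torchvision); magnitudes are assumed to lie in [0, 1].
OPS = ["crop", "flip", "color_jitter", "grayscale", "blur", "solarize"]
POLICY_LEN = 4  # number of operators per policy (assumed)


def random_policy(rng):
    # A policy is a list of (operator, magnitude) pairs.
    return [(rng.choice(OPS), rng.random()) for _ in range(POLICY_LEN)]


def fitness(policy):
    # Placeholder surrogate: in the thesis, this would be the evaluation
    # score of an SSL model (BYOL, SwAV, NNCLR, or SimSiam) pretrained
    # with this policy. Here we just average the magnitudes as a toy score.
    return sum(mag for _, mag in policy) / POLICY_LEN


def crossover(parent_a, parent_b, rng):
    # One-point crossover on the gene list.
    cut = rng.randrange(1, POLICY_LEN)
    return parent_a[:cut] + parent_b[cut:]


def mutate(policy, rng, rate=0.2):
    # Each gene is resampled with probability `rate`.
    return [(rng.choice(OPS), rng.random()) if rng.random() < rate else gene
            for gene in policy]


def evolve(generations=10, pop_size=8, seed=0):
    rng = random.Random(seed)
    population = [random_policy(rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)


best_policy = evolve()
```

The expensive step in practice is the fitness evaluation, since each candidate policy requires pretraining and evaluating a model; the GA machinery itself, as shown, is lightweight.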

Keywords

Augmentation Optimization, Self-Supervised Learning, Computer Vision, Genetic Algorithm
