Limitations and Breakthroughs in Self-Supervised Representation Learning: A Mutual Information Perspective and a Boosted Augmentation-Free Approach
| dc.contributor.author | Sabby, Akhlaqur Rahman | |
| dc.contributor.copyright-release | Not Applicable | |
| dc.contributor.degree | Master of Computer Science | |
| dc.contributor.department | Faculty of Computer Science | |
| dc.contributor.ethics-approval | Not Applicable | |
| dc.contributor.external-examiner | N/A | |
| dc.contributor.manuscripts | Not Applicable | |
| dc.contributor.thesis-reader | Evangelos Milios | |
| dc.contributor.thesis-reader | Hassan Sajjad | |
| dc.contributor.thesis-supervisor | Ga Wu | |
| dc.date.accessioned | 2025-08-26T17:29:03Z | |
| dc.date.available | 2025-08-26T17:29:03Z | |
| dc.date.defence | 2025-07-30 | |
| dc.date.issued | 2025-08-25 | |
| dc.description.abstract | Self-supervised representation learning (SSRL) has demonstrated remarkable empirical success, yet its underlying principles remain insufficiently understood. In the augmentation-dependent setting, where models are trained using multiple transformed views of the same input, recent works have aimed to unify diverse SSRL methods by examining their information-theoretic objectives or summarizing their heuristics for preventing representation collapse. However, architectural elements such as the predictor network, stop-gradient operation, and statistical regularizer are often viewed as empirically motivated additions. In this work, we adopt a first-principles approach and investigate whether the learning objective of an SSRL algorithm dictates its admissible optimization strategies and model design choices. In particular, starting from a variational mutual information (MI) lower bound, we derive two training paradigms, namely Self-Distillation MI (SDMI) and Joint MI (JMI), each of which imposes distinct structural constraints and covers a set of existing SSRL algorithms. SDMI inherently requires alternating optimization, making stop-gradient operations theoretically essential. In contrast, JMI admits joint optimization through symmetric architectures without requiring components such as the predictor or stop-gradient. Under the proposed formulation, predictor networks in SDMI and statistical regularizers in JMI emerge as tractable surrogates for the MI objective. We show that many existing SSRL methods are specific instances or approximations of these two paradigms. This thesis thereby provides a theoretical explanation, beyond heuristic convenience, for the architectural choices made by existing SSRL methods. In a separate line of investigation, we address the comparatively underexplored problem of augmentation-free SSRL, where the absence of handcrafted data transformations poses unique challenges to representation quality and diversity. To this end, we propose Boosted Representation Learning (BRL), a novel framework that incrementally trains the encoder using a sequence of fixed, diverse target networks, inspired by the principles of boosting. Each target in BRL is trained to capture distinct, complementary aspects of the input data and, once trained, is frozen and used to supervise the encoder. This progressive construction of supervision enables the encoder to learn increasingly expressive representations without any data augmentations. Through extensive ablations, we analyze the effects of target diversity, initialization, magnitude regularization, and encoder dynamics. Experiments on CIFAR-10 show that BRL surpasses existing augmentation-free methods, providing a principled alternative in domains where augmentations are impractical or unavailable. Taken together, these contributions provide a unified theoretical foundation for augmentation-dependent SSRL and introduce a principled augmentation-free alternative, advancing our understanding and broadening the applicability of self-supervised learning. | |
| dc.identifier.uri | https://hdl.handle.net/10222/85398 | |
| dc.language.iso | en | |
| dc.subject | Machine Learning | |
| dc.subject | Deep Learning | |
| dc.subject | Self-Supervised Learning | |
| dc.subject | Representation Learning | |
| dc.subject | Mutual Information | |
| dc.title | Limitations and Breakthroughs in Self-Supervised Representation Learning: A Mutual Information Perspective and a Boosted Augmentation-Free Approach |
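As a rough illustration of the two augmentation-dependent paradigms named in the abstract, the sketch below contrasts an SDMI-style update (asymmetric branches, predictor head, stop-gradient) with a JMI-style update (symmetric branches plus a statistical regularizer). This is a minimal sketch under stated assumptions, not the thesis' implementation: the linear encoders, the cosine-similarity surrogate for the MI objective, and the variance-style regularizer are all illustrative choices.

```python
# Minimal, illustrative contrast of SDMI-style vs. JMI-style updates
# (hypothetical modules and hyperparameters; not the thesis' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, proj = 32, 16
encoder   = nn.Sequential(nn.Linear(dim, proj))   # online encoder
target    = nn.Sequential(nn.Linear(dim, proj))   # target network (SDMI branch)
predictor = nn.Sequential(nn.Linear(proj, proj))  # predictor head (SDMI branch)

x1, x2 = torch.randn(8, dim), torch.randn(8, dim)  # two views of a batch

# SDMI-style step: asymmetric branches, stop-gradient on the target side,
# with the predictor acting as a tractable surrogate for the MI objective.
z1 = predictor(encoder(x1))
with torch.no_grad():          # stop-gradient: target branch is not backpropagated
    z2 = target(x2)
sdmi_loss = -F.cosine_similarity(z1, z2, dim=-1).mean()

# JMI-style step: symmetric branches optimized jointly, with a statistical
# (variance-style) regularizer standing in for the collapse-prevention term.
h1, h2 = encoder(x1), encoder(x2)
align = -F.cosine_similarity(h1, h2, dim=-1).mean()
reg = F.relu(1.0 - h1.std(dim=0)).mean() + F.relu(1.0 - h2.std(dim=0)).mean()
jmi_loss = align + reg

print(sdmi_loss.item(), jmi_loss.item())
```

Under the abstract's framing, the SDMI loss would update only the online encoder and predictor, with the target handled by a separate alternating step, whereas the JMI loss updates the shared encoder jointly with no predictor or stop-gradient.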
