
Limitations and Breakthroughs in Self-Supervised Representation Learning: A Mutual Information Perspective and a Boosted Augmentation-Free Approach

dc.contributor.author: Sabby, Akhlaqur Rahman
dc.contributor.copyright-release: Not Applicable
dc.contributor.degree: Master of Computer Science
dc.contributor.department: Faculty of Computer Science
dc.contributor.ethics-approval: Not Applicable
dc.contributor.external-examiner: N/A
dc.contributor.manuscripts: Not Applicable
dc.contributor.thesis-reader: Evangelos Milios
dc.contributor.thesis-reader: Hassan Sajjad
dc.contributor.thesis-supervisor: Ga Wu
dc.date.accessioned: 2025-08-26T17:29:03Z
dc.date.available: 2025-08-26T17:29:03Z
dc.date.defence: 2025-07-30
dc.date.issued: 2025-08-25
dc.description.abstract: Self-supervised representation learning (SSRL) has demonstrated remarkable empirical success, yet its underlying principles remain insufficiently understood. In the augmentation-dependent setting, where models are trained on multiple transformed views of the same input, recent works have aimed to unify diverse SSRL methods by examining their information-theoretic objectives or summarizing their heuristics for preventing representation collapse. However, architectural elements such as the predictor network, stop-gradient operation, and statistical regularizer are often viewed as empirically motivated additions. In this work, we adopt a first-principles approach and investigate whether the learning objective of an SSRL algorithm dictates its admissible optimization strategies and model design choices. In particular, starting from a variational mutual information (MI) lower bound, we derive two training paradigms, Self-Distillation MI (SDMI) and Joint MI (JMI), each imposing distinct structural constraints and covering a set of existing SSRL algorithms. SDMI inherently requires alternating optimization, making stop-gradient operations theoretically essential. In contrast, JMI admits joint optimization through symmetric architectures without requiring components such as the predictor or stop-gradient. Under the proposed formulation, predictor networks in SDMI and statistical regularizers in JMI emerge as tractable surrogates for the MI objective. We show that many existing SSRL methods are specific instances or approximations of these two paradigms. This thesis thus provides a theoretical explanation, beyond heuristic convenience, for the architectural choices of existing SSRL methods.

In a separate line of investigation, we address the comparatively underexplored problem of augmentation-free SSRL, where the absence of handcrafted data transformations poses unique challenges to representation quality and diversity. To this end, we propose Boosted Representation Learning (BRL), a novel framework that incrementally trains the encoder using a sequence of fixed, diverse target networks, inspired by the principles of boosting. Each target in BRL is trained to capture distinct, complementary aspects of the input data, and once trained, it is frozen and used to supervise the encoder. This progressive construction of supervision enables the encoder to learn increasingly expressive representations without any data augmentations. Through extensive ablations, we analyze the effects of target diversity, initialization, magnitude regularization, and encoder dynamics. Experiments on CIFAR-10 show that BRL surpasses existing augmentation-free methods, providing a principled alternative in domains where augmentations are impractical or unavailable.

Together, this thesis provides a unified theoretical foundation for augmentation-dependent SSRL and introduces a principled augmentation-free alternative, advancing our understanding and broadening the applicability of self-supervised learning.
dc.identifier.uri: https://hdl.handle.net/10222/85398
dc.language.iso: en
dc.subject: Machine Learning
dc.subject: Deep Learning
dc.subject: Self-Supervised Learning
dc.subject: Representation Learning
dc.subject: Mutual Information
dc.title: Limitations and Breakthroughs in Self-Supervised Representation Learning: A Mutual Information Perspective and a Boosted Augmentation-Free Approach
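The abstract above does not reproduce the thesis's exact SDMI/JMI formulations, so the following is an illustrative sketch only, grounded in well-known results and methods (the Barber–Agakov variational bound, and BYOL- and VICReg-style losses) rather than in the thesis itself.

A standard variational lower bound on the mutual information between two view representations, of the kind the abstract's derivation presumably starts from, is

    I(Z_1; Z_2) \ge \mathbb{E}_{p(z_1, z_2)}\left[\log q_\theta(z_2 \mid z_1)\right] + H(Z_2),

where q_\theta is a variational conditional. With a Gaussian q_\theta whose mean is a learned predictor h_\theta(z_1), maximizing the bound over \theta reduces to regressing one view's representation from the other, which is where a predictor network naturally appears.

A minimal PyTorch-style sketch of the two loss structures the abstract attributes to the two paradigms (hypothetical function names; the encoders and predictor are assumed to be ordinary torch.nn modules):

    import torch
    import torch.nn.functional as F

    def sdmi_style_loss(online_encoder, target_encoder, predictor, view1, view2):
        # Self-distillation structure: predictor on the online branch,
        # stop-gradient (torch.no_grad) on the target branch.
        p = predictor(online_encoder(view1))
        with torch.no_grad():
            z = target_encoder(view2)
        return F.mse_loss(F.normalize(p, dim=-1), F.normalize(z, dim=-1))

    def jmi_style_loss(encoder, view1, view2, var_weight=1.0):
        # Joint, symmetric structure: no predictor or stop-gradient; a
        # statistical regularizer (a VICReg-like variance term) prevents collapse.
        z1, z2 = encoder(view1), encoder(view2)
        invariance = F.mse_loss(z1, z2)
        std = torch.sqrt(z1.var(dim=0) + 1e-4)
        variance_reg = torch.relu(1.0 - std).mean()
        return invariance + var_weight * variance_reg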

Files

Original bundle

Name: AkhlaqurRahmanSabby2025.pdf
Size: 37.65 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.12 KB
Format: Item-specific license agreed upon to submission
Description: