Learning Adaptive Deep Representations for Few-to-Medium Shot Image Classification
dc.contributor.author | Jiang, Xiang | |
dc.contributor.copyright-release | Not Applicable | en_US |
dc.contributor.degree | Doctor of Philosophy | en_US |
dc.contributor.department | Faculty of Computer Science | en_US |
dc.contributor.ethics-approval | Not Applicable | en_US |
dc.contributor.external-examiner | Mark Schmidt | en_US |
dc.contributor.graduate-coordinator | Michael McAllister | en_US |
dc.contributor.manuscripts | Not Applicable | en_US |
dc.contributor.thesis-reader | Luís Torgo | en_US |
dc.contributor.thesis-reader | Thomas Trappenberg | en_US |
dc.contributor.thesis-reader | Sageev Oore | en_US |
dc.contributor.thesis-supervisor | Stan Matwin | en_US |
dc.contributor.thesis-supervisor | Daniel Silver | en_US |
dc.date.accessioned | 2021-02-22T14:23:07Z | |
dc.date.available | 2021-02-22T14:23:07Z | |
dc.date.defence | 2021-01-29 | |
dc.date.issued | 2021-02-22T14:23:07Z | |
dc.description.abstract | In real-world applications, the environment in which a machine learning system is deployed tends to change due to many factors, such as sample selection bias, prior probability mismatch, and domain shift. This makes it difficult to reliably generalize deep learning models from the training set to real-world scenarios. In addition, data scarcity frequently arises in applications where annotating data is expensive or requires specialized expertise. As machine learning progresses to more complex tasks that require models with Vapnik–Chervonenkis dimensions that are orders of magnitude higher, more labeled training data are necessary to maintain the same upper bound on the test error. Consequently, there is an ever-increasing need for sample-efficient learning systems that can adapt to changing environments. This thesis studies the generalization of deep learning models in the presence of distribution mismatch and data scarcity. We first study unsupervised domain adaptation, an emerging field of semi-supervised learning that aims to address domain shift using labeled data in the source domain and unlabeled data in the target domain. We propose implicit class-conditioned domain alignment to address between-domain class distribution shift. A theoretical analysis justifies the proposed method by decomposing the empirical domain divergence into class-aligned and class-misaligned divergence, and we show that the class-misaligned divergence is detrimental to domain adaptation. We show that our method offers consistent improvements across different adversarial adaptation algorithms. We also propose two meta-learning methods that bridge the gap between gradient-based and metric-based methods. The first is Conditional class-Aware Meta-Learning, in which we introduce a metric space that modulates the image representation of a model, resulting in better-separated feature representations. Motivated by the discrepancy in the number of training examples between few-shot benchmarks and real-world medical datasets, the second extends few-shot learning to few-to-medium-shot learning. The proposed Task Adaptive Metric Space uses gradient-based fine-tuning to adjust the parameters of the metric space, giving metric-based methods the flexibility to better reflect the examples of a new medical classification task. | en_US |
dc.identifier.uri | http://hdl.handle.net/10222/80257 | |
dc.language.iso | en | en_US |
dc.subject | domain adaptation | en_US |
dc.subject | deep learning | en_US |
dc.subject | learning-to-learn | en_US |
dc.title | Learning Adaptive Deep Representations for Few-to-Medium Shot Image Classification | en_US |
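The abstract mentions gradient-based fine-tuning of a metric space on a new task. The Python sketch below is purely illustrative and is not the thesis's implementation: it assumes a simple prototypical-style setup, and the names (MetricSpace, prototypes, task_adapt), network sizes, and hyperparameters are hypothetical. It only shows the general idea of adapting metric-space parameters with a few gradient steps on a task's support set.

# Illustrative sketch only; all names, sizes, and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetricSpace(nn.Module):
    # Small embedding network; input/output sizes are arbitrary for this sketch.
    def __init__(self, in_dim=784, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))
    def forward(self, x):
        return self.net(x)

def prototypes(emb, labels, n_classes):
    # Class prototype = mean embedding of that class's support examples.
    return torch.stack([emb[labels == c].mean(dim=0) for c in range(n_classes)])

def task_adapt(model, support_x, support_y, n_classes, steps=5, lr=1e-2):
    # Gradient-based fine-tuning of the metric space on a new task's support set.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        emb = model(support_x)
        logits = -torch.cdist(emb, prototypes(emb, support_y, n_classes))  # nearer prototype -> higher score
        loss = F.cross_entropy(logits, support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Hypothetical 3-way task with 10 support examples per class (random stand-in data).
model = MetricSpace()
support_x = torch.randn(30, 784)
support_y = torch.arange(3).repeat(10)   # labels 0,1,2,0,1,2,...
query_x = torch.randn(10, 784)
task_adapt(model, support_x, support_y, n_classes=3)
with torch.no_grad():
    protos = prototypes(model(support_x), support_y, 3)
    predictions = (-torch.cdist(model(query_x), protos)).argmax(dim=1)

After adaptation, query examples are classified by their distance to the class prototypes in the adapted metric space; in a few-to-medium-shot setting, the number of fine-tuning steps could grow with the amount of support data.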