Show simple item record

dc.contributor.author: Taylor-Melanson, Willam
dc.date.accessioned: 2023-12-15T20:32:48Z
dc.date.available: 2023-12-15T20:32:48Z
dc.date.issued: 2023-12-14
dc.identifier.uri: http://hdl.handle.net/10222/83289
dc.description.abstract: This thesis evaluates the effectiveness of two recent nonlinear causal modelling approaches, DeepSCM and ImageCFGen, at explaining image classifiers and modelling audio data. First, techniques are presented for generating local counterfactual explanations of classifiers using DeepSCM and ImageCFGen models, and quantitative comparisons are made with techniques from the OmnixAI explanation toolkit. The metrics used to evaluate these explanation techniques on the Morpho-MNIST dataset indicate that the proposed methods of model explanation are more interpretable than those in the OmnixAI toolkit. Second, a causal graph is constructed over the attributes of the Audio-MNIST speech dataset in order to train DeepSCM and ImageCFGen models. To evaluate the models on this speech dataset, classifiers are trained and used to measure the consistency of attributes in observational and counterfactual data generated by DeepSCM and ImageCFGen. DeepSCM outperforms the standard ImageCFGen model on this task, but after fine-tuning, the ImageCFGen model shows levels of agreement with the attribute classifiers similar to DeepSCM's. In addition to attribute classifiers, a speaker classifier is trained to measure the ability of the causal models to maintain a speaker's voice when computing speech counterfactuals. The counterfactual models are compared with interventional models, which do not perform abduction, to provide an experimental baseline. DeepSCM is the only model that significantly improves over the interventional baseline, suggesting it may be preferred over ImageCFGen for establishing a causal model able to produce believable speech counterfactuals. Finally, a dataset of North American Right Whale (NARW) calls is investigated, and a similar evaluation using attribute classifiers demonstrates the ability of these models to manipulate audio data.
dc.language.iso: en
dc.subject: Deep learning
dc.subject: Causality
dc.subject: Explainable AI
dc.title: Generative Causal Modelling Techniques for Visual Model Explanation and Counterfactual Audio Generation
dc.date.defence: 2023-10-26
dc.contributor.department: Faculty of Computer Science
dc.contributor.degree: Master of Computer Science
dc.contributor.external-examiner: n/a
dc.contributor.thesis-reader: Evangelos Milios
dc.contributor.thesis-reader: Abraham Nunes
dc.contributor.thesis-supervisor: Thomas Trappenberg
dc.contributor.ethics-approval: Not Applicable
dc.contributor.manuscripts: Not Applicable
dc.contributor.copyright-release: Not Applicable
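The abstract distinguishes counterfactual models, which perform abduction, from interventional baselines, which do not. As a loose illustration (not code from the thesis), the three-step abduction-action-prediction recipe can be sketched on a hypothetical two-variable structural causal model, where an attribute `t` (e.g. a Morpho-MNIST-style thickness) generates an observation `x`:

```python
# Toy SCM assumed for illustration: t = u_t, x = 2*t + u_x,
# where u_t and u_x are exogenous noise terms specific to one individual.

def abduct(t, x):
    """Abduction: recover the exogenous noise consistent with the observation."""
    u_t = t
    u_x = x - 2.0 * t
    return u_t, u_x

def counterfactual(t, x, t_new):
    """Action: set t to t_new; Prediction: regenerate x with the stored noise."""
    _, u_x = abduct(t, x)
    return 2.0 * t_new + u_x

def interventional(t_new, u_x=0.0):
    """Interventional baseline: no abduction, so the individual's noise is
    discarded (here replaced by its prior mean of zero)."""
    return 2.0 * t_new + u_x

obs_t, obs_x = 1.0, 2.5                       # observed pair, so u_x = 0.5
cf = counterfactual(obs_t, obs_x, t_new=3.0)  # 6.5: keeps this individual's noise
iv = interventional(t_new=3.0)                # 6.0: generic sample at t = 3.0
```

The gap between `cf` and `iv` is the individual-specific detail that abduction preserves, which is the property the speaker classifier in the abstract is used to measure for speech counterfactuals.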