
Domain-Adaptive YOLOv9 for Foggy-Weather Object Detection Using Partial Spatial Self-Attention

dc.contributor.author: Xiao, Ziqi
dc.contributor.copyright-release: Not Applicable
dc.contributor.degree: Master of Applied Science
dc.contributor.department: Department of Electrical & Computer Engineering
dc.contributor.ethics-approval: Received
dc.contributor.external-examiner: Dr. Issam Hammad
dc.contributor.manuscripts: Not Applicable
dc.contributor.thesis-reader: Dr. Hamed Aly
dc.contributor.thesis-supervisor: Dr. Jason Gu
dc.contributor.thesis-supervisor: Dr. Yuan Ma
dc.date.accessioned: 2026-04-15T18:34:32Z
dc.date.available: 2026-04-15T18:34:32Z
dc.date.defence: 2026-04-13
dc.date.issued: 2026-04-15
dc.description.abstract: Cross-domain object detection remains challenging because a detector trained on a labeled source domain often generalizes poorly to a target domain with different visual characteristics. This problem is especially evident under adverse weather conditions, where visibility degradation changes contrast, texture, and object boundaries while target-domain annotations are typically unavailable. This thesis develops a domain-adaptive YOLOv9 framework for foggy-weather object detection. The method combines image-level appearance adaptation with feature-level refinement. At the image level, Contrastive Unpaired Translation (CUT) is used to translate labeled source images into pseudo target-style samples while preserving the original annotations. At the feature level, a Partial Spatial Self-Attention (PSSA) module is introduced to refine deep feature representations through spatial contextual modeling over only part of the channel dimension. The proposed framework is evaluated on the Cityscapes → Foggy Cityscapes benchmark. Experimental results show that both components improve target-domain performance, but their contributions are not identical. CUT produces the larger gain by reducing the appearance discrepancy between the source and target domains, while PSSA provides an additional improvement through deep feature refinement. When the two components are combined, the resulting detector achieves the best overall performance among the evaluated configurations. These results show that substantial improvement under foggy cross-domain conditions can be obtained without abandoning the inference structure of a one-stage detector. The thesis therefore provides a practical domain-adaptive detection framework that improves robustness under adverse visibility while remaining compatible with deployment-oriented YOLO-style detection.
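The abstract describes PSSA as spatial self-attention applied over only part of the channel dimension. The thesis itself is the authoritative source for the module's exact design; the following is only a minimal NumPy sketch of that general idea, with the split ratio, single-head attention, and all function names being illustrative assumptions rather than the author's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def partial_spatial_self_attention(x, ratio=0.25):
    """Sketch of a partial spatial self-attention block.

    A fraction `ratio` of the channels receives single-head spatial
    self-attention (tokens = spatial positions); the remaining channels
    pass through unchanged, keeping the cost below full self-attention.

    x: feature map of shape (C, H, W).
    """
    c, h, w = x.shape
    k = max(1, int(c * ratio))            # channels that get attention
    attn_part, identity = x[:k], x[k:]    # partial channel split

    # Flatten spatial dims: H*W tokens, each with k channel features.
    tokens = attn_part.reshape(k, h * w).T           # (HW, k)
    scores = tokens @ tokens.T / np.sqrt(k)          # (HW, HW) similarities
    out = softmax(scores, axis=-1) @ tokens          # attention-weighted mix
    refined = out.T.reshape(k, h, w)

    # Recombine refined and untouched channels.
    return np.concatenate([refined, identity], axis=0)

feat = np.random.rand(8, 4, 4).astype(np.float32)
out = partial_spatial_self_attention(feat, ratio=0.25)
print(out.shape)  # (8, 4, 4) — shape is preserved
```

Because attention is computed over only `k` of the `C` channels, the token dimension (and hence the cost of the `(HW, HW)` similarity matrix product) shrinks proportionally, which is the efficiency argument behind partial-channel designs.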
dc.identifier.uri: https://hdl.handle.net/10222/86000
dc.language.iso: en
dc.subject: Foggy Cityscapes
dc.subject: Cityscapes
dc.subject: CUT
dc.subject: Partial Spatial Self-Attention (PSSA)
dc.subject: YOLOv9
dc.subject: unsupervised domain adaptation
dc.subject: Cross-domain detection
dc.title: Domain-Adaptive YOLOv9 for Foggy-Weather Object Detection Using Partial Spatial Self-Attention

Files

Original bundle

Name:
ZiqiXiao2026.pdf
Size:
3.03 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
2.12 KB
Format:
Item-specific license agreed to upon submission