Show simple item record

dc.contributor.author     Zhang, Jie
dc.date.accessioned       2022-08-31T17:04:04Z
dc.date.available         2022-08-31T17:04:04Z
dc.date.issued            2022-08-31
dc.identifier.uri         http://hdl.handle.net/10222/81952
dc.description.abstract   LiDAR and camera sensors complement each other well in an autonomous vehicle system, and various methods have been developed to fuse their data. When a fusion method loses information, however, the autonomous driving system cannot navigate complex driving scenarios reliably. When integrating camera and LiDAR data, a convolutional neural network can be used to fuse the features, compensating for the detail that late fusion loses. Nonetheless, current sensor fusion methods remain inefficient for real self-driving tasks in complex scenarios. To improve the efficiency and effectiveness of context fusion in high-density traffic, we propose a new fusion method and architecture that combines the multi-modal information after extracting features from the LiDAR and camera. This method can emphasize the features of interest by allocating weights at the feature-extractor level.   en_US
dc.language.iso                      en   en_US
dc.subject                           sensor fusion   en_US
dc.subject                           autonomous vehicles   en_US
dc.title                             LiDAR and Camera Fusion in Autonomous Vehicles   en_US
dc.date.defence                      2022-08-23
dc.contributor.department            Department of Electrical & Computer Engineering   en_US
dc.contributor.degree                Master of Applied Science   en_US
dc.contributor.external-examiner     n/a   en_US
dc.contributor.graduate-coordinator  Sieben, Vincent J   en_US
dc.contributor.thesis-reader         Dr. Kamal El-Sankary   en_US
dc.contributor.thesis-reader         Dr. Srinivas Sampalli   en_US
dc.contributor.thesis-supervisor     Dr. Jason Gu   en_US
dc.contributor.ethics-approval       Not Applicable   en_US
dc.contributor.manuscripts           Not Applicable   en_US
dc.contributor.copyright-release     Not Applicable   en_US
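
The following is a minimal sketch of the kind of attention-weighted, feature-level fusion the abstract describes, assuming a PyTorch implementation. The AttentionFusion module name, the channel sizes, and the sigmoid gating scheme are illustrative assumptions, not the thesis's actual architecture.

    # Minimal sketch (assumptions: PyTorch, pre-extracted per-modality feature
    # maps of matching spatial size; the gating scheme is illustrative, not the
    # thesis's actual design).
    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        """Fuse camera and LiDAR feature maps with learned per-channel weights."""
        def __init__(self, cam_ch=256, lidar_ch=256, out_ch=256):
            super().__init__()
            # Project both modalities into a common channel space.
            self.cam_proj = nn.Conv2d(cam_ch, out_ch, kernel_size=1)
            self.lidar_proj = nn.Conv2d(lidar_ch, out_ch, kernel_size=1)
            # Predict a per-channel gate from the concatenated features;
            # this plays the role of "allocating weight at the feature-extractor level".
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(2 * out_ch, out_ch, kernel_size=1),
                nn.Sigmoid(),
            )
            self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, padding=1)

        def forward(self, cam_feat, lidar_feat):
            cam = self.cam_proj(cam_feat)
            lidar = self.lidar_proj(lidar_feat)
            w = self.gate(torch.cat([cam, lidar], dim=1))  # shape (N, out_ch, 1, 1)
            # Weighted blend: w scales camera channels, (1 - w) LiDAR channels.
            blended = torch.cat([w * cam, (1 - w) * lidar], dim=1)
            return self.fuse(blended)

    # Example: fuse 32x32 feature maps from the two backbone branches.
    fusion = AttentionFusion()
    cam = torch.randn(1, 256, 32, 32)    # camera backbone output
    lidar = torch.randn(1, 256, 32, 32)  # projected LiDAR (e.g., BEV) features
    out = fusion(cam, lidar)             # -> torch.Size([1, 256, 32, 32])

The gate pools the concatenated features into per-channel weights, so the network learns which modality to trust for each channel before the final fusion convolution; this mirrors the abstract's idea of paying extra attention to selected features by allocating weights during feature extraction.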