dc.contributor.author | Zhang, Jie | |
dc.date.accessioned | 2022-08-31T17:04:04Z | |
dc.date.available | 2022-08-31T17:04:04Z | |
dc.date.issued | 2022-08-31 | |
dc.identifier.uri | http://hdl.handle.net/10222/81952 | |
dc.description.abstract | LiDAR and cameras offer complementary strengths in an autonomous vehicle
system, and various methods have been developed for fusing their data. Because
of information loss, an autonomous driving system may fail to navigate complex
driving scenarios. When integrating camera and LiDAR data, a convolutional
neural network can be used to fuse the features, compensating for the loss of
fine-grained detail that occurs with late fusion. However, current sensor
fusion methods remain inefficient for real self-driving tasks because of the
complexity of the scenarios involved. To improve the efficiency and
effectiveness of context fusion in high-density traffic, we propose a new
fusion method and architecture that combines multi-modal information after
features are extracted from the LiDAR and camera. This method can emphasize
the features of interest by allocating weights at the feature-extractor
level. | en_US |
dc.language.iso | en | en_US |
dc.subject | sensor fusion | en_US |
dc.subject | autonomous vehicles | en_US |
dc.title | LiDAR and Camera Fusion in Autonomous Vehicles | en_US |
dc.date.defence | 2022-08-23 | |
dc.contributor.department | Department of Electrical & Computer Engineering | en_US |
dc.contributor.degree | Master of Applied Science | en_US |
dc.contributor.external-examiner | n/a | en_US |
dc.contributor.graduate-coordinator | Sieben, Vincent J | en_US |
dc.contributor.thesis-reader | El-Sankary, Kamal | en_US |
dc.contributor.thesis-reader | Sampalli, Srinivas | en_US |
dc.contributor.thesis-supervisor | Gu, Jason | en_US |
dc.contributor.ethics-approval | Not Applicable | en_US |
dc.contributor.manuscripts | Not Applicable | en_US |
dc.contributor.copyright-release | Not Applicable | en_US |