Title: LiDAR and Camera Fusion in Autonomous Vehicles
Author: Zhang, Jie
Date: 2022-08-31
URI: http://hdl.handle.net/10222/81952
Language: en
Keywords: sensor fusion; autonomous vehicles

Abstract:
LiDAR and camera sensors complement each other well in an autonomous vehicle system, and various methods have been developed to fuse their data. When too much information is lost during fusion, however, the driving system cannot navigate complex scenarios. To compensate for the detail that late fusion discards when integrating camera and LiDAR data, a convolutional neural network can instead be used to fuse the features. Even so, current fusion methods remain inefficient for real self-driving tasks in complex scenes. To improve the efficiency and effectiveness of context fusion in high-density traffic, we propose a new fusion method and architecture that combines the multi-modal information after features have been extracted from the LiDAR and camera. By allocating weights at the feature-extractor level, the method can pay extra attention to the features of interest.
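As a rough illustration only, and not the thesis's actual architecture, the sketch below shows one common way to realize attention-weighted feature-level fusion in PyTorch: camera and LiDAR feature maps are concatenated, then re-weighted by a learned channel-attention gate before further mixing. All module names, channel counts, and shapes here are hypothetical assumptions.

```python
# Minimal sketch of attention-weighted mid-level fusion (hypothetical,
# not the thesis code): concatenate camera and LiDAR feature maps, then
# gate channels with learned attention weights before a mixing conv.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse two same-resolution feature maps with channel attention."""

    def __init__(self, cam_channels: int, lidar_channels: int):
        super().__init__()
        fused = cam_channels + lidar_channels
        # Squeeze-and-excitation style gate: global pool -> MLP -> sigmoid.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // 4, fused, kernel_size=1),
            nn.Sigmoid(),
        )
        self.mix = nn.Conv2d(fused, fused, kernel_size=3, padding=1)

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([cam_feat, lidar_feat], dim=1)  # (B, C_cam + C_lidar, H, W)
        weights = self.gate(x)                        # per-channel weights in (0, 1)
        return self.mix(x * weights)                  # emphasize informative channels


if __name__ == "__main__":
    fusion = AttentionFusion(cam_channels=64, lidar_channels=64)
    cam = torch.randn(2, 64, 32, 32)    # camera backbone features (assumed shape)
    lidar = torch.randn(2, 64, 32, 32)  # LiDAR features, e.g. BEV-projected
    print(fusion(cam, lidar).shape)     # torch.Size([2, 128, 32, 32])
```

The channel gate lets the network learn, per scene, how much to trust each modality's feature channels, which matches the abstract's idea of allocating weights at the feature-extractor level rather than fusing only at the decision stage.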