
BVNet: A 3D End-to-End Model Based on Point Cloud

EasyChair Preprint no. 4474

13 pages
Date: October 26, 2020


Point cloud LiDAR data are increasingly used for detecting road situations in autonomous driving. The most important issues here are detection accuracy and processing time. In this study, we propose a new model for improving detection performance based on point clouds. A well-known difficulty in processing 3D point clouds is that the point data are unordered. To address this problem, we define 3D point-cloud features in the grid cells of the bird's view according to the distribution of the points. In particular, we introduce the average and standard deviation of the heights as well as a distance-related density of the points as new features inside a cell. The resulting feature map is fed into a convolutional neural network to obtain the outcomes, thus realizing an end-to-end real-time detection framework called BVNet (Bird's-View-Net). The proposed model is tested on the KITTI benchmark suite, and the results show considerable improvement in detection accuracy compared with models without the newly introduced features.
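The per-cell features described above (mean height, height standard deviation, and a distance-related density) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid extent, cell size, and the exact distance weighting of the density are assumptions chosen for clarity.

```python
import numpy as np

def bev_features(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.2):
    """Rasterize a LiDAR point cloud (N x 3 array of x, y, z) into a
    bird's-view feature map with three channels per cell:
    mean height, height std-dev, and a distance-weighted point density.
    Ranges, cell size, and the density weighting are illustrative choices."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points that fall inside the grid.
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]
    # Flattened cell index for each point.
    ix = ((x - x_range[0]) / cell).astype(int)
    iy = ((y - y_range[0]) / cell).astype(int)
    flat = ix * ny + iy

    count = np.bincount(flat, minlength=nx * ny)
    safe = np.maximum(count, 1)  # avoid division by zero in empty cells
    # Mean and standard deviation of point heights per cell.
    mean_z = np.bincount(flat, weights=z, minlength=nx * ny) / safe
    mean_z2 = np.bincount(flat, weights=z * z, minlength=nx * ny) / safe
    std_z = np.sqrt(np.maximum(mean_z2 - mean_z ** 2, 0.0))
    # Distance-related density: LiDAR returns thin out with range, so here
    # the raw count is weighted by the cell's mean sensor distance
    # (one plausible normalization; the paper defines its own).
    mean_dist = np.bincount(flat, weights=np.sqrt(x * x + y * y),
                            minlength=nx * ny) / safe
    density = count * np.maximum(mean_dist, 1.0)

    return np.stack([mean_z, std_z, density]).reshape(3, nx, ny)
```

The resulting `(3, nx, ny)` tensor is the kind of image-like feature map that can be fed directly to a 2D convolutional backbone.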

Keyphrases: 3D object detection, autonomous driving, CNN, feature extraction, KITTI, point cloud

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:4474,
  author = {Nuo Cheng and Xiaohan Li and Shengguang Lei and Pu Li},
  title = {BVNet: A 3D End-to-End Model Based on Point Cloud},
  howpublished = {EasyChair Preprint no. 4474},
  year = {EasyChair, 2020}}