No. | Video | Title・Author (Affiliation) |
---|---|---|
165 | ◯ | Improving Multi-Camera Bird's-Eye-View Perception Accuracy through Stereo Matching<br>Shuntaro Tsuchiya・Yui Tanaka・Takeru Ninomiya・Hideaki Kido (Hitachi)・Kota Irie・Yoshitaka Okuyama (Hitachi Astemo)<br>Development of bird's-eye-view (BEV) models that integrate multi-camera images in a bird's-eye-view space is in progress. However, image-based recognition remains challenging due to low ranging accuracy. This report shows that using relative distances obtained by stereo matching improves the accuracy of 3D recognition by a BEV model. |
166 | ◯ | Implementing Localization using Deep Learning with LiDAR Point Clouds<br>Kengo Kawahara・Keisuke Yoneda・Ryo Yanase・Amane Kinoshita・Naoki Suganuma (Kanazawa University)<br>To achieve safe autonomous driving, self-localization is essential. This study proposes a matching method that retains the features of LiDAR point clouds in a Pillar structure and converts them into pseudo-images, using deep learning to estimate the vehicle's position as a likelihood distribution. The proposed method aims to achieve more robust localization than conventional point-cloud matching methods. |
167 | ◯ | Simulation of Infrastructure LiDAR using CARLA and Pedestrian Detection with Deep Learning<br>Riku Nikaido・Keigo Hariya・Keisuke Yoneda・Naoki Suganuma (Kanazawa University)<br>LiDAR is widely used in autonomous driving perception and as an infrastructure sensor in urban environments. In this paper, we simulate a stationary LiDAR system specialized for pedestrian detection using CARLA. As a detection method, we leverage multi-frame 3D point clouds for object detection. Furthermore, the study aims to build a robust detection model capable of accurately identifying pedestrians in sparse point cloud data. |
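Entry 166 describes retaining LiDAR point-cloud features in a Pillar structure and converting them into pseudo-images. A minimal sketch of that general idea, quantizing points into a bird's-eye-view grid of pillars and emitting per-pillar statistics as image channels, might look as follows. The grid ranges, cell size, and channel choices here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def points_to_pillar_image(points, x_range=(0.0, 40.0),
                           y_range=(-20.0, 20.0), cell=0.5):
    """Quantize LiDAR points of shape (N, 3) into a 2D grid of "pillars"
    and render a pseudo-image whose channels hold per-pillar statistics
    (here: point count and maximum height). Illustrative sketch only."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    img = np.zeros((nx, ny, 2), dtype=np.float32)  # channels: count, max z

    # Keep only points that fall inside the grid.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]

    # Map each point to its pillar (cell) index.
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    for x, y, z in zip(ix, iy, pts[:, 2]):
        img[x, y, 0] += 1.0                   # point count in the pillar
        img[x, y, 1] = max(img[x, y, 1], z)   # tallest point in the pillar

    return img

# Usage: two nearby points land in the same pillar; the far point is clipped.
pts = np.array([[1.0, 0.0, 0.5], [1.1, 0.1, 1.5], [100.0, 0.0, 0.0]])
img = points_to_pillar_image(pts)  # 80x80 grid with 2 channels
```

The resulting 2D array can then be fed to an ordinary image-based network, which is the practical appeal of pillar-style encodings: no sparse 3D convolutions are required.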