• Session No. 139: Intelligent Vehicle
  • October 25, Hagi Conference Hall, 9:30-11:35
  • Chair: Hiroki Nakamura (Japan Automobile Research Institute)
No. Title・Author (Affiliation)
1

Effect of Cooperative Systems Utilizing Roadside Sensors on Automated Driving at Intersections

Hiroshi Yoshitake (Tokyo Institute of Technology)・Wataru Kugimiya (The University of Tokyo)・Motoki Shino (Tokyo Institute of Technology)

The effect of cooperative systems utilizing roadside sensors on automated driving was evaluated as a first step toward realizing safe and efficient automated driving in mixed traffic. Numerical simulations of automated buses traveling through intersections revealed that the cooperative systems contribute to maintaining safety while improving the efficiency of automated driving.
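
As a rough illustration of the kind of speed policy such a cooperative system enables (the decision logic, speed values, and function name below are assumptions for illustration, not the authors' method), a bus whose own sensors cannot see an occluded approach might keep cruising speed only when a roadside sensor confirms the approach is clear:

    def intersection_approach_speed(onboard_clear: bool, roadside_clear: bool,
                                    cruise_mps: float = 8.0,
                                    creep_mps: float = 2.0) -> float:
        # Without roadside information, an occluded approach forces the bus
        # to creep; a roadside sensor confirming the approach is clear lets
        # it keep cruise speed, improving efficiency without losing safety.
        if onboard_clear or roadside_clear:
            return cruise_mps
        return creep_mps

    # Occluded to onboard sensors, but the roadside sensor reports clear:
    print(intersection_approach_speed(onboard_clear=False, roadside_clear=True))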

2

Calculation Method of Safe Speed for Automated Buses on Straight Roads Considering Sensing Characteristics

Taichi Sawanobori (Tokyo Institute of Technology)・Takaki Yoshikawa (The University of Tokyo)・Hiroshi Yoshitake (Tokyo Institute of Technology)・Yoshio Matsuura・Masaya Segawa (Advanced Smart Mobility)・Motoki Shino (Tokyo Institute of Technology)

To operate an automated bus safely in various environments, we devised a method for calculating a safe speed at which collisions with pedestrians can be avoided when driving on straight roads. The safe speed is calculated from the time it takes a pedestrian to enter the detection range and the time it takes for the pedestrian to be recognized after entering the detection range, taking the sensing characteristics of the automated bus into account. The safe speed was applied in a real-world environment, and its usefulness was investigated.
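
The abstract suggests a stopping-distance style bound. A minimal sketch under assumed parameters (detection range, recognition delay, braking deceleration; the formula itself is an assumption, not the paper's published method):

    import math

    def safe_speed(detection_range_m: float, recognition_delay_s: float,
                   max_decel_mps2: float) -> float:
        # Bound the bus speed v so that it can stop within the detection
        # range R despite the recognition delay t: v*t + v**2/(2*a) <= R.
        # Solving the quadratic for v gives the highest admissible speed.
        a, t, R = max_decel_mps2, recognition_delay_s, detection_range_m
        return -a * t + math.sqrt((a * t) ** 2 + 2.0 * a * R)

    # e.g. 30 m detection range, 0.5 s recognition delay, 3 m/s^2 braking
    print(f"{safe_speed(30.0, 0.5, 3.0) * 3.6:.1f} km/h")   # -> 43.2 km/h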

3

Visualization of the Basis for Judgment of Object Recognition Models by Sensor Fusion of LiDAR and Cameras

Yuusuke Nishio・Tsubasa Hirakawa・Takayoshi Yamashita・Hironobu Fujiyoshi (Chubu University)

Detecting distant vehicles is essential for safe automated driving. We therefore constructed a multimodal 3D object detector using point clouds and images of traffic scenes reproduced by a simulator. By applying a perturbation-based importance visualization method to the detector, we analyze which modality, point cloud or image, contributes more to detection in various scenes.
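
A minimal sketch of perturbation-based modality attribution, assuming a hypothetical detector(points, image) -> score interface (the occlusion scheme and all names are illustrative, not the authors' implementation):

    import numpy as np

    def modality_contribution(detector, points, image,
                              n_trials=50, drop_rate=0.2, seed=0):
        # Occlude part of one modality at a time and record the mean drop
        # in the detector's score; a larger drop means the detector relies
        # more on that modality in this scene.
        rng = np.random.default_rng(seed)
        base = detector(points, image)
        drops = {"lidar": 0.0, "camera": 0.0}
        h, w = image.shape[:2]
        for _ in range(n_trials):
            keep = rng.random(len(points)) > drop_rate   # drop ~20% of points
            drops["lidar"] += base - detector(points[keep], image)
            occluded = image.copy()
            y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
            occluded[y:y + h // 2, x:x + w // 2] = 0     # black out a patch
            drops["camera"] += base - detector(points, occluded)
        return {k: v / n_trials for k, v in drops.items()}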

4

Camera-based Tightly-coupled Fusion for 3D Object Detection

Xiaoyu Wang・Yoshitaka Okuyama・Kota Irie (Hitachi Astemo)

The performance of most current 3D object detection methods drops dramatically on pseudo-LiDAR generated from disparity estimation, because the error of stereo depth estimation grows quadratically with depth. In this work, we propose a tightly coupled fusion that estimates the corresponding 3D bounding box for each 2D detection bounding box with a maximum a posteriori estimator that considers (1) the variation of the transformation matrix between camera coordinates and ego-vehicle coordinates, and (2) the distribution of the transformed pseudo-LiDAR point cloud in logarithmic coordinates. Experiments show that our approach achieves a remarkable accuracy improvement in 3D object detection.
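
A minimal sketch of the MAP idea for one 2D detection, under assumed Gaussian modelling in log-depth (the prior stands in for calibration uncertainty; none of this is the authors' exact formulation):

    import numpy as np

    def map_depth(frustum_depths: np.ndarray,
                  prior_log_mu: float, prior_log_sigma: float) -> float:
        # Depths of pseudo-LiDAR points inside the 2D box's frustum are
        # modelled as Gaussian in log-depth (where stereo error is closer
        # to uniform); the Gaussian prior on log-depth stands in for the
        # uncertainty of the camera-to-ego transformation.
        z = np.log(frustum_depths)
        lik_mu = z.mean()
        lik_var = max(z.var() / len(z), 1e-6)    # standard error, guarded
        prior_var = prior_log_sigma ** 2
        # Conjugate Gaussian MAP estimate: precision-weighted average.
        post = (lik_mu / lik_var + prior_log_mu / prior_var) \
               / (1.0 / lik_var + 1.0 / prior_var)
        return float(np.exp(post))               # back to metres

    # e.g. noisy frustum points around 40 m with a weak prior at 35 m
    print(map_depth(np.array([38.0, 41.5, 40.2, 43.0]), np.log(35.0), 0.3))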

5

Evaluating the Accuracy of Object Detection Models in Scenes where Pedestrians are Present at Intersections

Yotaro Suzuki・Hidenori Itaya・Tsubasa Hirakawa・Takayoshi Yamashita・Hironobu Fujiyoshi (Chubu University)

Evaluating object detection models for automated vehicles requires a large amount of evaluation data, but collecting such data in a real environment is extremely costly. A computer graphics environment is therefore expected to make it possible to create a wide variety of evaluation data. In this study, evaluation scenes are created using the DIVP simulator to evaluate the detection accuracy of an object detection model.
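
A minimal sketch of how detection accuracy might be scored against simulator ground truth, using standard IoU matching (the matching rule and threshold are generic assumptions, not DIVP specifics):

    def iou(a, b):
        # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter) if inter else 0.0

    def recall_at_iou(pred_boxes, gt_boxes, thr=0.5):
        # Fraction of ground-truth objects (exact in a CG environment)
        # matched one-to-one by a prediction with IoU >= thr.
        used, matched = set(), 0
        for g in gt_boxes:
            score, idx = max(((iou(p, g), i) for i, p in enumerate(pred_boxes)
                              if i not in used), default=(0.0, None))
            if idx is not None and score >= thr:
                matched += 1
                used.add(idx)
        return matched / len(gt_boxes) if gt_boxes else 1.0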
