Outdoor Challenges
We employ the KITTI autonomous driving benchmark suite to define our outdoor challenges for RMRC 2013. We now briefly describe the reconstruction and recognition challenges.
Stereo
The stereo matching benchmark consists of 194 training image pairs and 195 test image pairs. The latter 195 pairs will be used to evaluate participants in the challenge.
Our evaluation server computes the average number of bad pixels, both over all non-occluded pixels and over all pixels (occluded and non-occluded, i.e., all ground-truth pixels). All methods are evaluated in the dense regime. If a sparse disparity map is submitted, simple background interpolation is employed. We require that all methods use the same parameter set for all test pairs. The data for the challenge as well as the development kit can be obtained here.
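For concreteness, below is a minimal sketch of such a bad-pixel count, assuming disparity maps are given as NumPy arrays with invalid ground-truth pixels marked by non-positive values; the 3-pixel error threshold is the commonly used KITTI default, and the development kit remains the authoritative reference.

```python
import numpy as np

def bad_pixel_rate(d_est, d_gt, tau=3.0):
    """Fraction of ground-truth pixels whose disparity error exceeds tau pixels.

    d_est, d_gt: float arrays of equal shape; pixels with d_gt <= 0 are
    treated as having no ground truth and are excluded from the average.
    """
    valid = d_gt > 0
    err = np.abs(d_est[valid] - d_gt[valid])
    return float(np.mean(err > tau))
```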
Optical Flow
The optical flow benchmark consists of 194 training image pairs and 195 test image pairs. The latter 195 pairs will be used to evaluate participants in the challenge.
Our evaluation server computes the average number of bad pixels, both over all non-occluded pixels and over all pixels (occluded and non-occluded, i.e., all ground-truth pixels). All methods are evaluated in the dense regime. If a sparse flow field is submitted, simple background interpolation is employed. We require that all methods use the same parameter set for all test pairs. The data for the challenge as well as the development kit can be obtained here.
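The flow counterpart replaces the disparity difference by the Euclidean endpoint error; again a sketch under the same caveats, here assuming an H x W x 2 flow array plus a boolean validity mask for the ground truth.

```python
import numpy as np

def flow_bad_pixel_rate(f_est, f_gt, valid, tau=3.0):
    """Fraction of valid ground-truth pixels whose endpoint error exceeds tau.

    f_est, f_gt: H x W x 2 arrays of (u, v) flow; valid: H x W boolean mask
    marking pixels with ground-truth flow.
    """
    err = np.linalg.norm(f_est - f_gt, axis=2)  # per-pixel endpoint error
    return float(np.mean(err[valid] > tau))
```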
Visual Odometry
The visual odometry benchmark consists of 22 stereo sequences. We provide 11 sequences (00-10) with ground truth trajectories for training and 11 sequences (11-21) without ground truth for evaluation. The latter 11 sequences will be used to evaluate participants in the challenge.
For this challenge you may provide results using monocular or stereo visual odometry. Note that your method has to be fully automatic (e.g., no manual loop-closure tagging is allowed). The parameters should be the same for all sequences. The data for the challenge as well as the development kit can be obtained here.
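The development kit defines the authoritative submission format. As a sketch of the usual KITTI odometry convention, each result file contains one camera pose per frame, written as the 12 row-major entries of a 3x4 transformation matrix relative to the first frame; the helpers below (names are our own) load such a file and compute relative poses, the building block of segment-wise trajectory error evaluation.

```python
import numpy as np

def load_poses(path):
    """Read one pose per line: 12 row-major values of a 3x4 matrix giving the
    pose of the camera at that frame relative to the first frame."""
    poses = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            vals = np.array(line.split(), dtype=np.float64)
            pose = np.vstack([vals.reshape(3, 4), [0.0, 0.0, 0.0, 1.0]])
            poses.append(pose)  # homogeneous 4x4 matrix
    return poses

def relative_pose(p_a, p_b):
    """Rigid transform from frame a to frame b, from which translation and
    rotation errors over sub-trajectories can be computed."""
    return np.linalg.inv(p_a) @ p_b
```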
2D Object Detection and Orientation Estimation
The object detection and object orientation challenge consists of 7481 training images and 7518 test images, comprising a total of 80,256 labeled objects. The latter 7518 test images will be used to evaluate participants in the challenge. The challenge contains 3 classes: cars, pedestrians and cyclists.
We evaluate object detection performance using the PASCAL criteria, and joint object detection and orientation estimation performance using the measure discussed in our CVPR 2012 publication. For cars we require an overlap of 70%, while for pedestrians and cyclists we require an overlap of 50% for a detection. We compute precision-recall curves for object detection and orientation-similarity-recall curves for joint object detection and orientation estimation. In the latter case, not only must the object's 2D bounding box be located correctly, but its orientation estimate in bird's eye view is evaluated as well. To rank the methods we compute average precision and average orientation similarity. The data for the challenge as well as the development kit can be obtained here.
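The full protocol is implemented in the development kit; the two per-detection quantities it builds on are the PASCAL overlap (intersection-over-union) and a cosine-based orientation similarity. A minimal sketch, assuming boxes given as (x1, y1, x2, y2) pixel coordinates and orientations in radians:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two 2D boxes given as (x1, y1, x2, y2).

    A detection counts as correct when iou >= 0.7 (cars) or
    iou >= 0.5 (pedestrians, cyclists)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def orientation_similarity(theta_est, theta_gt):
    """Per-detection term of the average orientation similarity (AOS):
    1 for an identical orientation, 0 for the opposite orientation."""
    return (1.0 + np.cos(theta_est - theta_gt)) / 2.0
```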
Tracking
The tracking challenge consists of 21 training and 29 test sequences, for a total of 8008 training images and 11905 test images. The following classes will be evaluated: cars, pedestrians and cyclists.
We evaluate submitted results using common tracking metrics, i.e., CLEAR MOT and Mostly-Tracked/Partly-Tracked/Mostly-Lost. Since there is no single metric, we will not rank the different competitors. The data for the challenge as well as the development kit can be obtained here.
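As a rough sketch of the quantities involved (the 80%/20% coverage thresholds for Mostly-Tracked/Mostly-Lost are the commonly used values; function names are our own, and the development kit remains authoritative):

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """CLEAR MOT accuracy: 1 minus the ratio of all error events (misses,
    false positives, identity switches) to the number of ground-truth objects."""
    return 1.0 - (false_negatives + false_positives + id_switches) / float(num_gt)

def coverage_class(tracked_fraction):
    """Classify a ground-truth trajectory by the fraction of its frames
    in which it is tracked."""
    if tracked_fraction >= 0.8:
        return "mostly_tracked"
    if tracked_fraction <= 0.2:
        return "mostly_lost"
    return "partly_tracked"
```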
For any questions, contact andreas (dot) geiger (at) tue (dot) mpg (dot) de