Databases for evaluating the performance of active stereo systems are still missing: the stereo geometry of the existing databases is fixed and characterized by parallel-axis cameras. The software environment we developed makes it possible to collect a large amount of data in different situations, e.g. vergent stereo cameras with different fixation points and eye orientations, or optic flow maps obtained for different ego-motion velocities and gaze orientations. The true disparity and optic flow maps can be stored together with the 3D data from which they have been generated and the corresponding image sequences. These data can also be used for future algorithm benchmarking by other researchers in the Computer Vision community. If you need more information about the benchmarking datasets, please contact me by email.
The datasets on this page can be downloaded and used by Computer Vision researchers. We grant permission to use and publish all the images, disparity and optic flow maps on this website. However, if you use our datasets, we request that you cite the appropriate paper.
Sequence of stereo pairs with ground truth disparity and optic flow maps
The following sequence shows the active exploration of an indoor scene, representing a desktop with different objects at various distances, acquired by using a laser scanner. The simulator aims to mimic the behavior of a human-like robotic system acting in the peripersonal space. Accordingly, the interocular distance between the two cameras is set to 6 cm and the distance between the cameras and the center of the scene is about 80 cm. The fixation points have been chosen arbitrarily, thus simulating an active exploration of the scene: in their proximity the disparity between the left and the right projections is zero, and it grows when moving away from the fixation point. The dataset is composed of 10 left/right stereo pairs and the corresponding horizontal and vertical ground truth disparity maps. Click here to download the dataset.
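To illustrate the geometry described above, here is a minimal Python sketch (not part of the simulator; all function names are ours) that computes the ground-truth horizontal and vertical disparity of a 3D point seen by two vergent pinhole cameras fixating a common point, using the 6 cm baseline and roughly 80 cm fixation distance mentioned in the text. As stated above, the disparity vanishes at the fixation point and grows away from it.

```python
import numpy as np

def look_at_rotation(cam_pos, target):
    """Rotation matrix whose rows are the camera axes, with the optical
    (z) axis pointing from cam_pos towards target."""
    z = target - cam_pos
    z = z / np.linalg.norm(z)
    x = np.cross(np.array([0.0, 1.0, 0.0]), z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

def project(point, cam_pos, R, f=1.0):
    """Pinhole projection of a 3D point into normalized image coordinates."""
    pc = R @ (point - cam_pos)      # point in the camera reference frame
    return f * pc[:2] / pc[2]       # perspective divide

baseline = 0.06                          # 6 cm interocular distance (from the text)
fixation = np.array([0.0, 0.0, 0.80])    # fixation point ~80 cm ahead (from the text)

left_pos = np.array([-baseline / 2, 0.0, 0.0])
right_pos = np.array([+baseline / 2, 0.0, 0.0])
R_left = look_at_rotation(left_pos, fixation)
R_right = look_at_rotation(right_pos, fixation)

def disparity(point):
    """(horizontal, vertical) disparity of a 3D point, left minus right."""
    return project(point, left_pos, R_left) - project(point, right_pos, R_right)

print(disparity(fixation))                     # ~ [0, 0] at the fixation point
print(disparity(np.array([0.1, 0.05, 0.6])))   # non-zero away from fixation
```

Note that with vergent (toed-in) cameras a non-zero vertical disparity component appears for off-axis points, which is why the dataset provides both horizontal and vertical ground-truth maps.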
Horizontal disparity | Vertical disparity
The simulator can be used to obtain sequences acquired by a moving observer. The position and the orientation of the head can be changed, in order to mimic navigation in the virtual environment. For the sake of simplicity, ocular movements are not considered and the visual axes are kept parallel. Each dataset is composed of 8 frames and the ground truth optic flow map with respect to the central frame.
The robot has velocity along the Z axis only. Click here to download the dataset.
The robot has velocity along the Z axis and along the positive X axis. Click here to download the dataset.
The robot has velocity along the Z axis and along the negative X axis. Click here to download the dataset.
The robot has velocity along the Z axis and rotates around the Y axis. Click here to download the dataset.
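The ground-truth flow for ego-motion cases like these follows the classic instantaneous motion-field equations. The sketch below is illustrative only (sign conventions differ between references, and this function is not part of the simulator): it gives the ideal flow at a normalized image point for a camera translating with T = (Tx, Ty, Tz) and rotating with W = (wx, wy, wz).

```python
def motion_field(x, y, Z, T, W, f=1.0):
    """Ideal optic flow (u, v) at normalized image coordinates (x, y) for a
    point at depth Z, under camera translation T and rotation W.
    One common sign convention of the motion-field equations."""
    Tx, Ty, Tz = T
    wx, wy, wz = W
    u = (-f * Tx + x * Tz) / Z + wx * x * y / f - wy * (f + x**2 / f) + wz * y
    v = (-f * Ty + y * Tz) / Z + wx * (f + y**2 / f) - wy * x * y / f - wz * x
    return u, v

# Forward translation only (as in the first sequence above): the flow is
# purely radial, expanding from the focus of expansion at the image center.
u, v = motion_field(0.2, 0.1, Z=2.0, T=(0.0, 0.0, 0.5), W=(0.0, 0.0, 0.0))
print(u, v)   # u = 0.05, v = 0.025 -> flow points away from the center
```

The translational part scales with 1/Z (it carries the depth information), while the rotational part is independent of depth; this is why the pure-rotation portion of the last sequence contributes no information about scene structure.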
Evaluation of time to contact and orientation of the surfaces
The tool has been used to simulate a standard moving camera. A set of sequences to benchmark algorithms for motion interpretation (i.e. time-to-contact evaluation and reconstruction of the orientation of the surfaces) has been created.
Reference paper: M. Chessa, F. Solari, S.P. Sabatini (2013) Adjustable Linear Models for Optic Flow based Obstacle Avoidance. Computer Vision and Image Understanding 117(6), pp. 603-619. [doi]
Please write to Manuela Chessa to obtain the sequences and the ground truth parameters.
Table 1. Description of the synthetic sequences. The linear distances are expressed in meters, the angular quantities in radians, and the time in frames. The columns "radius" and "heading" indicate the curve radius and whether the obstacle lies in the heading direction, respectively. The number of frames for each sequence ranges from 25 to 250.
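As background for what these sequences benchmark: time to contact can be estimated from the divergence of the optic flow field, since for a fronto-parallel surface approached along the optical axis div(flow) = 2 / TTC. The sketch below is a hedged illustration of that relation on synthetic radial flow, not the method of the reference paper; the depth and speed values are hypothetical.

```python
import numpy as np

def ttc_from_divergence(u, v, dx):
    """Estimate time to contact (in frames) from the mean divergence of the
    flow field, using div(flow) = 2 / TTC for a fronto-parallel surface
    approached along the optical axis."""
    du_dx = np.gradient(u, dx, axis=1)   # x varies along axis 1 (meshgrid 'xy')
    dv_dy = np.gradient(v, dx, axis=0)   # y varies along axis 0
    return 2.0 / np.mean(du_dx + dv_dy)

# Synthetic radial flow for a surface at depth Z approached at Tz per frame;
# the true time to contact is Z / Tz frames.
xs = np.linspace(-0.5, 0.5, 41)
x, y = np.meshgrid(xs, xs)
Z, Tz = 2.0, 0.1                     # hypothetical depth and per-frame speed
u, v = x * Tz / Z, y * Tz / Z        # expanding flow field
print(ttc_from_divergence(u, v, xs[1] - xs[0]))   # ~ Z / Tz = 20 frames
```

Because the divergence is depth-scaled only through the ratio Z / Tz, the estimate needs no knowledge of the absolute distance or speed, which is what makes time to contact attractive for obstacle avoidance.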