Virtual Reality Tool for Active Vision

Virtual Reality (VR) can be used as a tool to analyze the interactions between the visual system of a robotic agent and its environment, with the aim of designing algorithms that solve the visual tasks necessary to behave properly in the 3D world. The novelty of our approach lies in the use of VR as a tool to simulate the behavior of vision systems. The visual system of a robot (e.g., an autonomous vehicle, an active vision system, or a driving assistance system) and its interplay with the environment can be modeled through the geometric relationships between the virtual stereo cameras and the virtual 3D world. Unlike conventional applications, where VR is used for the perceptual rendering of visual information to a human observer, in the proposed approach a virtual world is rendered to simulate the actual projections onto the cameras of a robotic system. In this way, machine vision algorithms can be quantitatively validated against the ground-truth data provided by the knowledge of both the structure of the environment and the vision system.
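The geometric relationship between the virtual stereo cameras and the virtual 3D world can be illustrated with a minimal sketch. Assuming a simple parallel-axis pinhole stereo pair (the focal length, baseline, and principal point below are hypothetical values, not parameters of our tool), a known 3D point projects onto both image planes, and the ground-truth disparity follows directly from the scene geometry:

```python
import numpy as np

def project_stereo(point, focal=500.0, baseline=0.1, cx=320.0, cy=240.0):
    """Project a 3D point (expressed in the left-camera frame, Z forward)
    onto a parallel-axis virtual stereo pair.

    Returns the left and right pixel coordinates and the ground-truth
    horizontal disparity, which equals focal * baseline / Z.
    """
    X, Y, Z = point
    xl = focal * X / Z + cx                # left image column
    xr = focal * (X - baseline) / Z + cx   # right camera is shifted by the baseline
    y = focal * Y / Z + cy                 # same row in both images (rectified pair)
    disparity = xl - xr
    return (xl, y), (xr, y), disparity

left_px, right_px, d = project_stereo((0.2, 0.0, 2.0))
# d == 500 * 0.1 / 2.0 == 25.0 pixels
```

Because the 3D structure of the virtual scene is known exactly, such projections provide dense ground-truth disparity (and, by differentiating the projections over time, ground-truth optic flow) for validating machine vision algorithms.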

The tool developed by our laboratory is described in:

Manuela Chessa, Fabio Solari and Silvio P. Sabatini (2011). Virtual Reality to Simulate Visual Tasks for Robotic Systems. In: Virtual Reality, Jae-Jin Kim (Ed.), InTech, ISBN 978-953-307-518-1.

Other related papers:

M. Chessa, F. Solari and S.P. Sabatini (2009). A virtual reality simulator for active stereo vision systems. VISAPP 2009. [pdf]

The datasets used in our papers can be downloaded and used by computer vision researchers. We grant permission to use and publish all the images, disparity maps, and optic flow maps available on this website. However, if you use our datasets, we ask that you cite the appropriate paper.
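Ground-truth disparity and optic flow maps make quantitative evaluation straightforward. As a minimal sketch (the metric below is the standard average endpoint error, not a procedure specific to our datasets), an estimated flow field can be scored against a ground-truth field as follows:

```python
import numpy as np

def average_endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth flow
    vectors; both inputs are H x W x 2 arrays of (u, v) displacements."""
    return float(np.mean(np.linalg.norm(flow_est - flow_gt, axis=-1)))

# Toy example: uniform 1-pixel horizontal ground-truth flow,
# and an estimate with a spurious 0.5-pixel vertical component.
gt = np.zeros((4, 4, 2)); gt[..., 0] = 1.0
est = np.zeros((4, 4, 2)); est[..., 0] = 1.0; est[..., 1] = 0.5
# average_endpoint_error(est, gt) -> 0.5
```

The same idea applies to disparity maps, where the error reduces to the mean absolute difference between estimated and ground-truth disparities.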