Abstract

The goal of our research is to design perception devices dedicated to improving driving safety. These devices are intended to be part of new driving assistance systems aimed at increasing road safety. Many of the safety systems now emerging in vehicles use "distance to obstacle" information obtained from range sensors such as radar, laser scanners or ultrasonic sensors. These systems locate objects with great precision relative to the sensor, but they cannot locate them relative to the road or the lane. This is why some systems use passive sensors, such as cameras integrated into the vehicle. A device based partly on computer vision would compensate for this deficiency, but the localisation computed by vision needs to be analysed in terms of precision. This paper explores differences in localisation accuracy between systems using only one camera (monocular vision) and systems using two cameras (stereo vision). A complete study of the errors found in depth reconstruction is presented.
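As background for the kind of error analysis the abstract announces, the following is the standard first-order error-propagation relation for stereo depth reconstruction; it is a textbook result and is not taken from the paper itself. For a rectified stereo pair with focal length f (in pixels), baseline b and measured disparity d, the reconstructed depth and its sensitivity to a disparity error \delta d are

\[
Z = \frac{f\,b}{d}, \qquad
\delta Z \approx \left|\frac{\partial Z}{\partial d}\right|\,\delta d = \frac{Z^{2}}{f\,b}\,\delta d ,
\]

so, for a fixed disparity error, depth uncertainty grows quadratically with distance to the object.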