Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs
Abstract
The ZED camera is a binocular vision system that provides 3D perception of the world. It can be applied in autonomous robot navigation, virtual reality, tracking, motion analysis, and related fields. This paper proposes a mathematical error model for the depth data estimated by the ZED camera at each of its operating resolutions. To do so, the ZED is attached to an Nvidia Jetson TK1 board, forming an embedded system that processes the raw data the ZED acquires from a 3D checkerboard. Corners are extracted from the checkerboard using the RGB data, and these points are reconstructed in 3D from the disparity data computed by the ZED camera, yielding a partially ordered, regularly distributed (in 3D space) cloud of corner points whose coordinates are computed by the device software. The same corners also have known ideal world (3D) positions with respect to a coordinate frame whose origin is set empirically on the pattern. The computed coordinates from the camera's data and the known (ideal) coordinates of each corner can thus be compared to estimate the error between the measured and ideal locations of the detected corner cloud. Using a curve-fitting technique, we then obtain equations that model the RMS (Root Mean Square) error. This procedure is repeated for several resolutions of the ZED sensor and at several distances. Results show the sensor is effective up to a maximum distance of approximately sixteen meters, in real time, which allows its use in robotics and other online applications.
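The pipeline summarized above (checkerboard corner detection in RGB, disparity-based 3D reconstruction, RMS error against known corner positions, and curve fitting of the error versus distance) can be sketched in code. The following is a minimal illustration assuming OpenCV and NumPy; the pattern size, square size, the planar ideal grid, the reprojection matrix Q, and the quadratic fit degree are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the error-modeling pipeline described in the abstract.
# Pattern size, square size, Q matrix, and fit degree are assumptions.
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners of the checkerboard (assumed)

def ideal_grid(square_mm=100.0):
    """Ideal (world) corner positions, origin at one corner of the pattern.
    The paper uses a 3D pattern; this planar grid is a simplifying assumption."""
    xs, ys = np.meshgrid(np.arange(PATTERN[0]), np.arange(PATTERN[1]))
    return square_mm * np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], 1)

def corners_3d(rgb, disparity, Q):
    """Detect checkerboard corners in RGB and lift them to 3D via disparity."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    cloud = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 metric point cloud
    px = corners.reshape(-1, 2).round().astype(int)
    return cloud[px[:, 1], px[:, 0]]              # one 3D point per corner

def rms_error(measured, ideal):
    """RMS of Euclidean distances between measured and ideal corner positions."""
    d = np.linalg.norm(measured - ideal, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def fit_error_model(distances, rms_errors, degree=2):
    """Fit RMS error as a polynomial in pattern distance (degree is assumed);
    in the paper, one such fit is obtained per ZED resolution."""
    return np.polyfit(distances, rms_errors, degree)
```

In use, `corners_3d` and `rms_error` would be evaluated at each tested distance and resolution, and the resulting (distance, RMS error) pairs passed to `fit_error_model` to obtain the error equations.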
Keywords
Sensor Systems, 3D and Stereo
Copyright (c) 2018 Luis Enrique Ortiz Fernandes, Viviana Elizabeth Cabrera Avila, Luiz M G Goncalves
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.