Indoor/outdoor navigation system based on possibilistic traversable area segmentation for visually impaired people
Abstract
Autonomous collision avoidance for visually impaired people requires specific processing for an accurate definition of the traversable area. Real-time processing of an image sequence for traversable area segmentation is therefore essential. Low-cost systems imply the use of low-quality cameras, whose images exhibit great variability in traversable area appearance, both indoors and outdoors. Given the ambiguity affecting object and traversable area appearance induced by reflections, illumination variations, occlusions, etc., accurate segmentation of the traversable area under such conditions remains a challenge. Moreover, indoor and outdoor environments add further variability to traversable areas. In this paper, we present a real-time approach for fast traversable area segmentation from an image sequence recorded by a low-cost monocular camera for a navigation system. To account for all kinds of variability in the image, we apply possibility theory to model information ambiguity. An efficient way of updating the traversable area model under each environmental condition is to take traversable area samples from the same processed image and use them to build possibility maps. Fusing these maps then yields a fair model of the traversable area. The performance of the proposed system was evaluated on public databases covering indoor and outdoor environments. Experimental results show that the method is competitive, leading to higher segmentation rates.
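The abstract does not spell out implementation details, but the general idea of building possibility maps from traversable-area samples taken in the current frame and then fusing them can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' method: the triangular possibility distribution, the minimum-based fusion, the bottom-center reference window, and the names `possibility_map` and `segment_traversable` are all assumptions introduced here for illustration.

```python
import numpy as np

def possibility_map(channel, ref_mean, ref_std, eps=1e-6):
    """Triangular possibility distribution centred on the reference
    window statistics (one illustrative choice of distribution)."""
    spread = 3.0 * ref_std + eps
    return np.clip(1.0 - np.abs(channel - ref_mean) / spread, 0.0, 1.0)

def segment_traversable(image, ref_box, threshold=0.5):
    """image: HxWx3 float array in [0, 1]; ref_box: (y0, y1, x0, x1),
    a reference window assumed to contain traversable ground."""
    y0, y1, x0, x1 = ref_box
    ref = image[y0:y1, x0:x1]
    maps = []
    for c in range(image.shape[2]):
        mu = ref[..., c].mean()
        sigma = ref[..., c].std()
        maps.append(possibility_map(image[..., c], mu, sigma))
    # Conjunctive (minimum) fusion of the per-channel possibility maps.
    fused = np.minimum.reduce(maps)
    return fused >= threshold

# Example: reference window at the bottom centre of the frame,
# assumed to lie on the traversable area directly ahead of the user.
if __name__ == "__main__":
    frame = np.random.rand(240, 320, 3)  # stand-in for a camera frame
    h, w = frame.shape[:2]
    ref_box = (h - 40, h, w // 2 - 40, w // 2 + 40)
    mask = segment_traversable(frame, ref_box)
    print("traversable pixels:", int(mask.sum()))
```

Because the reference statistics are recomputed from each processed frame, the traversable-area model adapts to changing illumination and to the switch between indoor and outdoor scenes, which is the motivation the abstract gives for per-image updating.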
Keywords
Traversable area segmentation, indoor/outdoor environment, possibility theory, fusion, reference window, visually impaired people
Published
2016-08-01
Copyright (c) 2016 Jihen Frikha, Dorra Sellami, Imen Khanfir Kallel
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.