Background: Deep learning systems have improved device performance through more accurate object detection in many areas, including medical aids in general and navigational aids for the visually impaired in particular. Systems addressing different needs are available, and many effectively detect static obstacles.

Purpose: This research reviews deep learning systems used with navigational tools for the visually impaired and provides a framework to guide future research.

Methods: We compare current deep learning systems used with navigational tools for the visually impaired and compile a taxonomy of indispensable system features.

Results: We identify persistent challenges to detection and show that our taxonomy of improved navigational systems is sufficiently robust to be generally applied.

Conclusion: This critical analysis is, to the best of our knowledge, the first of its kind and provides a much-needed overview of the field.

Implications for Rehabilitation: Deep learning systems can provide low-cost solutions for the visually impaired. Of these, convolutional neural networks (CNNs) and fully convolutional neural networks (FCNs) show great promise for developing multifunctional technology for the visually impaired (i.e., technology less oriented to a single specific task). CNNs also show potential for overcoming challenges caused by moving and occluded objects. This work further highlights a need for greater emphasis on feedback to the visually impaired, which remains limited in many technologies.
Number of pages: 9
Journal: Disability and Rehabilitation: Assistive Technology
Publication status: E-pub ahead of print, 06 Nov 2019