Abstract:
India is currently home to around 12 million blind people out of 39 million globally, meaning nearly one-third of the world's blind population resides in India. People with low vision or complete blindness find it difficult to navigate outside familiar environments; even a simple task like walking down a crowded street can pose a great challenge. Because of this, many people with low vision rely on a sighted friend or family member to help them navigate unknown environments. This calls for a device that can act as a guide for such users. One essential task such a guide must perform is keeping the user safe from animals in outdoor environments.
This thesis presents the design and implementation of an animal detection module for MAVI (Mobility Assistant for the Visually Impaired), an outdoor navigation system. The goal is to make the user aware of animals (cows and dogs) present around them. The work involves measuring accuracy, performance, and power/energy consumption on the ZedBoard (an embedded platform) while varying the configuration of various OpenCV function parameters. It includes setting up a cross-compilation and build environment for ZedBoard ARM development and profiling the algorithms. Both traditional computer vision techniques and deep learning methods for animal detection have been applied.
The exploration has been carried out while being mindful of the other processes and applications that will run alongside this module. The speed and memory requirements of the proposed methods are also evaluated. The module exposes useful parameters to a controller so that performance and energy consumption can be tuned according to the circumstances and needs of a visually impaired person. The complete animal detection pipeline is presented. A novel dataset of Indian cows, with annotations suitable for evaluating object detection techniques, has also been released. A major challenge for such a module is to perform well under the diversity and complexity of the Indian scenario, which arise from non-standard practices.