Accurate, robust, and infrastructure-free pedestrian positioning and navigation systems have gained significant attention in recent years due to their diverse applications. GPS is ineffective indoors, and indoor navigation systems that rely on fixed infrastructure, such as beacons or Wi-Fi networks, pose practical and cost challenges. To address this, there is a growing demand for self-reliant navigation systems that function seamlessly both indoors and outdoors. These systems often combine sensor fusion and machine learning for precise and adaptive navigation, and they can run on standard smartphones, providing a portable and comprehensive navigation tool.
A specific beneficiary group for such systems is individuals with visual impairments, who rely on tools such as long canes or guide dogs. Self-reliant navigation systems can be tailored to their specific needs, enhancing mobility and independence both indoors and outdoors. Existing research has primarily focused on sighted individuals; however, there is increasing interest in understanding the unique challenges faced by visually impaired individuals and in optimizing systems for more inclusive and effective solutions.
This dissertation addresses this need by developing a Pedestrian Dead Reckoning (PDR) system for inertial navigation on smartphones to assist visually impaired individuals in indoor settings. Such a PDR system requires two key components: step detection and step length estimation.
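The abstract does not spell out the dead-reckoning update itself, so the following is only a minimal sketch of the standard PDR recurrence assumed here: each detected step advances the 2-D position by the estimated step length along the current heading. The function and variable names are illustrative, not taken from the dissertation.

```python
import numpy as np

def pdr_update(position, heading_rad, step_length):
    """Advance a 2-D position by one detected step.

    position:    np.array([x, y]) in meters
    heading_rad: walking direction in radians (e.g., from gyroscope/compass fusion)
    step_length: estimated length of the detected step in meters
    """
    delta = step_length * np.array([np.cos(heading_rad), np.sin(heading_rad)])
    return position + delta

# Example: one 0.7 m step while heading along the x-axis (0 rad)
pos = pdr_update(np.array([0.0, 0.0]), 0.0, 0.7)
```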
For step detection within our system, an LSTM-based network was developed, trained, and tested on the WeAllWalk data set, which includes inertial data gathered from ten blind and five sighted walkers. The results achieved on this data set surpassed existing benchmarks, highlighting the crucial role that the choice of walker community represented in the training data plays in determining results. Furthermore, the PDR system incorporating this step detection method outperformed the state-of-the-art learning-based model, RoNIN, in path reconstruction on the WeAllWalk data set.
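The abstract does not give the detector's architecture details, so the sketch below is a plausible PyTorch rendering of an LSTM-based step detector under assumed settings: windows of 6-axis inertial data (accelerometer plus gyroscope) mapped to a per-timestep step score. Layer sizes, window length, and sampling rate are assumptions.

```python
import torch
import torch.nn as nn

class StepDetectorLSTM(nn.Module):
    """Illustrative LSTM step detector: 6-axis IMU window -> per-sample step logit."""

    def __init__(self, input_size=6, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # one logit per timestep

    def forward(self, x):
        # x: (batch, time, 6) accelerometer + gyroscope samples
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # (batch, time) step logits

# Example forward pass on a 2 s window sampled at 100 Hz (assumed rate)
model = StepDetectorLSTM()
logits = model(torch.randn(1, 200, 6))
probs = torch.sigmoid(logits)  # threshold or peak-pick to obtain discrete step events
```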
For step length estimation, a model consisting of an LSTM layer followed by four fully connected layers was implemented. The same network scheme was used to predict either step length or walking speed (which can be integrated over a step period to obtain step length). In an initial study, data was collected from twelve sighted participants who traversed four routes with varying stride lengths. The results from sighted participants suggest that step length can be predicted more reliably than average walking speed over each step. Subsequently, the model was trained and tested on data from seven blind participants. The results highlighted differences in gait patterns between sighted and blind walkers, emphasizing the importance of designing assisted navigation systems based on data from the visually impaired community.
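A minimal sketch of the stated layout (one LSTM layer followed by four fully connected layers) is given below. The layer widths, input format, and activation choices are assumptions; the abstract only fixes the overall structure and that the same scheme can regress either step length or walking speed.

```python
import torch
import torch.nn as nn

class StepLengthLSTM(nn.Module):
    """Sketch of the described regressor: one LSTM layer, then four fully connected layers."""

    def __init__(self, input_size=6, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Sequential(
            nn.Linear(hidden_size, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),  # step length (m) or walking speed (m/s)
        )

    def forward(self, x):
        # x: (batch, time, 6) inertial samples covering one step window
        out, _ = self.lstm(x)
        return self.fc(out[:, -1]).squeeze(-1)  # regress from the last hidden state

# When the target is walking speed, step length follows by integrating over the
# step duration: step_length = predicted_speed * step_duration_seconds
```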
Finally, an iOS application named WayFinding was designed to aid indoor navigation for blind travelers. The step detector module described above was integrated into this app; however, for this study, a calibrated step length was used instead of the step length estimator. WayFinding enables an individual to determine and follow a route through building corridors to reach a desired destination, assuming the app has access to the building's floor plan. The app relies exclusively on the smartphone's inertial sensors and requires no infrastructure modifications, such as installing and maintaining BLE beacons. A watch-based user interface and speech-based notifications enable hands-free interaction for blind users. A user study with seven blind participants was conducted in our campus buildings to assess the system's performance. All participants successfully navigated the pre-defined routes and provided positive feedback in the post-experiment interviews and questionnaires.
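The abstract does not describe how WayFinding computes routes from the floor plan; a common approach, assumed here purely for illustration, is to model corridor junctions as a weighted graph and run a shortest-path search. The node names, distances, and graph below are hypothetical.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a corridor-waypoint graph: {node: [(neighbor, meters), ...]}."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical corridor graph: lobby -> hallway junctions -> office
floor_plan = {
    "lobby": [("junction_A", 12.0)],
    "junction_A": [("junction_B", 8.0), ("office_341", 15.0)],
    "junction_B": [("office_341", 5.0)],
}
print(shortest_route(floor_plan, "lobby", "office_341"))
# (25.0, ['lobby', 'junction_A', 'junction_B', 'office_341'])
```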

Event Host: Fatemeh Elyasi, Ph.D. Candidate, Computer Science and Engineering

Advisor: Roberto Manduchi
