Oladipupo, Ridwan
2026-04-17
https://hdl.handle.net/10222/86038

Abstract: Over 2.2 billion blind and low vision people globally struggle to navigate multi-level buildings independently. This thesis addresses this problem through three studies. First, we tested low-cost cameras for computer vision applications. Camera C5 performed well (0.96 accuracy, $27.99 cost), but all cameras required adequate lighting; without light, detection failed completely. Second, we interviewed 20 blind and low vision users about navigating buildings. Most (85%) could not find elevators in unfamiliar buildings without help, and 100% relied on sighted guides despite having good navigation skills; the core problem was a lack of information about where elevators were located. Third, we developed SmartEye, a prototype navigation system achieving a 78.96% usability score and a 91.7% recommendation rate. However, evaluation revealed a critical “last-meter navigation gap”: accurate elevator detection alone does not ensure successful call button location (r = 0.092). All 12 participants identified hands-free operation as non-negotiable, with guide dog and white cane users reporting that operating handheld devices while maintaining mobility aids is physically impossible. Key findings reveal that effective multi-level navigation requires: (1) hands-free wearable form factors compatible with existing mobility aids, (2) multimodal audio and haptic feedback, (3) fine-grained directional guidance beyond proximity detection, and (4) adequate environmental lighting.
Computer vision can help blind people find elevators in unfamiliar buildings, which is the first step toward accessible multi-level navigation.

Language: en
Keywords: accessibility; blind and low vision; indoor navigation; multi-level buildings; elevator detection; computer vision; wearable technology; assistive technology; SmartEye; hands-free navigation; multimodal feedback; last-meter navigation
Title: Towards Understanding Multilevel Building Navigation for the Blind and Low Vision Individuals