References
- World Health Organization, “Assistive technology,” WHO, 2018. https://www.who.int/news-room/fact-sheets/detail/assistive-technology (accessed Mar. 20, 2022).
- World Health Organization, “Blindness and vision impairment,” WHO, 2021. https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment (accessed Feb. 20, 2022).
- L. S. Ambati, O. F. El-Gayar, and N. Nawar, “Influence of the digital divide and socio-economic factors on prevalence of diabetes,” Issues Inf. Syst., vol. 21, no. 4, pp. 103–113, 2020. doi: 10.48009/4_iis_2020_103-113.
- C. Guo et al., “Prevalence, causes and social factors of visual impairment among Chinese adults: Based on a national survey,” Int. J. Environ. Res. Public Health, vol. 14, no. 9, p. 1034, 2017. doi: 10.3390/ijerph14091034.
- C. Albus, “Psychological and social factors in coronary heart disease,” Ann. Med., vol. 42, no. 7, pp. 487–494, 2010. doi: 10.3109/07853890.2010.515605.
- O. F. El-Gayar, L. S. Ambati, and N. Nawar, “Wearables, artificial intelligence, and the future of healthcare,” pp. 104–129, 2020. doi: 10.4018/978-1-5225-9687-5.ch005.
- P. Pandey and R. Litoriya, “An activity vigilance system for elderly based on fuzzy probability transformations,” J. Intell. Fuzzy Syst., vol. 36, no. 3, pp. 2481–2494, 2019. doi: 10.3233/JIFS-181146.
- P. Pandey and R. Litoriya, “Ensuring elderly well being during COVID-19 by using IoT,” Disaster Med. Public Health Prep., vol. 16, no. 2, pp. 763–766, 2020. doi: 10.1017/dmp.2020.390.
- L. S. Ambati, O. F. El-Gayar, and N. Nawar, “Design principles for multiple sclerosis mobile self-management applications: A patient-centric perspective,” 2021.
- Z. Zou et al., “Object detection in 20 years: A survey,” 2019. [Online]. Available: http://arxiv.org/abs/1905.05055.
- L. S. Ambati and O. F. El-Gayar, “Human activity recognition: A comparison of machine learning approaches,” J. Midwest Assoc. Inf. Syst., vol. 2021, no. 1, 2021. doi: 10.17705/3jmwa.000065.
- V. Iyer et al., “Virtual assistant for the visually impaired,” in 2020 5th International Conference on Communication and Electronics Systems (ICCES), 2020, pp. 1057–1062. doi: 10.1109/ICCES48766.2020.9137874.
- R. Saffoury et al., “Blind path obstacle detector using smartphone camera and line laser emitter,” in 2016 1st International Conference on Technology and Innovation in Sports, Health and Wellbeing (TISHW), 2016, pp. 1–7. doi: 10.1109/TISHW.2016.7847770.
- A. Mohanta et al., “Application for the visually impaired people with voice assistant,” Int. J. Innov. Technol. Explor. Eng., vol. 9, no. 6, pp. 495–497, 2020. doi: 10.35940/ijitee.F3789.049620.
- V. Sharma, V. M. Singh, and S. Thanneeru, “Virtual assistant for visually impaired,” SSRN Electron. J., 2020. doi: 10.2139/ssrn.3580035.
- A. M. Weeratunga et al., “Project Nethra - an intelligent assistant for the visually disabled to interact with internet services,” in 2015 IEEE 10th International Conference on Industrial and Information Systems (ICIIS), 2015, pp. 55–59. doi: 10.1109/ICIINFS.2015.7398985.
- N. Kumaran et al., “Intelligent personal assistant - implementing voice commands enabling speech recognition,” in 2020 International Conference on System, Computation, Automation and Networking (ICSCAN), 2020, pp. 1–5. doi: 10.1109/ICSCAN49426.2020.9262279.
- V. Kepuska and G. Bohouta, “Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home),” in 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), 2018, pp. 99–103. doi: 10.1109/CCWC.2018.8301638.
- G. Iannizzotto et al., “A vision and speech enabled, customizable, virtual assistant for smart environments,” in 2018 11th International Conference on Human System Interaction (HSI), 2018, pp. 50–56. doi: 10.1109/HSI.2018.8431232.
- R. G. Praveen and R. P. Paily, “Blind navigation assistance for visually impaired based on local depth hypothesis from a single image,” Procedia Eng., vol. 64, pp. 351–360, 2013. doi: 10.1016/j.proeng.2013.09.107.
- M. W. Rahman et al., “The architectural design of smart blind assistant using IoT with deep learning paradigm,” Internet of Things, vol. 13, p. 100344, 2021. doi: 10.1016/j.iot.2020.100344.
- J. Redmon et al., “You only look once: Unified, real-time object detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788. doi: 10.1109/CVPR.2016.91.
- J.-M. Perez-Rua et al., “Incremental few-shot object detection,” 2020. [Online]. Available: http://arxiv.org/abs/2003.04668.
- T.-Y. Lin et al., “Microsoft COCO: Common objects in context,” in Computer Vision – ECCV 2014, 2014, pp. 740–755. doi: 10.1007/978-3-319-10602-1_48.
- J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” 2018. arXiv:1804.02767.
