Fan Zhang, Junwei Cao, et al.
IEEE TETC
To enable people with visual impairments (PVI) to explore shopping malls, it is important to help them select destinations and obtain information tailored to their individual interests. We achieved this through conversational interaction by integrating a large language model (LLM) with a navigation system. ChitChatGuide allows users to plan a tour through contextual conversations, receive personalized descriptions of their surroundings based on transit time, and make inquiries during navigation. We conducted a study in a shopping mall with 11 PVI, and the results reveal that the system allowed them to explore the facility with increased enjoyment. By understanding vague and context-dependent questions, the LLM-based conversational interaction enabled the participants to explore unfamiliar environments effectively. The personalized, in-situ information generated by the LLM was both useful and enjoyable. Considering the limitations we identified, we discuss criteria for integrating LLMs into navigation systems to enhance the exploration experiences of PVI.