Jakita O. Thomas, Eric Mibuari, et al.
CHI 2011
To enable people with visual impairments (PVI) to explore shopping malls, it is important to provide information both for selecting destinations and for learning about the surroundings according to the individual's interests. We achieved this through conversational interaction by integrating a large language model (LLM) with a navigation system. ChitChatGuide allows users to plan a tour through contextual conversations, receive personalized descriptions of their surroundings based on transit time, and make inquiries during navigation. We conducted a study in a shopping mall with 11 PVI, and the results reveal that the system allowed them to explore the facility with increased enjoyment. By understanding vague and context-dependent questions, the LLM-based conversational interaction enabled the participants to explore unfamiliar environments effectively. The personalized, in-situ information generated by the LLM was both useful and enjoyable. Considering the limitations we identified, we discuss criteria for integrating LLMs into navigation systems to enhance the exploration experiences of PVI.
Shang-Ling Hsu, Raj Sanjay Shah, et al.
Proceedings of the ACM on Human-Computer Interaction
Erik Wittern, Jim Laredo, et al.
ICWS 2014
Christine Robson, Sean Kandel, et al.
CHI 2011