Design and evaluation of a multimodal elevator system with gaze and multilingual voice controls

Affiliation
Computer Science and Engineering, IIT Gandhinagar

Source
16th International Conference of Human-Computer Interaction (HCI) Design & Research (India HCI 2025)
Date Issued
2025-11
Author(s)
Ramanand
Meena, Yogesh Kumar
DOI
10.1145/3768633.3770133
Abstract
Inclusivity and accessibility for everyone are essential steps toward smart urban infrastructure. Traditional elevator controls that depend on touch have limitations, including hygiene risks and accessibility barriers. To address these limitations and enhance inclusivity, we introduce a multimodal approach that combines touch, multilingual voice (Hindi, Gujarati, English), and eye-tracking modalities. To demonstrate the operation and advantages of the proposed approach, we developed a digital prototype with touch, gaze, and multilingual voice control, as well as an edge-computing-based multilingual voice-control hardware prototype. These prototypes support floor selection, door operation, and emergency operations. We conducted a user study with 30 healthy participants (10 per language) to assess the interaction efficiency of these input modalities across both the digital and hardware interfaces. Experimental results show that touch enables the fastest interaction, making it ideal for real-time system design, while eye tracking and voice offer valuable alternatives for diverse contexts. Overall, the proposed prototypes received an excellent grade on the System Usability Scale adjective rating and a low weighted rating on the NASA Task Load Index for all interaction modalities, demonstrating the potential for accessible and inclusive interfaces in elevator systems and beyond. These findings deliver actionable insights for modality selection when developing multimodal elevator systems. We also identify issues, including misinterpretation of words across languages and gaze drift away from the intended UI element, which should be addressed through improved UI design.
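The "excellent" usability grade mentioned in the abstract refers to Brooke's standard System Usability Scale scoring, in which ten 1-5 Likert responses are mapped to a 0-100 score. As a minimal sketch of that standard computation (the function name is illustrative and not from the paper, which does not publish its scoring code):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses (1-5), per Brooke's original scoring rules:
    odd-numbered items are positively worded and contribute (r - 1);
    even-numbered items are negatively worded and contribute (5 - r).
    The summed contributions are scaled by 2.5 to reach 0-100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5
```

On the commonly used adjective rating scale (Bangor et al.), scores in roughly the mid-80s and above correspond to the "excellent" band the abstract reports.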
URI
http://repository.iitgn.ac.in/handle/IITG2025/33635
Subjects
Human computer interaction
Large Language Models
Prompt Mutation
Text Input
User interface toolkits