Adaptive text inputs: contextual hint-text generation for enhancing mobile apps accessibility using text-to-text transformer language models and Q-learning

Source
16th International Conference of Human-Computer Interaction (HCI) Design & Research (India HCI 2025)
Date Issued
2025-11
Author(s)
Shukla, Sanvi
Meena, Yogesh Kumar  
DOI
10.1145/3768633.3770136
Abstract
Mobile apps are essential in modern life, yet the lack of hint texts in input fields severely impacts accessibility for visually impaired users navigating digital interfaces. Current solutions are often static, rule-based, or require access to source code, which limits their adaptability and large-scale implementation. To address this, we introduce HintQT5, which combines fine-tuned transformer language models and Q-learning to dynamically generate adaptive, context-aware hint text. HintQT5 extracts graphical user interface hierarchies, constructs tailored prompts, and iteratively refines suggestions based on user feedback and error messages. We evaluated six text-to-text transformer language models – GPT-2, T5-small, three FLAN-T5 variants, and TinyLLaMA-1.1B – using few-shot prompting and fine-tuning on a dataset of over 2,000 entries consisting of prompts and their corresponding hints. We then applied Q-learning to two small fine-tuned models. HintQT5, using the best-performing fine-tuned T5 model, outperforms the recent state-of-the-art HintDroid model by +5.99%, +11.27%, +14.81%, +21.21%, +31.48%, and +24.16% on BLEU-1 to BLEU-4, ROUGE-L, and METEOR, respectively. We developed a Flutter-based app interface to demonstrate real-time hint generation and tested HintQT5 on ten open-source Android applications from GitHub, dynamically generating context-aware hints in real time. We achieved an average performance of 89.48 ± 1.04 across all ten apps for generating quality hint text. Overall, HintQT5 shows significant performance improvements and low variability across real-world apps, highlighting its potential as a practical solution for designing adaptive, accessible, and inclusive mobile interfaces at scale.
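
Illustrative sketch (not the authors' code): the abstract describes a pipeline that builds prompts from extracted GUI context, decodes hint text with a fine-tuned T5 model, and refines choices with Q-learning driven by user feedback. The Python below sketches that general idea under explicit assumptions: a stock Hugging Face "t5-small" stands in for the fine-tuned HintQT5 checkpoint, and the prompt templates, state encoding, and reward signal are hypothetical, not taken from the paper.

    # Minimal sketch, assuming a T5-style seq2seq model and a simple tabular
    # Q-learning loop over hypothetical prompt templates. Not HintQT5 itself.
    import random
    from collections import defaultdict

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    MODEL_NAME = "t5-small"  # assumption: stand-in for the fine-tuned checkpoint
    tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
    model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

    # Hypothetical prompt templates built from an extracted GUI hierarchy.
    TEMPLATES = [
        "generate hint: field={field} screen={screen}",
        "generate hint: field={field} screen={screen} error={error}",
    ]

    def generate_hint(template: str, field: str, screen: str, error: str = "") -> str:
        """Build a prompt from GUI context and decode a hint with the T5 model."""
        prompt = template.format(field=field, screen=screen, error=error)
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        output_ids = model.generate(**inputs, max_new_tokens=20, num_beams=4)
        return tokenizer.decode(output_ids[0], skip_special_tokens=True)

    # Tabular Q-learning over (field state -> template index), epsilon-greedy.
    Q = defaultdict(lambda: [0.0] * len(TEMPLATES))
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    def choose_template(state: str) -> int:
        if random.random() < EPSILON:
            return random.randrange(len(TEMPLATES))
        return max(range(len(TEMPLATES)), key=lambda a: Q[state][a])

    def update_q(state: str, action: int, reward: float) -> None:
        """One-step Q-update; the reward would come from user feedback or error messages."""
        best_next = max(Q[state])  # this sketch reuses the same state as the next state
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

    if __name__ == "__main__":
        state = "email_field"
        action = choose_template(state)
        hint = generate_hint(TEMPLATES[action], field="email", screen="login")
        print("hint:", hint)
        update_q(state, action, reward=1.0)  # placeholder reward: hint accepted

In this sketch the feedback loop is deliberately minimal; the paper reports applying Q-learning only to two small fine-tuned models, so the state and reward design shown here should be read as one plausible instantiation rather than the published method.
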
Publication link
https://dl.acm.org/doi/pdf/10.1145/3768633.3770136
URI
http://repository.iitgn.ac.in/handle/IITG2025/33637
Subjects
Human-Computer Interaction
Mobile Accessibility
Hint Text Generation
T5-small Language Model
Reinforcement Learning
Assistive Technologies