Adaptive text inputs: contextual hint-text generation for enhancing mobile apps accessibility using text-to-text transformer language models and Q-learning
Source
16th International Conference of Human-Computer Interaction (HCI) Design & Research (India HCI 2025)
Date Issued
2025-11
Author(s)
Shukla, Sanvi
Abstract
Mobile apps are essential in modern life, yet the lack of hint texts in input fields severely impacts accessibility, making it harder for visually impaired users to navigate digital interfaces. Current solutions are often static, rule-based, or require access to source code, which limits their adaptability and large-scale implementation. To address this, we introduce HintQT5, which combines fine-tuned transformer language models with Q-learning to dynamically generate adaptive, context-aware hint text. HintQT5 extracts graphical user interface hierarchies, constructs tailored prompts, and iteratively refines suggestions based on user feedback and error messages. We evaluated six text-to-text transformer language models – GPT-2, T5-small, three FLAN-T5 variants, and TinyLLaMA-1.1B – using few-shot prompting and fine-tuning on a dataset of over 2,000 prompt–hint pairs. We then applied Q-learning to two small fine-tuned models. Using the best-performing fine-tuned T5 model, HintQT5 outperforms the recent state-of-the-art HintDroid model by +5.99%, +11.27%, +14.81%, +21.21%, +31.48%, and +24.16% on BLEU-1 to BLEU-4, ROUGE-L, and METEOR, respectively. We developed a Flutter-based app interface to demonstrate real-time hint generation and tested HintQT5 on ten open-source Android applications from GitHub, dynamically generating context-aware hints in real time. HintQT5 achieved an average score of 89.48 ± 1.04 across all ten apps for generating quality hint text. Overall, HintQT5 shows significant performance improvements and low variability across real-world apps, highlighting its potential as a practical solution for designing adaptive, accessible, and inclusive mobile interfaces at scale.
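The feedback-driven refinement loop summarised in the abstract can be sketched minimally as tabular Q-learning over candidate hints for a given input field. The generator stub, reward function, and hyperparameters below are illustrative assumptions, not the paper's actual implementation:

```python
import random

# Sketch of Q-learning-based hint refinement: for one input-field context,
# candidate hints (stand-ins for fine-tuned T5 outputs) are the actions,
# and simulated user feedback provides the reward. All names and values
# here are hypothetical.

ALPHA, EPSILON = 0.1, 0.2  # learning rate, exploration probability

def candidate_hints(field_context):
    # Stand-in for the fine-tuned T5 generator's candidate hint texts.
    return [f"Enter your {field_context}", f"{field_context} goes here"]

def feedback_reward(hint):
    # Stand-in for user feedback / error-message signal:
    # assume hints phrased as instructions ("Enter ...") are preferred.
    return 1.0 if hint.startswith("Enter") else 0.0

def refine_hint(field_context, episodes=200, seed=0):
    rng = random.Random(seed)
    actions = candidate_hints(field_context)
    q = {a: 0.0 for a in actions}  # single-state tabular Q-values
    for _ in range(episodes):
        if rng.random() < EPSILON:
            a = rng.choice(actions)   # explore a random candidate
        else:
            a = max(q, key=q.get)     # exploit the best candidate so far
        r = feedback_reward(a)
        # One-step update; each episode is terminal, so no bootstrap term.
        q[a] += ALPHA * (r - q[a])
    return max(q, key=q.get)

best_hint = refine_hint("email address")
```

Over repeated episodes the Q-value of the positively rewarded hint converges toward 1, so the refinement loop settles on the candidate users respond to best.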
Subjects
Human-Computer Interaction
Mobile Accessibility
Hint Text Generation
T5-small Language Model
Reinforcement Learning
Assistive Technologies
