Author: Rajpura, Param
Dates: 2026-04-22; 2026-04-22; 2026-04-13
DOI: 10.1145/3772363.3799222
URI: https://repository.iitgn.ac.in/handle/IITG2025/35132
Abstract: Brain-computer interfaces (BCIs) for stroke rehabilitation promise accessible, personalized therapy in resource-constrained settings. However, current systems often lack patient-facing explanations that enable users to self-correct, calibrate trust, and use the technology autonomously. This research aims to develop human-centered explainable AI (XAI) frameworks that integrate neuroscientific validity, algorithmic interpretability, and participatory design with stroke survivors experiencing cognitive and linguistic impairments. Building on completed work, the XAI4BCI design space is established, and video-based methods are created to gather requirements from stroke survivors with moderate-to-severe aphasia. Initial formative co-design workshops revealed varying explainability needs among stakeholders. Ongoing work focuses on deploying adaptive XAI systems in rehabilitation settings to assess how transparency, actionability, and trust calibration influence adherence, self-efficacy, and rehabilitation outcomes. This research contributes transferable methods for inclusive AI design, theoretical frameworks for patient-facing XAI, and empirical evidence for neurotechnology deployment serving marginalized populations in Global South healthcare contexts.
Language: en-US
Keywords: Human-centered artificial intelligence; Explainable AI; Stroke rehabilitation; Brain-computer interfaces
Title: Human-centered explainable AI for brain-computer interface-driven rehabilitation
Type: Conference Paper