Title: UNITYAI-GUARD: pioneering toxicity detection across low-resource Indian languages
Authors: Beniwal, Himanshu; Venkat, Reddybathuni; Kumar, Rohit; Srivibhav, Birudugadda; Jain, Daksh; Doddi, Pavan; Dhande, Eshwar; Ananth, Adithya; Kuldeep; Kubadia, Heer; Sharda, Pratham; Singh, Mayank
Date issued: 2025-03-01
Date available: 2025-08-28
URI: http://arxiv.org/abs/2503.23088
URI: http://repository.iitgn.ac.in/handle/IITG2025/19877
Language: en-US
Type: e-Print

Abstract: This work introduces UnityAI-Guard, a framework for binary toxicity classification targeting low-resource Indian languages. While existing systems predominantly cater to high-resource languages, UnityAI-Guard addresses this critical gap by developing state-of-the-art models for identifying toxic content across diverse Brahmic/Indic scripts. Our approach achieves an average F1-score of 84.23% across seven languages, leveraging a dataset of 888k training instances and 35k manually verified test instances. By advancing multilingual content moderation for linguistically diverse regions, UnityAI-Guard also provides public API access to foster broader adoption and application.