Authors: Patra, Pronoy; Damle, Sankarshan; Padala, Manisha; Gujar, Sujit
Date accessioned/available: 2026-02-25
Date issued: 2026-02-01
ISSN: 2331-8422
DOI: 10.48550/arXiv.2602.14476
URI: https://repository.iitgn.ac.in/handle/IITG2025/34677
Abstract: We study the problem of selecting large language models (LLMs) for user queries in settings where multiple LLM providers submit costs for solving a query. From the user's perspective, choosing an optimal model is a sequential, query-dependent decision problem: high-capacity models offer more reliable outputs but are costlier, while lightweight models are faster and cheaper. We formalize this interaction as a reverse auction design problem with contextual online learning, in which the user adaptively discovers which model performs best while eliciting costs from competing LLM providers. Existing multi-armed bandit (MAB) mechanisms focus on forward auctions and social welfare, leaving open the challenges of reverse auctions, provider-optimal outcomes, and contextual adaptation. We address these gaps by designing a resampling-based procedure that generalizes truthful forward MAB mechanisms to reverse auctions, and we prove that any monotone allocation rule combined with this procedure is truthful. Building on this, we propose a contextual MAB algorithm that learns query-dependent model quality with sublinear regret. Our framework unifies mechanism design and adaptive learning, enabling efficient, truthful, and query-aware LLM selection.
Language: en-US
Keywords: Contextual MAB; Auction Design; Optimal Auction; Learning
Title: Truthful reverse auctions for adaptive selection via contextual multi-armed bandits
Type: e-Print
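The abstract's notion of learning query-dependent model quality can be illustrated with a generic contextual bandit. The sketch below is not the paper's mechanism (it omits the auction, cost elicitation, and resampling components entirely); it is a minimal LinUCB-style learner in which each "arm" is a hypothetical LLM provider and the context is a query feature vector. The class name `LinUCB` and all parameters are illustrative assumptions.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB-style contextual bandit: one linear quality model
    per arm (LLM provider), with an upper-confidence exploration bonus.
    Illustrative only; not the mechanism from the record above."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        # Per-arm ridge regression state: A = I + sum x x^T, b = sum r x.
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, x):
        # Choose the arm with the highest upper confidence bound on
        # predicted quality for query context x.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                      # ridge estimate
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        # Fold in the observed quality of the chosen model's answer.
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

In a full reverse-auction setting, the score would also account for each provider's bid cost and the payment rule would have to preserve truthfulness; this sketch only covers the quality-learning side.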