Authors: Beniwal, Himanshu; Nandagopan D., Kowsik; Singh, Mayank
Publisher: arXiv
Dates: 2025-08-28; 2025-08-28; 2024-02-01
ISSN: 2331-8422
DOI: 10.48550/arXiv.2402.11997
URI: http://repository.iitgn.ac.in/handle/IITG2025/19827
Abstract: Large Language Models (LLMs) are becoming increasingly ubiquitous, yet their ability to retain and reason about temporal information remains limited. This hinders their application in real-world scenarios where understanding the sequential nature of events is crucial. This paper evaluates state-of-the-art models on a novel, large-scale temporal dataset, TempUN, revealing significant limitations in their temporal retention and reasoning abilities. Interestingly, closed-source models indicate knowledge gaps more frequently, potentially suggesting a trade-off between uncertainty awareness and incorrect responses. Furthermore, exploring various fine-tuning approaches yielded no major performance improvements. The associated dataset and code are available at the following URL (this https URL).
Language: en-US
Title: Remember This Event That Year? Assessing Temporal Information and Reasoning in Large Language Models
Type: e-Print
Handle: 123456789/435