Lost in Time: A New Temporal Benchmark for VideoLLMs

dc.contributor.affiliation: Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías Intelixentes da USC (CiTIUS)
dc.contributor.author: Cores Costa, Daniel
dc.contributor.author: Dorkenwald, Michael
dc.contributor.author: Mucientes Molina, Manuel
dc.contributor.author: Snoek, Cees G. M.
dc.contributor.author: Asano, Yuki M.
dc.date.accessioned: 2026-02-03T10:14:19Z
dc.date.available: 2026-02-03T10:14:19Z
dc.date.issued: 2025-11-25
dc.description.abstract: Large language models have demonstrated impressive performance when integrated with vision models, even enabling video understanding. However, evaluating video models presents its own unique challenges, for which several benchmarks have been proposed. In this paper, we show that the most widely used video-language benchmarks can be solved without requiring much temporal reasoning. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; and (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than video reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative. As a solution, we propose TVBench, a novel open-source video multiple-choice question-answering benchmark, and demonstrate through extensive evaluations that it requires a high level of temporal understanding. Surprisingly, we find that many recent video-language models perform close to random on TVBench, with only a few models, such as Aria, Qwen2-VL, and Tarsier, surpassing this baseline.
dc.description.sponsorship: This work has received financial support from the Agencia Estatal de Investigación (Spain) (PID2023-149549NB-I00), the Xunta de Galicia - Consellería de Educación, Ciencia, Universidades e Formación (Centro de Investigación de Galicia accreditation 2024-2027, ED431G-2023/04), and the European Union (European Regional Development Fund - ERDF). It is also financially supported by Qualcomm Technologies Inc., the University of Amsterdam, and the Top Consortia for Knowledge and Innovation (TKIs) allowance from the Netherlands Ministry of Economic Affairs and Climate Policy.
dc.identifier.citation: Cores Costa, D., Dorkenwald, M., Mucientes, M., Snoek, C. G. M., Asano, Y. M. (2025). Lost in Time: A New Temporal Benchmark for VideoLLMs. In: 36th British Machine Vision Conference 2025. BMVC. https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_857/paper.pdf
dc.identifier.uri: https://hdl.handle.net/10347/45641
dc.language.iso: eng
dc.publisher: The British Machine Vision Association (BMVA)
dc.relation.projectID: info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2023-149549NB-I00/ES/APROVECHANDO LA INTELIGENCIA ARTIFICIAL PARA UNA MONITORIZACION PREDICTIVA ROBUSTA EN MINERIA DE PROCESOS
dc.relation.publisherversion: https://bmvc2025.bmva.org/proceedings/857/
dc.rights: © 2025. The copyright of this document resides with its authors.
dc.rights.accessRights: open access
dc.subject.classification: 120304 Inteligencia artificial
dc.title: Lost in Time: A New Temporal Benchmark for VideoLLMs
dc.type: book part
dspace.entity.type: Publication
relation.isAuthorOfPublication: 3daa2166-1c2d-4b3d-bbb0-3d0036bd8cf2
relation.isAuthorOfPublication: 21112b72-72a3-4a96-bda4-065e7e2bb262
relation.isAuthorOfPublication.latestForDiscovery: 3daa2166-1c2d-4b3d-bbb0-3d0036bd8cf2

Files

Original bundle

Name: 2025_bmvc_cores_lost.pdf
Size: 1.14 MB
Format: Adobe Portable Document Format