EMNLP 2025

November 06, 2025

Suzhou, China


Video large language models (Vid-LLMs) have shown strong capabilities in understanding video content. However, their reliance on dense video token representations introduces substantial memory and computational overhead in both prefilling and decoding. To mitigate the information loss of recent video token reduction methods and accelerate the decoding stage of Vid-LLMs losslessly, we introduce SpecVLM, a training-free speculative decoding (SD) framework tailored for Vid-LLMs that incorporates staged video token pruning. Building on our novel finding that the draft model's speculation exhibits low sensitivity to video token pruning, SpecVLM prunes up to 90% of video tokens, enabling efficient speculation without sacrificing accuracy. To achieve this, it performs a two-stage pruning process: Stage I selects highly informative tokens guided by attention signals from the verifier (target model), while Stage II prunes remaining redundant ones in a spatially uniform manner. Extensive experiments on four video understanding benchmarks demonstrate the effectiveness and robustness of SpecVLM, which achieves up to 2.68× decoding speedup for LLaVA-OneVision-72B and 2.11× speedup for Qwen2.5-VL-32B. Code is available in the supplementary materials.
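The two-stage pruning described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical Python/PyTorch illustration, not the authors' released code: the function name `two_stage_prune`, the `stage1_frac` budget split, and the use of an evenly strided index selection to stand in for spatially uniform pruning are all assumptions made for illustration.

```python
import torch

def two_stage_prune(video_tokens, attn_scores, keep_ratio=0.1, stage1_frac=0.5):
    """Hypothetical sketch of SpecVLM-style two-stage video token pruning.

    video_tokens: (N, D) video token embeddings
    attn_scores:  (N,) attention each video token receives from the verifier
    keep_ratio:   overall fraction of tokens kept (the paper prunes up to 90%)
    stage1_frac:  fraction of the kept budget filled by Stage I (assumed)
    """
    n = video_tokens.size(0)
    budget = max(1, int(n * keep_ratio))
    k1 = max(1, int(budget * stage1_frac))

    # Stage I: keep the most informative tokens, ranked by verifier attention.
    top = torch.topk(attn_scores, k1).indices

    # Stage II: from the remaining tokens, keep an evenly strided subset as a
    # simple proxy for spatially uniform pruning of the redundant tokens.
    mask = torch.ones(n, dtype=torch.bool)
    mask[top] = False
    rest = torch.nonzero(mask, as_tuple=False).squeeze(-1)
    k2 = budget - k1
    if k2 > 0 and rest.numel() > 0:
        stride = torch.linspace(0, rest.numel() - 1,
                                steps=min(k2, rest.numel())).long()
        keep = torch.cat([top, rest[stride]])
    else:
        keep = top

    keep, _ = torch.sort(keep)  # preserve the original token order
    return video_tokens[keep], keep
```

Per the abstract's framing, the pruned token set would feed only the draft model's speculation, while the verifier attends to the full token set so that verification, and hence decoding, remains lossless.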

Downloads

Slides · Paper · Transcript (English, automatic)
