EMNLP 2025

November 06, 2025

Suzhou, China


We present Cacheback Decoding, a training-free, model-agnostic speculative decoding method. Cacheback leverages only a Least Recently Used (LRU) cache of token n-grams to generate draft sequences. Despite its minimalist design, it achieves state-of-the-art performance among comparable training-free methods.
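The abstract specifies only that drafts come from an LRU cache of token n-grams, so the sketch below is one plausible reading, not the paper's implementation: the n-gram order, cache capacity, draft length, and the names CachebackDrafter, update, and draft are all illustrative assumptions. The idea is to map each (n-1)-token prefix to the token that most recently followed it, then chain lookups to propose a cheap draft that the target model verifies in parallel, as in standard speculative decoding.

```python
from collections import OrderedDict


class CachebackDrafter:
    """Illustrative LRU n-gram drafter (assumed design, not the paper's code).

    Maps each (n-1)-token prefix to the token that most recently
    followed it, evicting the least recently used entry when full.
    """

    def __init__(self, n: int = 3, capacity: int = 4096):
        self.n = n
        self.capacity = capacity
        self.cache: OrderedDict[tuple, int] = OrderedDict()

    def update(self, tokens: list[int]) -> None:
        """Record every n-gram seen in the accepted token stream."""
        for i in range(len(tokens) - self.n + 1):
            prefix = tuple(tokens[i : i + self.n - 1])
            self.cache[prefix] = tokens[i + self.n - 1]
            self.cache.move_to_end(prefix)      # mark as most recently used
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict the LRU entry

    def draft(self, context: list[int], max_draft: int = 5) -> list[int]:
        """Greedily chain cache lookups to propose a draft sequence."""
        draft: list[int] = []
        window = list(context)
        for _ in range(max_draft):
            prefix = tuple(window[-(self.n - 1):])
            if prefix not in self.cache:
                break                            # cache miss: stop drafting
            nxt = self.cache[prefix]
            self.cache.move_to_end(prefix)
            draft.append(nxt)
            window.append(nxt)
        return draft
```

Under these assumptions, usage would look like: feed previously accepted tokens into update, then call draft with the current context; for example, after drafter.update([5, 7, 9, 7, 9, 11]), drafter.draft([5, 7, 9]) returns [11], since (7, 9) was most recently followed by 11 and the next prefix (9, 11) misses the cache. The target model would then verify the drafted tokens in one forward pass, keeping the method training-free and independent of any particular model.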

