We present Cacheback Decoding, a training-free, model-agnostic speculative decoding method. Cacheback leverages only a Least Recently Used (LRU) cache of token n-grams to generate draft sequences. Despite its minimalist design, it achieves state-of-the-art performance among comparable methods.
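
To make the idea concrete, here is a minimal sketch of the core data structure: an LRU cache that maps (n-1)-gram prefixes to continuation tokens and chains lookups to propose draft sequences for the target model to verify. The class name, parameters, and greedy chaining policy are illustrative assumptions, not the paper's reference implementation.

```python
from collections import OrderedDict

class NGramDraftCache:
    """Illustrative sketch (not the paper's code): an LRU cache of
    token n-grams used to draft continuations for speculative decoding."""

    def __init__(self, n: int = 3, capacity: int = 4096):
        self.n = n                  # n-gram order (assumed value)
        self.capacity = capacity    # LRU capacity (assumed value)
        self.cache: "OrderedDict[tuple, int]" = OrderedDict()

    def update(self, tokens: list[int]) -> None:
        # Record each observed n-gram: (n-1)-token prefix -> next token.
        for i in range(len(tokens) - self.n + 1):
            prefix = tuple(tokens[i : i + self.n - 1])
            self.cache[prefix] = tokens[i + self.n - 1]
            self.cache.move_to_end(prefix)       # mark as most recently used
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used

    def draft(self, tokens: list[int], max_len: int = 8) -> list[int]:
        # Greedily chain cache hits to build a draft sequence; the target
        # model then verifies the draft in a single forward pass.
        draft, ctx = [], list(tokens)
        for _ in range(max_len):
            prefix = tuple(ctx[-(self.n - 1):])
            if prefix not in self.cache:
                break
            nxt = self.cache[prefix]
            self.cache.move_to_end(prefix)
            draft.append(nxt)
            ctx.append(nxt)
        return draft
```

Because drafting reduces to dictionary lookups, the draft step adds essentially no compute, which is what makes a training-free, model-agnostic design like this attractive.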