EMNLP 2025

November 06, 2025

Suzhou, China

In this work, we propose FoRA-UA, a novel method that, using only 1--5% of standard LoRA's parameters, achieves state-of-the-art performance across a wide range of tasks. Specifically, we explore scenarios with extremely limited parameter budgets and derive two key insights: (1) fixed-size sparse frequency representations approximate small matrices more accurately; and (2) with a fixed number of trainable parameters, introducing a smaller intermediate representation to approximate larger matrices results in lower reconstruction error. These findings form the foundation of our FoRA-UA method. By inserting a small intermediate parameter set, we achieve greater model compression without sacrificing performance. We evaluate FoRA-UA across diverse tasks, including natural language understanding (NLU), natural language generation (NLG), instruction tuning, and image classification, demonstrating strong generalisation and robustness under extreme compression.
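The abstract only sketches the mechanism, but insight (1) suggests a parameterisation along the following lines: a weight update is represented by a fixed budget of sparse frequency coefficients rather than a low-rank product. The sketch below is an illustrative reading of that idea, not the paper's actual implementation; the class name `SparseFrequencyAdapter`, the coefficient budget `n_freq`, and the random choice of frequency locations are all assumptions, and the intermediate representation of insight (2) is not modelled here.

```python
import torch
import torch.nn as nn

class SparseFrequencyAdapter(nn.Module):
    # Illustrative only: approximates a weight update DeltaW (d_out x d_in)
    # with a fixed budget of sparse 2D-frequency coefficients, in the spirit
    # of insight (1). Frequency locations are frozen at init (assumed random
    # here); only the complex coefficients are trained.
    def __init__(self, d_out: int, d_in: int, n_freq: int = 64):
        super().__init__()
        idx = torch.randperm(d_out * d_in)[:n_freq]
        self.register_buffer("rows", idx // d_in)
        self.register_buffer("cols", idx % d_in)
        self.coeffs = nn.Parameter(torch.zeros(n_freq, dtype=torch.cfloat))
        self.d_out, self.d_in = d_out, d_in

    def delta_w(self) -> torch.Tensor:
        # Scatter the sparse coefficients into an empty spectrum, then take
        # the inverse 2D FFT to realise the dense update matrix.
        spectrum = torch.zeros(self.d_out, self.d_in, dtype=torch.cfloat,
                               device=self.coeffs.device)
        spectrum[self.rows, self.cols] = self.coeffs
        return torch.fft.ifft2(spectrum).real

    def forward(self, x: torch.Tensor, base_weight: torch.Tensor) -> torch.Tensor:
        # y = x (W + DeltaW)^T, with the frozen base weight W.
        return x @ (base_weight + self.delta_w()).T
```

Under these assumptions the parameter budget is consistent with the abstract's figure: for a 768 x 768 layer, LoRA at rank 8 trains 8 x (768 + 768) = 12,288 values, whereas 64 complex coefficients amount to 128 trainable floats, roughly 1% of the LoRA budget.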

Downloads

Slides
Paper
Transcript (English, automatic)

Next from EMNLP 2025

DAMON: A Dialogue-Aware MCTS Framework for Jailbreaking Large Language Models
poster

EMNLP 2025

Xinyu Hu and 5 other authors

06 November 2025