EMNLP 2025

November 06, 2025

Suzhou, China


Large language models (LLMs) have improved significantly in their reasoning through extensive training on massive datasets. However, relying solely on additional data for improvement is becoming increasingly impractical, highlighting the need for models to autonomously enhance their reasoning without external supervision. In this paper, we propose Debate, Train, Evolve (DTE), a novel ground-truth-free training framework that uses multi-agent debate traces to evolve a single language model. We also introduce a new prompting strategy, Reflect-Critique-Refine, to improve debate quality by explicitly instructing agents to critique and refine their reasoning. Extensive evaluations on five reasoning benchmarks with six open-weight models show that our DTE framework achieves substantial improvements, with an average accuracy gain of 8.92% on the challenging GSM-PLUS dataset. Furthermore, we observe strong cross-domain generalization, with an average accuracy gain of 5.8% on all other benchmarks, suggesting that our method captures general reasoning capabilities.
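The abstract only names the framework's stages, so the sketch below is an inferred illustration, not the authors' implementation: the `LLMAgent` interface, the wording of the Reflect-Critique-Refine prompt, the majority-vote consensus, and the choice of three debaters and two evolution iterations are all assumptions made for clarity.

```python
from collections import Counter


class LLMAgent:
    """Assumed wrapper around an open-weight LLM. generate() and fine_tune()
    are placeholders to be backed by real inference/training code."""

    def generate(self, prompt: str) -> str:
        raise NotImplementedError

    def fine_tune(self, examples: list) -> "LLMAgent":
        raise NotImplementedError


# Hypothetical wording of the Reflect-Critique-Refine instruction; the
# paper's actual prompt is not given in the abstract.
RCR_PROMPT = (
    "Reflect on your previous answer, critique any flaws in its reasoning, "
    "and refine it into an improved final answer."
)


def debate(agents: list, question: str, rounds: int = 3):
    """Multi-agent debate: each agent sees its peers' answers and applies
    Reflect-Critique-Refine. Returns the full trace and the final answers."""
    trace = []
    answers = [a.generate(question) for a in agents]
    trace.append(answers)
    for _ in range(rounds):
        answers = [
            agent.generate(
                f"Question: {question}\n"
                f"Peer answers: {[x for j, x in enumerate(answers) if j != i]}\n"
                f"{RCR_PROMPT}"
            )
            for i, agent in enumerate(agents)
        ]
        trace.append(answers)
    return trace, answers


def debate_train_evolve(model: LLMAgent, questions: list, iterations: int = 2):
    """One possible DTE loop: debate with the current model, keep the
    majority-vote answer as a pseudo-label (no ground truth used), fine-tune
    the single model on the resulting traces, then debate again with the
    evolved model."""
    for _ in range(iterations):
        data = []
        for q in questions:
            agents = [model] * 3  # three debaters backed by the same model
            trace, finals = debate(agents, q)
            consensus, _ = Counter(finals).most_common(1)[0]
            data.append({"question": q, "answer": consensus, "trace": trace})
        model = model.fine_tune(data)  # "evolve" the single model
    return model
```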

