EMNLP 2025

November 06, 2025

Suzhou, China


Although preference optimization methods have improved reasoning performance in Large Language Models (LLMs), they often lack transparency regarding why one reasoning outcome is preferred over another. This limitation is especially critical in Automated Student Answer Scoring (ASAS), where explainability is essential to justify assessment outcomes. Verbal reinforcement learning offers the potential to generate explicit reflection, but it tends to produce superficial critiques that can harm assessment performance. Existing LLMs also struggle to reliably detect subtle reasoning errors in ASAS tasks. Moreover, manually identifying intermediate reasoning errors is expensive and difficult to scale. To address these challenges, we introduce a contrastive reflection synthesis pipeline that generates precise verbal feedback by identifying discrepancies in structured reasoning graph paths. Leveraging these synthetic reflection data, we propose DARS, a Dual-model Reflective Scoring framework featuring a dedicated Critic model trained for effective reflection. DARS achieves strong performance and consistently outperforms existing ASAS baselines across all evaluation metrics. Extensive experiments further provide novel insights into the value of reflection data, framework design, and the scaling behavior of DARS.
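The contrastive idea behind the pipeline can be illustrated with a minimal sketch: compare a correct and a flawed reasoning path, locate their first divergence, and turn it into targeted verbal feedback. All names below are hypothetical; the paper's actual pipeline operates on structured reasoning graphs, not plain step lists.

```python
# Illustrative sketch only -- not the paper's implementation.
# Compares two reasoning paths (sequences of steps) and synthesizes
# a reflection string pinpointing where the flawed path deviates.

def first_divergence(path_a, path_b):
    """Return the index of the first step where two paths differ, or None."""
    for i, (a, b) in enumerate(zip(path_a, path_b)):
        if a != b:
            return i
    # Paths agree on their common prefix; differ only if lengths differ.
    return min(len(path_a), len(path_b)) if len(path_a) != len(path_b) else None

def synthesize_reflection(correct_path, flawed_path):
    """Produce verbal feedback from the discrepancy between two paths."""
    i = first_divergence(correct_path, flawed_path)
    if i is None:
        return "No discrepancy found between the two reasoning paths."
    return (f"Step {i + 1} diverges: expected '{correct_path[i]}' "
            f"but the flawed path has '{flawed_path[i]}'.")

# Hypothetical ASAS-style reasoning steps:
correct = ["identify key concept", "apply rubric criterion 2", "assign score 3"]
flawed = ["identify key concept", "skip rubric criterion 2", "assign score 4"]
print(synthesize_reflection(correct, flawed))
```

Feedback synthesized this way is anchored to a specific reasoning step, which is what distinguishes it from the superficial critiques that verbal reinforcement learning tends to produce.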

Downloads

Slides
Paper
Transcript (English, automatic)
