Disentangling language understanding and reasoning structures in cross-lingual chain-of-thought prompting

Files
2025_EMNLP_CoT_Prompting.pdf(595.53 KB)
Accepted Version
Date
2025-11
Authors
Tran, Khanh-Tung
Vu, Nguyet-Hang
O’Sullivan, Barry
Nguyen, Hoang D.
Publisher
Association for Computational Linguistics
Abstract
Cross-lingual chain-of-thought prompting techniques have proven effective for investigating diverse reasoning paths in Large Language Models (LLMs), especially for low-resource languages. Despite these empirical gains, the mechanisms underlying cross-lingual improvements remain perplexing. This study therefore addresses whether the benefits of cross-lingual prompting arise from reasoning structures intrinsic to each language, or are simply a consequence of improved comprehension through cross-linguistic exposure. We employ neuron intervention and perturbation techniques to analyze and deactivate language-specific reasoning neurons during cross-lingual prompting, which leads to performance disparities of up to 27.4% across languages. Our findings show that these neurons are essential for reasoning in their respective languages but have minimal effect on reasoning in other languages, providing evidence for the existence of language-specific local reasoning structures and guiding the development of more interpretable and effective multilingual AI systems.
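The neuron-deactivation intervention described above can be illustrated with a minimal toy sketch: zeroing out selected hidden units during a forward pass and comparing the result against the unperturbed baseline. This is a hypothetical illustration only; the function name, weights, and deactivated indices are invented here and do not reflect the authors' actual implementation.

```python
def forward(x, weights, deactivated=frozenset()):
    """Toy single-layer forward pass; units listed in `deactivated`
    are zeroed, mimicking the deactivation of language-specific neurons."""
    hidden = []
    for j, w_row in enumerate(weights):
        if j in deactivated:
            hidden.append(0.0)  # deactivated neuron contributes nothing
        else:
            hidden.append(sum(wi * xi for wi, xi in zip(w_row, x)))
    return hidden

# Illustrative example: deactivating unit 1 removes only its activation.
x = [1.0, 2.0]
weights = [[0.5, 0.5], [1.0, -1.0], [0.0, 1.0]]

baseline = forward(x, weights)                    # [1.5, -1.0, 2.0]
perturbed = forward(x, weights, deactivated={1})  # [1.5, 0.0, 2.0]
```

In the paper's setting the analogous intervention targets specific neurons inside an LLM, and the gap between baseline and perturbed task performance (up to 27.4%) is measured per language.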
Description
Keywords
Cross-lingual chain-of-thought prompting techniques, Large Language Models (LLMs)
Citation
Tran, K.-T., Vu, N.-H., O’Sullivan, B. and Nguyen, H. D. (2025) 'Disentangling language understanding and reasoning structures in cross-lingual chain-of-thought prompting', Findings of the Association for Computational Linguistics: EMNLP 2025, pp. 12200-12206. https://doi.org/10.18653/v1/2025.findings-emnlp.652
Link to publisher’s version