mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Large language models (LLMs) with chain-of-thought (CoT) prompting have recently emerged as a powerful way to elicit reasoning and improve various downstream tasks. Since most research focuses on English, with few explorations in a multilingual context, the question of how reliable this reasoning capability is across languages remains open. To address it directly, we study multilingual reasoning consistency across multiple languages, using popular open-source LLMs. First, we compile the first large-scale multilingual math reasoning dataset, mCoT-MATH, covering eleven diverse languages. Then, we introduce multilingual CoT instruction tuning to boost reasoning capability across languages, thereby improving model consistency. While existing LLMs show substantial variation across the languages we consider, with especially low performance on lesser-resourced languages, our 7B-parameter model mCoT achieves impressive consistency across languages, and superior or comparable performance to closed- and open-source models, even ones of much larger size.
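As an illustration of the recipe the abstract describes, below is a minimal sketch of supervised CoT instruction tuning with Hugging Face Transformers. This is not the authors' code: the base checkpoint, the file name mcot_math.jsonl, and the fields question and cot_answer are assumptions standing in for mCoT-MATH's actual schema and the paper's training setup.

```python
# Minimal sketch of multilingual CoT instruction tuning (illustrative, not the
# authors' implementation). Assumes a JSONL file where each row pairs a math
# question with a step-by-step (CoT) solution written in the same language;
# the field names "question" and "cot_answer" are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"  # any ~7B open-source base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical multilingual CoT data in mCoT-MATH style.
data = load_dataset("json", data_files="mcot_math.jsonl", split="train")

def format_example(ex):
    # One instruction-tuning prompt per example, in the example's own language.
    text = (f"Question: {ex['question']}\n"
            f"Answer: {ex['cot_answer']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = data.map(format_example, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mcot-sft",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=tokenized,
    # Causal-LM collation: labels are the input ids, no masked-LM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Training on questions and CoT solutions from all eleven languages in a single run, rather than per-language, is what would push the model toward the cross-lingual consistency the abstract reports.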

Original language: English
Title of host publication: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics
Subtitle of host publication: (Volume 1: Long Papers)
Editors: Lun-Wei Ku, Andre F. T. Martins, Vivek Srikumar
Publisher: Association for Computational Linguistics, ACL Anthology
Pages: 12012-12026
Number of pages: 15
Volume: 1
ISBN (Electronic): 9798891760943
DOIs
Publication status: Published - 2024
Event: 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Bangkok, Thailand
Duration: 11-Aug-2024 to 16-Aug-2024

Conference

Conference: 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Country/Territory: Thailand
City: Bangkok
Period: 11/08/2024 - 16/08/2024
