mishig HF Staff committed on
Commit
21eed1f
·
verified ·
1 Parent(s): aaf3236

Add 1 file

Files changed (1)
  1. 2404/2404.04925.md +92 -11
2404/2404.04925.md CHANGED
@@ -5,12 +5,87 @@ URL Source: https://arxiv.org/html/2404.04925
5
  Published Time: Tue, 09 Apr 2024 01:00:16 GMT
6
 
7
  Markdown Content:
8
- ![Image 1: [Uncaptioned image]](https://arxiv.org/html/2404.04925v1/x1.png)Multilingual Large Language Model:
9
 
10
  A Survey of Resources, Taxonomy and Frontiers
11
- -------------------------------------------------------------------------------------------------------------------------------------------------------------
12
 
13
- Libo Qin♣ Qiguang Chen♠¹ Yuhang Zhou♠ Zhi Chen♢ Yinghui Li♮
14
 
15
  Lizi Liao♯ Min Li♣ Wanxiang Che♠ Philip S. Yu♡
16
 
@@ -18,14 +93,15 @@ Lizi Liao♯
18
 
19
  ♮ Tsinghua University ♯ Singapore Management University ♡ University of Illinois at Chicago
20
 
21
- lbqin@csu.edu.cn, {qgchen,car}@ir.hit.edu.cn
 
 
22
 
23
  ###### Abstract
24
 
25
  Multilingual Large Language Models leverage powerful Large Language Models to handle and respond to queries in multiple languages, achieving remarkable success in multilingual natural language processing tasks. Despite these breakthroughs, there is still no comprehensive survey summarizing the existing approaches and recent developments in this field. To this end, in this paper we present a thorough review and provide a unified perspective to summarize the recent progress as well as emerging trends in the multilingual large language models (MLLMs) literature. The contributions of this paper can be summarized as follows: (1) First survey: to our knowledge, we take the first step and present a thorough review of the MLLMs research field according to multilingual alignment; (2) New taxonomy: we offer a new and unified perspective to summarize the current progress of MLLMs; (3) New frontiers: we highlight several emerging frontiers and discuss the corresponding challenges; (4) Abundant resources: we collect abundant open-source resources, including relevant papers, data corpora, and leaderboards. We hope our work can provide the community with quick access and spur breakthrough research in MLLMs.
26
 
27
- ![Image 2: [Uncaptioned image]](https://arxiv.org/html/2404.04925v1/x2.png)
28
-
29
  Multilingual Large Language Model:
30
 
31
  A Survey of Resources, Taxonomy and Frontiers
@@ -37,11 +113,11 @@ Libo Qin♣
37
 
38
  In recent years, remarkable progress has been witnessed in large language models (LLMs) Brown et al. ([2020](https://arxiv.org/html/2404.04925v1#bib.bib44)); Touvron et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib316)); Bang et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib28)); Zhao et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib408)), which have achieved excellent performance on various natural language processing tasks Pan et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib235)); Nguyen et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib227)); Trivedi et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib318)). In addition, LLMs exhibit surprising emergent capabilities, including in-context learning Min et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib212)); Dong et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib89)), chain-of-thought reasoning Wei et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib343)); Huang et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib135)); Qin et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib255)), and even planning Driess et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib91)); Hu et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib131)). Nevertheless, the majority of LLMs are English-centric, primarily focusing on English tasks Held et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib122)); Zhang et al. ([2023i](https://arxiv.org/html/2404.04925v1#bib.bib402)), which leaves them relatively weak in multilingual settings, especially in low-resource scenarios.
39
 
40
- ![Image 3: Refer to caption](https://arxiv.org/html/2404.04925v1/x3.png)
41
 
42
  Figure 1: Parameter-Tuning Alignment (§[4.1](https://arxiv.org/html/2404.04925v1#S4.SS1 "4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) vs. Parameter-Frozen Alignment (§[4.2](https://arxiv.org/html/2404.04925v1#S4.SS2 "4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")). The former fine-tunes the MLLM parameters for cross-lingual alignment, while the latter directly uses prompts for alignment without parameter tuning.
43
 
44
- ![Image 4: Refer to caption](https://arxiv.org/html/2404.04925v1/x4.png)
45
 
46
  Figure 2: Evolution of selected MLLMs over the past five years, where colored branches indicate different alignment stages. For models with multiple alignment stages, the final stage is represented.
47
 
@@ -66,7 +142,7 @@ Monolingual large language models (LLM) can only process one language at a time.
66
 
67
  where Unexpect indicates that the LLM generates output in an unintended language; mono denotes the single language.
68
 
69
- ![Image 5: Refer to caption](https://arxiv.org/html/2404.04925v1/x5.png)
70
 
71
  Figure 3: Monolingual Large Language Model vs. Multilingual Large Language Model.
72
 
@@ -192,9 +268,12 @@ Similarly, we categorize the existing multilingual SFT data into 4 classes: (1)
192
 
193
  Some work leverages multilingual RLHF data to improve alignment. Specifically, Lai et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib167)) leverage multilingual ranking data to train a reward model using RLHF. Zeng et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib389)) introduce the TIM dataset to train a more effective reward model in multilingual contexts.
194
 
 
 
 
195
  As shown in Figure [4](https://arxiv.org/html/2404.04925v1#S3.F4 "Figure 4 ‣ 3.1 Multilingual Pretraining Data ‣ 3 Data Resource ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers"), we introduce a novel taxonomy comprising parameter-tuning alignment (§[4.1](https://arxiv.org/html/2404.04925v1#S4.SS1 "4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) and parameter-frozen alignment (§[4.2](https://arxiv.org/html/2404.04925v1#S4.SS2 "4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), which aims to provide a unified view for researchers to understand the MLLM literature. Specifically, parameter-tuning alignment (PTA) comprises a series of progressively advanced training and alignment strategies, including Pretraining Alignment, Supervised Fine-Tuning (SFT) Alignment, Reinforcement Learning from Human Feedback (RLHF) Alignment, and, ultimately, Downstream Fine-Tuning Alignment. These stages collectively refine the model parameters to align multilingual performance systematically. Conversely, parameter-frozen alignment (PFA) focuses on four prompting strategies: Direct Prompting, Code-Switching Prompting, Translation Alignment Prompting, and Retrieval-Augmented Alignment. These methods keep the original model parameters unchanged while achieving the desired outcomes.
196
 
197
- ![Image 6: Refer to caption](https://arxiv.org/html/2404.04925v1/x6.png)
198
 
199
  Figure 5: Overview of Parameter-Tuning Alignment (§[4.1](https://arxiv.org/html/2404.04925v1#S4.SS1 "4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) Methods, which include PTA in the Pretraining Stage (§[4.1.1](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS1 "4.1.1 PTA in Pretraining Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), PTA in the SFT Stage (§[4.1.2](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS2 "4.1.2 PTA in SFT Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), PTA in the RLHF Stage (§[4.1.3](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS3 "4.1.3 PTA in RLHF Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), and PTA in the Downstream Finetuning Stage (§[4.1.4](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS4 "4.1.4 PTA in Downstream Finetuning Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")).
200
 
@@ -216,7 +295,7 @@ To address the high computational cost of from-scratch pretraining, continual pr
216
 
217
  As illustrated in Figure [5](https://arxiv.org/html/2404.04925v1#S4.F5 "Figure 5 ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers") (b), PTA in the SFT stage means leveraging multilingual task data in instruction format to tune parameters Fu et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib99)); Yang et al. ([2023f](https://arxiv.org/html/2404.04925v1#bib.bib376)); Team ([2023d](https://arxiv.org/html/2404.04925v1#bib.bib309)); Chen et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib54), [g](https://arxiv.org/html/2404.04925v1#bib.bib59)); Ranaldi et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib263)); Li et al. ([2023h](https://arxiv.org/html/2404.04925v1#bib.bib183)); Santilli and Rodolà ([2023](https://arxiv.org/html/2404.04925v1#bib.bib277)); Bao et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib30)); Kohli et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib158)); Holmström and Doostmohammadi ([2023](https://arxiv.org/html/2404.04925v1#bib.bib127)); Garcia et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib104)). In particular, models like Flan-PaLM Chung et al. ([2022b](https://arxiv.org/html/2404.04925v1#bib.bib68)), mT0 and BLOOMz Muennighoff et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib218)), PolyLM Wei et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib346)), Tk-Instruct Wang et al. ([2022b](https://arxiv.org/html/2404.04925v1#bib.bib337)), Chinese-Alpaca Cui et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib78)), Bayling Zhang et al. ([2023g](https://arxiv.org/html/2404.04925v1#bib.bib398)) and Phoenix Chen et al. ([2023h](https://arxiv.org/html/2404.04925v1#bib.bib60)) directly incorporated multilingual data in the SFT stage to achieve implicit multilingual alignment across languages. Besides, to address the scarcity of multilingual SFT task data, PaLM2 Anil et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib21)); Zhu et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib417)); Cahyawijaya et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib46)); Li et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib178)); Gao et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib103)) added translation tasks during the SFT alignment stage to improve alignment. Further, Upadhayay and Behzadan ([2023](https://arxiv.org/html/2404.04925v1#bib.bib323)); Chai et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib48)); Zhu et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib416)) began exploring more effective SFT alignment strategies to optimize the reasoning process.
218
 
219
- ![Image 7: Refer to caption](https://arxiv.org/html/2404.04925v1/x7.png)
220
 
221
  Figure 6: Overview of Parameter-Frozen Alignment (§[4.2](https://arxiv.org/html/2404.04925v1#S4.SS2 "4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) methods, where the prompts in the sub-figures are sourced from Qin et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib256)) and Zhang et al. ([2023f](https://arxiv.org/html/2404.04925v1#bib.bib396)).
222
 
@@ -794,3 +873,5 @@ To test the summarization ability of the model, the model is required to be able
794
  ##### Dialogue
795
 
796
  The communication between models and humans is often interactive, so a lot of work pays attention to MLLMs’ dialogue ability Boughorbel and Hawasly ([2023](https://arxiv.org/html/2404.04925v1#bib.bib42)). Current evaluation sets include xDial-Eval Zhang et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib392)), Multi³WOZ Hu et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib133)), DIALIGHT Hu et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib132)), HPD Chen et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib52)) and X-RiSAWOZ Moradshahi et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib214)). Since multi-turn dialogue is not controllable, traditional metrics cannot be applied; current work therefore tends to use PLMs for evaluation Mendonça et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib209)). Furthermore, Mendonça et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib210)) proposed a new benchmark that achieves more robust evaluation by coordinating with pretrained language models. Ferron et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib95)) proposed the MEEP benchmark to further evaluate the dialogue participation of MLLMs.
 
 
 
5
  Published Time: Tue, 09 Apr 2024 01:00:16 GMT
6
 
7
  Markdown Content:
8
+ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers
9
+ ===============
10
+
11
+ 1. [1 Introduction](https://arxiv.org/html/2404.04925v1#S1 "1 Introduction ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
12
+ 2. [2 Preliminary](https://arxiv.org/html/2404.04925v1#S2 "2 Preliminary ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
13
+ 1. [2.1 Monolingual Large Language Model](https://arxiv.org/html/2404.04925v1#S2.SS1 "2.1 Monolingual Large Language Model ‣ 2 Preliminary ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
14
+ 2. [2.2 Multilingual Large Language Model](https://arxiv.org/html/2404.04925v1#S2.SS2 "2.2 Multilingual Large Language Model ‣ 2 Preliminary ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
15
+
16
+ 3. [3 Data Resource](https://arxiv.org/html/2404.04925v1#S3 "3 Data Resource ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
17
+ 1. [3.1 Multilingual Pretraining Data](https://arxiv.org/html/2404.04925v1#S3.SS1 "3.1 Multilingual Pretraining Data ‣ 3 Data Resource ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
18
+ 2. [3.2 Multilingual SFT Data](https://arxiv.org/html/2404.04925v1#S3.SS2 "3.2 Multilingual SFT Data ‣ 3 Data Resource ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
19
+ 3. [3.3 Multilingual RLHF Data](https://arxiv.org/html/2404.04925v1#S3.SS3 "3.3 Multilingual RLHF Data ‣ 3 Data Resource ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
20
+
21
+ 4. [4 Taxonomy](https://arxiv.org/html/2404.04925v1#S4 "4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
22
+ 1. [4.1 Parameter-Tuning Alignment](https://arxiv.org/html/2404.04925v1#S4.SS1 "4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
23
+ 1. [4.1.1 PTA in Pretraining Stage](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS1 "4.1.1 PTA in Pretraining Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
24
+ 1. [From-scratch Pretraining Alignment.](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS1.Px1 "From-scratch Pretraining Alignment. ‣ 4.1.1 PTA in Pretraining Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
25
+ 2. [Continual Pretraining Alignment.](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS1.Px2 "Continual Pretraining Alignment. ‣ 4.1.1 PTA in Pretraining Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
26
+
27
+ 2. [4.1.2 PTA in SFT Stage](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS2 "4.1.2 PTA in SFT Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
28
+ 3. [4.1.3 PTA in RLHF Stage](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS3 "4.1.3 PTA in RLHF Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
29
+ 4. [4.1.4 PTA in Downstream Finetuning Stage](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS4 "4.1.4 PTA in Downstream Finetuning Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
30
+ 1. [Full-Parameter Finetuning Alignment](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS4.Px1 "Full-Parameter Finetuning Alignment ‣ 4.1.4 PTA in Downstream Finetuning Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
31
+ 2. [Parameter-Efficient Finetuning Alignment](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS4.Px2 "Parameter-Efficient Finetuning Alignment ‣ 4.1.4 PTA in Downstream Finetuning Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
32
+ 1. [Takeaways](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS4.Px3 "Takeaways ‣ Parameter-Efficient Finetuning Alignment ‣ 4.1.4 PTA in Downstream Finetuning Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
33
+
34
+ 2. [4.2 Parameter-Frozen Alignment](https://arxiv.org/html/2404.04925v1#S4.SS2 "4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
35
+ 1. [4.2.1 Direct Prompting](https://arxiv.org/html/2404.04925v1#S4.SS2.SSS1 "4.2.1 Direct Prompting ‣ 4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
36
+ 2. [4.2.2 Code-Switching Prompting](https://arxiv.org/html/2404.04925v1#S4.SS2.SSS2 "4.2.2 Code-Switching Prompting ‣ 4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
37
+ 3. [4.2.3 Translation Alignment Prompting](https://arxiv.org/html/2404.04925v1#S4.SS2.SSS3 "4.2.3 Translation Alignment Prompting ‣ 4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
38
+ 4. [4.2.4 Retrieval Augmented Alignment](https://arxiv.org/html/2404.04925v1#S4.SS2.SSS4 "4.2.4 Retrieval Augmented Alignment ‣ 4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
39
+ 1. [Takeaways](https://arxiv.org/html/2404.04925v1#S4.SS2.SSS4.Px1 "Takeaways ‣ 4.2.4 Retrieval Augmented Alignment ‣ 4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
40
+
41
+ 5. [5 Future Work and New Frontier](https://arxiv.org/html/2404.04925v1#S5 "5 Future Work and New Frontier ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
42
+ 1. [5.1 Hallucination in MLLMs](https://arxiv.org/html/2404.04925v1#S5.SS1 "5.1 Hallucination in MLLMs ‣ 5 Future Work and New Frontier ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
43
+ 2. [5.2 Knowledge Editing in MLLMs](https://arxiv.org/html/2404.04925v1#S5.SS2 "5.2 Knowledge Editing in MLLMs ‣ 5 Future Work and New Frontier ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
44
+ 3. [5.3 Safety in MLLMs](https://arxiv.org/html/2404.04925v1#S5.SS3 "5.3 Safety in MLLMs ‣ 5 Future Work and New Frontier ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
45
+ 4. [5.4 Fairness in MLLMs](https://arxiv.org/html/2404.04925v1#S5.SS4 "5.4 Fairness in MLLMs ‣ 5 Future Work and New Frontier ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
46
+ 5. [5.5 Language Extension in MLLMs](https://arxiv.org/html/2404.04925v1#S5.SS5 "5.5 Language Extension in MLLMs ‣ 5 Future Work and New Frontier ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
47
+ 6. [5.6 Multi-Modality Extension in MLLMs](https://arxiv.org/html/2404.04925v1#S5.SS6 "5.6 Multi-Modality Extension in MLLMs ‣ 5 Future Work and New Frontier ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
48
+
49
+ 6. [6 Conclusion](https://arxiv.org/html/2404.04925v1#S6 "6 Conclusion ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
50
+ 7. [A Multilingual Performance Evaluation](https://arxiv.org/html/2404.04925v1#A1 "Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
51
+ 1. [A.1 Evaluation Metrics](https://arxiv.org/html/2404.04925v1#A1.SS1 "A.1 Evaluation Metrics ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
52
+ 1. [Traditional Automatic Metric](https://arxiv.org/html/2404.04925v1#A1.SS1.SSS0.Px1 "Traditional Automatic Metric ‣ A.1 Evaluation Metrics ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
53
+ 2. [MLLM-based Automatic Metric](https://arxiv.org/html/2404.04925v1#A1.SS1.SSS0.Px2 "MLLM-based Automatic Metric ‣ A.1 Evaluation Metrics ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
54
+ 3. [Human Evaluation](https://arxiv.org/html/2404.04925v1#A1.SS1.SSS0.Px3 "Human Evaluation ‣ A.1 Evaluation Metrics ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
55
+
56
+ 2. [A.2 Evaluation Benchmarks](https://arxiv.org/html/2404.04925v1#A1.SS2 "A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
57
+ 1. [A.2.1 Natural Language Understanding](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS1 "A.2.1 Natural Language Understanding ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
58
+ 1. [Linguistics Analysis](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS1.Px1 "Linguistics Analysis ‣ A.2.1 Natural Language Understanding ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
59
+ 2. [Semantic Understanding](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS1.Px2 "Semantic Understanding ‣ A.2.1 Natural Language Understanding ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
60
+ 3. [Cultural Understanding](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS1.Px3 "Cultural Understanding ‣ A.2.1 Natural Language Understanding ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
61
+ 4. [Knowledge Understanding](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS1.Px4 "Knowledge Understanding ‣ A.2.1 Natural Language Understanding ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
62
+
63
+ 2. [A.2.2 Natural Language Generation](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS2 "A.2.2 Natural Language Generation ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
64
+ 1. [Translation](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS2.Px1 "Translation ‣ A.2.2 Natural Language Generation ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
65
+ 2. [Reasoning](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS2.Px2 "Reasoning ‣ A.2.2 Natural Language Generation ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
66
+ 3. [Coding Generation](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS2.Px3 "Coding Generation ‣ A.2.2 Natural Language Generation ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
67
+ 4. [Summarization](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS2.Px4 "Summarization ‣ A.2.2 Natural Language Generation ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
68
+ 5. [Dialogue](https://arxiv.org/html/2404.04925v1#A1.SS2.SSS2.Px5 "Dialogue ‣ A.2.2 Natural Language Generation ‣ A.2 Evaluation Benchmarks ‣ Appendix A Multilingual Performance Evaluation ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")
69
+
79
+ License: arXiv.org perpetual non-exclusive license
80
+
81
+ arXiv:2404.04925v1 [cs.CL] 07 Apr 2024
82
+
83
+ ![Image 1: [Uncaptioned image]](https://arxiv.org/html/x1.png)Multilingual Large Language Model:
84
 
85
  A Survey of Resources, Taxonomy and Frontiers
86
+ ================================================================================================================================================
87
 
88
+ Libo Qin♣ Qiguang Chen♠¹ Yuhang Zhou♠ Zhi Chen♢ Yinghui Li♮
89
 
90
  Lizi Liao♯ Min Li♣ Wanxiang Che♠ Philip S. Yu♡
91
 
 
93
 
94
  ♮ Tsinghua University ♯ Singapore Management University ♡ University of Illinois at Chicago
95
 
96
+ lbqin@csu.edu.cn, {qgchen,car}@ir.hit.edu.cn
97
+
98
+ Equal Contribution
99
 
100
  ###### Abstract
101
 
102
  Multilingual Large Language Models leverage powerful Large Language Models to handle and respond to queries in multiple languages, achieving remarkable success in multilingual natural language processing tasks. Despite these breakthroughs, there is still no comprehensive survey summarizing the existing approaches and recent developments in this field. To this end, in this paper we present a thorough review and provide a unified perspective to summarize the recent progress as well as emerging trends in the multilingual large language models (MLLMs) literature. The contributions of this paper can be summarized as follows: (1) First survey: to our knowledge, we take the first step and present a thorough review of the MLLMs research field according to multilingual alignment; (2) New taxonomy: we offer a new and unified perspective to summarize the current progress of MLLMs; (3) New frontiers: we highlight several emerging frontiers and discuss the corresponding challenges; (4) Abundant resources: we collect abundant open-source resources, including relevant papers, data corpora, and leaderboards. We hope our work can provide the community with quick access and spur breakthrough research in MLLMs.
103
 
104
+ ![Image 2: [Uncaptioned image]](https://arxiv.org/html/x2.png)
 
105
  Multilingual Large Language Model:
106
 
107
  A Survey of Resources, Taxonomy and Frontiers
 
113
 
114
  In recent years, remarkable progress has been witnessed in large language models (LLMs) Brown et al. ([2020](https://arxiv.org/html/2404.04925v1#bib.bib44)); Touvron et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib316)); Bang et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib28)); Zhao et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib408)), which have achieved excellent performance on various natural language processing tasks Pan et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib235)); Nguyen et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib227)); Trivedi et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib318)). In addition, LLMs exhibit surprising emergent capabilities, including in-context learning Min et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib212)); Dong et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib89)), chain-of-thought reasoning Wei et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib343)); Huang et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib135)); Qin et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib255)), and even planning Driess et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib91)); Hu et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib131)). Nevertheless, the majority of LLMs are English-centric, primarily focusing on English tasks Held et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib122)); Zhang et al. ([2023i](https://arxiv.org/html/2404.04925v1#bib.bib402)), which leaves them relatively weak in multilingual settings, especially in low-resource scenarios.
115
 
116
+ ![Image 3: Refer to caption](https://arxiv.org/html/x3.png)
117
 
118
  Figure 1: Parameter-Tuning Alignment (§[4.1](https://arxiv.org/html/2404.04925v1#S4.SS1 "4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) vs. Parameter-Frozen Alignment (§[4.2](https://arxiv.org/html/2404.04925v1#S4.SS2 "4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")). The former fine-tunes the MLLM parameters for cross-lingual alignment, while the latter directly uses prompts for alignment without parameter tuning.
119
 
120
+ ![Image 4: Refer to caption](https://arxiv.org/html/x4.png)
121
 
122
  Figure 2: Evolution of selected MLLMs over the past five years, where colored branches indicate different alignment stages. For models with multiple alignment stages, the final stage is represented.
123
 
 
142
 
143
  where Unexpect indicates that the LLM generates output in an unintended language; mono denotes the single language.
144
 
145
+ ![Image 5: Refer to caption](https://arxiv.org/html/x5.png)
146
 
147
  Figure 3: Monolingual Large Language Model vs. Multilingual Large Language Model.
148
 
 
268
 
269
  Some work leverages multilingual RLHF data to improve alignment. Specifically, Lai et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib167)) leverage multilingual ranking data to train a reward model using RLHF. Zeng et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib389)) introduce the TIM dataset to train a more effective reward model in multilingual contexts.
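
To make the reward-modeling step concrete, the sketch below shows how pairwise multilingual ranking data can train a scalar reward model with a Bradley-Terry loss, the standard objective for RLHF reward modeling. This is a minimal illustration, not the exact recipe of Lai et al. or the TIM dataset; the base model, prompt, and data fields are illustrative assumptions.

```python
# Minimal sketch: reward-model training on multilingual ranking pairs
# (chosen vs. rejected responses). Model name and data are illustrative.
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=1  # single scalar reward head
)
optimizer = torch.optim.AdamW(reward_model.parameters(), lr=1e-5)

def ranking_loss(prompt: str, chosen: str, rejected: str) -> torch.Tensor:
    """Bradley-Terry pairwise loss: the chosen response should outscore
    the rejected one."""
    batch = tokenizer(
        [prompt + chosen, prompt + rejected],
        return_tensors="pt", padding=True, truncation=True,
    )
    rewards = reward_model(**batch).logits.squeeze(-1)  # shape (2,)
    return -F.logsigmoid(rewards[0] - rewards[1])

# Toy multilingual ranking pair: the rejected answer replies in an
# unintended language, a failure mode the survey highlights.
loss = ranking_loss(
    prompt="Frage: Was ist die Hauptstadt von Frankreich?\nAntwort: ",
    chosen="Die Hauptstadt von Frankreich ist Paris.",
    rejected="The capital of France is Paris.",
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```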
270
 
271
+ 4 Taxonomy
272
+ ----------
273
+
274
  As shown in Figure [4](https://arxiv.org/html/2404.04925v1#S3.F4 "Figure 4 ‣ 3.1 Multilingual Pretraining Data ‣ 3 Data Resource ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers"), we introduce a novel taxonomy comprising parameter-tuning alignment (§[4.1](https://arxiv.org/html/2404.04925v1#S4.SS1 "4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) and parameter-frozen alignment (§[4.2](https://arxiv.org/html/2404.04925v1#S4.SS2 "4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), which aims to provide a unified view for researchers to understand the MLLM literature. Specifically, parameter-tuning alignment (PTA) comprises a series of progressively advanced training and alignment strategies, including Pretraining Alignment, Supervised Fine-Tuning (SFT) Alignment, Reinforcement Learning from Human Feedback (RLHF) Alignment, and, ultimately, Downstream Fine-Tuning Alignment. These stages collectively refine the model parameters to align multilingual performance systematically. Conversely, parameter-frozen alignment (PFA) focuses on four prompting strategies: Direct Prompting, Code-Switching Prompting, Translation Alignment Prompting, and Retrieval-Augmented Alignment. These methods keep the original model parameters unchanged while achieving the desired outcomes.
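
As a concrete illustration of the PTA/PFA distinction, the sketch below contrasts the two regimes: PTA applies a gradient step so the weights change, while PFA leaves the weights untouched and aligns purely through prompting (here, translation alignment prompting). The model name, parallel text, and prompt templates are illustrative assumptions, not from the survey.

```python
# Minimal sketch of the taxonomy's two regimes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# --- PTA: one gradient step on parallel text (the weights change) ---
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tok("Bonjour le monde. => Hello world.", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss
loss.backward()
optimizer.step()

# --- PFA: translation alignment prompting (the weights stay frozen) ---
model.eval()
prompt = (
    "Translate the question to English, then answer it in English.\n"
    "Question: ¿Cuál es la capital de Francia?\nAnswer:"
)
with torch.no_grad():  # inference only, no parameter update
    out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=48)
print(tok.decode(out[0], skip_special_tokens=True))
```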
275
 
276
+ ![Image 6: Refer to caption](https://arxiv.org/html/x6.png)
277
 
278
  Figure 5: Overview of Parameter-Tuning Alignment (§[4.1](https://arxiv.org/html/2404.04925v1#S4.SS1 "4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) Methods, which include PTA in the Pretraining Stage (§[4.1.1](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS1 "4.1.1 PTA in Pretraining Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), PTA in the SFT Stage (§[4.1.2](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS2 "4.1.2 PTA in SFT Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), PTA in the RLHF Stage (§[4.1.3](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS3 "4.1.3 PTA in RLHF Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")), and PTA in the Downstream Finetuning Stage (§[4.1.4](https://arxiv.org/html/2404.04925v1#S4.SS1.SSS4 "4.1.4 PTA in Downstream Finetuning Stage ‣ 4.1 Parameter-Tuning Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")).
279
 
 
295
 
296
  As illustrated in Figure [5](https://arxiv.org/html/2404.04925v1#S4.F5 "Figure 5 ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers") (b), PTA in the SFT stage means leveraging multilingual task data in instruction format to tune parameters Fu et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib99)); Yang et al. ([2023f](https://arxiv.org/html/2404.04925v1#bib.bib376)); Team ([2023d](https://arxiv.org/html/2404.04925v1#bib.bib309)); Chen et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib54), [g](https://arxiv.org/html/2404.04925v1#bib.bib59)); Ranaldi et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib263)); Li et al. ([2023h](https://arxiv.org/html/2404.04925v1#bib.bib183)); Santilli and Rodolà ([2023](https://arxiv.org/html/2404.04925v1#bib.bib277)); Bao et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib30)); Kohli et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib158)); Holmström and Doostmohammadi ([2023](https://arxiv.org/html/2404.04925v1#bib.bib127)); Garcia et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib104)). In particular, models like Flan-PaLM Chung et al. ([2022b](https://arxiv.org/html/2404.04925v1#bib.bib68)), mT0 and BLOOMz Muennighoff et al. ([2022](https://arxiv.org/html/2404.04925v1#bib.bib218)), PolyLM Wei et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib346)), Tk-Instruct Wang et al. ([2022b](https://arxiv.org/html/2404.04925v1#bib.bib337)), Chinese-Alpaca Cui et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib78)), Bayling Zhang et al. ([2023g](https://arxiv.org/html/2404.04925v1#bib.bib398)) and Phoenix Chen et al. ([2023h](https://arxiv.org/html/2404.04925v1#bib.bib60)) directly incorporated multilingual data in the SFT stage to achieve implicit multilingual alignment across languages. Besides, to address the scarcity of multilingual SFT task data, PaLM2 Anil et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib21)); Zhu et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib417)); Cahyawijaya et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib46)); Li et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib178)); Gao et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib103)) added translation tasks during the SFT alignment stage to improve alignment. Further, Upadhayay and Behzadan ([2023](https://arxiv.org/html/2404.04925v1#bib.bib323)); Chai et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib48)); Zhu et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib416)) began exploring more effective SFT alignment strategies to optimize the reasoning process.
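
The sketch below shows what SFT-stage PTA looks like in practice: multilingual examples are rendered in an instruction template and used to tune all parameters with the standard causal-LM objective, and a translation pair is included, mirroring the translation-task strategy described above. The model, template, and examples are illustrative assumptions, not any specific paper's setup.

```python
# Minimal sketch: SFT-stage PTA on multilingual instruction-format data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

examples = [
    {"instruction": "Résume : Les LLM multilingues progressent rapidement.",
     "response": "Les LLM multilingues s'améliorent vite."},
    # Translation task, used when multilingual SFT data is scarce.
    {"instruction": "Translate to Spanish: Good morning.",
     "response": "Buenos días."},
]

for ex in examples:
    text = (f"### Instruction:\n{ex['instruction']}\n"
            f"### Response:\n{ex['response']}")
    batch = tokenizer(text, return_tensors="pt")
    # Labels equal inputs: the standard causal-LM SFT objective.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```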
297
 
298
+ ![Image 7: Refer to caption](https://arxiv.org/html/x7.png)
299
 
300
  Figure 6: Overview of Parameter-Frozen Alignment (§[4.2](https://arxiv.org/html/2404.04925v1#S4.SS2 "4.2 Parameter-Frozen Alignment ‣ 4 Taxonomy ‣ Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers")) methods, where the prompts in the sub-figures are sourced from Qin et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib256)) and Zhang et al. ([2023f](https://arxiv.org/html/2404.04925v1#bib.bib396)).
301
 
 
873
  ##### Dialogue
874
 
875
  The communication between models and humans is often interactive, so a lot of work pays attention to MLLMs’ dialogue ability Boughorbel and Hawasly ([2023](https://arxiv.org/html/2404.04925v1#bib.bib42)). Current evaluation sets include xDial-Eval Zhang et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib392)), Multi³WOZ Hu et al. ([2023c](https://arxiv.org/html/2404.04925v1#bib.bib133)), DIALIGHT Hu et al. ([2024](https://arxiv.org/html/2404.04925v1#bib.bib132)), HPD Chen et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib52)) and X-RiSAWOZ Moradshahi et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib214)). Since multi-turn dialogue is not controllable, traditional metrics cannot be applied; current work therefore tends to use PLMs for evaluation Mendonça et al. ([2023a](https://arxiv.org/html/2404.04925v1#bib.bib209)). Furthermore, Mendonça et al. ([2023b](https://arxiv.org/html/2404.04925v1#bib.bib210)) proposed a new benchmark that achieves more robust evaluation by coordinating with pretrained language models. Ferron et al. ([2023](https://arxiv.org/html/2404.04925v1#bib.bib95)) proposed the MEEP benchmark to further evaluate the dialogue participation of MLLMs.
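
Since the passage notes that multi-turn dialogue evades traditional metrics and that PLM-based evaluation is common, here is a minimal sketch of one reference-free variant: scoring a candidate response by its conditional perplexity given the dialogue context. This illustrates the general idea only; the model choice and scoring scheme are assumptions, not the protocol of any benchmark cited above.

```python
# Minimal sketch: reference-free dialogue scoring with a pretrained LM.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model.eval()

def response_perplexity(context: str, response: str) -> float:
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full = tokenizer(context + response, return_tensors="pt").input_ids
    labels = full.clone()
    labels[:, :ctx_len] = -100  # score only the response tokens
    # (the context/response token boundary is approximate in this sketch)
    with torch.no_grad():
        loss = model(full, labels=labels).loss  # mean NLL over response
    return math.exp(loss.item())

score = response_perplexity(
    context="User: 你好，今天天气怎么样？\nAssistant: ",
    response="今天晴朗，适合出门散步。",
)
print(f"perplexity = {score:.2f}")  # lower suggests a more fluent reply
```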
876
+
877
+ Generated on Sun Apr 7 11:17:04 2024 by [LaTeXML](http://dlmf.nist.gov/LaTeXML/)