---
configs:
- config_name: gsm8k_araeng
  data_files:
  - split: test
    path:
    - "gsm8k/gsm8k_araeng.csv"
- config_name: gsm8k_chieng
  data_files:
  - split: test
    path:
    - "gsm8k/gsm8k_chieng.csv"
- config_name: gsm8k_hineng
  data_files:
  - split: test
    path:
    - "gsm8k/gsm8k_hineng.csv"
- config_name: gsm8k_spaeng
  data_files:
  - split: test
    path:
    - "gsm8k/gsm8k_spaeng.csv"
- config_name: lid_chieng
  data_files:
  - split: test
    path:
    - "lid/lid_chieng.csv"
- config_name: lid_fridut
  data_files:
  - split: test
    path:
    - "lid/lid_fridut.csv"
- config_name: lid_gereng
  data_files:
  - split: test
    path:
    - "lid/lid_gereng.csv"
- config_name: lid_guaspa
  data_files:
  - split: test
    path:
    - "lid/lid_guaspa.csv"
- config_name: lid_hineng
  data_files:
  - split: test
    path:
    - "lid/lid_hineng.csv"
- config_name: lid_hokman
  data_files:
  - split: test
    path:
    - "lid/lid_hokman.csv"
- config_name: lid_mareng
  data_files:
  - split: test
    path:
    - "lid/lid_mareng.csv"
- config_name: lid_msaea
  data_files:
  - split: test
    path:
    - "lid/lid_msaea.csv"
- config_name: lid_nepeng
  data_files:
  - split: test
    path:
    - "lid/lid_nepeng.csv"
- config_name: mmlu_araeng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_araeng.csv"
- config_name: mmlu_beneng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_beneng.csv"
- config_name: mmlu_chieng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_chieng.csv"
- config_name: mmlu_duteng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_duteng.csv"
- config_name: mmlu_freeng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_freeng.csv"
- config_name: mmlu_gereng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_gereng.csv"
- config_name: mmlu_hineng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_hineng.csv"
- config_name: mmlu_mareng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_mareng.csv"
- config_name: mmlu_nepeng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_nepeng.csv"
- config_name: mmlu_spaeng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_spaeng.csv"
- config_name: mmlu_tameng
  data_files:
  - split: test
    path:
    - "mmlu/mmlu_tameng.csv"
- config_name: mt_araeng_eng
  data_files:
  - split: test
    path:
    - "mt/mt_araeng_eng.csv"
- config_name: mt_beneng_eng
  data_files:
  - split: test
    path:
    - "mt/mt_beneng_eng.csv"
- config_name: mt_chieng_chi
  data_files:
  - split: test
    path:
    - "mt/mt_chieng_chi.csv"
- config_name: mt_chieng_eng
  data_files:
  - split: test
    path:
    - "mt/mt_chieng_eng.csv"
- config_name: mt_hineng_eng
  data_files:
  - split: test
    path:
    - "mt/mt_hineng_eng.csv"
- config_name: mt_hokman_man
  data_files:
  - split: test
    path:
    - "mt/mt_hokman_man.csv"
- config_name: mt_mareng_eng
  data_files:
  - split: test
    path:
    - "mt/mt_mareng_eng.csv"
- config_name: mt_spaeng_eng
  data_files:
  - split: test
    path:
    - "mt/mt_spaeng_eng.csv"
- config_name: ner_guaspa
  data_files:
  - split: test
    path:
    - "ner/ner_guaspa.csv"
- config_name: ner_hineng
  data_files:
  - split: test
    path:
    - "ner/ner_hineng.csv"
- config_name: ner_msaea
  data_files:
  - split: test
    path:
    - "ner/ner_msaea.csv"
- config_name: ner_spaeng
  data_files:
  - split: test
    path:
    - "ner/ner_spaeng.csv"
- config_name: pos_chieng
  data_files:
  - split: test
    path:
    - "pos/pos_chieng.csv"
- config_name: pos_fridut
  data_files:
  - split: test
    path:
    - "pos/pos_fridut.csv"
- config_name: pos_hineng
  data_files:
  - split: test
    path:
    - "pos/pos_hineng.csv"
- config_name: pos_spaeng
  data_files:
  - split: test
    path:
    - "pos/pos_spaeng.csv"
- config_name: sa_beneng
  data_files:
  - split: test
    path:
    - "sa/sa_beneng.csv"
- config_name: sa_hineng
  data_files:
  - split: test
    path:
    - "sa/sa_hineng.csv"
- config_name: sa_maleng
  data_files:
  - split: test
    path:
    - "sa/sa_maleng.csv"
- config_name: sa_mareng
  data_files:
  - split: test
    path:
    - "sa/sa_mareng.csv"
- config_name: sa_nepeng
  data_files:
  - split: test
    path:
    - "sa/sa_nepeng.csv"
- config_name: sa_spaeng
  data_files:
  - split: test
    path:
    - "sa/sa_spaeng.csv"
- config_name: sa_tameng
  data_files:
  - split: test
    path:
    - "sa/sa_tameng.csv"
- config_name: truthfulqa_araeng
  data_files:
  - split: test
    path:
    - "truthfulqa/truthfulqa_araeng.csv"
- config_name: truthfulqa_chieng
  data_files:
  - split: test
    path:
    - "truthfulqa/truthfulqa_chieng.csv"
- config_name: truthfulqa_hineng
  data_files:
  - split: test
    path:
    - "truthfulqa/truthfulqa_hineng.csv"
- config_name: truthfulqa_spaeng
  data_files:
  - split: test
    path:
    - "truthfulqa/truthfulqa_spaeng.csv"
license: apache-2.0
language:
- zh
- en
- es
- hi
- de
- nl
- fy
- fr
- ar
- bn
- mr
- ne
- ta
- ml
- gn

size_categories:
- 10K<n<100K

task_categories:
- text-generation
- question-answering
- translation
- text-classification

tags:
- code-mixing
- multilingual
- llm-evaluation
- benchmark
---
# ℹ️Dataset Card for CodeMixBench

## [EMNLP'25] [CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages](https://arxiv.org/abs/2507.18791)

   <a href="https://github.com/Jeromeyluck/CodeMixBench" target="_blank">
      <img alt="Github" src="https://img.shields.io/badge/🐙-Github-blue" />
   </a>
        
  <a href="https://arxiv.org/abs/2507.18791" target="_blank">
      <img alt="Paper" src="https://img.shields.io/badge/📜-Paper-purple" />
   </a>
  <a href="https://2025.emnlp.org/" target="_blank">
      <img alt="EMNLP 2025" src="https://img.shields.io/badge/Proceedings-EMNLP2025-blue" />
   </a>



<!-- Provide a quick summary of the dataset. -->

Code-mixing is a linguistic phenomenon where multilingual speakers switch or mix two or more languages within a single utterance or conversation. 
To evaluate LLMs’ comprehension of multilingual code-mixed texts, we introduce CodeMixBench, a benchmark comprising eight tasks across 18 languages. 

![Statistics of 18 languages](./pics/18_languages.png)


## 🔎Dataset Details

Our benchmark comprises synthesized datasets targeting knowledge reasoning, 
mathematical reasoning, and truthfulness tasks, along with LID, POS, NER, SA, and MT tasks, 
which have been adapted from open-source studies. 


### CodeMixBench vs. Others

Previous benchmarks, such as GLUECoS and LinCE, primarily focus on traditional NLP tasks and are limited to a small number of languages. 
LinCE includes four language pairs and five NLP tasks: Language Identification (LID), 
Part-of-Speech tagging (POS), Named Entity Recognition (NER), Sentiment Analysis (SA), and Machine Translation (MT). 
In contrast, GLUECoS covers two language pairs and lacks the MT task, but adds Question Answering (QA) and Natural Language Inference (NLI). 
Our review of recent code-mixing studies indicates that research extends beyond the language pairs used in LinCE and GLUECoS. 
We therefore expanded to 16 language pairs and introduced tasks better suited to evaluating LLMs, 
such as Multi-Choice, Math, and Truthfulness, for a total of eight tasks.

![language pairs and tasks](./pics/language_pairs.png)

### Statistics of Synthetic Datasets
For knowledge reasoning, we developed the code-mixed MMLU (CM-MMLU) based on the MMLU test set, 
featuring multiple-choice questions from 57 subjects to assess the model's comprehensive knowledge reasoning abilities. 
For mathematical reasoning, we created the code-mixed GSM8K (CM-GSM8K), derived from the GSM8K test set, 
which evaluates mathematical reasoning capabilities; each question includes a step-by-step solution. 
For truthfulness assessment, we constructed the code-mixed TruthfulQA (CM-TruthfulQA) using 817 multiple-choice 
questions from the TruthfulQA test set. 

![Statistics of synthesized datasets](./pics/statistics_synthesized_dataset.png)

### Statistics of Collected Datasets
We selected and reconstructed 30 datasets from existing open-source projects. To comprehensively evaluate the performance of large 
models on code-mixing, we aimed to encompass a diverse range of language families and tasks, prioritizing manually annotated datasets. 
Ultimately, we cover traditional NLP tasks such as Language Identification (LID), Named Entity Recognition (NER), 
Part-of-Speech tagging (POS), Sentiment Analysis (SA), and Machine Translation (MT), and cover 16 languages from seven language families: 
Germanic (en, de, nl, fy), Sino-Tibetan (zh, hok), Romance (es), Afro-Asiatic (msa, ea), Indo-Aryan (hi, bn, ne, mr), Dravidian (ta, ml), and Tupian (gn).

![Statistics of collected datasets](./pics/statistics_collected_dataset.png)

### Experimental Results
We evaluate three families of LLMs on CodeMixBench, revealing consistent underperformance across all models on code-mixing 
datasets whose language pairs come from different language families. However, increases 
in training data size, model scale, post-training, and few-shot learning can all improve LLM performance on code-mixing datasets.

![Main evaluation results on CodeMixBench](./pics/main_result_lineplot.png)

![Main evaluation results on CodeMixBench](./pics/other_result.png)



## 🚀Load CodeMixBench

   Take the GSM8K task code-mixed in Chinese and English, `gsm8k_chieng`, as an example:
   
   ```python
   from datasets import load_dataset

   # Load one subset by its config name (defined in the YAML header above).
   dataset = load_dataset('CodeMixBench/CodeMixBench', 'gsm8k_chieng', split='test')
   ```
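Each config name above follows a `task_langpair` pattern, and every CSV sits in a folder named after its task, so a `data_files` path can also be derived from the config name. A minimal sketch (the helper name is ours; the convention is read off the config list in the header):

```python
def config_to_path(config_name: str) -> str:
    """Map a CodeMixBench config name, e.g. 'gsm8k_chieng', to its CSV path."""
    task = config_name.split("_")[0]      # the folder is the task prefix
    return f"{task}/{config_name}.csv"

# config_to_path('mt_chieng_eng') -> 'mt/mt_chieng_eng.csv'
# load_dataset('CodeMixBench/CodeMixBench',
#              data_files={'test': config_to_path('mt_chieng_eng')})
```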

### 📍Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/Jeromeyluck/CodeMixBench/
- **Paper:** [CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages](https://huggingface.co/papers/2507.18791)

## Setup

1. Follow these steps to set up your development environment:
   ```bash
   git clone git@github.com:Jeromeyluck/CodeMixBench.git
   cd CodeMixBench

   conda create -n CodeMixBench python=3.9
   conda activate CodeMixBench
   pip install -r requirements.txt
   ```
   
2. To launch an LLM for testing:
   ```bash
   python ./test_model.py \
     --dataset lid_guaspa \
     --expid lid_guaspa_all_0shot \
     --model gpt-3.5-turbo \
     --shot 5 \
     --api sk-********************* \
     --url https://****************
   ```
   - `dataset`: selects the dataset (e.g., `lid_gereng`, `lid_spaeng`, `ner_hineng`).
   - `expid`: the experiment ID; the results file is named after it.
   - `model`: the model to test (default: `gpt-3.5-turbo`).
   - `shot`: the number of shots for few-shot testing (default: `1`).
   - `api`: the API key (defaults to the `OPENAI_API_KEY` environment variable).
   - `url`: the API provider's base URL.
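To sweep several subsets with the same settings, the invocation above can be scripted. A minimal sketch, assuming `test_model.py` accepts exactly the flags listed; the dataset list and experiment naming here are illustrative, and `--api`/`--url` are omitted so the `OPENAI_API_KEY` default applies:

```python
import subprocess

def build_cmd(dataset: str, model: str = "gpt-3.5-turbo", shot: int = 0) -> list[str]:
    """Assemble the test_model.py invocation for one dataset."""
    expid = f"{dataset}_{shot}shot"     # illustrative naming scheme
    return [
        "python", "./test_model.py",
        "--dataset", dataset,
        "--expid", expid,
        "--model", model,
        "--shot", str(shot),
    ]

for ds in ["lid_hineng", "ner_hineng", "pos_hineng", "sa_hineng"]:
    cmd = build_cmd(ds)
    print(" ".join(cmd))                # dry run; swap in the line below to execute
    # subprocess.run(cmd, check=True)
```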


## 🔗Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@misc{yang2025codemixbenchevaluatingcodemixingcapabilities,
  title={CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages},
  author={Yilun Yang and Yekun Chai},
  year={2025},
  eprint={2507.18791},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.18791},
}
```