Datasets: metaphors committed on commit 74d0bf6 · verified · parent: b1efac5

Update README.md

Files changed (1): README.md (+30 −2)

@@ -1,2 +1,30 @@
- MiLMo: Minority Multilingual Pre-trained Language Model constructs a minority multilingual text classification dataset named MiTC. While training the pre-trained models for minority languages, the paper builds MiTC across several languages; it contains 82,662 samples.
+ # MiTC
+
+ ## Introduction
+
+ [MiLMo](https://github.com/CMLI-NLP/MiLMo) constructs a minority multilingual text classification dataset named MiTC, which covers five languages: Mongolian, Tibetan, Uyghur, Kazakh, and Korean.
+
+ We also use [MiLMo](https://github.com/CMLI-NLP/MiLMo) for the downstream text classification experiment on MiTC.
+
+ ## Hugging Face
+
+ https://huggingface.co/datasets/CMLI-NLP/MiTC
+
+ ## Citation
+
+ Plain Text:
+ J. Deng, H. Shi, X. Yu, W. Bao, Y. Sun and X. Zhao, "MiLMo: Minority Multilingual Pre-Trained Language Model," 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Honolulu, Oahu, HI, USA, 2023, pp. 329-334, doi: 10.1109/SMC53992.2023.10393961.
+
+ BibTeX:
+ ```
+ @INPROCEEDINGS{10393961,
+   author={Deng, Junjie and Shi, Hanru and Yu, Xinhe and Bao, Wugedele and Sun, Yuan and Zhao, Xiaobing},
+   booktitle={2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
+   title={MiLMo: Minority Multilingual Pre-Trained Language Model},
+   year={2023},
+   pages={329-334},
+   keywords={Soft sensors;Text categorization;Social sciences;Government;Data acquisition;Morphology;Data models;Multilingual;Pre-trained language model;Datasets;Word2vec},
+   doi={10.1109/SMC53992.2023.10393961}}
+ ```