Commit a1aef4b by nur-dev (verified, parent: aaf9ed7): Update README.md

README.md:
---
license: apache-2.0
datasets:
- kz-transformers/multidomain-kazakh-dataset
language:
- kk
library_name: transformers
pipeline_tag: fill-mask
---

# RoBERTa-kaz-large

## Model Description
`roberta-kaz-large` is a RoBERTa-based language model for Kazakh, trained from scratch with the `RobertaForMaskedLM` architecture on the `kz-transformers/multidomain-kazakh-dataset` from Hugging Face, which covers diverse domains to support broad generalization.

## Usage
The model can be used with the Hugging Face Transformers library:
```python
from transformers import RobertaTokenizerFast, RobertaForMaskedLM

# Load the tokenizer and model from the Hugging Face Hub
# (repo id assumed to be nur-dev/roberta-kaz-large; adjust if your copy lives elsewhere)
tokenizer = RobertaTokenizerFast.from_pretrained('nur-dev/roberta-kaz-large')
model = RobertaForMaskedLM.from_pretrained('nur-dev/roberta-kaz-large')
```
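For reference, here is a minimal sketch of querying the loaded model directly for a single masked position (the example sentence and the top-5 cutoff are illustrative, and PyTorch is assumed to be installed):
```python
import torch

# Illustrative Kazakh sentence with a single <mask> token:
# "The capital of Kazakhstan is the city of <mask>."
text = "Қазақстанның астанасы <mask> қаласы."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Position of the <mask> token in the input sequence
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

# Top-5 candidate tokens for the masked position
top_ids = logits[0, mask_pos].softmax(dim=-1).topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```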
Or directly through a `fill-mask` pipeline:
```python
from transformers import pipeline

# Fill-mask pipeline (repo id assumed to be nur-dev/roberta-kaz-large)
pipe = pipeline('fill-mask', model='nur-dev/roberta-kaz-large')

# Kazakh sentence with several <mask> tokens; the pipeline returns one list of candidates per mask
predicted = pipe("Қазіргі <mask> әлемдік деңгейдегі <mask> университеттері сапалы білім, зияткерлік және мәдени <mask> беретін <mask> <mask> <mask> ғана емес, сонымен қатар мемлекет үшін <mask> қабілетті адами капиталды құратын <mask>, ғылым және өндірісті интеграциялаудың <mask> <mask> болып табылады.")

# Print the top candidate (score and token) for each masked position
for t in predicted:
    print(t[0]['score'], t[0]['token_str'])
```
## Training procedure
The model was trained on two NVIDIA A100 GPUs over more than 5.3 million examples from `kz-transformers/multidomain-kazakh-dataset`. Training ran for 10 epochs (208,100 optimizer steps in total), using gradient accumulation to reach a large effective batch size and a learning-rate warmup phase to keep optimization stable, with the overall goal of improving the model's representation of the Kazakh language.
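As a rough illustration only, the sketch below shows comparable masked-language-model pretraining with the Transformers `Trainer`. The batch size, accumulation steps, warmup, learning rate, dataset column name, and model configuration are assumptions for the sketch, not the exact values used for `roberta-kaz-large`:
```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Multidomain Kazakh corpus used for pretraining
dataset = load_dataset("kz-transformers/multidomain-kazakh-dataset", split="train")

# Tokenizer assumed to be published alongside the model
tokenizer = RobertaTokenizerFast.from_pretrained("nur-dev/roberta-kaz-large")

def tokenize(batch):
    # Assumes the corpus stores raw text in a "text" column
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Fresh RoBERTa-large-sized model (illustrative configuration)
config = RobertaConfig(
    vocab_size=len(tokenizer),
    hidden_size=1024,
    num_hidden_layers=24,
    num_attention_heads=16,
    intermediate_size=4096,
    max_position_embeddings=514,
)
model = RobertaForMaskedLM(config)

# Standard MLM objective: randomly mask 15% of tokens
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="roberta-kaz-large",
    num_train_epochs=10,
    per_device_train_batch_size=32,   # illustrative
    gradient_accumulation_steps=8,    # large effective batch via accumulation
    warmup_steps=10_000,              # learning-rate warmup for stability
    learning_rate=1e-4,               # illustrative
    save_steps=50_000,
    logging_steps=1_000,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```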

## Limitations and Bias
As with any language model, `roberta-kaz-large` may inherently learn biases present in the training data. Users should be cautious and evaluate the model in diverse contexts to ensure it performs as expected, especially in sensitive applications.