VaishalBusiness committed on
Commit 04e6ed3 · verified · 1 Parent(s): fd9935d

Add README.md for Helsinki-NLP-opus-mt-tc-big-gmq-he

Helsinki-NLP-opus-mt-tc-big-gmq-he/README.md ADDED
---
language:
- da
- he
- sv

tags:
- translation
- opus-mt-tc

license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-gmq-he
  results:
  - task:
      name: Translation dan-heb
      type: translation
      args: dan-heb
    dataset:
      name: flores101-devtest
      type: flores_101
      args: dan heb devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 22.9
    - name: chr-F
      type: chrf
      value: 0.52815
  - task:
      name: Translation isl-heb
      type: translation
      args: isl-heb
    dataset:
      name: flores101-devtest
      type: flores_101
      args: isl heb devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 14.2
    - name: chr-F
      type: chrf
      value: 0.42284
  - task:
      name: Translation nob-heb
      type: translation
      args: nob-heb
    dataset:
      name: flores101-devtest
      type: flores_101
      args: nob heb devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 19.2
    - name: chr-F
      type: chrf
      value: 0.49492
  - task:
      name: Translation swe-heb
      type: translation
      args: swe-heb
    dataset:
      name: flores101-devtest
      type: flores_101
      args: swe heb devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 23.0
    - name: chr-F
      type: chrf
      value: 0.52408
---

# opus-mt-tc-big-gmq-he

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)

## Model Details

Neural machine translation model for translating from North Germanic languages (gmq) to Hebrew (he).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models are originally trained with the [Marian NMT](https://marian-nmt.github.io/) framework, an efficient NMT implementation written in pure C++, and then converted to PyTorch using the Hugging Face transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and the training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
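
Because the checkpoint is a converted Marian model, it loads through the standard Marian classes in transformers. A minimal sketch that inspects the converted checkpoint (the printed fields are only illustrative sanity checks, not part of the original card):

```python
from transformers import MarianMTModel, MarianTokenizer

# Published Hugging Face id of this converted checkpoint
model_id = "Helsinki-NLP/opus-mt-tc-big-gmq-he"

tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Basic sanity checks on the converted checkpoint
print(type(model).__name__)                     # MarianMTModel
print(model.config.model_type)                  # "marian"
print(f"parameters: {model.num_parameters():,}")
print(f"vocab size: {tokenizer.vocab_size}")
```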

**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release:** 2022-07-28
- **License:** CC-BY-4.0
- **Language(s):**
  - Source Language(s): dan nor swe
  - Target Language(s): heb
  - Language Pair(s): dan-heb swe-heb
  - Valid Target Language Labels:
- **Original Model:** [opusTCv20210807_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.zip)
- **Resources for more information:**
  - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
  - More information about released models for this language pair: [OPUS-MT gmq-heb README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-heb/README.md)
  - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)

## Uses

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

## How to Get Started With the Model

A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

# Source sentences: one Danish, one Swedish
src_text = [
    "Alle L.L. Zamenhofs tre børn blev myrdet i holocausten.",
    "Tom visade sig vara spion."
]

model_name = "Helsinki-NLP/opus-mt-tc-big-gmq-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize the batch, translate, and decode the generated token ids
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
#     כל שלושת הילדים של אל-אל זאמנהוף נרצחו בשואה.
#     מסתבר שטום היה מרגל.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-gmq-he")
print(pipe("Alle L.L. Zamenhofs tre børn blev myrdet i holocausten."))

# expected output: כל שלושת הילדים של אל-אל זאמנהוף נרצחו בשואה.
```
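
For longer lists of sentences it can help to batch the inputs and set decoding options explicitly. A minimal sketch using the same checkpoint; the batch size, beam count, and optional GPU placement are illustrative choices, not settings prescribed by this card:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-gmq-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Use a GPU when available; CPU works as well
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

sentences = [
    "Alle L.L. Zamenhofs tre børn blev myrdet i holocausten.",
    "Tom visade sig vara spion.",
]

translations = []
batch_size = 8  # illustrative value
for i in range(0, len(sentences), batch_size):
    batch = tokenizer(sentences[i:i + batch_size], return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        generated = model.generate(**batch, num_beams=4, max_new_tokens=128)
    translations.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))

print(translations)
```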

## Training

- **Data:** opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing:** SentencePiece (spm32k,spm32k); see the tokenization sketch below
- **Model Type:** transformer-big
- **Original MarianNMT Model:** [opusTCv20210807_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.zip)
- **Training Scripts:** [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
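
The MarianTokenizer bundled with the converted checkpoint wraps the SentencePiece models used for the pre-processing noted above, so you can inspect how a source sentence is segmented into subword pieces. A minimal sketch (the example sentence is taken from the snippet above):

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-gmq-he")

# Show the SentencePiece subword segmentation of a Danish source sentence
sentence = "Alle L.L. Zamenhofs tre børn blev myrdet i holocausten."
pieces = tokenizer.tokenize(sentence)
ids = tokenizer.convert_tokens_to_ids(pieces)

print(pieces)  # subword pieces; "▁" marks word boundaries
print(ids)
```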

## Evaluation

* test set translations: [opusTCv20210807_transformer-big_2022-07-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-07-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| dan-heb | flores101-devtest | 0.52815 | 22.9 | 1012 | 20749 |
| isl-heb | flores101-devtest | 0.42284 | 14.2 | 1012 | 20749 |
| nob-heb | flores101-devtest | 0.49492 | 19.2 | 1012 | 20749 |
| swe-heb | flores101-devtest | 0.52408 | 23.0 | 1012 | 20749 |
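
The chr-F and BLEU scores above come from the OPUS-MT benchmark pipeline. If you want to score your own output with the same metrics, sacrebleu provides corpus-level BLEU and chrF. A minimal sketch, assuming plain-text hypothesis and reference files with one sentence per line (the file names are placeholders):

```python
import sacrebleu

# Placeholder paths: hypotheses aligned line by line with references
with open("hypotheses.heb.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.heb.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

print(f"BLEU:  {bleu.score:.1f}")
print(f"chr-F: {chrf.score / 100:.5f}")  # the table above reports chr-F on a 0-1 scale
```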

## Citation Information

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)

```
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```

## Acknowledgements

The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113), and by the [MeMAD project](https://memad.eu/), funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.

## Model conversion info

* transformers version: 4.16.2
* OPUS-MT git hash: 8b9f0b0
* port time: Sat Aug 13 00:03:50 EEST 2022
* port machine: LM0-400-22516.local