VaishalBusiness committed (verified)
Commit fef931e · Parent(s): be26f8a

Add README.md for Helsinki-NLP-opus-mt-tc-big-lt-en

Helsinki-NLP-opus-mt-tc-big-lt-en/README.md ADDED
---
language:
- en
- lt
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-lt-en
  results:
  - task:
      name: Translation lit-eng
      type: translation
      args: lit-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: lit eng devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 34.3
  - task:
      name: Translation lit-eng
      type: translation
      args: lit-eng
    dataset:
      name: newsdev2019
      type: newsdev2019
      args: lit-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 32.9
  - task:
      name: Translation lit-eng
      type: translation
      args: lit-eng
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: lit-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 61.6
  - task:
      name: Translation lit-eng
      type: translation
      args: lit-eng
    dataset:
      name: newstest2019
      type: wmt-2019-news
      args: lit-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 32.3
---
# opus-mt-tc-big-lt-en

Neural machine translation model for translating from Lithuanian (lt) to English (en).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many of the world's languages. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using the Hugging Face transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/), and the training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite if you use this model):

```bibtex
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```

## Model info

* release: 2022-02-25
* source language(s): lit
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-eng/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information on released models: [OPUS-MT lit-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-eng/README.md)
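
The spm32k entries above mean that both source and target text are segmented into subword pieces with a learned 32k-piece SentencePiece vocabulary. As a rough, self-contained illustration of the piece convention only (a toy greedy longest-match over a small hypothetical vocabulary, not the actual SentencePiece algorithm, which learns its pieces from data):

```python
def greedy_subword(word, vocab):
    # Greedy longest-match segmentation over a fixed subword vocabulary.
    # "▁" marks the start of a word, as in SentencePiece output.
    pieces, s = [], "▁" + word
    while s:
        for end in range(len(s), 0, -1):
            # fall back to a single character when no longer piece matches
            if s[:end] in vocab or end == 1:
                pieces.append(s[:end])
                s = s[end:]
                break
    return pieces

# hypothetical vocabulary for illustration; the real model uses 32k learned pieces
vocab = {"▁kat", "ė", "▁sed", "ėjo", "▁ant", "▁kėd", "ės"}
print(greedy_subword("katė", vocab))    # ['▁kat', 'ė']
print(greedy_subword("sedėjo", vocab))  # ['▁sed', 'ėjo']
```

The real tokenizer additionally scores alternative segmentations by likelihood; this sketch only shows how words decompose into boundary-marked pieces.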

## Usage

A short code example:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "Katė sedėjo ant kėdės.",
    "Jukiko mėgsta bulves."
]

model_name = "Helsinki-NLP/opus-mt-tc-big-lt-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
#     The cat sat on a chair.
#     Yukiko likes potatoes.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-lt-en")
print(pipe("Katė sedėjo ant kėdės."))

# expected output: The cat sat on a chair.
```

## Benchmarks

* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-eng/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-eng/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| lit-eng | tatoeba-test-v2021-08-07 | 0.74881 | 61.6 | 2528 | 17855 |
| lit-eng | flores101-devtest | 0.60662 | 34.3 | 1012 | 24721 |
| lit-eng | newsdev2019 | 0.59995 | 32.9 | 2000 | 49312 |
| lit-eng | newstest2019 | 0.61742 | 32.3 | 1000 | 25878 |
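
The chr-F column reports a character n-gram F-score, which rewards partial matches at the character level and is less sensitive to tokenization than BLEU. A simplified sketch of the idea (character n-gram precision and recall averaged over n = 1..6 and combined into a recall-weighted F2 score; the reference implementation in sacreBLEU handles details this omits):

```python
from collections import Counter

def char_ngrams(text, n):
    # character n-grams with whitespace removed, as in chrF
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # average n-gram precision and recall over n = 1..max_n,
    # then combine them into an F-beta score (beta=2 favors recall)
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

print(chrf("The cat sat on a chair.", "The cat sat on a chair."))  # identical strings score 1.0
```

For reproducing the scores in the table, use sacreBLEU rather than this sketch.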

## Acknowledgements

The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113), and by the [MeMAD project](https://memad.eu/), funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.

## Model conversion info

* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:55:51 EEST 2022
* port machine: LM0-400-22516.local