spapi committed
Commit 9df35ed · verified · 1 Parent(s): 2604857

Add MT inference code to the main README

Files changed (1):
  1. README.md +51 -1
README.md CHANGED
@@ -123,8 +123,58 @@ To download and process YouTube-Commons, please refer to the
 [dedicated YouTube-Commons README](https://huggingface.co/datasets/FBK-MT/fama-data/blob/main/scripts/YouTube-Commons-README.md).
 
 The code used to produce all translations with [MADLAD-400 3B-MT](https://huggingface.co/google/madlad400-3b-mt) is the following:
+```python
+import sys
+
+import torch
+from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+modelname = "google/madlad400-3b-mt"
+batch_size = {$BATCH_SIZE}
+tlang = {$LANGUAGE}
+
+
+class BatchedMT:
+    """Buffers input lines and translates them in batches of `batch_size`."""
+
+    def __init__(self, tokenizer, model):
+        self.buffer_lines = []
+        self.model = model
+        if torch.cuda.is_available():
+            self.model = self.model.cuda()
+        self.tokenizer = tokenizer
+
+    def process_line(self, line):
+        # Accumulate lines; translate as soon as a full batch is buffered
+        self.buffer_lines.append(line.strip())
+        if len(self.buffer_lines) >= batch_size:
+            self.print_translations()
+            self.buffer_lines = []
+
+    def print_translations(self):
+        outs = self._do_translate()
+        for s in outs:
+            print(s)
+
+    def _do_translate(self):
+        tokens = self.tokenizer(self.buffer_lines, return_tensors="pt", padding=True)
+        if torch.cuda.is_available():
+            tokens = {k: v.cuda() for k, v in tokens.items()}
+        translated = self.model.generate(**tokens, max_new_tokens=512)
+        return [self.tokenizer.decode(t, skip_special_tokens=True) for t in translated]
+
+    def close(self):
+        # Flush the last, possibly partial, batch
+        if len(self.buffer_lines) > 0:
+            self.print_translations()
+            self.buffer_lines = []
+
+
+mt = BatchedMT(
+    AutoTokenizer.from_pretrained(modelname),
+    AutoModelForSeq2SeqLM.from_pretrained(modelname))
+
+# MADLAD-400 marks the target language with a "<2xx>" prefix token
+for input_line in sys.stdin:
+    mt.process_line("<2" + tlang + "> " + input_line)
+mt.close()
 ```
-```
+where the input text is passed via stdin, `{$BATCH_SIZE}` is the largest batch size supported by your machine,
+and `{$LANGUAGE}` is either `en` for Italian-to-English translation or `it` for English-to-Italian translation.
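+For example, assuming the snippet above is saved as `translate.py` (a purely illustrative name) with the
+two placeholders filled in, Italian-to-English translations can be produced with
+`python translate.py < transcripts.it > transcripts.en`. A minimal single-sentence sketch of the same
+`<2xx>` prompting convention (the example sentence is illustrative):
+```python
+from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+tok = AutoTokenizer.from_pretrained("google/madlad400-3b-mt")
+model = AutoModelForSeq2SeqLM.from_pretrained("google/madlad400-3b-mt")
+
+# "<2en>" asks MADLAD-400 to translate the input into English
+inputs = tok("<2en> Il gatto dorme sul divano.", return_tensors="pt")
+out = model.generate(**inputs, max_new_tokens=64)
+print(tok.decode(out[0], skip_special_tokens=True))  # an English translation of the Italian input
+```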
 
 The script used for filtering the ST datasets is
 [`filter_tsv_based_on_ratio`](https://huggingface.co/datasets/FBK-MT/fama-data/blob/main/scripts/filter_tsv_based_on_ratio.py) and