---
pipeline_tag: summarization
language:
- multilingual
library_name: transformers
license: apache-2.0
tags:
- summarization
- multilingual
- seq2seq
---
# multi-lang_summay

A fine-tuned sequence-to-sequence model for multilingual abstractive summarization.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

repo_id = "vatsal18/multi-lang_summay"
tok = AutoTokenizer.from_pretrained(repo_id)
mdl = AutoModelForSeq2SeqLM.from_pretrained(repo_id).eval()

text = "Paste any article (any supported language) here."

# Tokenize, truncating inputs to the 1024-token encoder limit.
enc = tok(text, return_tensors="pt", truncation=True, max_length=1024)

# Beam search with a mild brevity bias (length_penalty < 1).
with torch.no_grad():
    out = mdl.generate(**enc, max_new_tokens=128, num_beams=4, length_penalty=0.8)

print(tok.decode(out[0], skip_special_tokens=True))
```
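Note that inputs longer than the 1024-token limit above are silently truncated. A common workaround for long articles is to split the token ids into overlapping windows, summarize each window, and then combine (or re-summarize) the partial outputs. A minimal, model-independent sketch of the windowing step (the helper name `chunk_tokens` and the stride value are illustrative assumptions, not part of this model):

```python
def chunk_tokens(ids, max_len=1024, stride=896):
    """Split a token-id list into overlapping windows of at most max_len.

    Consecutive windows start `stride` tokens apart, so each window shares
    max_len - stride tokens of context with the previous one.
    """
    chunks = []
    for start in range(0, len(ids), stride):
        chunks.append(ids[start:start + max_len])
        if start + max_len >= len(ids):
            break  # the last window already reaches the end of the input
    return chunks
```

Each chunk can then be fed through `mdl.generate` exactly as in the snippet above, with the per-chunk summaries concatenated or summarized once more.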