---
tags:
- paraphrase-generation
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicParaphrase
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license: cc-by-nc-4.0
---

# MultiIndicParaphraseGeneration

This repository contains the [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint finetuned on the 11 languages of the [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) dataset. For finetuning details, see the [paper](https://arxiv.org/abs/2203.05437).

## Using this model in `transformers`

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicParaphraseGenerationSS")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids  # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])

# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval()  # Set dropouts to zero
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
# Decode to get output strings
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # I am a boy
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.
# What if we mask?
inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # I am happy
inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # मैं जानता हूँ
inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2mr>"))
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # मला ओळखलं पाहिजे
```
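
The script conversion mentioned in the comments can be done with the [Indic NLP Library](https://github.com/anoopkunchukuttan/indic_nlp_library). A minimal sketch, assuming the library is installed (`pip install indic-nlp-library`); this snippet is illustrative and not part of the original card:

```
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

# Convert a Devanagari-script output to, e.g., Tamil script (language codes are ISO 639-1).
devanagari_output = "मैं जानता हूँ"  # an example decoded output
tamil_script = UnicodeIndicTransliterator.transliterate(devanagari_output, "hi", "ta")
print(tamil_script)
```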

## Benchmarks

Scores on the `IndicParaphrase` test sets are as follows:

Language | BLEU | Self-BLEU | iBLEU
---------|------|-----------|------
as | 1.19 | 1.64 | 0.34
bn | 10.04 | 1.08 | 6.70
gu | 18.69 | 1.62 | 12.60
hi | 25.05 | 1.75 | 17.01
kn | 13.14 | 1.89 | 8.63
ml | 8.71 | 1.36 | 5.69
mr | 18.50 | 1.49 | 12.50
or | 23.02 | 2.68 | 15.31
pa | 17.61 | 1.37 | 11.92
ta | 16.25 | 2.13 | 10.74
te | 14.16 | 2.29 | 9.23
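
For reference, the reported iBLEU numbers are consistent with the usual definition iBLEU = α · BLEU − (1 − α) · Self-BLEU with α = 0.7, which rewards high BLEU against the reference while penalizing copying of the input (Self-BLEU); e.g., for hi: 0.7 × 25.05 − 0.3 × 1.75 ≈ 17.01.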

## Citation

If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url={https://arxiv.org/abs/2203.05437}
}
```