ethanthoma Satya10 committed
Commit 6a7dba3 · verified · 0 Parent(s)

Duplicate from Babelscape/rebel-large


Co-authored-by: Satya <Satya10@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,28 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,175 @@
+ ---
+ language:
+ - en
+ widget:
+ - text: "Punta Cana is a resort town in the municipality of Higüey, in La Altagracia Province, the easternmost province of the Dominican Republic"
+ tags:
+ - seq2seq
+ - relation-extraction
+ datasets:
+ - Babelscape/rebel-dataset
+ model-index:
+ - name: REBEL
+   results:
+   - task:
+       name: Relation Extraction
+       type: Relation-Extraction
+     dataset:
+       name: "CoNLL04"
+       type: CoNLL04
+     metrics:
+     - name: RE+ Macro F1
+       type: re+ macro f1
+       value: 76.65
+   - task:
+       name: Relation Extraction
+       type: Relation-Extraction
+     dataset:
+       name: "NYT"
+       type: NYT
+     metrics:
+     - name: F1
+       type: f1
+       value: 93.4
+ license: cc-by-nc-sa-4.0
+ ---
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rebel-relation-extraction-by-end-to-end/relation-extraction-on-nyt)](https://paperswithcode.com/sota/relation-extraction-on-nyt?p=rebel-relation-extraction-by-end-to-end)
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rebel-relation-extraction-by-end-to-end/relation-extraction-on-conll04)](https://paperswithcode.com/sota/relation-extraction-on-conll04?p=rebel-relation-extraction-by-end-to-end)
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rebel-relation-extraction-by-end-to-end/joint-entity-and-relation-extraction-on-3)](https://paperswithcode.com/sota/joint-entity-and-relation-extraction-on-3?p=rebel-relation-extraction-by-end-to-end)
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rebel-relation-extraction-by-end-to-end/relation-extraction-on-ade-corpus)](https://paperswithcode.com/sota/relation-extraction-on-ade-corpus?p=rebel-relation-extraction-by-end-to-end)
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rebel-relation-extraction-by-end-to-end/relation-extraction-on-re-tacred)](https://paperswithcode.com/sota/relation-extraction-on-re-tacred?p=rebel-relation-extraction-by-end-to-end)
+
+ ## Multilingual update! Check [mREBEL](https://huggingface.co/Babelscape/mrebel-large), a multilingual version covering more relation types and languages, and including entity types.
+
+ # REBEL <img src="https://i.ibb.co/qsLzNqS/hf-rebel.png" width="30" alt="hf-rebel" border="0" style="display:inline; white-space:nowrap;">: Relation Extraction By End-to-end Language generation
+ This is the model card for the Findings of EMNLP 2021 paper [REBEL: Relation Extraction By End-to-end Language generation](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf). We present a new linearization approach and a reframing of Relation Extraction as a seq2seq task. If you use the code, please reference this work in your paper:
+
+     @inproceedings{huguet-cabot-navigli-2021-rebel-relation,
+     title = "{REBEL}: Relation Extraction By End-to-end Language generation",
+     author = "Huguet Cabot, Pere-Llu{\'\i}s and
+       Navigli, Roberto",
+     booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
+     month = nov,
+     year = "2021",
+     address = "Punta Cana, Dominican Republic",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.findings-emnlp.204",
+     pages = "2370--2381",
+     abstract = "Extracting relation triplets from raw text is a crucial task in Information Extraction, enabling multiple applications such as populating or validating knowledge bases, factchecking, and other downstream tasks. However, it usually involves multiple-step pipelines that propagate errors or are limited to a small number of relation types. To overcome these issues, we propose the use of autoregressive seq2seq models. Such models have previously been shown to perform well not only in language generation, but also in NLU tasks such as Entity Linking, thanks to their framing as seq2seq tasks. In this paper, we show how Relation Extraction can be simplified by expressing triplets as a sequence of text and we present REBEL, a seq2seq model based on BART that performs end-to-end relation extraction for more than 200 different relation types. We show our model{'}s flexibility by fine-tuning it on an array of Relation Extraction and Relation Classification benchmarks, with it attaining state-of-the-art performance in most of them.",
+     }
+
+ The original repository for the paper can be found [here](https://github.com/Babelscape/rebel).
+
+ Be aware that the inference widget on the right does not output the special tokens that are necessary to distinguish the subject, object and relation types. REBEL linearizes each triplet as `<triplet> head entity <subj> tail entity <obj> relation`, and the `extract_triplets` helpers below parse this format back into dictionaries. For a demo of REBEL and its pre-training dataset, check the [Spaces demo](https://huggingface.co/spaces/Babelscape/rebel-demo).
+
+ ## Pipeline usage
+
+ ```python
+ from transformers import pipeline
+
+ triplet_extractor = pipeline('text2text-generation', model='Babelscape/rebel-large', tokenizer='Babelscape/rebel-large')
+ # We need to decode the generated token ids with the tokenizer manually,
+ # since the pipeline's text output strips the special tokens we need.
+ extracted_text = triplet_extractor.tokenizer.batch_decode([triplet_extractor("Punta Cana is a resort town in the municipality of Higüey, in La Altagracia Province, the easternmost province of the Dominican Republic", return_tensors=True, return_text=False)[0]["generated_token_ids"]])
+ print(extracted_text[0])
+
+ # Parse the generated text and extract the triplets.
+ # REBEL linearizes a triplet as: <triplet> head entity <subj> tail entity <obj> relation
+ def extract_triplets(text):
+     triplets = []
+     subject, relation, object_ = '', '', ''
+     text = text.strip()
+     current = 'x'
+     for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").split():
+         if token == "<triplet>":
+             # A new triplet starts: flush the previous one if it is complete.
+             current = 't'
+             if relation != '':
+                 triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
+                 relation = ''
+             subject = ''
+         elif token == "<subj>":
+             # The tokens that follow belong to the tail entity.
+             current = 's'
+             if relation != '':
+                 triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
+             object_ = ''
+         elif token == "<obj>":
+             # The tokens that follow belong to the relation label.
+             current = 'o'
+             relation = ''
+         else:
+             if current == 't':
+                 subject += ' ' + token
+             elif current == 's':
+                 object_ += ' ' + token
+             elif current == 'o':
+                 relation += ' ' + token
+     if subject != '' and relation != '' and object_ != '':
+         triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
+     return triplets
+
+ extracted_triplets = extract_triplets(extracted_text[0])
+ print(extracted_triplets)
+ ```
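+
+ As a quick sanity check, `extract_triplets` can be exercised on a hand-written linearized string (the relation label below is illustrative; the labels the model actually generates may differ):
+
+ ```python
+ # Hand-written example, for exercising the parser only.
+ example = "<s><triplet> Punta Cana <subj> Dominican Republic <obj> country</s>"
+ print(extract_triplets(example))
+ # [{'head': 'Punta Cana', 'type': 'country', 'tail': 'Dominican Republic'}]
+ ```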
+
+ ## Model and Tokenizer using transformers
+
+ ```python
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+ # Parse the generated text and extract the triplets (same helper as above).
+ def extract_triplets(text):
+     triplets = []
+     subject, relation, object_ = '', '', ''
+     text = text.strip()
+     current = 'x'
+     for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").split():
+         if token == "<triplet>":
+             current = 't'
+             if relation != '':
+                 triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
+                 relation = ''
+             subject = ''
+         elif token == "<subj>":
+             current = 's'
+             if relation != '':
+                 triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
+             object_ = ''
+         elif token == "<obj>":
+             current = 'o'
+             relation = ''
+         else:
+             if current == 't':
+                 subject += ' ' + token
+             elif current == 's':
+                 object_ += ' ' + token
+             elif current == 'o':
+                 relation += ' ' + token
+     if subject != '' and relation != '' and object_ != '':
+         triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
+     return triplets
+
+ # Load model and tokenizer
+ tokenizer = AutoTokenizer.from_pretrained("Babelscape/rebel-large")
+ model = AutoModelForSeq2SeqLM.from_pretrained("Babelscape/rebel-large")
+ gen_kwargs = {
+     "max_length": 256,
+     "length_penalty": 0,
+     "num_beams": 3,
+     "num_return_sequences": 3,
+ }
+
+ # Text to extract triplets from
+ text = 'Punta Cana is a resort town in the municipality of Higüey, in La Altagracia Province, the easternmost province of the Dominican Republic.'
+
+ # Tokenize text
+ model_inputs = tokenizer(text, max_length=256, padding=True, truncation=True, return_tensors='pt')
+
+ # Generate
+ generated_tokens = model.generate(
+     model_inputs["input_ids"].to(model.device),
+     attention_mask=model_inputs["attention_mask"].to(model.device),
+     **gen_kwargs,
+ )
+
+ # Decode with the special tokens kept, so extract_triplets can parse them
+ decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)
+
+ # Extract triplets
+ for idx, sentence in enumerate(decoded_preds):
+     print(f'Predicted triplets for sentence {idx}')
+     print(extract_triplets(sentence))
+ ```
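+
+ With `num_return_sequences` set to 3, `decoded_preds` holds three beam candidates for the input sentence, and the same triplet often shows up in more than one of them. A small deduplication pass over the candidates (a sketch building on the code above, not part of the original card):
+
+ ```python
+ # Deduplicate triplets across the beam candidates.
+ unique = {tuple(sorted(t.items())) for s in decoded_preds for t in extract_triplets(s)}
+ print([dict(t) for t in unique])
+ ```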
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {"<obj>": 50265, "<head>": 50268, "<triplet>": 50267, "<tail>": 50270, "<subj>": 50266, "</tail>": 50271, "</head>": 50269}
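The ids above sit at the top end of the vocabulary (`config.json` below declares `"vocab_size": 50272`). As a quick check (a sketch, assuming access to the Hub), the tokenizer should resolve each added token to its listed id:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Babelscape/rebel-large")
for token in ["<obj>", "<subj>", "<triplet>", "<head>", "</head>", "<tail>", "</tail>"]:
    print(token, tokenizer.convert_tokens_to_ids(token))
```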
config.json ADDED
@@ -0,0 +1,61 @@
+ {
+   "_name_or_path": "model/rebel-large",
+   "activation_dropout": 0.1,
+   "activation_function": "gelu",
+   "add_bias_logits": false,
+   "add_final_layer_norm": false,
+   "architectures": [
+     "BartForConditionalGeneration"
+   ],
+   "attention_dropout": 0.1,
+   "bos_token_id": 0,
+   "classif_dropout": 0.1,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 12,
+   "decoder_start_token_id": 0,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 12,
+   "eos_token_id": 2,
+   "forced_eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2"
+   },
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_2": 2
+   },
+   "max_length": 200,
+   "max_position_embeddings": 1024,
+   "model_type": "bart",
+   "normalize_before": false,
+   "num_beams": 4,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "scale_embedding": false,
+   "task_specific_params": {
+     "relation_extraction": {
+       "length_penalty": 0.0,
+       "max_length": 256,
+       "min_length": 12,
+       "no_repeat_ngram_size": 0,
+       "num_beams": 4
+     }
+   },
+   "torch_dtype": "float32",
+   "transformers_version": "4.9.1",
+   "use_cache": true,
+   "vocab_size": 50272
+ }
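The `task_specific_params` block records the generation settings used for relation extraction. Rather than hand-writing `gen_kwargs`, they can be read back from the config (a minimal sketch under the config shown above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Babelscape/rebel-large")
gen_kwargs = config.task_specific_params["relation_extraction"]
print(gen_kwargs)  # length_penalty, max_length, min_length, no_repeat_ngram_size, num_beams
```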
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42b2eb3bfe329f45ed2a805b715f849d05dc2b838bc2aefb46ee34e627a7b204
+ size 1625455696
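The three lines above are a Git LFS pointer rather than the weights themselves: `oid` is the sha256 of the real file and `size` its byte count. A local copy can be verified against the pointer (a sketch, assuming `huggingface_hub` is installed):

```python
import hashlib
from huggingface_hub import hf_hub_download

# Download (or reuse a cached copy of) the weights, then hash them in chunks.
path = hf_hub_download("Babelscape/rebel-large", "model.safetensors")
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
print(digest.hexdigest())  # should match the oid above
```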
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:407feb8d55cae8ee077aa032a4aab5577a5503f910d090593626ebd6fccb6cff
+ size 1625590959
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}, "additional_special_tokens": ["<obj>", "<subj>", "<triplet>", "<head>", "</head>", "<tail>", "</tail>"]}
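Because these markers are registered as additional special tokens, the tokenizer keeps each one atomic instead of splitting it into subwords (an illustrative check, assuming access to the Hub):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Babelscape/rebel-large")
print(tokenizer.tokenize("<triplet> Punta Cana <subj>"))
# "<triplet>" and "<subj>" should each appear as a single token.
```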
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 1024, "additional_special_tokens": ["<obj>", "<subj>", "<triplet>", "<head>", "</head>", "<tail>", "</tail>"], "special_tokens_map_file": null, "name_or_path": "model/rebel-large", "tokenizer_class": "BartTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff