Commit 46bdd51
Parent(s): 6f162cd
Update README.md
README.md (CHANGED)

# BERT based temporal tagger

Token classifier for temporal tagging of plain text using the BERT language model. The model is introduced in the paper BERT got a Date: Introducing Transformers to Temporal Tagging and released in this [repository](https://github.com/satya77/Transformer_Temporal_Tagger).

# Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. We use BERT for token classification to tag the tokens in text with the following classes:
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
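
For intuition, a short sentence tagged with this scheme would look roughly as follows. This is a hand-made illustration of the BIO convention, not actual model output:
```
We      O
met     O
two     B-DATE
days    I-DATE
ago     I-DATE
and     O
talked  O
for     O
three   B-DURATION
hours   I-DURATION
.       O
```
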
# Intended uses & limitations

This model is best used together with the code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output can be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output.

# How to use

You can load the model as follows:
```
from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier")
```

For inference use:
```
# input_text is the plain-text string you want to tag
processed_text = tokenizer(input_text, return_tensors="pt")
result = model(**processed_text)
classification = result[0]
```

For an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
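
If you only need a quick look at the raw predictions without the repository tooling, one minimal sketch (reusing the `tokenizer`, `model`, `processed_text`, and `classification` objects from above) is to take the argmax over the logits and map the label ids back to tag names via `model.config.id2label`:
```
import torch

# classification holds the token-level logits with shape (batch, sequence_length, num_labels)
predicted_ids = torch.argmax(classification, dim=2)

# map word pieces and label ids back to readable strings;
# id2label is assumed to contain the BIO tag names listed above
tokens = tokenizer.convert_ids_to_tokens(processed_text["input_ids"][0].tolist())
tags = [model.config.id2label[i.item()] for i in predicted_ids[0]]

for token, tag in zip(tokens, tags):
    print(token, tag)
```
Note that this prints one tag per word piece (including `[CLS]` and `[SEP]`); the `merge_tokens` helper and the alignment functions in the repository are the recommended way to turn this into clean spans.
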
To further fine-tune, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
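
As a rough outline of that fine-tuning loop (not the repository script itself; `train_dataset` and `eval_dataset` are placeholders for your own tokenized datasets with labels aligned to the BIO classes above):
```
from transformers import Trainer, TrainingArguments

# hypothetical output directory; see the Training procedure section for the reported hyperparameters
training_args = TrainingArguments(output_dir="temporal_tagger_bert")

trainer = Trainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()
```
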
# Training data

We use 3 data sources:
[Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), Wikiwars, Tweets datasets. For the correct data versions please refer to our [repository](https://github.com/satya77/Transformer_Temporal_Tagger).

# Training procedure

The model is trained from publicly available checkpoints on Hugging Face (`bert-base-uncased`), with a batch size of 34. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 5 different random seeds; this version of the model uses seed 4.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
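
Expressed as `TrainingArguments`, the reported setup corresponds roughly to the sketch below. The output directory is a placeholder, and reading "Adam optimizer and linear weight decay" as AdamW with a linear learning-rate schedule (the `Trainer` defaults) is an assumption; refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger) for the exact configuration:
```
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="temporal_tagger_bert",   # placeholder
    per_device_train_batch_size=34,      # reported batch size of 34
    learning_rate=5e-5,                  # reported learning rate of 5e-05
    lr_scheduler_type="linear",          # assumed linear decay schedule
    seed=4,                              # this checkpoint corresponds to seed 4
)
```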