Graph Pre-training for AMR Parsing and Generation
Paper: arXiv:2203.07836
This model is a fine-tuned version of AMRBART-large on the AMR2.0 dataset. It achieves a Smatch score of 85.4 on the evaluation set. More details are given in the paper Graph Pre-training for AMR Parsing and Generation by Bai et al., ACL 2022.
The model description is the same as that of AMRBART.
The model is fine-tuned on AMR2.0, a dataset consisting of 36,521 training instances, 1,368 validation instances, and 1,371 test instances.
You can use the model for AMR parsing, but it is mostly intended for use on news-domain text.
Here is how to initialize this model in PyTorch:
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing")
Please refer to the AMRBART repository for tokenizer initialization and data preprocessing.
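The parser produces linearized AMR graphs, which are conventionally written in PENMAN notation. The sketch below shows a standard hand-written example of that notation for the sentence "The boy wants to go", together with a small illustrative helper for pulling out the concept labels; the example string and the `concepts` helper are for illustration only and are not produced by, or part of, the AMRBART codebase:

```python
import re

# A hand-written PENMAN parse of "The boy wants to go" (illustrative,
# not model output): variables (w, b, g) are bound to concepts after "/",
# and :ARG0/:ARG1 are the graph's relations.
amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"

def concepts(penman: str) -> list:
    """Extract concept labels (the tokens following '/') from a PENMAN string."""
    return re.findall(r"/\s*([^\s()]+)", penman)

print(concepts(amr))  # -> ['want-01', 'boy', 'go-02']
```

Note that the reentrancy of `b` (the boy is both the wanter and the goer) is what makes AMR a graph rather than a tree, and is what graph-aware metrics like Smatch score.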
Please cite this paper if you find this model helpful:
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}