
Abstractive Open Information Extraction

Kevin Pei¹, Ishan Jindal², Kevin Chen-Chuan Chang¹

¹University of Illinois at Urbana-Champaign, ²IBM Research
¹{kspei2, kcchang}@illinois.edu, ²ishan.jindal@ibm.com

Abstract

Open Information Extraction (OpenIE) is a traditional NLP task that extracts structured information from unstructured text to be used for other downstream applications. Traditionally, OpenIE focuses on extracting the surface forms of relations as they appear in the raw text, which we term extractive OpenIE. One of the main drawbacks of this approach is that implicit semantic relations (inferred relations) cannot be extracted, compromising the performance of downstream applications. In this paper, we broaden the scope of OpenIE relations from merely the surface form of relations to include inferred relations, which we term abstractive OpenIE. This new task calls for the development of a new abstractive OpenIE training dataset and a baseline neural model that can extract those inferred relations. We also demonstrate the necessity for a new semantics-based metric for evaluating abstractive OpenIE extractions. Via a case study on Complex QA, we demonstrate the effectiveness of abstractive OpenIE.

1 Introduction

Open Information Extraction (OpenIE) is the task of extracting relation tuples from unstructured text (Etzioni et al., 2008; Mausam et al., 2012; Angeli et al., 2015). Unlike traditional information extraction, OpenIE is open domain, intended to be easy to deploy in different domains without fine-tuning. These relations can then be used in downstream applications like summarization (Zhang et al., 2021), question-answering (Lu et al., 2019), and knowledge base population (Kroll et al., 2021). In order to support these applications, OpenIE needs to extract as many different types of relations as possible.

Sample Sentence: Tokyo, officially Tokyo Metropolis, is the capital city of Japan and one of its 47 prefectures.
Extractive OpenIE Extractions: {Tokyo; is; the capital city of Japan}, {Tokyo; is; one of its 47 prefectures}
Abstractive OpenIE Extractions: {Tokyo; is; the capital city of Japan}, {Tokyo; is officially; Tokyo Metropolis}, {Tokyo; is; a prefecture} or {Tokyo; is; one of Japan's 47 prefectures}

Table 1: Examples of relations that extractive OpenIE models cannot extract. In this sentence, the apposition "officially Tokyo Metropolis" has no predicate but still has a relation with the noun "Tokyo". In the last abstractive relation, "one of its 47 prefectures" is meaningless without the context of the rest of the sentence. It would be more useful to replace the object with "a prefecture" or "one of Japan's 47 prefectures", neither of which appear in the sentence. Preexisting OpenIE models cannot extract these abstractive relations.

One particular relation type of interest is "Inferred Relations". We define an "Inferred Relation" to be a relation where the predicate contains words that are not in the original sentence. For example, given the sentence "Albert Einstein (14 March 1879 - 18 April 1955) was a German-born theoretical physicist", the relation (Albert Einstein, died on, 18 April 1955) can be inferred even though "died on" is not in the original sentence. Extracting inferred relations increases recall, which is explicitly desired by various downstream tasks including question-answering, slot filling, event schema induction, summarization, and knowledge base population (Pei et al., 2022). Based on the number of inferred relations in the manually annotated dataset WiRe57, extracting inferred relations could increase the total number of relations extracted by 50% (Léchelle et al., 2018). Existing neural OpenIE models struggle to extract these inferred relations, with only one previous model, OpenIE6, including hand-written rules to extract only some cases of inferred relations (Kolluru et al., 2020a). Table 1 has an example of an inferred relation.

Another problem is that the extraction is very dependent on the sentence's syntax. For downstream applications using OpenIE, it is important to be able to extract either different surface forms of a relation or its canonical form. The surface form refers to how the relation appears within the text, while the canonical form refers to its semantic meaning. In question answering (QA), several methods repeatedly paraphrase the questions so that the surface forms of extracted relations match at least one of the question paraphrases, indicating that extracting more surface forms of a relation would answer more questions (Fader et al., 2013, 2014; Yin et al., 2015). In addition, the more complex a sentence's syntax is, such as having more clauses, the more difficult it is to extract high-quality relations. An illustrative example of the limits of extracting only surface forms can be found in Table 1.

By design, all existing neural OpenIE models are unable to extract these abstractive relations, which could be utilized by downstream applications. Therefore, in this work, we propose an abstractive Open Information Extraction (abstractive OpenIE) task. The purpose of this task is to extract relation tuples that are beyond the reach of any existing OpenIE task. We define abstractive OpenIE as a task that, given an input sentence, generates ordered tuples in the form of (subject, predicate, object) for all possible relations (inferred or non-inferred) within the sentence.

Although not explicitly defined as such, existing neural models often treat OpenIE as a labeling problem, where tokens are labeled as being part of the subject, predicate, or object of a relation (Kolluru et al., 2020a; Vasilkovsky et al., 2022). Even in cases where OpenIE is defined as a generative problem, the generated relations do not contain words outside the vocabulary of the original sentence (Kolluru et al., 2020b; Han and Wang, 2021). Due to this labeling problem definition, prior neural OpenIE models struggle to extract relations with predicates that do not appear in the original sentence. We refer to all preexisting neural OpenIE models as extractive OpenIE methods, because they can only generate relations by extracting tokens from the original sentence.

One such attempt to go beyond extractive OpenIE is the OpenIE6 model (Kolluru et al., 2020a). It explicitly concatenates manually defined out-of-vocabulary tokens at the end of each sentence to allow for the extraction of specific inferred relations. However, obtaining such a list is non-trivial and cannot scale to every domain. We differ from OpenIE6 in the sense that abstractive OpenIE models trained on abstractive OpenIE training datasets generate these inferred relations on the fly and do not require defining a list of out-of-vocabulary tokens. Therefore, in this paper, we derive abstractive OpenIE training datasets from existing information extraction datasets and train a baseline machine-learning model that extracts abstractive relations.

Further, we also develop an abstractive OpenIE evaluation metric to evaluate the quality of abstractive OpenIE models. Our problem warrants a new evaluation metric because all existing OpenIE evaluation metrics are lexical, evaluated based on the token overlap between the predicted relations and the gold standard relations. These lexical metrics are undesirable for the proposed task, as the relations extracted by an abstractive OpenIE model do not have to use the tokens present in the input sentence. Therefore, we propose a semantics-based metric for evaluating abstractive OpenIE models.

In summary, our contributions are as follows:

  • We propose an abstractive OpenIE task to expand the scope of OpenIE extractions compared to prior extractive OpenIE models.
  • We derive an abstractive OpenIE training dataset and develop an initial abstractive OpenIE model as a baseline.
  • We propose a general-purpose semantics-based evaluation metric for evaluating any OpenIE model.
  • We perform a comprehensive comparison between abstractive and extractive OpenIE models.

2 Related Work

OpenIE Datasets: Given how data-hungry deep learning models are and how costly it is to manually label OpenIE datasets, most OpenIE training sets are weakly labeled using high-confidence extractions from prior OpenIE models to get "silver-standard" labels. For example, the CopyAttention (Cui et al., 2018), SpanOIE (Zhan and Zhao, 2020), and OIE4 (Kolluru et al., 2020b) training sets are created from high-confidence OpenIE4 extractions from Wikipedia. LSOIE (Solawetz and Larson, 2021) is instead created from examples from the QA-SRL 2.0 dataset. Because traditional OpenIE is extractive, there are no inferred relations in OpenIE training sets, with only hand-labeled benchmarks containing inferred relations. As a result, these training sets are not well-suited for training an abstractive OpenIE model.

| Dataset | Sentences | Relations | Relations with Inferred Predicates | Relations with Inferred Predicates or Arguments |
|---|---|---|---|---|
| Training sets | | | | |
| OIE4 | 90K | 160K | 0 | 0 |
| OIE4 Back Translated | 44K | 61K | 19K | 48K |
| OIE4 with SuRE Relations | 90K | 178K | 16K | 16K |
| OIE4 Back Translated with SuRE Relations | 44K | 69K | 26K | 56K |
| Test sets | | | | |
| WiRe57 | 57 | 343 | 116 | 120 |
| CaRB | 634 | 2715 | 736 | 798 |
| ReOIE2016 | 683 | 1508 | 155 | 156 |
| LSOIE | 2402 | 5371 | 0 | 0 |

Table 2: Comparison of the attributes of different datasets. SuRE is the relation extraction model we use to obtain additional inferred relations for training (Lu et al., 2022).

Sample Sentence: The purse contains the seal of Order of the Garter.
Back Translated Sentence: In the handbag is the seal of the Order of the Garter.
Relations: {The purse; contains; the seal of Order of the Garter}

Table 3: An example of paraphrasing via back translation. The sentence is from the OIE4 training set.

In contrast, there are several benchmarks with inferred relations. WiRe57 (Léchelle et al., 2018) consists of 57 manually annotated sentences. CaRB (Bhardwaj et al., 2019) uses crowdsourcing to re-annotate the sentences in the OIE2016 benchmark, the first commonly used OpenIE benchmark (Stanovsky and Dagan, 2016). ReOIE2016 (Zhan and Zhao, 2020) is a different manual re-annotation of OIE2016 that attempts to resolve problems arising from incorrect extractions. LSOIE also has its own benchmark created using the same method as its training set. WiRe57, CaRB, and ReOIE2016 all contain inferred relations, making them useful for evaluating abstractive OpenIE.

OpenIE Models: OpenIE6 is a neural OpenIE model that performs BIOES tagging for the subject, predicate, and object of each relation (Kolluru et al., 2020a). At the end of each sentence, it appends the tokens "be", "of", and "from" so that they can also be tagged as part of the predicate. However, this method limits inferred relation extraction to only relations containing the tokens they manually specify, and it does not help with the issue of extracting only the surface form of the relation.

IMoJIE is an OpenIE model that tries to reduce the redundancy of relations by appending extracted relations to the end of each sentence (Kolluru et al., 2020b). This new sentence is then given as input so the model can identify what relations have previously been extracted, at the cost of significantly reduced extraction speed. Although it uses a generative neural model, IMoJIE relies on its copy mechanism to extract relations, so its vocabulary is limited to tokens within the original sentence. In addition, the focus on reducing redundancy means it is also constrained to extracting only a single surface form of each relation in each sentence.

Gen2OIE is an OpenIE model that fine-tunes a seq2seq model to generate relations (Kolluru et al., 2022). It follows a two-stage approach, where predicates are first extracted, then arguments are extracted for each predicate. Unlike previous OpenIE models, Gen2OIE can generate relations using tokens that do not appear in the original sentence.

Closed Information Extraction (CIE) is a related task where relations within an existing KB are extracted from unstructured text. GenIE proposes a generative model to perform this task (Josifoski et al., 2021). However, CIE is an inherently more limited task than OpenIE due to its dependence on a preexisting domain. CIE models are unable to extract relations from new and emerging domains and require human effort to transfer to new domains.

OpenIE Evaluation Metrics: Existing OpenIE metrics are lexical: extracted relations are evaluated based on the token overlap between the predicted relations and the gold standard relations. In particular, OIE2016 is based on tuple-level matching, treating relation extraction as a binary classification problem where a gold standard relation is extracted if a predicted relation contains a majority of tokens in the gold standard relation (Stanovsky and Dagan, 2016). WiRe57 and CaRB use token-level matching, where predicted relations are evaluated based on the token overlap between the best matches between the predicted and gold standard relations (Léchelle et al., 2018; Bhardwaj et al., 2019). Because the abstractive relations extracted using abstractive OpenIE do not have to use the original sentence's tokens, evaluating them using lexical metrics is undesirable.

Sample Sentence: In 569, unopposed, Alboin took northern Italy's main city, Milan.
Extractive Relations: {Alboin; took; In 569 northern Italy's main city, Milan}
SuRE-Extracted Relations: {northern Italy's main city; is also known as; Milan}

Table 4: An example of data augmentation via relation extraction. The method used for relation extraction is SuRE (Lu et al., 2022). The sentence is from the OIE4 training set.

There has been previous interest in semantics-based metrics for evaluating abstractive summarization and machine translation. BERTScore is a popular metric that calculates the cosine similarity between the BERT contextual embeddings of each token in the document and each token in the summary. The highest total similarity score possible from the mapping of tokens in the document to tokens in the summary is then chosen as the BERTScore (Zhang et al., 2019). In theory, this metric would take into account the context of each word, which would capture the semantics of each word. However, it has been found that BERTScore may still be insufficient in cases where individual tokens like negations significantly change the meaning of the sentence, even if it is marginally better than lexical methods like BLEU, ROUGE, and METEOR (Saadany and Orasan, 2021).

3 Abstractive OpenIE

Abstractive OpenIE is defined as a task that generates ordered tuples in the form of (subject, predicate, object) for all possible relations (inferred or non-inferred) within a given sentence. In this section, we will describe all the pieces required to accomplish this task.

3.1 Training Sets

Although there are existing OpenIE training sets, they do not fit our goals because they are purely extractive. The training set needs to contain inferred relations so that trained models can extract inferred relations. To address this problem, we use two methods to derive abstractive OpenIE training sets from OIE4, a preexisting OpenIE training set:

Paraphrasing Via Back Translation

Back translation is the translation of a text into a different language, then translation back into the original language (Edunov et al., 2018). The resulting text should retain the same semantic meaning, but may differ in the specific words or syntax used. To generate abstractive OpenIE training data, we generate back translations of the sentences but retain the gold standard relations. Because the back translated sentences use different words and syntax, the gold standard relations may no longer consist of only words from the original sentence, thus becoming inferred relations. We provide an example in Table 3.

When generating paraphrases, we need to make sure that the paraphrased sentence has the same semantic meaning as the original sentence and contains the same relations. Thus, we perform a validation step where we use entailment to measure the quality of the paraphrase. During this step, we use three measures to ensure the quality of the paraphrase. We measure whether the original sentence entails the paraphrase to ensure the paraphrase doesn't contain extraneous information not in the original sentence. We measure whether the paraphrase entails the original sentence to ensure the paraphrase contains all information present in the original sentence. Finally, we measure whether the paraphrased sentence entails all of the gold standard relations to ensure that the relations are the same for the original sentence and the paraphrase. If any of these hypotheses does not have an entailment confidence above a certain threshold, then we do not use the paraphrase in the training data.
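The three checks above can be sketched as a single validation routine. Here `entail` stands in for an NLI model returning a confidence in [0, 1]; the function names and the toy overlap-based stand-in below are illustrative, not the paper's implementation, and relations are assumed already flattened into sentences:

```python
def validate_paraphrase(original, paraphrase, relations, entail, threshold=0.8):
    """Accept a back-translated paraphrase only if every entailment check passes."""
    checks = [
        entail(original, paraphrase),   # paraphrase adds no extraneous information
        entail(paraphrase, original),   # paraphrase loses no information
    ]
    # every gold standard relation must still follow from the paraphrase
    checks += [entail(paraphrase, relation) for relation in relations]
    return all(score >= threshold for score in checks)

# toy stand-in for an NLI model: token-overlap ratio (illustration only)
def toy_entail(premise, hypothesis):
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)

accepted = validate_paraphrase(
    "The purse contains the seal of Order of the Garter.",
    "The purse contains the seal of the Order of the Garter.",
    ["The purse contains the seal of Order of the Garter."],
    toy_entail)
print(accepted)  # True
```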

Data Augmentation Via Relation Extraction

Although paraphrasing can create inferred relations in that the words used may not match the sentence exactly, the relations remain fundamentally the same. The inferred relations that benchmarks such as WiRe57 contain are not derived from paraphrases of the sentence, so creating paraphrases as training data for them is not appropriate. Instead, we augment the data with additional inferred relations derived using relation extraction (RE). We provide an example in Table 4.

Sample Sentence: Sharon had been in a coma since suffering a stroke in January 2006.
Sample Relations: {Sharon; had been; in a coma}, {Sharon; suffering; a stroke in January 2006}
Sample Predicate Prediction Input: predicates: Sharon had been in a coma since suffering a stroke in January 2006. [pred] had been [pred] suffering
Sample Argument Prediction Inputs:
  args: Sharon had been in a coma since suffering a stroke in January 2006. [pred] had been [arg1] Sharon [arg2] in a coma
  args: Sharon had been in a coma since suffering a stroke in January 2006. [pred] had been [pred] suffering [arg1] Sharon [arg2] a stroke in January 2006

Table 5: Illustrative training example. For each sentence, there is one predicate prediction example and a number of argument prediction examples equal to the number of gold standard relations. The model first extracts all predicates, then for each predicate extracts the arguments.

RE also aims to extract relations from unstructured text, but instead of being completely open domain, RE is limited to extracting a specific set of relations that must be defined beforehand (Bach and Badaskar, 2007). However, those relations may take a variety of surface forms. For instance, the relation "country_of_birth" could take the form "Einstein was born in Ulm", "Einstein (born 14 March 1879 in Ulm)", among other forms. We thus use RE models to extract additional inferred relations for abstractive OpenIE training. To ensure quality and prevent redundancy, we only keep extracted relations above a certain level of confidence and which are not entailed by or entail preexisting OpenIE gold standard relations.
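This filtering step can be sketched as follows. The confidence scores would come from the RE model and `entail` from an NLI model; the helper names are ours, and the 0.8 thresholds mirror the confidence levels used later in the paper:

```python
def filter_re_relations(gold_relations, re_extractions, entail,
                        conf_threshold=0.8, entail_threshold=0.8):
    """Keep confident RE extractions that are not redundant with gold relations.

    `re_extractions` is a list of (relation_sentence, confidence) pairs;
    relations are assumed already flattened into sentences.
    """
    kept = []
    for relation, confidence in re_extractions:
        if confidence < conf_threshold:
            continue  # drop low-confidence extractions
        redundant = any(
            entail(relation, gold) >= entail_threshold
            or entail(gold, relation) >= entail_threshold
            for gold in gold_relations)
        if not redundant:
            kept.append(relation)
    return kept

# toy stand-in for an NLI model: exact match only (illustration)
exact_entail = lambda premise, hypothesis: 1.0 if premise == hypothesis else 0.0

kept = filter_re_relations(
    ["Alboin took Milan."],
    [("Northern Italy's main city is also known as Milan.", 0.9),
     ("Alboin took Milan.", 0.95),
     ("Alboin was a king.", 0.5)],
    exact_entail)
print(kept)  # ["Northern Italy's main city is also known as Milan."]
```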

3.2 Benchmarks

In contrast to existing OpenIE training datasets, there are several OpenIE benchmarks which contain inferred relations because they were manually annotated or annotated via crowdsourcing. For evaluation, we use the WiRe57, CaRB, ReOIE2016, and LSOIE test sets. Each of these benchmarks contains a different proportion of inferred relations. In particular, the manual annotation of WiRe57 makes prior extractive OpenIE methods perform poorly compared to their performance on other OpenIE benchmarks. Unlike the other benchmarks, LSOIE contains no inferred relations at all, meaning that in theory extractive OpenIE methods should be able to extract all of its relations. Thus, we can use performance on LSOIE to directly compare abstractive and extractive OpenIE models on the extractive OpenIE task.

Statistics for the derived training sets and benchmarks are available in Table 2.

3.3 Abstractive Tuple Generator

Prior OpenIE models are not suited for the proposed task because all existing models are extractive models. As a result, we use generative models to generate relations for a given sentence. We choose to fine-tune T5, a text-to-text transformer model, to generate relations from a sentence (Raffel et al., 2020).

Inspired by Multi²OIE, we perform relation generation in two stages: a predicate stage and an argument stage (Ro et al., 2020). In the predicate stage, all predicates are extracted from the sentence at once. The input for this stage is the sentence, while the gold standard is the predicates of all gold standard relations separated by the special "[pred]" token. Although the order of relations in our output does not matter, we need to enforce a strict order for the model to learn. Thus, we order the predicates by their position within the sentence.

For the argument prediction stage, the model predicts the arguments of the relation for each predicate. Because multiple relations may have the same predicate, we specify the predicate by including all predicates before it in the sentence. For each relation, we assume there are two arguments, which the model extracts simultaneously. The input for this stage is the sentence with the predicate concatenated to the end, separated by a "[pred]" special token, while the gold standard is the arguments for the gold relation corresponding to that predicate, as shown in Table 5.

3.4 Semantic-based Evaluation Metrics

CaRB is a popular metric for evaluating OpenIE models, but it requires the predicates of the prediction and gold standard to match to score a given prediction. Although it serves as a good proxy for a semantic metric in extractive OpenIE, it is significantly less useful for abstractive OpenIE where the space of all possible predicates is much larger than just the tokens in the sentence.

To evaluate abstractive OpenIE, we require a semantics-based metric rather than a lexical metric based on token matching. Although previous semantics-based evaluation metrics like BERTScore exist, we do not find them appropriate for our use case: they do not work well for cases where a single token can dramatically change the semantics of a statement, such as negations like "not" (Saadany and Orasan, 2021). Thus, we introduce a set of three evaluation metrics based on entailment for more accurate semantic evaluation. Each of these metrics measures semantic coherence at a different granularity, and which granularity is most important will depend on the application and the properties of the datasets. We demonstrate this necessity with an example in Table 6.

When calculating the entailment score for a relation, we remove special characters so that it resembles a sentence. For instance, for the relation triple {Sharon; had been; in a coma}, we form the statement "Sharon had been in a coma."
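This flattening step is trivial but worth pinning down, since every metric below depends on it. A minimal version (the function name is ours) joins the tuple parts with spaces:

```python
def tuple_to_statement(relation):
    """Flatten a (subject, predicate, object) tuple into a sentence for NLI."""
    return " ".join(part.strip() for part in relation) + "."

print(tuple_to_statement(("Sharon", "had been", "in a coma")))
# Sharon had been in a coma.
```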

Sentence-tuple entailment The first metric we propose is sentence-tuple entailment. For recall, we combine all the relations together and see if the combined relations entail the sentence. If the combined relations do not entail the sentence, that means the sentence contains information not in any relation and thus the extracted relations as a whole have poor recall. For precision, we take the average of the entailment score obtained when seeing if the sentence entails an individual relation for all extracted relations. If the relation is not entailed, that means it contains information not in the sentence and thus has poor precision.
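A sketch of sentence-tuple entailment, with `entail` again standing in for an NLI model (the toy containment-based stand-in is illustrative only) and relations already flattened into statements:

```python
def sentence_tuple_scores(sentence, relations, entail):
    """Sentence-tuple entailment scores for one sentence's extractions."""
    # recall: do the relations, taken together, entail the sentence?
    recall = entail(" ".join(relations), sentence)
    # precision: on average, does the sentence entail each relation?
    precision = sum(entail(sentence, r) for r in relations) / len(relations)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# toy stand-in for an NLI model: hypothesis tokens contained in premise
toy_entail = lambda prem, hyp: float(set(hyp.split()) <= set(prem.split()))

print(sentence_tuple_scores(
    "Tokyo is the capital of Japan",
    ["Tokyo is the capital of Japan"],
    toy_entail))  # (1.0, 1.0, 1.0)
```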

Combined tuple-tuple entailment The second metric we propose is combined tuple-tuple entailment. This metric is inspired by a metric proposed by Dusek and Kasner (2020). For this metric, we use the gold standard relations to evaluate the extracted tuples. The combined tuple in this case refers to the combination of all gold standard relations. For recall, we combine all the predicted relations together and see if the combined predictions entail the combined gold relations. If the combined predictions do not entail the combined gold, that means the gold relations contain information not in any prediction and thus the extracted relations as a whole have poor recall. For precision, we take the average of the entailment score obtained when seeing if the combined gold entails an individual relation for all extracted relations. If a prediction is not entailed, that means it contains information not in any gold relation and thus has poor precision. Compared to the sentence-tuple entailment metric, this one excludes from evaluation any extraneous information in the sentence that is not in the gold standard relations.

Tuple-tuple entailment The third metric we propose is tuple-tuple entailment. This metric is based on the OpenIE metric CaRB (Bhardwaj et al., 2019). For recall, for each gold standard relation we calculate the entailment score of each extracted relation, with the gold standard as the premise. Each gold standard relation's recall is then the highest entailment score achieved by any of the predictions, and the recall for the sentence is the average of the recall of its relations. Note that the highest recall for multiple gold standard relations can be achieved by the same predicted relation if that predicted relation contains all of those gold standard relations. For precision, for each extracted relation we calculate the entailment score of each gold standard relation, with the prediction as the premise. We then find the one-to-one matching of extracted relations to gold standard relations that results in the highest average precision. Unlike recall, when calculating precision a predicted relation can only entail a single gold standard relation, because we want the number of predictions to match the number of gold relations.
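A sketch of tuple-tuple entailment. The one-to-one matching for precision is found by brute force here, which is fine for the handful of tuples per sentence (the Hungarian algorithm would scale better); `entail` again stands in for an NLI model, and the exact-match stand-in is illustrative only:

```python
from itertools import permutations

def tuple_tuple_scores(gold, preds, entail):
    """Tuple-tuple entailment scores in the style of CaRB."""
    # recall: each gold tuple takes its best score over all predictions,
    # so one prediction may cover several gold tuples
    recall = sum(max(entail(g, p) for p in preds) for g in gold) / len(gold)
    # precision: best one-to-one matching of predictions to gold tuples
    best = 0.0
    if len(preds) <= len(gold):
        for perm in permutations(range(len(gold)), len(preds)):
            best = max(best, sum(entail(preds[i], gold[j])
                                 for i, j in enumerate(perm)))
    else:
        for perm in permutations(range(len(preds)), len(gold)):
            best = max(best, sum(entail(preds[i], gold[j])
                                 for j, i in enumerate(perm)))
    precision = best / len(preds)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

exact_entail = lambda p, h: 1.0 if p == h else 0.0
print(tuple_tuple_scores(["Tokyo is a prefecture.", "Tokyo is the capital."],
                         ["Tokyo is a prefecture.", "Tokyo is big."],
                         exact_entail))  # (0.5, 0.5, 0.5)
```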

4 Experimental Setup

Datasets and Metrics

We evaluate the trained abstractive OpenIE model on four benchmarks: WiRe57, CaRB, ReOIE2016, and LSOIE-wiki, listed in order of decreasing proportion of inferred relations.

Sentence: Rival political factions were unable to resolve disagreements.
Gold Standard: {Rival political factions unable to; resolve; disagreements}
Prediction: {Rival political factions to; resolve; disagreements}

| CaRB F1 | ROUGE-1 | BERTScore F1 | Tuple-Tuple Entailment F1 |
|---|---|---|---|
| 0.923 | 0.923 | 0.976 | 0.005 |

Table 6: Comparison of different evaluation metrics on an example from the training set. CaRB is a popular lexical metric used to evaluate OpenIE (Bhardwaj et al., 2019). ROUGE-1 is a popular lexical metric to evaluate summarization (Lin, 2004). BERTScore is a previous semantics-based metric used to evaluate summarization (Zhang et al., 2019). Tuple-Tuple Entailment is a new semantics-based metric we propose.


Figure 1: Comparison of Sentence-Tuple Entailment F1 Score of different OpenIE models on all relations in the benchmarks. All models are trained on OIE4.

Since OpenIE models trained on OIE4 showed superior F1 performance on all these benchmarks compared to models trained on other OpenIE training sets, we derive abstractive training data from this dataset. We generate four different versions of OIE4 using the methods we describe in Section 3.1. The first version is the original extractive dataset, the second version uses back translation for paraphrasing, the third version is augmented by relation extraction, and the fourth uses both back translation and relation extraction for augmentation. For back translation, we use Facebook-FAIR's WMT'19 German-English and English-German models (Ng et al., 2019) and retain only those back translated sentences whose entailment confidence is above 80%. For relation extraction, we use a pretrained SuRE model, a state-of-the-art relation extraction model (Lu et al., 2022), without any additional fine-tuning, and keep all relations with confidence above 80%. These confidence thresholds are hyperparameters that may be adjusted.

We compare performance using the preexisting CaRB metric, as well as our own semantics-based metrics of tuple-tuple entailment, combined tuple-tuple entailment, and sentence-tuple entailment. The entailment model we use for our datasets and evaluation metrics is a BERT-based encoder model trained on the MNLI, SNLI, and HANS datasets (Gao et al., 2021).

Models and Hyperparameters

We fine-tune the T5-base model for our experiments, training for 5 epochs with an initial learning rate of 2e-5 and a batch size of 12. We validate T5 on a subset of the OIE4 training set using the tuple-tuple entailment metric. We also compare our model with Multi²OIE, a state-of-the-art neural extractive OpenIE model (Ro et al., 2020). We train Multi²OIE on the original OIE4 dataset with no paraphrasing, using its default hyperparameters.

5 Results and Analysis

For this section, we focus on the sentence-tuple semantic score because it offers a holistic comparison of the extracted relations and the sentence and does not rely upon potentially incomplete or faulty gold relations. Full tables with our empirical results, including other metrics, can be found in Appendix A.

Figure 2: Comparison of Sentence-Tuple Entailment Recall of different combinations of OpenIE models on all relations in the benchmarks. All models are trained on OIE4.

We first compare performance on all relations in Figure 1. In general, abstractive OpenIE leads to better performance the higher the proportion of inferred relations in the test set. This is expected because Multi²OIE cannot extract inferred relations at all. When considering the full benchmarks, of the data augmentation methods we use, SuRE augmentation works the best. Training on back translated OIE4 degrades performance compared to the base extractive OIE4 data. This may be because back translation reduces the amount of training data. Additionally, back translation often just replaces the gold standard predicate with a synonym instead of changing the syntax of the sentence, which does not help in the extraction of inferred relations.

To demonstrate the complementary nature of abstractive OpenIE to extractive OpenIE, we combine their extractions. When combining extractions, we remove redundant relations by removing relations that are entailed by any other relation; if two relations entail each other, we keep the longer one. A comparison of combined models can be found in Figure 2. When combining model predictions, we observe that back translation actually helps more than SuRE augmentation. This suggests that SuRE augmentation helps extractive OpenIE relations, while back translation is more useful for increasing recall on inferred relations that could not be extracted by Multi²OIE. The more inferred relations in the benchmark, the more beneficial merging extractions becomes.
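The merging rule can be sketched as follows. The tie-break for equal-length mutually entailing relations is an assumption not specified in the paper, and `entail` again stands in for an NLI model:

```python
def merge_extractions(extractive, abstractive, entail, threshold=0.8):
    """Union two models' extractions, dropping relations entailed by another;
    on mutual entailment, keep the longer relation."""
    pool = list(extractive) + list(abstractive)
    kept = []
    for i, rel in enumerate(pool):
        redundant = False
        for j, other in enumerate(pool):
            if i == j:
                continue
            if entail(other, rel) >= threshold:   # `rel` follows from `other`
                mutual = entail(rel, other) >= threshold
                # drop `rel` unless it is the longer of a mutually entailing
                # pair (index tie-break for exact duplicates is an assumption)
                if (not mutual or len(other) > len(rel)
                        or (len(other) == len(rel) and j < i)):
                    redundant = True
                    break
        if not redundant:
            kept.append(rel)
    return kept

exact_entail = lambda p, h: 1.0 if p == h else 0.0
print(merge_extractions(
    ["Tokyo is the capital city of Japan."],
    ["Tokyo is the capital city of Japan.", "Tokyo is a prefecture."],
    exact_entail))
```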

| OpenIE Model | Documents | MRR | P@1 | Hit@5 |
|---|---|---|---|---|
| Multi²OIE | Top 10 | 0.193 | 0.127 | 0.267 |
| Abstractive OIE4 | Top 10 | 0.154 | 0.080 | 0.240 |
| Abstractive Back Translated OIE4 | Top 10 | 0.167 | 0.100 | 0.227 |
| Abstractive SuRE Augmented OIE4 | Top 10 | 0.157 | 0.093 | 0.220 |
| Abstractive SuRE Augmented Back Translated OIE4 | Top 10 | 0.181 | 0.093 | 0.287 |

Table 7: Performance of QUEST on the CQ-W dataset using the Top 10 Google documents (Lu et al., 2019).

We also evaluate our abstractive OpenIE models on only the inferred relations within each benchmark. To do this, we remove non-inferred relations from the gold standards. We can only measure the resulting recall of the models because the models are trained to generate both inferred and non-inferred relations, and the metrics we use penalize precision when there are too many predicted relations for a given sentence, which would be the case for any sentence that had non-inferred relations. Figure 3 shows the results of these experiments. As before, the more inferred relations in the benchmark, the better suited an abstractive OpenIE model is for the task.

Upon a manual examination of the generated relations of each model, we observe that fine-tuning T5 on SuRE-augmented data results in generated relations that replace some predicates with predicates from SuRE. Table 8 shows one example of the model generating a predicate that does not exist within the sentence but is a common predicate among the SuRE-augmented relations.


Figure 3: Comparison of Sentence-Tuple Entailment Recall of different combinations of OpenIE models on only the inferred relations in the benchmarks. All models are trained on OIE4.

| Sentence | Formerly known as Edo, it has been the de facto seat of government since 1603 when Shogun Tokugawa Ieyasu made the city his headquarters. |
|---|---|
| T5 Fine-Tuned on OIE4 | (it; has been; the de facto seat of government since 1603) |
| T5 Fine-Tuned on SuRE-Augmented OIE4 | (it; is also known as; the de facto seat of government since 1603) |

Table 8: A demonstration that T5 fine-tuned on OIE4 augmented with SuRE extractions generates predicates from the SuRE extractions rather than the sentence. This sentence is from the WiRE57 test set.

5 Case Study

To further test the applicability of abstractive OpenIE, we evaluate its performance within QUEST, a downstream Complex QA system that uses OpenIE in its pipeline (Lu et al., 2019). QUEST specifically benefits from higher recall from its OpenIE component, which can be achieved by extracting inferred relations. We show the results in Table 7. Augmenting the training data improves downstream performance, indicating that including more inferred relations in the training data is helpful for this task.

6 Conclusion

In this paper, we introduce abstractive OpenIE, an alternative to what we term extractive OpenIE, the paradigm that all current OpenIE models follow, in order to address the problems of inferred relations and surface-form extraction. We find that existing OpenIE datasets and metrics are ill-suited for this task; as a result, we introduce an abstractive training set, model, and metrics. We then compare our models, trained on different abstractive training sets, against the state-of-the-art extractive OpenIE model on preexisting OpenIE benchmarks. Overall, we find that our models achieve higher performance on inferred relations, which extractive OpenIE models have previously struggled with. We believe abstractive OpenIE has potential as a task that will greatly benefit downstream applications that use OpenIE in their pipelines.

7 Limitations

In this work, we used the relatively small T5-base model; a model with more parameters may have led to improved performance. Further, the corpora we chose are limited to English, so our results may not generalize to downstream tasks that rely on other languages.

Ethics Statement

We did not create any of the models, datasets, or applications covered in this paper. Any ethical issues present in the preexisting OpenIE datasets we use therefore carry over to this work.

Acknowledgements

This material is based upon work supported by the National Science Foundation IIS 16-19302 and IIS 16-33755, Zhejiang University ZJU Research 083650, Futurewei Technologies HF2017060011 and 094013, IBM-Illinois Center for Cognitive Computing Systems Research (C3SR) and IBM-Illinois Discovery Accelerator Institute (IIDAI), grants from eBay and Microsoft Azure, UIUC OVCR CCIL Planning Grant 434S34, UIUC CSBS Small Grant 434C8U, and UIUC New Frontiers Initiative. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the funding agencies.

References

Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344-354.
Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II, 2:1-15.
Sangnie Bhardwaj, Samarth Aggarwal, and Mausam. 2019. CaRB: A crowdsourced benchmark for Open IE. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6262-6267.
Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. arXiv preprint arXiv:1805.04270.
Ondrej Dusek and Zdenek Kasner. 2020. Evaluating semantic accuracy of data-to-text generation with natural language inference. arXiv preprint arXiv:2011.10819.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381.
Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. Communications of the ACM, 51(12):68-74.
Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1608-1618.
Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1156-1165.
Yang Gao, Nicolo Colombo, and Wei Wang. 2021. Adapting by pruning: A case study on BERT. arXiv preprint arXiv:2105.03343.
Jiabao Han and Hongzhi Wang. 2021. Generative adversarial networks for open information extraction. Advances in Computational Intelligence, 1(4):1-11.
Martin Josifoski, Nicola De Cao, Maxime Peyrard, and Robert West. 2021. GenIE: Generative information extraction. arXiv preprint arXiv:2112.08340.

Keshav Kolluru, Vaibhav Adlakha, Samarth Aggarwal, Soumen Chakrabarti, et al. 2020a. OpenIE6: Iterative grid labeling and coordination analysis for open information extraction. arXiv preprint arXiv:2010.03147.
Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Soumen Chakrabarti, et al. 2020b. IMoJIE: Iterative memory-based joint open information extraction. arXiv preprint arXiv:2005.08178.
Keshav Kolluru, Muqeeth Mohammed, Shubham Mittal, Soumen Chakrabarti, et al. 2022. Alignment-augmented consistent translation for multilingual open information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2502-2517.
Hermann Kroll, Jan Pirklbauer, and Wolf-Tilo Balke. 2021. A toolbox for the nearly-unsupervised construction of digital library knowledge graphs. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries.
William Léchelle, Fabrizio Gotti, and Philippe Langlais. 2018. WiRe57: A fine-grained benchmark for open information extraction. arXiv preprint arXiv:1809.08962.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
Keming Lu, I Hsu, Wenxuan Zhou, Mingyu Derek Ma, Muhao Chen, et al. 2022. Summarization as indirect supervision for relation extraction. arXiv preprint arXiv:2205.09837.
Xiaolu Lu, Soumajit Pramanik, Rishiraj Saha Roy, Abdalghani Abujabal, Yafang Wang, and Gerhard Weikum. 2019. Answering complex questions by joining multi-document evidence with quasi knowledge graphs. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 105-114.
Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523-534, Jeju Island, Korea. Association for Computational Linguistics.
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. arXiv preprint arXiv:1907.06616.
Kevin Pei, Ishan Jindal, Kevin Chen-Chuan Chang, Chengxiang Zhai, and Yunyao Li. 2022. When to use what: An in-depth comparative empirical analysis of openie systems for downstream applications. arXiv preprint arXiv:2211.08228.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551.

Youngbin Ro, Yukyung Lee, and Pilsung Kang. 2020. Multi²OIE: Multilingual open information extraction based on multi-head attention with BERT. arXiv preprint arXiv:2009.08128.

Hadeel Saadany and Constantin Orasan. 2021. BLEU, METEOR, BERTScore: Evaluation of metrics performance in assessing critical translation errors in sentiment-oriented text. arXiv preprint arXiv:2109.14250.

Jacob Solawetz and Stefan Larson. 2021. LSOIE: A large-scale dataset for supervised open information extraction. arXiv preprint arXiv:2101.11177.

Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2300-2305.

Michael Vasilkovsky, Anton Alekseev, Valentin Malykh, Ilya Shenbin, Elena Tutubalina, Dmitriy Salikhov, Mikhail Stepnov, Andrey Chertok, and Sergey Nikolko. 2022. DetIE: Multilingual open information extraction inspired by object detection. In Proceedings of the 36th AAAI Conference on Artificial Intelligence.

Pengcheng Yin, Nan Duan, Ben Kao, Junwei Bao, and Ming Zhou. 2015. Answering questions with complex semantic constraints on open knowledge bases. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1301-1310.

Junlang Zhan and Hai Zhao. 2020. Span model for open information extraction on accurate corpus. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9523-9530.

Mengli Zhang, Gang Zhou, Wanting Yu, and Wenfen Liu. 2021. Far-ass: Fact-aware reinforced abstractive sentence summarization. Information Processing & Management, 58(3):102478.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

A Empirical Results

We present our empirical results in Tables 9, 10, and 11.

P/R/F1 are reported under each metric: CaRB score, Sentence-Tuple Entailment (STE), Combined Tuple-Tuple Entailment (CTTE), and Tuple-Tuple Entailment (TTE).

| Model | Training Set | Benchmark | CaRB P | CaRB R | CaRB F1 | STE P | STE R | STE F1 | CTTE P | CTTE R | CTTE F1 | TTE P | TTE R | TTE F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Multi²OIE | OIE4 | LSOIE-wiki | 0.396 | 0.318 | 0.353 | 0.953 | 0.381 | 0.545 | 0.595 | 0.488 | 0.536 | 0.591 | 0.467 | 0.522 |
| Abstractive T5 | OIE4 | LSOIE-wiki | 0.496 | 0.369 | 0.423 | 0.964 | 0.432 | 0.596 | 0.614 | 0.525 | 0.566 | 0.608 | 0.499 | 0.548 |
| Abstractive T5 | OIE4 Back Translated | LSOIE-wiki | 0.500 | 0.483 | 0.491 | 0.961 | 0.439 | 0.603 | 0.627 | 0.546 | 0.584 | 0.640 | 0.510 | 0.568 |
| Abstractive T5 | OIE4 with SuRE Relations | LSOIE-wiki | 0.518 | 0.490 | 0.504 | 0.963 | 0.436 | 0.601 | 0.632 | 0.565 | 0.597 | 0.645 | 0.511 | 0.570 |
| Abstractive T5 | OIE4 Back Translated with SuRE Relations | LSOIE-wiki | 0.538 | 0.527 | 0.532 | 0.974 | 0.571 | 0.720 | 0.645 | 0.670 | 0.657 | 0.660 | 0.611 | 0.634 |
| Multi²OIE | OIE4 | ReOIE2016 | 0.565 | 0.373 | 0.449 | 0.939 | 0.351 | 0.511 | 0.835 | 0.504 | 0.629 | 0.763 | 0.477 | 0.587 |
| Abstractive T5 | OIE4 | ReOIE2016 | 0.733 | 0.449 | 0.557 | 0.953 | 0.425 | 0.588 | 0.861 | 0.580 | 0.693 | 0.779 | 0.543 | 0.640 |
| Abstractive T5 | OIE4 Back Translated | ReOIE2016 | 0.706 | 0.565 | 0.628 | 0.948 | 0.418 | 0.580 | 0.855 | 0.582 | 0.693 | 0.806 | 0.531 | 0.640 |
| Abstractive T5 | OIE4 with SuRE Relations | ReOIE2016 | 0.757 | 0.572 | 0.652 | 0.953 | 0.424 | 0.587 | 0.871 | 0.602 | 0.712 | 0.814 | 0.531 | 0.643 |
| Abstractive T5 | OIE4 Back Translated with SuRE Relations | ReOIE2016 | 0.813 | 0.647 | 0.720 | 0.976 | 0.574 | 0.723 | 0.894 | 0.736 | 0.808 | 0.823 | 0.684 | 0.747 |
| Multi²OIE | OIE4 | CaRB | 0.525 | 0.309 | 0.389 | 0.935 | 0.357 | 0.517 | 0.856 | 0.538 | 0.661 | 0.682 | 0.487 | 0.568 |
| Abstractive T5 | OIE4 | CaRB | 0.619 | 0.336 | 0.436 | 0.949 | 0.431 | 0.593 | 0.882 | 0.592 | 0.709 | 0.694 | 0.526 | 0.599 |
| Abstractive T5 | OIE4 Back Translated | CaRB | 0.592 | 0.394 | 0.473 | 0.945 | 0.422 | 0.583 | 0.843 | 0.578 | 0.686 | 0.682 | 0.491 | 0.571 |
| Abstractive T5 | OIE4 with SuRE Relations | CaRB | 0.619 | 0.389 | 0.478 | 0.951 | 0.428 | 0.591 | 0.862 | 0.584 | 0.697 | 0.701 | 0.495 | 0.580 |
| Abstractive T5 | OIE4 Back Translated with SuRE Relations | CaRB | 0.647 | 0.442 | 0.525 | 0.975 | 0.572 | 0.721 | 0.884 | 0.707 | 0.786 | 0.702 | 0.619 | 0.658 |
| Multi²OIE | OIE4 | WiRe57 | 0.450 | 0.343 | 0.389 | 0.960 | 0.362 | 0.526 | 0.668 | 0.572 | 0.617 | 0.378 | 0.574 | 0.456 |
| Abstractive T5 | OIE4 | WiRe57 | 0.519 | 0.357 | 0.423 | 0.988 | 0.355 | 0.523 | 0.665 | 0.613 | 0.638 | 0.361 | 0.586 | 0.447 |
| Abstractive T5 | OIE4 Back Translated | WiRe57 | 0.502 | 0.399 | 0.445 | 0.946 | 0.475 | 0.632 | 0.642 | 0.675 | 0.658 | 0.290 | 0.661 | 0.403 |
| Abstractive T5 | OIE4 with SuRE Relations | WiRe57 | 0.506 | 0.391 | 0.441 | 0.981 | 0.469 | 0.635 | 0.633 | 0.670 | 0.651 | 0.284 | 0.678 | 0.401 |
| Abstractive T5 | OIE4 Back Translated with SuRE Relations | WiRe57 | 0.537 | 0.370 | 0.439 | 0.990 | 0.371 | 0.539 | 0.665 | 0.611 | 0.637 | 0.377 | 0.556 | 0.449 |

Table 9: Empirical results of different models on different benchmarks. Differences in the number of inferred relations in each benchmark influence the relative performance of each model. The benchmarks are listed from lowest to highest proportion of relations with inferred predicates.

P/R/F1 are reported under each metric: CaRB score, Sentence-Tuple Entailment (STE), Combined Tuple-Tuple Entailment (CTTE), and Tuple-Tuple Entailment (TTE).

| Model | Training Set | Benchmark | CaRB P | CaRB R | CaRB F1 | STE P | STE R | STE F1 | CTTE P | CTTE R | CTTE F1 | TTE P | TTE R | TTE F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Multi²OIE | OIE4 | ReOIE2016 | 0.813 | 0.647 | 0.720 | 0.976 | 0.574 | 0.723 | 0.894 | 0.736 | 0.808 | 0.823 | 0.684 | 0.747 |
| Multi²OIE + Abstractive T5 | OIE4 | ReOIE2016 | 0.601 | 0.561 | 0.581 | 0.972 | 0.563 | 0.713 | 0.869 | 0.737 | 0.798 | 0.791 | 0.675 | 0.729 |
| Multi²OIE + Abstractive T5 | OIE4 Back Translated | ReOIE2016 | 0.640 | 0.554 | 0.594 | 0.964 | 0.678 | 0.796 | 0.864 | 0.789 | 0.825 | 0.757 | 0.722 | 0.739 |
| Multi²OIE + Abstractive T5 | OIE4 with SuRE Relations | ReOIE2016 | 0.584 | 0.569 | 0.577 | 0.962 | 0.591 | 0.732 | 0.860 | 0.756 | 0.805 | 0.788 | 0.689 | 0.735 |
| Multi²OIE + Abstractive T5 | OIE4 Back Translated with SuRE Relations | ReOIE2016 | 0.599 | 0.542 | 0.569 | 0.965 | 0.683 | 0.800 | 0.846 | 0.768 | 0.805 | 0.764 | 0.701 | 0.731 |
| Multi²OIE | OIE4 | CaRB | 0.647 | 0.442 | 0.525 | 0.975 | 0.572 | 0.721 | 0.884 | 0.707 | 0.786 | 0.702 | 0.619 | 0.658 |
| Multi²OIE + Abstractive T5 | OIE4 | CaRB | 0.525 | 0.404 | 0.457 | 0.970 | 0.568 | 0.717 | 0.859 | 0.699 | 0.770 | 0.678 | 0.621 | 0.648 |
| Multi²OIE + Abstractive T5 | OIE4 Back Translated | CaRB | 0.544 | 0.402 | 0.463 | 0.958 | 0.684 | 0.798 | 0.874 | 0.767 | 0.817 | 0.651 | 0.670 | 0.661 |
| Multi²OIE + Abstractive T5 | OIE4 with SuRE Relations | CaRB | 0.518 | 0.413 | 0.460 | 0.961 | 0.595 | 0.735 | 0.851 | 0.719 | 0.780 | 0.670 | 0.630 | 0.649 |
| Multi²OIE + Abstractive T5 | OIE4 Back Translated with SuRE Relations | CaRB | 0.531 | 0.409 | 0.462 | 0.957 | 0.690 | 0.802 | 0.859 | 0.757 | 0.805 | 0.634 | 0.663 | 0.648 |
| Multi²OIE | OIE4 | WiRe57 | 0.537 | 0.370 | 0.439 | 0.990 | 0.371 | 0.539 | 0.665 | 0.611 | 0.637 | 0.377 | 0.556 | 0.449 |
| Multi²OIE + Abstractive T5 | OIE4 | WiRe57 | 0.481 | 0.376 | 0.422 | 0.992 | 0.621 | 0.764 | 0.625 | 0.755 | 0.684 | 0.264 | 0.732 | 0.388 |
| Multi²OIE + Abstractive T5 | OIE4 Back Translated | WiRe57 | 0.476 | 0.390 | 0.429 | 0.988 | 0.564 | 0.718 | 0.639 | 0.730 | 0.681 | 0.312 | 0.691 | 0.430 |
| Multi²OIE + Abstractive T5 | OIE4 with SuRE Relations | WiRe57 | 0.482 | 0.390 | 0.431 | 0.958 | 0.600 | 0.738 | 0.651 | 0.728 | 0.688 | 0.283 | 0.722 | 0.407 |
| Multi²OIE + Abstractive T5 | OIE4 Back Translated with SuRE Relations | WiRe57 | 0.458 | 0.398 | 0.426 | 0.948 | 0.656 | 0.775 | 0.633 | 0.727 | 0.677 | 0.294 | 0.723 | 0.418 |

Table 10: Empirical results where the relations extracted by Multi²OIE and abstractive OpenIE are combined. Redundant relations, i.e., relations entailed by at least one other relation from the same sentence, are removed after combining the extractions; if two relations entail each other, the shorter one is removed.

Recall (R) is reported under each metric: CaRB score, Sentence-Tuple Entailment (STE), Combined Tuple-Tuple Entailment (CTTE), and Tuple-Tuple Entailment (TTE).

| Model | Training Set | Benchmark | CaRB R | STE R | CTTE R | TTE R |
|---|---|---|---|---|---|---|
| Multi²OIE | OIE4 | ReOIE2016 (Inferred Predicates or Args) | 0.231 | 0.304 | 0.452 | 0.411 |
| Abstractive T5 | OIE4 | ReOIE2016 (Inferred Predicates or Args) | 0.231 | 0.219 | 0.394 | 0.343 |
| Abstractive T5 | OIE4 Back Translated | ReOIE2016 (Inferred Predicates or Args) | 0.152 | 0.236 | 0.424 | 0.380 |
| Abstractive T5 | OIE4 with SuRE Relations | ReOIE2016 (Inferred Predicates or Args) | 0.223 | 0.236 | 0.386 | 0.408 |
| Abstractive T5 | OIE4 Back Translated with SuRE Relations | ReOIE2016 (Inferred Predicates or Args) | 0.087 | 0.189 | 0.481 | 0.475 |
| Multi²OIE | OIE4 | CaRB (Inferred Predicates or Args) | 0.116 | 0.520 | 0.641 | 0.605 |
| Abstractive T5 | OIE4 | CaRB (Inferred Predicates or Args) | 0.109 | 0.362 | 0.522 | 0.482 |
| Abstractive T5 | OIE4 Back Translated | CaRB (Inferred Predicates or Args) | 0.082 | 0.346 | 0.509 | 0.495 |
| Abstractive T5 | OIE4 with SuRE Relations | CaRB (Inferred Predicates or Args) | 0.128 | 0.360 | 0.511 | 0.490 |
| Abstractive T5 | OIE4 Back Translated with SuRE Relations | CaRB (Inferred Predicates or Args) | 0.099 | 0.289 | 0.532 | 0.498 |
| Multi²OIE | OIE4 | WiRe57 (Inferred Predicates or Args) | 0.051 | 0.346 | 0.546 | 0.536 |
| Abstractive T5 | OIE4 | WiRe57 (Inferred Predicates or Args) | 0.059 | 0.427 | 0.544 | 0.599 |
| Abstractive T5 | OIE4 Back Translated | WiRe57 (Inferred Predicates or Args) | 0.057 | 0.325 | 0.501 | 0.523 |
| Abstractive T5 | OIE4 with SuRE Relations | WiRe57 (Inferred Predicates or Args) | 0.067 | 0.433 | 0.574 | 0.605 |
| Abstractive T5 | OIE4 Back Translated with SuRE Relations | WiRe57 (Inferred Predicates or Args) | 0.043 | 0.341 | 0.468 | 0.509 |

Table 11: Empirical results where the gold standard consists only of relations with inferred predicates or arguments. We only measure recall in this case because relations are extracted per sentence, so relations without inferred predicates are also extracted, which would lower the precision.