{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:49:01.997249Z"
},
"title": "NSURL-2021 Task 1: Semantic Relation Extraction in Persian",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Taghizadeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tehran Tehran",
"location": {
"country": "Iran"
}
},
"email": "nsr.taghizadeh@ut.ac.ir"
},
{
"first": "Ali",
"middle": [],
"last": "Ebrahimi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tehran Tehran",
"location": {
"country": "Iran"
}
},
"email": "ali96ebrahimi@ut.ac.ir"
},
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tehran Tehran",
"location": {
"country": "Iran"
}
},
"email": "hfaili@ut.ac.ir"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic Relation Extraction aims to identify whether a semantic relation of pre-defined types is held between two entities in a text. Relation extraction is a preliminary task in many applications such as knowledge base construction and information retrieval. To investigate the challenges and opportunities of relation extraction in Persian, we run a shared task as part of the second workshop on NLP Solutions for Under-Resourced Languages (NSURL 2021). This paper presents the approaches of the participating teams, their results, and the finding of the shared task. The data set prepared for this task is made publicly available 1 to support further researches on Persian relation extraction.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic Relation Extraction aims to identify whether a semantic relation of pre-defined types is held between two entities in a text. Relation extraction is a preliminary task in many applications such as knowledge base construction and information retrieval. To investigate the challenges and opportunities of relation extraction in Persian, we run a shared task as part of the second workshop on NLP Solutions for Under-Resourced Languages (NSURL 2021). This paper presents the approaches of the participating teams, their results, and the finding of the shared task. The data set prepared for this task is made publicly available 1 to support further researches on Persian relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The process of extracting structured information from unstructured text, known as information extraction, mostly consists of finding named entities (Taghizadeh et al., 2019) , linking entities together, and extracting relations between them. Relation Extraction (RE) is a key component for building knowledge graphs, and it is of crucial significance to NLP applications such as structured search, question answering, and summarization.",
"cite_spans": [
{
"start": 148,
"end": 173,
"text": "(Taghizadeh et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RE is a well-studied task in English (Geng et al., 2020) , Arabic (Taghizadeh et al., 2018) and Chinese (Li et al., 2019) , regarding data sets of ACE, SemEval, TACRED, etc. However, due to the lack of public annotated corpora, the task is not highly examined in low-resource languages. Therefore, NSURL-2021 shared task 1 focuses on the relation extraction in Persian. The goal of the task is to specify whether a relationship exists between two entities in a Persian sentence, given a pre-defined set of semantic relations. SemEval-2010 task 8 data set (Hendrickx et al., 2010) is de facto standard for RE. There is a machine-translated version of this data set in Persian, that was post-edited by humans, called PER-LEX (Asgari-Bidhendi et al., 2020) . PERLEX was used for training RE systems in Persian by running some of the state-of-art methods. Although this data set facilitates studying the task of RE in Persian, there is still a high need for an annotated data set developed from scratch, derived from Persian corpus, and reflects the common entities and new named entities appearing in Persian articles, news, social media, etc. Therefore, we prepared a data set of 1500 instances annotated with the semantic relations to be used as the test data of the shared task.",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "(Geng et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 66,
"end": 91,
"text": "(Taghizadeh et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 104,
"end": 121,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 555,
"end": 579,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 723,
"end": 753,
"text": "(Asgari-Bidhendi et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a brief description of the participating teams, their approaches, the results, and the finding of the shared task. All solutions are based on the pre-trained language models (Devlin et al., 2018; Farahani et al., 2020) , which are fined-tuned for RE. Proposed approaches differ in pre-processing steps, using syntactic features, and the architecture of deep models. The best F 1 score was obtained by an adaptation of an existing method, RIFRE (Zhao et al., 2021) on the Persian data set. Although, RIFRE obtained 91.3% of F 1 on SemEval 2010-task 8 data set, its score on the test set of PERLEX and test set of the shared task is 83.82% and 67.67%, respectively. Analysis of the results shows that new entities, misleading keywords, and complex grammatical structures are some reasons for the drop of the performance.",
"cite_spans": [
{
"start": 194,
"end": 215,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 216,
"end": 238,
"text": "Farahani et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 464,
"end": 483,
"text": "(Zhao et al., 2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: In Section 2, the definition of the shared task is presented. Section 3 contains an overview of the related works. Next, Section 4 describes the data set of the shared task. Section 5 includes the proposed solutions, their scores, and analytical results. Finally, Section 6 presents the conclusion remarks. (Hendrickx et al., 2010) .",
"cite_spans": [
{
"start": 355,
"end": 379,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cause-Effect(X, Y) X is the cause of Y, or that X causes/makes/produces/emits/... Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Instrument-Agency(X, Y) X is the instrument (tool) of Y or, equivalently, that Y uses X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Product-Producer(X, Y) X is a product of Y, or Y produces X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Content-Container(X, Y) X is or was (usually temporarily) stored or carried inside Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Entity-Origin(X, Y) Y is the origin of an entity X (rather than its location), and X is coming or derived from Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Entity-Destination(X, Y) Y is the destination of X in the sense of X moving (in a physical or abstract sense) toward Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Component-Whole(X,Y) X has a functional relation with Y and X has an operating or usable purpose within Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Member-Collection(X, Y) X is a member of Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Message-Topic(X, Y) X is a communicative message containing information about Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Definition",
"sec_num": null
},
{
"text": "Persian is among the low-resource languages which suffer from lack of annotated data and preprocessing tools. However, language-specific features of Persian motivates researchers to develop customized machine learning methods. Therefore, it is crucial to create annotated data sets for different NLP tasks in Persian. Given two entities in a text, the task is to predict the type of semantic relation between them, given a pre-defined set of relation types. Two entity mentions are tagged with e 1 and e 2 in the sentence. Each entity is a span over the sentence. Entities don't have a specific type and the numbering simply reflects the order of mentions in the sentence. The relation types of the shared task include 9 bidirectional relations defined in SemEval 2010-task 8, which are presented in Table 1 . We defined two sub-tasks:",
"cite_spans": [],
"ref_spans": [
{
"start": 800,
"end": 807,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "\u2022 Sub-Task A: Mono-Lingual Relation Extraction: In this subtask, the training data is in Persian. The aim is to use this data set for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "\u2022 Sub-Task B: Bi-Lingual English-Persian Relation Extraction: In this subtask, the training data is a parallel English-Persian data set. The aim is to employ the bi-lingual data to train the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The prominent approach for both sub-tasks is to formulate them as a classification problem, however, the learning methods such as distant supervision, and bootstrapping are also applicable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Relation extraction has been extensively studied and a broad range of semantic relations has been examined by different researchers. ACE released a series of data sets in which the relations within the family, organization, society, etc. are mostly considered (Walker et al., 2005) . SNPPhenA (Bokharaeian et al., 2017) considered the biological entities and relationships.",
"cite_spans": [
{
"start": 260,
"end": 281,
"text": "(Walker et al., 2005)",
"ref_id": "BIBREF21"
},
{
"start": 293,
"end": 319,
"text": "(Bokharaeian et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "Since the importance of the RE, several shared tasks were held in different languages. Recently, SemEval-2020 Task 6 (DeftEval) (Spala et al., 2020) considered the problem of definition extraction, in which three subtasks are defined, one of them is to extract relation between terms and definitions. SemEval-2018 task 7 (G\u00e1bor et al., 2018) focused on relation extraction and classification in scientific paper abstracts, to extract specialized knowledge from domain corpora. In contrast, SemEval-2018 task 10 (Krebs et al., 2018) examined the task of identifying semantic difference which is a ternary relation between two concepts (e.g. apple, banana) and a discriminative attribute (e.g. red) that characterizes the first concept but not the other. WNUT-2020 Task 1 considered extracting entities and relations from wet-lab protocols. Wet-lab protocols consist of the guidelines from different lab procedures which involve chemicals, drugs, or other materials in liquid solutions or volatile phases (Tabassum et al., 2020) .",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "(Spala et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 321,
"end": 341,
"text": "(G\u00e1bor et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 1003,
"end": 1026,
"text": "(Tabassum et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "There are a huge amount of researches on relation extraction. Recent methods are mainly based on the pre-trained language models such as BERT (Devlin et al., 2018) , which are used to make a representation of samples with the same relation to be close to the representation of the corresponding relation in an embedding space. Cohen et al. (2020) proposed to utilize span-predictions models as used in question-answering models, by creating some questions based on sentences, then trying to find relations based on answers to these questions.",
"cite_spans": [
{
"start": 142,
"end": 163,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "Graph neural networks have been employed to update sentence representation by message passing in the network to find a suitable relation for entities (Zhao et al., 2021 (Zhao et al., , 2019 . Peters et al. (2019) used a knowledge graph to enhance the representations of the words.",
"cite_spans": [
{
"start": 150,
"end": 168,
"text": "(Zhao et al., 2021",
"ref_id": "BIBREF23"
},
{
"start": 169,
"end": 189,
"text": "(Zhao et al., , 2019",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "Many researchers showed that the syntactic features of the sentence are highly informative for the task of RE. Veyseh et al. 2020utilized Ordered-Neuron Long-Short Term Memory Networks (ON-LSTM) to infer the model-based importance scores for RE for every word in the sentences that are then regulated to be consistent with the syntax-based scores to enable syntactic information injection. Tao et al. (2019) combined syntactic indicator and sequential context for relation prediction.",
"cite_spans": [
{
"start": 390,
"end": 407,
"text": "Tao et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "Since the lack of labeled data in many languages, multi-lingual and cross-lingual methods were proposed to benefit from the labeled data of highresource languages in low-source languages. In this regard, Generative Adversarial Network (GAN) is used to transfer feature representations from one language with rich annotated data to another language with few annotated data (Zou et al., 2018) . Taghizadeh et al. (2022) presented two deep CNN networks to employ syntactic features of the shortest dependency path between entities based on the Universal Dependencies.",
"cite_spans": [
{
"start": 372,
"end": 390,
"text": "(Zou et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 393,
"end": 417,
"text": "Taghizadeh et al. (2022)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "3"
},
{
"text": "In this section, the data sets used for the development and evaluation of Persian RE systems are described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotated Corpus",
"sec_num": "4"
},
{
"text": "The data set that used in the development phase is PERLEX, which is the translation of the SemEval-2010 task 8 data set. This data set has been already split into train and test with 8000 and 2717 samples, respectively. The test part can be used as the development set, or both parts can be combined and then divided randomly into the training and development sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Development Data",
"sec_num": "4.1"
},
{
"text": "We have developed a data set of 1500 sentences annotated with two entities and the relationship held between them. Regarding language models such as BERT (Devlin et al., 2018) , which improves the task of natural language understanding, some limitations of the old data sets like SemEval-2010 task 8 can be released in new data sets. Specifically, in the SemEval data set, entities are base Noun Phrases (NP) whose head is a common noun. We take into account 1) complex NPs (those NP with attached prepositional phrases), 2) nouns within verbal phrases, and 3) named entities in few instances, in addition to the base NPs. Moreover, in some instances, two entities are not in one sentence rather in two consecutive sentences. This data set also contains informal sentences. Table 3 shows some examples. Similar to the SemEval data set, we do not annotate examples whose interpretation relies on the discourse knowledge, and sentences with negation (e.g. no, not) whose scope contains the relation.",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 774,
"end": 781,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Test Data",
"sec_num": "4.2"
},
{
"text": "In the process of making the test set of the shared task, first, we collected a corpus of 50K sentences from the Virgool website. Virgool is a social network for sharing Persian articles 2 . This corpus was pre-processed, tokenized, and annotated by Part Of Speech (POS) tags. All nouns were considered as potential entities whose borders were revised later by human annotators. Next, we trained a state-of-the-art method using the PERLEX data set, to automatically annotate the relation held between every pair of entities in the sentences. At the next step, two human annotators corrected the automatic labels based on the annotation guideline of SemEval 2010-task 8. Since the semantic relations are language-independent, the English guideline is also useful for annotating Persian text. Finally, after several revisions of annotations, 1500 samples were selected. Table 2 shows the distribution of this data in different classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 868,
"end": 875,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Test Data",
"sec_num": "4.2"
},
{
"text": "The annotators faced some challenges during the annotation of semantic relations. One chal- Orange e1 and tomato are the sources of vitamin C e2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "4.2"
},
{
"text": "Considering the guideline of the shared task, Component-Whole shows the functional relationship between two entities, while Content-Container means that one entity is stored or carried inside another one. Therefore, Entity-Origin is the true label, which means that one entity is coming or derived from another one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "4.2"
},
{
"text": "In this section, we describe the participating teams, and then their results on the test data of the shared task. Finally, the analytical findings of the shared task are presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The shared task was managed using the CodaLab competition platform 3 for result submission. A total of 4 systems has been submitted for sub-task A and no system for sub-task B. In the following, we describe the methodologies used by them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Teams",
"sec_num": "5.1"
},
{
"text": "HooshYar This team presented two methods for Persian RE. In both methods, they utilized the pretrained language model of ParsBERT (Farahani et al., 2020) and fine-tuned it on the task of RE.",
"cite_spans": [
{
"start": 130,
"end": 153,
"text": "(Farahani et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Teams",
"sec_num": "5.1"
},
{
"text": "\u2022 In the first method, U-BERT, they attended to the class distribution of data and tried to 3 https://competitions.codalab.org/ competitions/31979 improve the accuracy of the model using oversampling of the instances of smaller classes. In addition, based on the fact that Other class contains many samples with diverse relations beyond the nine desired classes, they employed the Pairwise ranking loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Teams",
"sec_num": "5.1"
},
{
"text": "\u2022 In the second method, T-BERT, they focused on the syntactic features of the sentence. Many researchers used the shortest dependency path between two entities in the dependency tree of the sentence to recognize the relation held between them. Therefore, syntactic features inspire the use of a new embedding layer at the input of the BERT network. In this step, the vector for each word is reinforced with POS Tag and dependency tree tag. They used available tools in the Persian language to extract POS and dependency tree tags of the sentences. In the last layer of their network, they used the vector of average entity words in addition to the CLS token for classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Teams",
"sec_num": "5.1"
},
{
"text": "SBU-NLP This team performed some preprocessing steps on PERLEX. Since it is a semiautomatic translated data set, they removed those samples with more than one entity marker (<e1> and </e1>), or unclear translation. Moreover, they used data augmentation techniques and backtranslation methods to increase training data size. They inspired the R-BERT model (Wu and He, 2019) and examined several changes in the architectures of this network to improve model accuracy including 1) averaging both of the three final segments in the R-BERT rather than a concatenation of them, 2) concatenation of all of the tokens in the entities rather than average them, 3) using the last (first) token instead of average all of the to-kens in the entities, and 4) using the Multilingual BERT (mBERT) (Devlin et al., 2018) and Pars-BERT (Farahani et al., 2020) to reaching the best decision.",
"cite_spans": [
{
"start": 355,
"end": 372,
"text": "(Wu and He, 2019)",
"ref_id": "BIBREF22"
},
{
"start": 782,
"end": 803,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 818,
"end": 841,
"text": "(Farahani et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Teams",
"sec_num": "5.1"
},
{
"text": "Customizing the available methods One of the participating teams adapted the method proposed by We and He (2019), called R-BERT. They used ParsBERT (Farahani et al., 2020) , a pre-trained language model for Persian, and set the parameters of the model to the best-fit values on the PERLEX data set. Therefore, we refer to this method as R-BERT+ParsBERT. Table 4 shows a summary of results for the participating teams. We reported the F 1 score for every relation in addition to the macro-average F 1 considering the direction of the relations. The first part of Table 4 contains the evaluation results on the official test set of the shared task, where all data of PERLEX (10,717 samples) can be used for training the systems. The second part of Table 4 presents the F 1 scores of the same methods when trained with the training part of PERLEX (8000 samples) and evaluated by the test part of PERLEX (2717 samples).",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "(Farahani et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 562,
"end": 569,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 746,
"end": 753,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Participating Teams",
"sec_num": "5.1"
},
{
"text": "For better comparison, we also reported the result of the state-of-the-art method of Zhao et al. (2021) , named RIFRE. They used graph neural networks and modeled relations and words as nodes on the graph and fuse the two types of semantic nodes by the message passing mechanism iteratively to obtain nodes representation that is more suitable for the RE task. We used ParsBERT as the encoder layer of the network and fine-tuned it on PERLEX. This method obtained the top rank on the English data set of SemEval 2010-task 8.",
"cite_spans": [
{
"start": 85,
"end": 103,
"text": "Zhao et al. (2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "As Table 4 shows, the F 1 scores on shared task data are much lower than PERLEX test data for all methods. Among five methods, the state-of-theart methods of RIFRE+ParsBERT obtained the highest F 1 scores on both test data of the shared task, 67.67% F 1 , and PERLEX, 83.82% F 1 ; while this method obtained 91.3% score of F 1 on English equivalent data set (SemEval 2010-task 8).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Due to the several improvements over R-BERT+ParsBERT made by the method proposed by Moein Salimi (Salimi Sartakhti et al., 2021) , this method outperformed R-BERT+ParsBERT on PERLEX test data, however, it obtained a lower F 1 score on the test set of the shared task.",
"cite_spans": [
{
"start": 97,
"end": 128,
"text": "(Salimi Sartakhti et al., 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Although the state-of-the-art RE methods obtained more than 90% of F 1 score on SemEval 2010-task 8 data set (Cohen et al., 2020; Zhao et al., 2021) , their performances drop in Persian. We investigate the impact of new entities, misleading keywords, and complex grammatical structures.",
"cite_spans": [
{
"start": 109,
"end": 129,
"text": "(Cohen et al., 2020;",
"ref_id": "BIBREF2"
},
{
"start": 130,
"end": 148,
"text": "Zhao et al., 2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "New Entities Comparing the F 1 scores which are obtained on the test data of PERLEX with those reported on the test data of the shared task in Table 4 reveals that there is a drop in results. One reason is that the shared task test data contains the new entities that do not appear in PERLEX. Statistics show about 70% of entities are new. Moreover, the shared task test data contains some samples that flout the guidelines of SemEval 2010-task 8 regarding the locality of entities, nominal expression, etc., as depicted in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 525,
"end": 532,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "Misleading Keywords Have a deeper look at the performance of the models, several keywords specify each class. For example, Cause-Effect is usually specified by words such as \"cause/ caused by/ result/ generate/ triggered/ due/ effect\" (Taghizadeh and Faili, 2021) . There are similar keywords in Persian such as \"",
"cite_spans": [
{
"start": 235,
"end": 263,
"text": "(Taghizadeh and Faili, 2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "\u060c \u060c \u060c \u202b\u06cc\u202c \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "However, some sentences have these keywords but lack the corresponding relation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "e2 \u202b\u06cc\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c e1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": ". \u202b\u06a9\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c \u060c \u202b\u06cc\u202c The elderly e1 should avoid taking this drug due to its effect on bleeding e2 and lack of coordination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "The relation of this example is Other, not Cause-Effect. We intentionally gathered such examples in the test data of the shared task. Most models fail to recognize the true relation of these samples. Therefore these models mainly memorize the keywords surrounding the entities rather than understanding the semantic relations between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "On the other hand, some relation instances lack any keywords, such as the following example, where a Cause-Effect relation is held between entities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "\u202b\u06a9\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c \u202b\u06a9\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c . e2 e1 \u202b\u06a9\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c The only thing that can change the current situation and act as propulsion e1 , is trading e2 . Complex Syntactic Structures Many researchers used the shortest dependency path between entities to detect their relation type. However, when two entities are in separate sentences or complex structures, syntax-based methods usually fail to predict the correct relation, mainly due to the low accuracy of the dependency parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "In this paper, we described the Persian relation extraction shared task that was organized in NSURL-2021. We developed test data that is publicly available. This Persian corpus was developed from scratch, against PERLEX data set that is a semiautomatic translated data. This corpus facilitates further researches on Persian RE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/nasrin-taghizadeh/ NSURL-Persian-RelationExtraction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://virgool.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Behrooz Janfada, and Behrouz Minaei-Bidgoli. 2020. Perlex: A bilingual persian-english gold dataset for relation extraction",
"authors": [
{
"first": "Mehrdad",
"middle": [],
"last": "Majid Asgari-Bidhendi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nasser",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.06588"
]
},
"num": null,
"urls": [],
"raw_text": "Majid Asgari-Bidhendi, Mehrdad Nasser, Behrooz Jan- fada, and Behrouz Minaei-Bidgoli. 2020. Perlex: A bilingual persian-english gold dataset for relation extraction. arXiv preprint arXiv:2005.06588.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SNPPhenA: a corpus for extracting ranked associations of single-nucleotide polymorphisms and phenotypes from literature",
"authors": [
{
"first": "Behrouz",
"middle": [],
"last": "Bokharaeian",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Nasrin",
"middle": [],
"last": "Taghizadeh",
"suffix": ""
},
{
"first": "Hamidreza",
"middle": [],
"last": "Chitsaz",
"suffix": ""
},
{
"first": "Ramyar",
"middle": [],
"last": "Chavoshinejad",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of biomedical semantics",
"volume": "8",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Behrouz Bokharaeian, Alberto Diaz, Nasrin Taghizadeh, Hamidreza Chitsaz, and Ramyar Chavoshinejad. 2017. SNPPhenA: a corpus for extracting ranked associations of single-nucleotide polymorphisms and phenotypes from literature. Journal of biomedical semantics, 8(1):14.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Relation classification as two-way spanprediction",
"authors": [
{
"first": "Amir",
"middle": [
"DN"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Shachar",
"middle": [],
"last": "Rosenman",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.04829"
]
},
"num": null,
"urls": [],
"raw_text": "Amir DN Cohen, Shachar Rosenman, and Yoav Gold- berg. 2020. Relation classification as two-way span- prediction. arXiv preprint arXiv:2010.04829.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Parsbert: Transformer-based model for persian language understanding",
"authors": [
{
"first": "Mehrdad",
"middle": [],
"last": "Farahani",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Gharachorloo",
"suffix": ""
},
{
"first": "Marzieh",
"middle": [],
"last": "Farahani",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Manthouri",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.12515"
]
},
"num": null,
"urls": [],
"raw_text": "Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, and Mohammad Manthouri. 2020. Pars- bert: Transformer-based model for persian language understanding. arXiv preprint arXiv:2005.12515.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SemEval-2018 task 7: Semantic relation extraction and classification in scientific papers",
"authors": [
{
"first": "Kata",
"middle": [],
"last": "G\u00e1bor",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Buscaldi",
"suffix": ""
},
{
"first": "Anne-Kathrin",
"middle": [],
"last": "Schumann",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Qasemizadeh",
"suffix": ""
},
{
"first": "Ha\u00effa",
"middle": [],
"last": "Zargayouna",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Charnois",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "679--688",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1111"
]
},
"num": null,
"urls": [],
"raw_text": "Kata G\u00e1bor, Davide Buscaldi, Anne-Kathrin Schu- mann, Behrang QasemiZadeh, Ha\u00effa Zargayouna, and Thierry Charnois. 2018. SemEval-2018 task 7: Semantic relation extraction and classification in scientific papers. In Proceedings of The 12th Inter- national Workshop on Semantic Evaluation, pages 679-688, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantic relation extraction using sequential and tree-structured lstm with attention",
"authors": [
{
"first": "Zhiqiang",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Guofei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yongming",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Information Sciences",
"volume": "509",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ZhiQiang Geng, GuoFei Chen, YongMing Han, Gang Lu, and Fang Li. 2020. Semantic relation extrac- tion using sequential and tree-structured lstm with attention. Information Sciences, 509:183-192.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semeval-2010 task 8: Multiway classification of semantic relations between pairs of nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [
"\u00d3"
],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions",
"volume": "",
"issue": "",
"pages": "94--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi- way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Se- mantic Evaluations: Recent Achievements and Fu- ture Directions, pages 94-99s.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving pre-trained language model for relation extraction using syntactic information in persian",
"authors": [
{
"first": "Mohammad",
"middle": [
"Mahdi"
],
"last": "Jafari",
"suffix": ""
},
{
"first": "Somayyeh",
"middle": [],
"last": "Behmanesh",
"suffix": ""
},
{
"first": "Alireza",
"middle": [],
"last": "Talebpour",
"suffix": ""
},
{
"first": "Ali",
"middle": [
"Nadian"
],
"last": "Ghomsheh",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of The Second International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2021) co-located with ICNLSP 2021 -Short Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Mahdi Jafari, Somayyeh Behmanesh, Alireza Talebpour, and Ali Nadian Ghomsheh. 2021. Improving pre-trained language model for relation extraction using syntactic information in persian. In Proceedings of The Second International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2021) co-located with ICNLSP 2021 -Short Papers, Trento, Italy.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SemEval-2018 task 10: Capturing discriminative attributes",
"authors": [
{
"first": "Alicia",
"middle": [],
"last": "Krebs",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Paperno",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "732--740",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1117"
]
},
"num": null,
"urls": [],
"raw_text": "Alicia Krebs, Alessandro Lenci, and Denis Paperno. 2018. SemEval-2018 task 10: Capturing discrimi- native attributes. In Proceedings of The 12th Inter- national Workshop on Semantic Evaluation, pages 732-740, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Chinese relation extraction with multi-grained information and external linguistic knowledge",
"authors": [
{
"first": "Ziran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4377--4386",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziran Li, Ning Ding, Zhiyuan Liu, Haitao Zheng, and Ying Shen. 2019. Chinese relation extraction with multi-grained information and external linguistic knowledge. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4377-4386.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Knowledge enhanced contextual word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Logan",
"suffix": "IV"
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Vidur",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.04164"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Robert L Lo- gan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. arXiv preprint arXiv:1909.04164.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Persian relation extraction using ParsBERT on the PERLEX dataset",
"authors": [
{
"first": "Moein",
"middle": [],
"last": "Salimi Sartakhti",
"suffix": ""
},
{
"first": "Romina",
"middle": [],
"last": "Etezadi",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Shamsfard",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of The Second International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2021) co-located with ICNLSP 2021 -Short Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moein Salimi Sartakhti, Romina Etezadi, and Mehrnoosh Shamsfard. 2021. Persian relation ex- traction using ParsBERT on the PERLEX dataset. In Proceedings of The Second International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2021) co-located with ICNLSP 2021 -Short Papers, Trento, Italy.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SemEval-2020 task 6: Definition extraction from free text with the DEFT corpus",
"authors": [
{
"first": "Sasha",
"middle": [],
"last": "Spala",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Dockhorn",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "336--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sasha Spala, Nicholas Miller, Franck Dernoncourt, and Carl Dockhorn. 2020. SemEval-2020 task 6: Defi- nition extraction from free text with the DEFT cor- pus. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 336-345, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WNUT-2020 task 1 overview: Extracting entities and relations from wet lab protocols",
"authors": [
{
"first": "Jeniya",
"middle": [],
"last": "Tabassum",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)",
"volume": "",
"issue": "",
"pages": "260--267",
"other_ids": {
"DOI": [
"10.18653/v1/2020.wnut-1.33"
]
},
"num": null,
"urls": [],
"raw_text": "Jeniya Tabassum, Wei Xu, and Alan Ritter. 2020. WNUT-2020 task 1 overview: Extracting entities and relations from wet lab protocols. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 260-267, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "NSURL-2019 task 7: Named entity recognition for Farsi",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Taghizadeh",
"suffix": ""
},
{
"first": "Zeinab",
"middle": [],
"last": "Borhanifard",
"suffix": ""
},
{
"first": "Melika",
"middle": [],
"last": "Golestani Pour",
"suffix": ""
},
{
"first": "Mojgan",
"middle": [],
"last": "Farhoodi",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Mahmoudi",
"suffix": ""
},
{
"first": "Masoumeh",
"middle": [],
"last": "Azimzadeh",
"suffix": ""
},
{
"first": "Hesham",
"middle": [],
"last": "Faili",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The First International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2019) co-located with ICNLSP 2019 -Short Papers",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Taghizadeh, Zeinab Borhanifard, Me- lika Golestani Pour, Mojgan Farhoodi, Maryam Mahmoudi, Masoumeh Azimzadeh, and Hesham Faili. 2019. NSURL-2019 task 7: Named entity recognition for Farsi. In Proceedings of The First International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2019) co-located with ICNLSP 2019 -Short Papers, pages 9-15, Trento, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Crosslingual adaptation using universal dependencies",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Taghizadeh",
"suffix": ""
},
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions on Asian and Low-Resource Language Information Processing",
"volume": "20",
"issue": "",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Taghizadeh and Heshaam Faili. 2021. Cross- lingual adaptation using universal dependencies. Transactions on Asian and Low-Resource Language Information Processing, 20(4):1-23.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Crosslingual transfer learning for relation extraction using universal dependencies",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Taghizadeh",
"suffix": ""
},
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": ""
}
],
"year": 2022,
"venue": "Computer Speech & Language",
"volume": "71",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Taghizadeh and Heshaam Faili. 2022. Cross- lingual transfer learning for relation extraction using universal dependencies. Computer Speech & Lan- guage, 71:101265.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cross-language learning for arabic relation extraction",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Taghizadeh",
"suffix": ""
},
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": ""
},
{
"first": "Jalal",
"middle": [],
"last": "Maleki",
"suffix": ""
}
],
"year": 2018,
"venue": "Procedia computer science",
"volume": "142",
"issue": "",
"pages": "190--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Taghizadeh, Heshaam Faili, and Jalal Maleki. 2018. Cross-language learning for arabic relation ex- traction. Procedia computer science, 142:190-197.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Enhancing relation extraction using syntactic indicators and sentential contexts",
"authors": [
{
"first": "Qiongxing",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Xiangfeng",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)",
"volume": "",
"issue": "",
"pages": "1574--1580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiongxing Tao, Xiangfeng Luo, Hao Wang, and Richard Xu. 2019. Enhancing relation extraction using syntac- tic indicators and sentential contexts. In 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), pages 1574-1580. IEEE.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Exploiting the syntax-model consistency for neural relation extraction",
"authors": [
{
"first": "Amir",
"middle": [
"Pouran",
"Ben"
],
"last": "Veyseh",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8021--8032",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou, and Thien Huu Nguyen. 2020. Exploiting the syntax-model consistency for neural relation extrac- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8021-8032.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Ace 2005 multilingual training corpus-linguistic data consortium",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Medero",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2005. Ace 2005 multilingual training corpus-linguistic data consortium. URL: https://catalog. ldc. upenn. edu/LDC2006T06.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Enriching pretrained language model with entity information for relation classification",
"authors": [
{
"first": "Shanchan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM international conference on information and knowledge management",
"volume": "",
"issue": "",
"pages": "2361--2364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shanchan Wu and Yifan He. 2019. Enriching pre- trained language model with entity information for relation classification. In Proceedings of the 28th ACM international conference on information and knowledge management, pages 2361-2364.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction. Knowledge-Based Systems",
"authors": [
{
"first": "Kang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Xiaoteng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.knosys.2021.106888"
]
},
"num": null,
"urls": [],
"raw_text": "Kang Zhao, Hua Xu, Yue Cheng, Xiaoteng Li, and Kai Gao. 2021. Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction. Knowledge-Based Systems, page 106888.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving relation classification by entity pair graph",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Huaiyu",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianwei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Youfang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Asian Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1156--1171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Zhao, Huaiyu Wan, Jianwei Gao, and Youfang Lin. 2019. Improving relation classification by entity pair graph. In Asian Conference on Machine Learning, pages 1156-1171. PMLR.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Adversarial feature adaptation for crosslingual relation classification",
"authors": [
{
"first": "Bowei",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Zengzhuang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "437--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bowei Zou, Zengzhuang Xu, Yu Hong, and Guodong Zhou. 2018. Adversarial feature adaptation for cross- lingual relation classification. In Proceedings of the 27th International Conference on Computational Lin- guistics, pages 437-448.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "Relation types of SemEval 2010-task 8 dataset",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF1": {
"text": "Distribution of the task evaluation set in different semantic classes.",
"type_str": "table",
"content": "<table><tr><td>Class</td><td colspan=\"3\">(e1, e2) (e2, e1) Total</td></tr><tr><td>Cause-Effect</td><td>107</td><td>46</td><td>153</td></tr><tr><td>Component-Whole</td><td>86</td><td>45</td><td>131</td></tr><tr><td>Content-Container</td><td>62</td><td>51</td><td>113</td></tr><tr><td>Entity-Destination</td><td>137</td><td>20</td><td>157</td></tr><tr><td>Entity-Origin</td><td>108</td><td>30</td><td>138</td></tr><tr><td>Instrument-Agency</td><td>48</td><td>69</td><td>117</td></tr><tr><td>Member-Collection</td><td>92</td><td>48</td><td>140</td></tr><tr><td>Message-Topic</td><td>98</td><td>48</td><td>146</td></tr><tr><td>Product-Producer</td><td>80</td><td>90</td><td>170</td></tr><tr><td>Other</td><td>235</td><td/><td>235</td></tr><tr><td>Total</td><td/><td/><td>1500</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"text": "Examples of entities in test set of the shared task.",
"type_str": "table",
"content": "<table><tr><td>Entity</td><td>English Equivalent</td><td/><td colspan=\"4\">Persian Example</td></tr><tr><td>complex NP</td><td>Even those e1 whose job is not subject to</td><td/><td/><td/><td colspan=\"2\">\u202b\u06a9\u202c e1 \u202b\u06cc\u202c \u202b\u06a9\u202c \u202b\u06cc\u202c</td></tr><tr><td/><td>Corona's restrictions suffer from the economic</td><td>\u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td><td colspan=\"3\">\u060c \u202b\u06cc\u202c \u202b\u06cc\u06cc\u202c \u202b\u06a9\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c</td></tr><tr><td/><td>impact of this epidemic e2.</td><td/><td colspan=\"2\">. \u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td><td>e2 \u202b\u06cc\u202c \u202b\u06cc\u202c \u202b\u06cc\u202c</td></tr><tr><td>noun in VP</td><td>Sometimes exam pressure e1 can make you</td><td/><td colspan=\"3\">\u202b\u06cc\u202c e1 \u202b\u06a9\u202c \u202b\u06a9\u202c</td><td>\u060c</td><td>\u202b\u06cc\u202c</td></tr><tr><td/><td>scared e2.</td><td/><td/><td/><td colspan=\"2\">. \u202b\u06a9\u202c e2</td></tr><tr><td>Named Entities</td><td>Nazanin e1 is the only daughter in the</td><td>.</td><td>e2</td><td/><td/><td>e1</td><td>\u202b\u06cc\u202c</td></tr><tr><td/><td>family e2.</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">entities in two sentences The height of this waterfall e1 is about 7</td><td>\u202b\u06cc\u06a9\u202c</td><td/><td/><td>e1</td><td>\u202b\u06cc\u202c</td></tr><tr><td/><td>meters and it falls down from a rock wall e2.</td><td/><td colspan=\"2\">. \u202b\u06cc\u202c \u202b\u06cc\u202c \u202b\u06cc\u06cc\u202c</td><td colspan=\"2\">e2 \u202b\u06cc\u202c</td><td>\u202b\u06cc\u202c</td></tr><tr><td>informal words</td><td>I can say that the first week of taking the</td><td/><td/><td/><td>\u202b\u06a9\u202c</td><td>\u202b\u06cc\u202c</td></tr><tr><td/><td>medication e1 I was just asleep e2.</td><td/><td>.</td><td>e2</td><td/><td>e1</td></tr><tr><td colspan=\"2\">lenge relates to the confusion of classes. 
For</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">example, the relationship between entities in</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">the following sentence may be confused among</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Component-Whole, Content-Container,</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">and Entity-Origin:</td><td/><td/><td/><td/></tr><tr><td>.</td><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"text": "Results of the participating teams against the state-of-the-are approaches for mono-lingual RE (Sub-Task A).",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"text": "T-Bert (Jafari et al., 2021) 56.74 56.05 49.14 71.43 56.93 59.93 43.87 60.95 63.32 57.60 U-BERT (Jafari et al., 2021) 58.33 55.75 50.91 69.48 59.06 66.92 47.23 65.93 61.35 59.44 SBU-NLP (Salimi Sartakhti et al., 2021) 61.70 66.44 59.26 76.01 58.04 75.54 32.85 76.06 76.13 64.67 R-BERT (Wu and He, 2019) + ParsBERT 62.76 62.14 55.37 75.17 66.19 74.72 50.66 73.00 79.13 66.57 RIFRE (Zhao et al., 2021) + ParsBERT 72.11 59.93 51.25 76.77 71.79 74.36 53.95 70.73 78.15 67.67 Test set of PERLEX T-Bert (Jafari et al., 2021) 88.11 74.14 80.00 84.81 75.39 61.05 72.53 81.80 74.90 76.97 U-BERT (Jafari et al., 2021) 88.72 74.41 82.38 85.01 76.98 72.85 73.57 78.57 77.02 78.83 R-BERT (Wu and He, 2019) + ParsBERT 87.91 73.29 79.81 85.97 76.60 74.07 73.89 83.11 77.35 79.11 SBU-NLP (Salimi Sartakhti et al., 2021) 89.37 77.45 82.13 88.58 79.84 76.07 76.60 85.92 79.91 81.76 RIFRE (Zhao et al., 2021) + ParsBERT 93.07 80.54 80.11 85.76 81.92 80.39 85.40 90.41 76.79 83.82",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}