| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:43:52.005714Z" |
| }, |
| "title": "Incorporate Semantic Structures into Machine Translation Evaluation via UCCA", |
| "authors": [ |
| { |
| "first": "Jin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Peking University", |
| "location": {} |
| }, |
| "email": "jinxu@pku.edu.cn" |
| }, |
| { |
| "first": "Yinuo", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Key Laboratory of Computational Linguistics", |
| "institution": "Peking University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Junfeng", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Key Laboratory of Computational Linguistics", |
| "institution": "Peking University", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Copying mechanism has been commonly used in neural paraphrasing networks and other text generation tasks, in which some important words in the input sequence are preserved in the output sequence. Similarly, in machine translation, we notice that there are certain words or phrases appearing in all good translations of one source text, and these words tend to convey important semantic information. Therefore, in this work, we define words carrying important semantic meanings in sentences as semantic core words. Moreover, we propose an MT evaluation approach named Semantically Weighted Sentence Similarity (SWSS). It leverages the power of UCCA to identify semantic core words, and then calculates sentence similarity scores on the overlap of semantic core words. Experimental results show that SWSS can consistently improve the performance of popular MT evaluation metrics which are based on lexical similarity.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Copying mechanism has been commonly used in neural paraphrasing networks and other text generation tasks, in which some important words in the input sequence are preserved in the output sequence. Similarly, in machine translation, we notice that there are certain words or phrases appearing in all good translations of one source text, and these words tend to convey important semantic information. Therefore, in this work, we define words carrying important semantic meanings in sentences as semantic core words. Moreover, we propose an MT evaluation approach named Semantically Weighted Sentence Similarity (SWSS). It leverages the power of UCCA to identify semantic core words, and then calculates sentence similarity scores on the overlap of semantic core words. Experimental results show that SWSS can consistently improve the performance of popular MT evaluation metrics which are based on lexical similarity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Machine Translation Evaluation (MTE) is to evaluate the quality of sentences produced by Machine Translation (MT) systems. Most automatic MT evaluation metrics compare the candidate sentences from MT systems with reference sentences from human translation to produce a similarity score, in contrast some other reference-less metrics directly compare candidate sentences and source sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "According to the observation of well-translated sentences, we find out that there are certain words or phrases appearing in all good translations of one source text. This phenomenon is consistent with the intuition of copying mechanism (Gu et al., 2016) , which has been widely used in lots of text generation tasks. In the field of MT evaluation, Meteor++ (Guo et al., 2018) firstly proposes the concept of copy knowledge to define the words with copy property, and it further incorporates the copy knowledge into Meteor (Denkowski and Lavie, 2014) to improve its performance. Specifically, it attempts to find copy words of references and candidate sentences, and uses the overlap of these words to modify the calculation of precision and recall of Meteor. However, Meteor++ uses named entities as an alternative to copy knowledge in its experiments, resulting in a limited range of selected copy words and a slight improvement.", |
| "cite_spans": [ |
| { |
| "start": 236, |
| "end": 253, |
| "text": "(Gu et al., 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 357, |
| "end": 375, |
| "text": "(Guo et al., 2018)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 522, |
| "end": 549, |
| "text": "(Denkowski and Lavie, 2014)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we argue that words undertaking important semantic meanings should be exactly expressed during the translation procedure, which we define as semantic core words. This concept is much more general and closer to linguistic intuition compared to the copy knowledge used in Meteor++. In order to apply semantic core words in the process of MT evaluation, we design a mechanism named Semantically Weighted Sentence Similarity (SWSS) illustrated in Figure 1 . Firstly, SWSS extracts semantic core words according to the annotated semantic labels in Universal Conceptual Cognitive Annotation (UCCA) (Abend and Rappoport, 2013) , a multi-layered semantic representation. UCCA is an appealing candidate for this mechanism as it includes a lot of fundamental semantic phenomena, such as verbal, nominal and adjectival argument structures and their inter-relations. Also, semantic units in UCCA are anchored in the text, which simplifies the aligning procedure a lot. With the assumption that all high-quality translations should have the same semantic core words, SWSS then calculates precision and recall based on the overlap of semantic core words between sentence pairs and their corresponding F1 scores. Finally, we modify the F1 score according to the differences of two UCCA representations. For example, Scenes are involved in the penalties, which are essential nodes in UCCA indicating actions and states of the sentences. Our experimental results show that SWSS can be combined with other popular MT evaluation metrics to improve their performance significantly.", |
| "cite_spans": [ |
| { |
| "start": 606, |
| "end": 633, |
| "text": "(Abend and Rappoport, 2013)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 457, |
| "end": 465, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "BLEU (Papineni et al., 2002) and Meteor are two most popular MT evaluation metrics. BLEU measures n-grams overlapping between the candidate sentences and reference sentences, while Meteor aligns words and phrases to calculate a modified weighted F-score. The two metrics are based on lexical similarity but somehow neglect semantic structure information of the sentences.", |
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 28, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Machine Translation Evaluation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Efforts have been made to incorporate linguistic features and resources into MT evaluation. RED (Yu et al., 2014) makes use of dependency tree and MEANT (Lo et al., 2012 ) makes use of semantic parser. Categories such as part-ofspeech (Avramidis et al., 2011) and named entity (Buck, 2012) also have their effects. In order to complement WordNet (Miller, 1998) and paraphrase table in Meteor, Meteor++2.0 (Guo and Hu, 2019) applies syntactic-level paraphrase knowledge.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 113, |
| "text": "(Yu et al., 2014)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 153, |
| "end": 169, |
| "text": "(Lo et al., 2012", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 235, |
| "end": 259, |
| "text": "(Avramidis et al., 2011)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 277, |
| "end": 289, |
| "text": "(Buck, 2012)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 346, |
| "end": 360, |
| "text": "(Miller, 1998)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Machine Translation Evaluation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Semantic representation focuses on how meaning is expressed in a sentence. Some semantic representation frameworks such as UNL (Uchida and Zhu, 2001 ) and AMR (Banarescu et al., 2013) use concept nodes to represent content words of sentence, and use directed edges with labels to indicate the semantic relation between nodes.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 148, |
| "text": "(Uchida and Zhu, 2001", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 159, |
| "end": 183, |
| "text": "(Banarescu et al., 2013)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "UCCA is a novel multi-layered semantic representation framework, which converts a sentence into a directed acyclic graph (DAG). Leaf nodes of UCCA graph correspond to words in the sentence, There are two scenes in this sentence, the whole sentence and \"I sold (sofa)\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "and a non-leaf node represents the combination of meanings of its child nodes. A parent node and a child node are connected by a directed edge which demonstrates the semantic role of the child node in the meaning of the parent node. Figure 2 is an example of UCCA representation. Scene is an essential concept in UCCA. A scene describes some movement, action or a state in the sentence. Scene nodes in UCCA representation may be connected to the root node, or embedded in other scenes as arguments or modifiers. A scene node has a main relation, either a Process or a State, and may have some Participants or some Adverbials. These non-scene nodes may also have inner structure.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 233, |
| "end": 241, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "UCCA has been applied in many fields of Natural Language Processing. SAMSA (Sulem et al., 2018 ) is a Text Simplification evaluation metric that defines minimal center of UCCA representation and compares simplified text with the minimal centers of original sentences. It is also used in evaluation of faithfulness in Grammatical Error Correlation (Choshen and Abend, 2018) and human MT evaluation (Birch et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 94, |
| "text": "(Sulem et al., 2018", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 347, |
| "end": 372, |
| "text": "(Choshen and Abend, 2018)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 397, |
| "end": 417, |
| "text": "(Birch et al., 2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The most popular MT evaluation metrics such as BLEU and Meteor are based on lexical similarity. This kind of metrics cannot obtain insight into semantic structure of the whole sentence. Our proposed semantic core words are extracted from UCCA semantic structures and used to improve these lexical metrics as we expect them to play the role of copy words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Core Words", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "It is a linguistic intuition that some words carry more semantic information than other words in a sentence. For example, a modifier is usually less important than the word it modifies. In this paper, We define words that have important semantic information as semantic core words. According to their semantic importance, they are expected to be accurately translated during translation. Therefore, we assume that in all good translation results of a specific sentence, the set of semantic core words should be the same, behaving like copy words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Core Words", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We extract semantic core words of a sentence from its UCCA semantic representation. The lowest semantic role label in the representation for each word is considered, which also indicates the most basic semantic role of a word. A word whose lowest semantic role is Process, State, Participant or Center is identified as semantic core words. Figure 3 marks semantic core words of the example sentence. The result is consistent with our intuition of which word has important meaning in this sentence.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 340, |
| "end": 346, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Core Words", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "After semantic core words are extracted from UCCA representations, a word matching algorithm should be applied in order to match all words between the two sentences. In this paper, we use a stemming algorithm. Two words are matched if they have the same stem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Matching", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We count how many semantic core words in a candidate sentence can be matched to any semantic core words in the reference sentence, and compute the proportion as precision. Similarly, we calculate the matched proportion of semantic core words in reference sentence as recall. In our word matching algorithm, it is possible that a word in a sentence is matched to multiple words in the other sentence because they all have the same word stem. However, just like what is conducted in BLEU, a word cannot be contained in multiple matching pairs. The precision and recall are then used to calculate F1 score. We use F1 score here to ensure that SWSS is symmetrical and can be directly used as a sentence similarity metric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Matching", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P = i w(h i ) \u2022 m(h i ) i w(h i ) R = i w(r i ) \u2022 m(r i ) i w(r i ) F 1 = 2P \u2022 R P + R", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Word Matching", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Take the calculation of precision as an example. h i means each semantic core word in the candidate sentence, and w(h i ) is its weight. Though in this paper the weight is fixed to 1, it can be fine-tuned or trained in future work. If h i can be matched to any semantic core word in the reference sentence, m(h i ) is set to 1, otherwise m(h i ) is set to 0. However, m(h i ) can also be different values related to matching type like the operation in Meteor, which might be conducted in future work. A fact is that the UCCA parser we used might occasionally produce an analysis result in which there are no semantic core words in a sentence, which causes division by zero during calculation. In these cases a fixed score \u03c9 is used as an alternative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Matching", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "According to the intuition that good translation results of a specific sentence should have similar semantic structures, we introduce three penalties concerning statistical differences of two UCCA representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 The ratio between counts of scenes of two representations. Let S 1 , S 2 be the counts of scenes, the penalty P S is 1 \u2212 min(S 1 , S 2 )/ max(S 1 , S 2 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 The ratio between counts of nodes of two representations. Let N 1 , N 2 be the counts of nodes, the penalty", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "P N is 1 \u2212 min(N 1 , N 2 )/ max(N 1 , N 2 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 The ratio between counts of edges towards critical semantic roles of two representations, which are Process, State and Participant. This count is the sum of count of scenes and count of all arguments in the sentence. Let E 1 , E 2 be the counts of these edges, the penalty", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "P E is 1 \u2212 min(E 1 , E 2 )/ max(E 1 , E 2 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The three penalties are set to 0 if the counts are equal and their upper bound is 1. Additionally, we also notice that the average word count of a sentence pair can act as another penalty Len. Applying the four penalties, the final score is calculated by the equation below. All parameters here are tunable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Score = F 1 \u2022 exp( \u2212 \u03b1 1 \u2022 P S \u2212 \u03b1 2 \u2022 P N \u2212 \u03b1 3 \u2022 P E \u2212 \u03b1 4 \u2022 Len)", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The SWSS score is calculated independently. Therefore, as a semantic structure-based component, it can be further combined with other MT evaluation metrics to obtain a more accurate evaluation metric. For example, we can obtain a simple weighted model of SWSS and Meteor by tuning the weight \u03b2 below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "SW SS M eteor = M eteor + \u03b2 \u2022 Score (3) 4 Experiments 4.1 Data", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "SWSS is evaluated on WMT15 (Stanojevi\u0107 et al., 2015) and WMT16 metric task evaluation sets and is tuned on WMT17 metric task (Bojar et al., 2017) evaluation set. The datasets are composed of pairs of system output sentences and reference sentences, and also corresponding human evaluation scores for the output sentences. The evaluation set of WMT15 has 4 language pairs and each has 500 sentence pairs. WMT16 dataset has 6 language pairs and WMT17 dataset has 7 language pairs, and each has 560 sentence pairs. Performance of a metric is evaluated by Pearson correlation between scores provided by the metric and the human evaluation scores.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 52, |
| "text": "(Stanojevi\u0107 et al., 2015)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 125, |
| "end": 145, |
| "text": "(Bojar et al., 2017)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Penalty and Combination", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The parameters of SWSS are tuned on the dataset from WMT17 metric task and are listed in Table 2 . We use SpaCy library 1 for word tokenization. Word stems are extracted with Porter stemming algorithm (Porter et al., 1980) . UCCA representations are parsed with the pre-trained model of TUPA (Hershcovich et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 223, |
| "text": "(Porter et al., 1980)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 293, |
| "end": 319, |
| "text": "(Hershcovich et al., 2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 97, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "SWSS is combined with base models including BLEU, Meteor and Meteor++. Table 3 : Results of ablation experiments. \"+UCCA\" is the complete SWSS model combined with Meteor, \"-repr\" means the penalties based on UCCA representation (P S , P N , P E ) are removed, \"-len\" means the length penalty is removed, and \"None\" contains only Meteor without SWSS.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 71, |
| "end": 78, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "words is clearly a good and large-scale representation of copy words, according to the results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We also conduct ablation study to figure out whether the penalties we have introduced are redundant or not. The base model is the combination of SWSS and Meteor. If we remove the representation penalties or the length penalty from the base model, it can be found out from Table 3 that the modified models have lower correlation than the complete model. The result with p < 0.05 proves that these penalties have a positive effect on the mechanism.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 272, |
| "end": 279, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In this paper, we propose Semantically Weighted Sentence Similarity (SWSS), which leverages the power of UCCA to identify semantic core words, and then calculates a similarity score for machine translation evaluation. Inspired by copying mechanism used in sequence generation tasks, we argue that semantic core words, which carry important meaning in the sentence, should exist in all good translations. Additionally, SWSS also uses penalties based on the differences between UCCA structures and sentence lengths, including the concept of Scene in UCCA, in order to make the output scores more accurate. Experimental results show that SWSS can produce a higher correlation in MT evaluation when combined with lexical MT evaluation metrics such as BLEU and Meteor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://spacy.io/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Universal conceptual cognitive annotation (ucca)", |
| "authors": [ |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "228--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omri Abend and Ari Rappoport. 2013. Universal con- ceptual cognitive annotation (ucca). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 228-238.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Evaluate with confidence estimation: Machine ranking of translation outputs using grammatical features", |
| "authors": [ |
| { |
| "first": "Eleftherios", |
| "middle": [], |
| "last": "Avramidis", |
| "suffix": "" |
| }, |
| { |
| "first": "Maja", |
| "middle": [], |
| "last": "Popovic", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vilar", |
| "suffix": "" |
| }, |
| { |
| "first": "Aljoscha", |
| "middle": [], |
| "last": "Burchardt", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "65--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eleftherios Avramidis, Maja Popovic, David Vilar, and Aljoscha Burchardt. 2011. Evaluate with confidence estimation: Machine ranking of translation outputs using grammatical features. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 65-70. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Abstract meaning representation for sembanking", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Banarescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Bonial", |
| "suffix": "" |
| }, |
| { |
| "first": "Shu", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Madalina", |
| "middle": [], |
| "last": "Georgescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Griffitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 7th linguistic annotation workshop and interoperability with discourse", |
| "volume": "", |
| "issue": "", |
| "pages": "178--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th linguistic annotation workshop and interoperability with dis- course, pages 178-186.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "HUME: Human UCCA-based evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| }, |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1264--1274", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D16-1134" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexandra Birch, Omri Abend, Ond\u0159ej Bojar, and Barry Haddow. 2016. HUME: Human UCCA-based evaluation of machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1264-1274, Austin, Texas. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Results of the WMT17 metrics shared task", |
| "authors": [ |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Kamran", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Second Conference on Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "489--513", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ond\u0159ej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489-513, Copenhagen, Denmark. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Results of the wmt16 metrics shared task", |
| "authors": [ |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Kamran", |
| "suffix": "" |
| }, |
| { |
| "first": "Milo\u0161", |
| "middle": [], |
| "last": "Stanojevi\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the First Conference on Machine Translation", |
| "volume": "2", |
| "issue": "", |
| "pages": "199--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ond\u0159ej Bojar, Yvette Graham, Amir Kamran, and Milo\u0161 Stanojevi\u0107. 2016. Results of the wmt16 met- rics shared task. In Proceedings of the First Con- ference on Machine Translation: Volume 2, Shared Task Papers, pages 199-231.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Black box features for the wmt 2012 quality estimation shared task", |
| "authors": [ |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Buck", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "91--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christian Buck. 2012. Black box features for the wmt 2012 quality estimation shared task. In Proceed- ings of the Seventh Workshop on Statistical Machine Translation, pages 91-95.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Referenceless measure of faithfulness for grammatical error correction", |
| "authors": [ |
| { |
| "first": "Leshem", |
| "middle": [], |
| "last": "Choshen", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "2", |
| "issue": "", |
| "pages": "124--129", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N18-2020" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leshem Choshen and Omri Abend. 2018. Reference- less measure of faithfulness for grammatical error correction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 124- 129, New Orleans, Louisiana. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Meteor universal: Language specific translation evaluation for any target language", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Denkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the ninth workshop on statistical machine translation", |
| "volume": "", |
| "issue": "", |
| "pages": "376--380", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Incorporating copying mechanism in sequence-to-sequence learning", |
| "authors": [ |
| { |
| "first": "Jiatao", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [ |
| "O", |
| "K" |
| ], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1603.06393" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Meteor++ 2.0: Adopt syntactic level paraphrase knowledge into machine translation evaluation", |
| "authors": [ |
| { |
| "first": "Yinuo", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Junfeng", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Fourth Conference on Machine Translation", |
| "volume": "2", |
| "issue": "", |
| "pages": "501--506", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yinuo Guo and Junfeng Hu. 2019. Meteor++ 2.0: Adopt syntactic level paraphrase knowledge into ma- chine translation evaluation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 501-506.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Me-teor++: Incorporating copy knowledge into machine translation evaluation", |
| "authors": [ |
| { |
| "first": "Yinuo", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Chong", |
| "middle": [], |
| "last": "Ruan", |
| "suffix": "" |
| }, |
| { |
| "first": "Junfeng", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "740--745", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yinuo Guo, Chong Ruan, and Junfeng Hu. 2018. Me- teor++: Incorporating copy knowledge into machine translation evaluation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 740-745.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A transition-based directed acyclic graph parser for UCCA", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Hershcovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1127--1138", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1104" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127- 1138, Vancouver, Canada. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Fully automatic semantic mt evaluation", |
| "authors": [ |
| { |
| "first": "Chi-Kiu", |
| "middle": [], |
| "last": "Lo", |
| "suffix": "" |
| }, |
| { |
| "first": "Anand", |
| "middle": [], |
| "last": "Karthik Tumuluru", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "243--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chi-kiu Lo, Anand Karthik Tumuluru, and Dekai Wu. 2012. Fully automatic semantic mt evaluation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 243-252. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "WordNet: An electronic lexical database", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A Miller. 1998. WordNet: An electronic lexical database. MIT press.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "An algorithm for suffix stripping", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [ |
| "F" |
| ], |
| "last": "Porter", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "", |
| "volume": "14", |
| "issue": "", |
| "pages": "130--137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin F Porter et al. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Results of the wmt15 metrics shared task", |
| "authors": [ |
| { |
| "first": "Milo\u0161", |
| "middle": [], |
| "last": "Stanojevi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Kamran", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "256--273", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milo\u0161 Stanojevi\u0107, Amir Kamran, Philipp Koehn, and Ond\u0159ej Bojar. 2015. Results of the wmt15 metrics shared task. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 256-273.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Semantic structural evaluation for text simplification", |
| "authors": [ |
| { |
| "first": "Elior", |
| "middle": [], |
| "last": "Sulem", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "685--696", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N18-1063" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Semantic structural evaluation for text simplifica- tion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pages 685-696, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "The universal networking language beyond machine translation", |
| "authors": [ |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Uchida", |
| "suffix": "" |
| }, |
| { |
| "first": "Meiying", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "International Symposium on Language in Cyberspace", |
| "volume": "", |
| "issue": "", |
| "pages": "26--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroshi Uchida and Meiying Zhu. 2001. The univer- sal networking language beyond machine transla- tion. In International Symposium on Language in Cyberspace, Seoul, pages 26-27.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Red: A reference dependency based mt evaluation metric", |
| "authors": [ |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaofeng", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenbin", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shouxun", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING 2014, the 25th international conference on computational linguistics: technical papers", |
| "volume": "", |
| "issue": "", |
| "pages": "2042--2051", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hui Yu, Xiaofeng Wu, Jun Xie, Wenbin Jiang, Qun Liu, and Shouxun Lin. 2014. Red: A reference depen- dency based mt evaluation metric. In Proceedings of COLING 2014, the 25th international conference on computational linguistics: technical papers, pages 2042-2051.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "An illustration of the process of SWSS.", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "UCCA representation of sentence \"John and Mary bought the sofa I sold together\". Labels include Parallel Scene (H), Participant (A), Process (P), Adverbial (D), Center (C), Connector (N), Elaborator (E). Dash line indicates a secondary semantic relation.", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "An example of semantic core words. The sentence is the same withFigure 2. All semantic core words are bold and the semantic labels of related edges are italic.", |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "Segment-level Pearson correlation comparison between base model and the combination of SWSS and base model. The smoothing parameter X of Meteor++ is set to 8, which is used on WMT15 dataset in its paper.", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Parameters of SWSS in experiments.", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF4": { |
| "text": "", |
| "content": "<table><tr><td>shows that</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |