{
"paper_id": "K15-2008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:53.328874Z"
},
"title": "The CLaC Discourse Parser at CoNLL-2015",
"authors": [
{
"first": "Majid",
"middle": [],
"last": "Laali",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Concordia University",
"location": {
"settlement": "Montreal",
"region": "Quebec",
"country": "Canada"
}
},
"email": "mlaali@encs.concordia.ca"
},
{
"first": "Elnaz",
"middle": [],
"last": "Davoodi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Concordia University",
"location": {
"settlement": "Montreal",
"region": "Quebec",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Concordia University",
"location": {
"settlement": "Montreal",
"region": "Quebec",
"country": "Canada"
}
},
"email": "kosseim@encs.concordia.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our submission (kos-seim15) to the CoNLL-2015 shared task on shallow discourse parsing. We used the UIMA framework to develop our parser and used ClearTK to add machine learning functionality to the UIMA framework. Overall, our parser achieves a result of 17.3 F 1 on the identification of discourse relations on the blind CoNLL-2015 test set, ranking in sixth place.",
"pdf_parse": {
"paper_id": "K15-2008",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our submission (kos-seim15) to the CoNLL-2015 shared task on shallow discourse parsing. We used the UIMA framework to develop our parser and used ClearTK to add machine learning functionality to the UIMA framework. Overall, our parser achieves a result of 17.3 F 1 on the identification of discourse relations on the blind CoNLL-2015 test set, ranking in sixth place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Today, discourse parsers typically consist of several independent components that address the following problems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Discourse Connective Classification: The concern of this problem is the identification of discourse usage of discourse connectives within a text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Argument Labeling: This problem focuses on labeling the text spans of the two discourse arguments, namely ARG1 and ARG2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Explicit Sense Classification: This problem can be reduced to the sense disambiguation of the discourse connective in an explicit discourse relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. Non-Explicit Sense Classification: The target of this problem is the identification of implicit discourse relations between two consecutive sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To illustrate these tasks, consider Example (1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We would stop index arbitrage when the market is under stress. 1 The task of Discourse Connective Classification is to determine if the marker \"when\" is used to mark a discourse relation or not. Argument Labeling should segment the two arguments ARG1 and ARG2 (in this example, ARG1 is italicized while ARG2 is bolded). Finally, Explicit Sense Classification should identify which discourse relation is signaled by \"when\" -in this case CON-TINGENCY.CONDITION.",
"cite_spans": [
{
"start": 63,
"end": 64,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we report on the development and results of our discourse parser for the CoNLL 2015 shared task. Our parser, named CLaC Discourse Parser, was built from scratch and took about 3 person-month to code. The focus of the CLaC Discourse Parser is the treatment of explicit discourse relations (i.e. problem 1 to 3 above).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We developed our parser based on the UIMA framework (Ferrucci and Lally, 2004) and we used ClearTK (Bethard et al., 2014) to add machine learning functionality to the UIMA framework. The parser was written in Java and its source code is distributed under the BSD license 2 . Figure 1 shows the architecture of the CLaC Discourse Parser. Motivated by Lin et al. (2014) , the architecture of the CLaC Discourse Parser is a pipeline that consists in five components: CoNLL Syntax Reader, Discourse Connective Annotator, Argument Labeler, Discourse Sense Annotator and CoNLL JSON Exporter. Due to lack of time, we did not implement a Non-Explicit Classification in our pipeline and only focused on explicit discourse relations.",
"cite_spans": [
{
"start": 52,
"end": 78,
"text": "(Ferrucci and Lally, 2004)",
"ref_id": "BIBREF1"
},
{
"start": 91,
"end": 121,
"text": "ClearTK (Bethard et al., 2014)",
"ref_id": null
},
{
"start": 350,
"end": 367,
"text": "Lin et al. (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architecture of the CLaC Discourse Parser",
"sec_num": "2"
},
{
"text": "The CoNLL Syntax Reader and the CoNLL JSON Exporter were added to the CLaC Discourse Parser in order for the input and the output of the parser to be compatible with the CoNLL Figure 1 : Components of the CLaC Discourse Parser 2015 Shared Task specifications. The CoNLL Syntax Reader parses syntactic information (i.e. POS tags, constituent parse trees and dependency parses). CoNLL organisers and adds this syntactic information to the documents in the UIMA framework. To create a stand-alone parser, the CoNLL Syntax Reader can be easily replaced with the cleartk-berkeleyparser component in the CLaC discourse Parser pipeline. This component is a wrapper around the Berkeley syntactic parser (Petrov and Klein, 2007) and distributed with ClearTK. The Berkeley syntactic parser was actually used in the CoNLL shared task to parse texts and generate the syntactic information.",
"cite_spans": [
{
"start": 695,
"end": 719,
"text": "(Petrov and Klein, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architecture of the CLaC Discourse Parser",
"sec_num": "2"
},
{
"text": "The CoNLL JSON Exporter reads the output discourse relations annotated in the UIMA documents and generates a JSON file in the format required for the CoNLL shared task. We will discuss the other components in details in the next sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture of the CLaC Discourse Parser",
"sec_num": "2"
},
{
"text": "To annotate discourse connectives, the Discourse Connective Annotator first searches the input texts for terms that match a pre-defined list of discourse connectives. This list of discourse connectives was built solely from the CoNLL training dataset of around 30K explicit discourse relations and contains 100 discourse connectives. Each match of discourse connective is then checked to see if it occurs in discourse usage or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Connective Annotator",
"sec_num": "2.1"
},
{
"text": "Inspired by (Pitler et al., 2009) , we built a binary classifier with six local syntactic and lexicalized features of discourse connectives to classify discourse connectives as discourse usage or nondiscourse usage. These features are listed in Table 1 in the row labeled Connective Features.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Pitler et al., 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Connective Annotator",
"sec_num": "2.1"
},
{
"text": "When ARG1 and ARG2 appear in the same sentence, we can exploit the syntactic tree to label boundaries of the discourse arguments. Motivated by (Lin et al., 2014) , we first classify each constituent in the parse tree into to three categories: part of ARG1, part of ARG2 or NON (i.e. is not part of any discourse argument). Then, all constituents which were tagged as part of ARG1 or as part of ARG2 are merged to obtain the actual boundaries of ARG1 and ARG2.",
"cite_spans": [
{
"start": 143,
"end": 161,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Labeler",
"sec_num": "2.2"
},
{
"text": "Previous studies have shown that learning an argument labeler classifier when all syntactic constituents are considered suffers from many instances being labeled as NON (Kong et al., 2014) . In order to avoid this, we used the approach proposed by Kong et al. (2014) to prune constituents with a NON label. This approach uses only the nodes in the path from the discourse connective (or SelfCat see Table 1 ) to the root of the sentence (Connective-Root path nodes) to limit the number of the candidate constituents. More formally, only constituents that are directly connected to one of the Connective-Root path nodes are considered for the classification.",
"cite_spans": [
{
"start": 169,
"end": 188,
"text": "(Kong et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 248,
"end": 266,
"text": "Kong et al. (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 399,
"end": 406,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Argument Labeler",
"sec_num": "2.2"
},
{
"text": "For example, consider the parse tree of Example (1) shown in Figure 2 . The path from the discourse connective \"when\" to the root of the sentence contains these nodes: {WRB, WHADVP, SBAR, VP 2 , VP 1 , S 1 }. Therefore, we only consider {S 2 , NP 2 , VB, MD, NP 1 } for obtaining discourse arguments.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Argument Labeler",
"sec_num": "2.2"
},
{
"text": "If the classifier does not classify any constituent as a part of ARG1, we assume that the ARG1 is not in the same sentence as ARG2. In such a scenario, we consider the whole text of the previous sentence as ARG1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Labeler",
"sec_num": "2.2"
},
{
"text": "In the current implementation, we made the assumption that discourse connectives cannot be multiword expressions. Therefore, the Argument Labeler cannot identify the arguments of parallel discourse connectives (e.g. either..or, on one hand..on the other hand, etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Labeler",
"sec_num": "2.2"
},
{
"text": "We used a sub-set of 9 features proposed by Kong et al. (2014) for the Argument Labeler classifier. The complete list of features is listed in Table 1.",
"cite_spans": [
{
"start": 44,
"end": 62,
"text": "Kong et al. (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Labeler",
"sec_num": "2.2"
},
{
"text": "Although some discourse connectives can signal different discourse relations, the na\u00efve approach that labels each discourse connective with its most Pitler et al. (2009) , such an approach can achieve an accuracy of 85.86%. Due to lack of time, we implemented this na\u00efve approach for the Discourse Sense Annotator, using the 100 connectives mined from the dataset (see Section 2.1) and their most frequent relation as mined from the CoNLL training dataset.",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "Pitler et al. (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Sense Annotator",
"sec_num": "2.3"
},
{
"text": "As explained in Section 2, the CLaC Discourse Parser contains two main classifiers, one for the Discourse Connective Annotator and one for the Argument Labeler. We used the off-the-shelf implementation of the C4.5 decision tree classifier (Quinlan, 1993) available in WEKA (Hall et al., 2009) for the two classifiers and trained them us- ing the CoNLL training dataset. Although the CLaC discourse parser only considers explicit discourse relations (which only accounts for about half of the relations), the parser ranked 6 th among the 17 submitted discourse parsers. The overall F 1 score of the parser and the individual performance of the Discourse Connective Classifier and the Argument Labeler in the blind CoNLL test data are shown in Table 2. As Table 2 shows, the performance of the parser is consistently above the average. In addition, the performance of the Discourse Connective Classifier is very close to the best result.",
"cite_spans": [
{
"start": 239,
"end": 254,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF6"
},
{
"start": 273,
"end": 292,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 742,
"end": 762,
"text": "Table 2. As Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "Note that all numbers presented in Table 2 were obtained when errors propagate through the pipeline. That is to say, if a discourse connective is not correctly identified by the Discourse Connective Classifier for example, the arguments of this discourse connective will not be identified. Thus, the recall of the Argument Labeler will be affected.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "The CoNLL 2015 results of the submitted parsers show that the identification of ARG1 is more difficult than ARG2. In line with this, the CLaC Discourse Parser performed better on the identification of ARG2 (with the F 1 score of 69.18%) than ARG1 (with the F 1 score of 45.18%). Table 3 provides a summary of the results for the identification of Arg1 and Arg2. An important source of errors in the identification of ARG1 is that attribute spans are contained within ARG1. For example in (2), the CLaC Discourse Parser incorrectly includes the text \"But the RTC also requires \"working\" capital\" within ARG1.",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 286,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "3"
},
{
"text": "Arg2 (2) But the RTC also requires \"working\" capital to maintain the bad assets of thrifts that are sold until the assets can be sold separately. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arg1",
"sec_num": null
},
{
"text": "With regards to the identification of ARG2, we observed that subordinate and coordinate clauses are an important source of errors. For example in (3), the subordinate clause \"before we can move forward\" is erroneously included in the ARG2 span when the CLaC Discourse Parser parses the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arg1",
"sec_num": null
},
{
"text": "The cause of such errors are usually rooted in an incorrect syntax parse tree that was fed to the parser. For instance in (3), the text \"we have collected on those assets before we can move forward\" was incorrectly parsed as a single clause covered by an S node with the subordinate \"before we can move forward\" as a child of this S node. However, in the correct parse tree the subordinate clause should be a sibling of the S node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arg1",
"sec_num": null
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arg1",
"sec_num": null
},
{
"text": "We would have to wait until we have collected on those assets before we can move forward. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arg1",
"sec_num": null
},
{
"text": "In this paper, we described the CLaC Discourse Parser which was developed from scratch for the CoNLL 2015 shared task. This 3 person-month effort focused on the task of the Discourse Connective Classification and Argument Labeler. We used a na\u00efve approach for sense labelling and consider only explicit relations. Yet, the parser achieves an overall F 1 measure of 17.38%, ranking in 6 th place out of the 17 parsers submitted to the CoNLL 2015 shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "work was financially supported by NSERC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "The example is taken from the CoNLL 2015 trial dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All the source codes can be downloaded from https://github.com/mjlaali/CLaCDiscourseParser.git",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Acknowledgement: The authors would like to thank the CoNLL 2015 organisers and the anonymous reviewers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The example is taken from the CoNLL 2015 development dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ClearTK 2.0: Design patterns for machine learning in UIMA",
"authors": [
{
"first": "[",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References [Bethard et al.2014] Steven Bethard, Philip Ogren, and Lee Becker. 2014. ClearTK 2.0: Design patterns for machine learning in UIMA. LREC.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "UIMA: An architectural approach to unstructured information processing in the corporate research environment",
"authors": [
{
"first": "David",
"middle": [],
"last": "Ferrucci",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lally",
"suffix": ""
}
],
"year": 2004,
"venue": "Natural Language Engineering",
"volume": "10",
"issue": "3-4",
"pages": "327--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Ferrucci and Lally2004] David Ferrucci and Adam Lally. 2004. UIMA: An architectural approach to unstructured information processing in the corporate research environment. Natural Language Engineer- ing, 10(3-4):327-348.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The WEKA data mining software: An update",
"authors": [
{
"first": "[",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Hall et al.2009] Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. ACM SIGKDD explorations newsletter, 11(1):10-18.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Constituent-Based Approach to Argument Labeling with Joint Inference in Discourse Parsing",
"authors": [
{
"first": "[",
"middle": [],
"last": "Kong",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "68--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Kong et al.2014] Fang Kong, Hwee Tou Ng, and Guodong Zhou. 2014. A Constituent-Based Ap- proach to Argument Labeling with Joint Inference in Discourse Parsing. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 68-77, Doha, Qatar, October.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A PDTB-styled end-to-end discourse parser",
"authors": [
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "02",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Lin et al.2014] Ziheng Lin, Hwee Tou Ng, and Min- Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20(02):151-184.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "[",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Klein2007] Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 47th Annual Meeting of the ACL and the 4th IJC-NLP of the AFNLP",
"volume": "",
"issue": "",
"pages": "683--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Petrov and Klein2007] Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Pars- ing. In Proceedings of NAACL HLT 2007, page 404-411, Rochester, NY, April. [Pitler et al.2009] Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for im- plicit discourse relations in text. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJC- NLP of the AFNLP, page 683-691, Suntec, Singa- pore, August.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "C4.5: Programs for Machine Learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publish- ers Inc., San Francisco, CA, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "discourse connective text in lowercase. when 2. The categorization of the case of the connective: all lowercase, all uppercase and initial uppercase all lowercase 3. The highest node in the parse tree that covers the connective words but nothing more WRB 4. The parent of SelfCat WHADVP 5. The left sibling of SelfCat null 6. The right sibling of SelfCat S Syntactic Node Features 7. The path from the node to the SelfCat node in the parser tree S \u2191 SBAR \u2193 W HADV P 8. The context of the node in the parse tree. The context of a node is defined by its label the label of its parent, the label of left and right sibling in the parse tree. S-SBAR-WHADVP-null 9. The position of the node relative to the SelfCat node: left or right left"
},
"TABREF0": {
"html": null,
"text": "Features Used in the CLaC Discourse Parser frequent relation performs rather well. According to",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "Summary of the Results of the CLaC Discourse Parser in the CoNLL 2015 Shared Task.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "Results of the Identification of ARG1 and ARG2.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}