{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:20.921261Z"
},
"title": "CLaC at SMM4H 2020: Birth defect mention detection",
"authors": [
{
"first": "Parsa",
"middle": [],
"last": "Bagherzadeh",
"suffix": "",
"affiliation": {
"laboratory": "CLaC Labs",
"institution": "Concordia University Montreal",
"location": {
"country": "Canada"
}
},
"email": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Bergler",
"suffix": "",
"affiliation": {
"laboratory": "CLaC Labs",
"institution": "Concordia University Montreal",
"location": {
"country": "Canada"
}
},
"email": "bergler@cse.concordia.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "For the detection of personal tweets, where a parent speaks of a child's birth defect, CLaC combines ELMo word embeddings and gazetteer lists from external resources with a GCNN (for encoding dependencies), in a multi layer, transformer inspired architecture. To address the task, we compile several gazetteer lists from resources such as MeSH and GI. The proposed system obtains .69 for \u00b5F1 score in the SMM4H 2020 Task 5 where the competition average is .65.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "For the detection of personal tweets, where a parent speaks of a child's birth defect, CLaC combines ELMo word embeddings and gazetteer lists from external resources with a GCNN (for encoding dependencies), in a multi layer, transformer inspired architecture. To address the task, we compile several gazetteer lists from resources such as MeSH and GI. The proposed system obtains .69 for \u00b5F1 score in the SMM4H 2020 Task 5 where the competition average is .65.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Introduction Tweets potentially offer a record that is distilled from messages that are written for other purposes. They contain many particular, directly observed experiences, which are of great interest to epidemiologists and health monitors. Situating congenital abnormalities (birth defects) in time and space is important to identify causes of birth defects, find opportunities to prevent them, and improve the health of those living with them. Therefore, identifying tweets mentioning birth defect experiences by a family is helpful, which motivated the SMM4H 2020 Task 5 (Klein et al., 2020) .",
"cite_spans": [
{
"start": 578,
"end": 598,
"text": "(Klein et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This paper reports our effort toward building a predictive system to address the task. We try to leverage medical annotations, family relation annotations, as well as dependency relations in our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Task description and data The SMM4H 2020 task 5 is a 3-way classification problem. Class 1 tweets refer to the user's child and indicate that he/she has a birth defect. Class 2 tweets are ambiguous about whether someone is the user's child or has a birth defect mentioned in the tweet. Class 3 tweets merely mention birth defects. Training and development sets provided by the organizers include 14705 and 3677 tweets respectively and the evaluation set includes 4603 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Preprocessing and annotations Tweets are tokenized using the ANNIE tweet tokenizer (Cunningham et al., 2002) as well as the hashtag tokenizer (Maynard and Greenwood, 2014) . URLs and user mentions are removed. We use the Stanford parser (Klein and Manning, 2003) to encode dependencies and ANNIE Names for named entity recognition.",
"cite_spans": [
{
"start": 83,
"end": 108,
"text": "(Cunningham et al., 2002)",
"ref_id": "BIBREF1"
},
{
"start": 142,
"end": 171,
"text": "(Maynard and Greenwood, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 237,
"end": 262,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We compiled gazetteer lists for birth defect detection from MeSH (Lipscomb, 2000) , namely BirthDef, a list of congenital, hereditary, and neonatal diseases and abnormalities compiled from MeSH C16 and PregComp, a list of pregnancy complications compiled from MeSH C13.703. Each MeSH sub-tree is traversed depth-first and all entry terms for each heading (both for internal nodes and leaves) are added to the gazetteers. A word list of family terms, FamilyRel, that includes terms like son, daughter, cousin, etc. is also complied. Moreover, a list Acquaintance which contains terms like friend, colleague, neighbor, etc. is extracted from the General Inquirer (Stone et al., 1966) .",
"cite_spans": [
{
"start": 65,
"end": 81,
"text": "(Lipscomb, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 661,
"end": 681,
"text": "(Stone et al., 1966)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These four gazetteer lists form the parameter Annotation or A = {BirthDef, PregComp, FamilyRel, Acquaintance} and form the four annotation types. Deep model Our system consists of four stacked layers. Each layer l \u2265 2 receives token represenations h l\u22121 i and passes new represenations h l i to the next layer. The layers are: Layer 1: annotation embedding and token embedding. Annotation types are embedded using a matrix M \u2208 R 4\u00d71024 (four rows corresponding to four annotation types). M is initialized randomly and learned as a parameter during training for the main classification task. To embed tokens we use ELMo (Peters et al., 2018) . The new representations h 1 i is obtained by summing annotation embeddings to token embeddings (see Figure 1 ).",
"cite_spans": [
{
"start": 619,
"end": 640,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 743,
"end": 751,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Layer 2: the encoder part of the Transformer (Vaswani et al., 2017) . The encoder gets the representations h 1 i and outputs representations h 2 i . The number of heads in the multi-head attention is n heads = 4 and the dimensionality of the feed-forward layer is d F F = 1024.",
"cite_spans": [
{
"start": 45,
"end": 67,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Layer 3: graph convolutional network (GCNN) (Kipf and Welling, 2017) for dependency relation encoding following (Marcheggiani and Titov, 2017) . In GCCN each token is represented based on its adjacent tokens in dependency parse by",
"cite_spans": [
{
"start": 112,
"end": 142,
"text": "(Marcheggiani and Titov, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "h l i = ReLU ( j\u2208N (i) W L(i,j) h l j + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "where N (i) is the set of tokens adjacent to token i and L(j, i) is the label of the arc from token j to token i. Note that the network is not tied, i.e. W L(i,j) depends on the arc labels. GCNN receives h 2 i and outputs token-wise representations h 3 i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Layer 4: attention (Bahdanau et al., 2015) , which calculates importance scores e i = w T att h 3 i using a latent context vector w att and normalizes the scores using softmax (\u03b1 i = exp(e i ) j e j ) for a weighted sum",
"cite_spans": [
{
"start": 19,
"end": 42,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "H = i \u03b1 i * h 3 i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Attention produces an output vector H \u2208 R 1024 for the tweet, which is fed into three decision neurons for classifying into the three output classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our model is implemented using PyTorch (Paszke et al., 2017) and optimized with the Adam optimizer (Kingma and Ba, 2015) with a learning rate of lr = 0.0005 for 7 to 10 epochs. Submitted system Due to a submission issue, only one of three planned configurations was submitted. Ablation studies on our validation data showed that all four layers of the architecture add to the overall performance: The results on our development set suggest that the best performance is achieved by the full system (indicated in boldface). The official competition results are provided on the right side of Table1. The micro average scores for our submission is close to its development score, which we consider an important sign of robustness in our system.",
"cite_spans": [
{
"start": 39,
"end": 60,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "[Figure 1: additive annotation embedding for the example tweet \"Milo has Hydrocephalus causing him to need brain shunt\".]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Conclusion For the SMM4H 2020 Task 5 competition we proposed a multi-layer system to leverage gazetteer list annotations as well as a dependency parse. The reported experiments here suggest that incorporating external knowledge in form of textual annotation has the potential to enhance the performance of models trained on moderate-sized training sets in a robust and efficient manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations, ICLR'15",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, ICLR'15.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "GATE: A framework and graphical development environment for robust NLP tools and applications",
"authors": [
{
"first": "H",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Tablan",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL'02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H Cunningham, D Maynard, K Bontcheva, and V Tablan. 2002. GATE: A framework and graphical development environment for robust NLP tools and applications. In ACL'02.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "DP",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3Proceedings of the 3rd International Conference for Learning Representations (ICLR 2015)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "DP Kingma and J Ba. 2015. Adam: A method for stochastic optimization. In 3Proceedings of the 3rd Interna- tional Conference for Learning Representations (ICLR 2015). arXiv:1412.6980 [cs.LG].",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semi-supervised classification with graph convolutional networks",
"authors": [
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas N Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In In Proceedings of ICLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overview of the fifth Social Media Mining for Health Applications (SMM4H) Shared Tasks at COLING 2020",
"authors": [
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Anne-Lyse",
"middle": [],
"last": "Minard",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"O"
],
"last": "Connor",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications (SMM4H) Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Z. Klein, Ivan Flores, Arjun Magge, Anne-Lyse Minard, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2020. Overview of the fifth Social Media Mining for Health Applications (SMM4H) Shared Tasks at COLING 2020. In Proceedings of the Fifth Social Media Mining for Health Applications (SMM4H) Workshop & Shared Task.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Medical subject headings (MeSH)",
"authors": [
{
"first": "Carolyn",
"middle": [
"E"
],
"last": "Lipscomb",
"suffix": ""
}
],
"year": 2000,
"venue": "Bulletin of the Medical Library Association",
"volume": "88",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carolyn E Lipscomb. 2000. Medical subject headings (MeSH). Bulletin of the Medical Library Association, 88(3).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Encoding sentences with graph convolutional networks for semantic role labeling",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1506--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506-1515.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis",
"authors": [
{
"first": "D",
"middle": [
"G"
],
"last": "Maynard",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Greenwood",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DG Maynard and MA Greenwood. 2014. Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In LREC 2014.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic differentiation in PyTorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS 2017.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227-2237.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The general inquirer: A computer approach to content analysis",
"authors": [
{
"first": "Philip",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
},
{
"first": "Dexter",
"middle": [
"C"
],
"last": "Dunphy",
"suffix": ""
},
{
"first": "Marshall",
"middle": [
"S"
],
"last": "Smith",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip J Stone, Dexter C Dunphy, and Marshall S Smith. 1966. The general inquirer: A computer approach to content analysis. MIT press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Additive annotation embedding. Ma is the row in M that corresponds to annotation type a \u2208 A",
"num": null
},
"TABREF0": {
"content": "<table><tr><td>System</td><td>Layer</td><td/><td>Validation set</td><td/><td/><td/><td/><td>Test set</td></tr><tr><td/><td/><td colspan=\"7\">Class 1 F1 Class 2 F1 \u00b5P \u00b5R \u00b5F1 \u00b5P \u00b5R \u00b5F1</td></tr><tr><td>TE+Trans</td><td>1,2,4</td><td>.70</td><td colspan=\"3\">.63 .72 .60</td><td>.65</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TE+GCNN</td><td>1,3,4</td><td>.72</td><td colspan=\"3\">.64 .74 .60</td><td>.67</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TE+AE+Trans</td><td>1,2,4</td><td>.75</td><td colspan=\"3\">.65 .70 .69</td><td>.70</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">TE+AE+Trans+GCNN 1,2,3,4</td><td>.76</td><td colspan=\"3\">.68 .72 .69</td><td colspan=\"3\">.71 .71 .67</td><td>.69</td></tr><tr><td>competition mean</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td colspan=\"3\">-.62 .68</td><td>.65</td></tr></table>",
"text": "Ablations and performance. TE denotes token embedding, AE denotes annotation embeddings",
"num": null,
"type_str": "table",
"html": null
}
}
}
}