| { |
| "paper_id": "S19-1008", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:45:49.430663Z" |
| }, |
| "title": "Pre-trained Contextualized Character Embeddings Lead to Major Improvements in Time Normalization: a Detailed Analysis", |
| "authors": [ |
| { |
| "first": "Dongfang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Arizona Tucson", |
| "location": { |
| "region": "AZ" |
| } |
| }, |
| "email": "dongfangxu9@email.arizona.edu" |
| }, |
| { |
| "first": "Egoitz", |
| "middle": [], |
| "last": "Laparra", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Arizona Tucson", |
| "location": { |
| "region": "AZ" |
| } |
| }, |
| "email": "laparra@email.arizona.edu" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bethard", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Arizona Tucson", |
| "location": { |
| "region": "AZ" |
| } |
| }, |
| "email": "bethard@email.arizona.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Recent studies have shown that pre-trained contextual word embeddings, which assign the same word different vectors in different contexts, improve performance in many tasks. But while contextual embeddings can also be trained at the character level, the effectiveness of such embeddings has not been studied. We derive character-level contextual embeddings from Flair (Akbik et al., 2018), and apply them to a time normalization task, yielding major performance improvements over the previous state-of-the-art: 51% error reduction in news and 33% in clinical notes. We analyze the sources of these improvements, and find that pre-trained contextual character embeddings are more robust to term variations, infrequent terms, and cross-domain changes. We also quantify the size of context that pretrained contextual character embeddings take advantage of, and show that such embeddings capture features like part-of-speech and capitalization.", |
| "pdf_parse": { |
| "paper_id": "S19-1008", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Recent studies have shown that pre-trained contextual word embeddings, which assign the same word different vectors in different contexts, improve performance in many tasks. But while contextual embeddings can also be trained at the character level, the effectiveness of such embeddings has not been studied. We derive character-level contextual embeddings from Flair (Akbik et al., 2018), and apply them to a time normalization task, yielding major performance improvements over the previous state-of-the-art: 51% error reduction in news and 33% in clinical notes. We analyze the sources of these improvements, and find that pre-trained contextual character embeddings are more robust to term variations, infrequent terms, and cross-domain changes. We also quantify the size of context that pretrained contextual character embeddings take advantage of, and show that such embeddings capture features like part-of-speech and capitalization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Pre-trained language models (LMs) such as ELMo (Peters et al., 2018) , ULMFiT (Howard and Ruder, 2018) , OpenAI GPT (Radford et al., 2018) , Flair (Akbik et al., 2018) and Bert (Devlin et al., 2018) have shown great improvements in NLP tasks ranging from sentiment analysis to named entity recognition to question answering. These models are trained on huge collections of unlabeled data and produce contextualized word embeddings, i.e., each word receives a different vector representation in each context, rather than a single common vector representation regardless of context as in word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 68, |
| "text": "(Peters et al., 2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 78, |
| "end": 102, |
| "text": "(Howard and Ruder, 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 116, |
| "end": 138, |
| "text": "(Radford et al., 2018)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 147, |
| "end": 167, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 177, |
| "end": 198, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 595, |
| "end": 617, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 628, |
| "end": 653, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Research is ongoing to study these models and determine where their benefits are coming from (Peters et al., 2018; Radford et al., 2018; Khandelwal et al., 2018; Zhang and Bowman, 2018) . The analyses have focused on wordlevel models, yet character-level models have been shown to outperform word-level models in some NLP tasks, such as text classification (Zhang et al., 2015) , named entity recognition (Kuru et al., 2016) , and time normalization (Laparra et al., 2018a) . Thus, there is a need to study pre-trained contextualized character embeddings, to see if they also yield improvements, and if so, to analyze where those benefits are coming from.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 114, |
| "text": "(Peters et al., 2018;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 115, |
| "end": 136, |
| "text": "Radford et al., 2018;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 137, |
| "end": 161, |
| "text": "Khandelwal et al., 2018;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 162, |
| "end": 185, |
| "text": "Zhang and Bowman, 2018)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 357, |
| "end": 377, |
| "text": "(Zhang et al., 2015)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 405, |
| "end": 424, |
| "text": "(Kuru et al., 2016)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 450, |
| "end": 473, |
| "text": "(Laparra et al., 2018a)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "All of the pre-trained word-level contextual embedding models include some character or subword components in their architecture. For example, Flair is a forward-backward LM trained over characters using recurrent neural networks (RNNs), that generates pre-trained contextual word embeddings by concatenating the forward LM's hidden state for the word's last character and the backward LM's hidden state for the word's first character. Flair achieves state-of-the-art or competitive results on part-of-speech tagging and named entity tagging (Akbik et al., 2018) . Though they do not pre-train a LM, Bohnet et al. (2018) similarly apply a bidirectional long short term memory network (LSTM) layer on all characters of a sentence and generate contextual word embeddings by concatenating the forward and backward LSTM hidden states of the first and last character in each word. Together with other techniques, they achieve state-of-the-art performance on part-of-speech and morphological tagging. However, both Akbik et al. (2018) and Bohnet et al. (2018) discard all other contextual character embeddings, and no analyses of the models are performed at the character-level.", |
| "cite_spans": [ |
| { |
| "start": 542, |
| "end": 562, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 600, |
| "end": 620, |
| "text": "Bohnet et al. (2018)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1009, |
| "end": 1028, |
| "text": "Akbik et al. (2018)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1033, |
| "end": 1053, |
| "text": "Bohnet et al. (2018)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the current paper, we derive pre-trained contextual character embeddings from Flair's forwardbackward LM trained on a 1-billion word corpus of English (Chelba et al., 2014) , and observe if these embeddings yield the same large improvements for character-level tasks as yielded by pre-trained contextual word embeddings for word-level tasks. We aim to analyze where improvements come from (e.g., term variations, low frequency words) and what they depend on (e.g., embedding size, context size). We focus on the task of parsing time normalizations (Laparra et al., 2018b) , where large gains of character-level models over word-level models have been observed (Laparra et al., 2018a) . This task involves finding and composing pieces of a time expression to infer time intervals, so for example, the expression 3 days ago could be normalized to the interval [2019-03-01, 2019-03-02).", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 175, |
| "text": "(Chelba et al., 2014)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 551, |
| "end": 574, |
| "text": "(Laparra et al., 2018b)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 663, |
| "end": 686, |
| "text": "(Laparra et al., 2018a)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We first take a state-of-the-art neural network for parsing time normalizations (Laparra et al., 2018a) and replace its randomly initialized character embeddings with pre-trained contextual character embeddings. After showing that this yields major performance improvements, we analyze the improvements to understand why pre-trained contextual character embeddings are so useful. Our contributions are:", |
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 103, |
| "text": "(Laparra et al., 2018a)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We derive pre-trained contextual character embeddings from Flair (Akbik et al., 2018) , apply them to a state-of-the art time normalizer (Laparra et al., 2018a) , and obtain major performance improvements over the previous state-of-the-art: 51% error reduction in news and 33% error reduction in clinical notes. \u2022 We demonstrate that pre-trained contextual character embeddings are more robust to term variations, infrequent terms, and crossdomain changes. \u2022 We quantify the amount of context leveraged by pre-trained contextual character embeddings. \u2022 We show that pre-trained contextual character embeddings remove the need for features like part-of-speech and capitalization.", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 87, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 139, |
| "end": 162, |
| "text": "(Laparra et al., 2018a)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The parsing time normalizations task is based on the Semantically Compositional Annotation of Time Expressions (SCATE) schema (Bethard and Parker, 2016) , in which times are annotated as compositional time entities. Laparra et al. (2018a) decomposes the Parsing Time Normalizations task into two subtasks: a) time entity identification using a character-level sequence tagger which detects the spans of characters that belong to each time expression and labels them with their corresponding time entity; and b) time entity composition using a simple set of rules that links relevant entities together while respecting the entity type constraints imposed by the SCATE schema. These two tasks are run sequentially using the predicted output of the sequence tagger as input to the rule-based time entity composition system. In this paper, We focus on the character-level time entity identifier that is the foundation of Laparra et al. (2018a)'s model. The sequence tagger is a multi-output RNN with three different input features, shown in Figure 1 . Features are mapped through an embedding layer, then fed into stacked bidirectional Gated Recurrent Units (bi-GRUs), and followed by a softmax layer. There are three types of outputs per Laparra et al. (2018a)'s encoding of the SCATE schema, so there is a separate stack of bi-GRUs and a softmax for each output type. We keep the original neural architecture and parameter settings in Laparra et al. (2018a) , and experiment with the following embedding layers: Rand(128): the original setting of Laparra et al. (2018a) ning Flair forward-backward character-level LM Flair's forward and backward character-level language models over the text, and concatenating the hidden states from forward and backward character-level LMs for each character . We evaluate in the clinical and news domains, the former being more than 9 times larger and the latter having a more diverse set of labels. 
Three different evaluation metrics are used for the parsing time normalization task: identification of time entities, which evaluates the predicted span (offsets) and the SCATE type of each entity; parsing of time entities, which evaluates the span, the SCATE type, and the properties of each time entity; and interval extraction, which interprets parsed annotations as intervals along the timeline and measures the fraction of correctly parsed intervals. The SemEval task description paper (Laparra et al., 2018b) has more details on dataset statistics and evaluation metrics. Table 1 shows that the model using pre-trained contextual character embeddings, Cont(4096), outperforms the model of Laparra et al. (2018a) on all three metrics: identification of time entities, parsing, and interval extraction. For identification, our primary focus since we modify only the identification portion of Laparra et al. (2018a), Cont(4096) reduces error by 51% (59.4 to 80.3 F1) on news, and by 33% (92.8 to 95.2 F1) on clinical notes. For the following experiments, we use only the identification metric to evaluate performance.

4 Where the improvements come from

4.1 Larger character embeddings

Table 2 compares different embedding sizes. Moving from random 128-dimensional to random 4096-dimensional embeddings improves the model: Rand(4096) statistically outperforms Rand(128) 2 on news dev (p = 0.0001), news test (p = 0.0291), and clinical test (p = 0.0301), though it is not statistically different on clinical dev (p = 0.2524).", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 152, |
| "text": "(Bethard and Parker, 2016)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 216, |
| "end": 238, |
| "text": "Laparra et al. (2018a)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1433, |
| "end": 1455, |
| "text": "Laparra et al. (2018a)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1545, |
| "end": 1567, |
| "text": "Laparra et al. (2018a)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 2421, |
| "end": 2444, |
| "text": "(Laparra et al., 2018b)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 2830, |
| "end": 2852, |
| "text": "Laparra et al. (2018a)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1037, |
| "end": 1045, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 2508, |
| "end": 2515, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 3131, |
| "end": 3138, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Framework", |
| "sec_num": "2" |
| }, |
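| { |
| "text": "The Cont(4096) derivation described in the framework (running a forward and a backward character-level LM over the text, then concatenating their hidden states at each character) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the tiny 2-dimensional hidden-state vectors below are hypothetical stand-ins for the states a pre-trained LM such as Flair would produce.

```python
def char_contextual_embeddings(fwd, bwd):
    # One contextual embedding per character: concatenation of the
    # forward LM state and the backward LM state at that position.
    assert len(fwd) == len(bwd)
    return [f + b for f, b in zip(fwd, bwd)]

def flair_word_embedding(fwd, bwd, start, end):
    # Flair-style word embedding for the character span [start, end):
    # forward state at the word's last character concatenated with the
    # backward state at its first character.
    return fwd[end - 1] + bwd[start]

# Toy 2-dimensional hidden states for the 5-character string '3 day'
fwd = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.5], [0.4, 0.4], [0.2, 0.9]]
bwd = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4], [0.5, 0.5]]

chars = char_contextual_embeddings(fwd, bwd)   # 5 embeddings of size 4
word = flair_word_embedding(fwd, bwd, 2, 5)    # embedding for 'day'
```

The character-level tagger uses the per-character embeddings directly, while word-level Flair would keep only the per-word concatenations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Framework", |
| "sec_num": "2" |
| }, |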
| { |
| "text": "Pre-trained contextual embeddings provide additional benefits: Cont(4096) significantly outperforms Rand(4096) on all datasets (p < 0.001 in all cases). We conclude that pre-trained contextual character embeddings provide more than just greater model capacity. Table 3 shows how pre-trained contextual character embeddings improve performance on both term variations and low frequency words. We define term variations as time entities that appear in the training data in the following patterns: both upper-case and lower-case, e.g., DAY, Day, and day; abbreviation with and without punctuation, e.g., AM and A.M.; or same stem, e.g., Month and Months, previously and previous. In the dev and test sets, 30. over Rand(4096) on time entities with (+var) and without (-var) term variations. Cont(4096) is always better than Rand(4096) so all differences are positive, but the improvements in +var are much larger than those of -var in the news domain (+8.4 vs. +1.6 and +15.0 vs. +8.7). In the clinical domain, where 9 times more training data is available, both +var and -var yield similar improvements. We conclude that pre-trained contextual character embeddings are mostly helpful with term variations in low data scenarios. We define infrequent terms as time entities that occur in the training set 10 or fewer times. In the dev and test sets, 73.9-86.9% of terms are infrequent, with about one third of infrequent terms being numerical 3 . The bottom two rows of table 3 show the improvements in F 1 of the Cont(4096) over Rand(4096) on frequent (>10) and infrequent (\u226410) terms. Cont(4096) is always better than Rand(4096), and in both domains the improvements on low frequency terms are always greater than those on high frequency terms (+8.1 vs. +2.4 in news dev, +17.6 vs. +5.0 in news test, etc.). We conclude that pre-trained contextual character embeddings improve the representations of low frequency words in both low and high data settings.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 261, |
| "end": 268, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3" |
| }, |
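| { |
| "text": "The variation patterns above can be grouped with a simple normalization key. A minimal sketch (not the authors' code): it covers only the case and punctuation patterns named in the paper; stem variations (e.g., Month vs. Months) would need a stemmer and are omitted here.

```python
def variation_key(term):
    # Map case variants (DAY, Day, day) and punctuation variants
    # (AM, A.M.) of a term to one canonical key.
    return term.lower().replace('.', '')

# All case variants collapse to the same key
variants = {variation_key(t) for t in ['DAY', 'Day', 'day']}
```

Grouping gold terms by such a key is one way to decide which test-time entities count as +var when computing the Table 3 breakdown.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3" |
| }, |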
| { |
| "text": "To illustrate the ability of pre-trained contextual character embeddings to handle unseen data, we train the models in one domain and evaluate in the other, as shown in Table 4 . We find that Rand(128) and Rand(4096) achieve similar cross-domain performance, e.g., Rand(128) achieves 63.4% of F 1 on news dev and Rand(4096) achieves 62.6% F 1 . But Cont(4096) achieves much better cross-domain performance than Rand(128) or Rand(4096): 78.5% vs. 65.5% or 66.9% F 1 on news test, 59.5% vs. 46.3% or 44.3% on clinical test, etc. All these improvements are significant (p < 0.001). We conclude that pre-trained contextual character embeddings generalize better across domains.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 169, |
| "end": 176, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Robustness to domain differences", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Inspired by Khandelwal et al. (2018) 's analysis of the effective context size of a word-based language model, we present an ablation study to measure performance when contextual information is removed. Specifically, when evaluating models, we retain only the characters in a small window around each time entity in the dev and test sets, and replace all other characters with padding characters.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 36, |
| "text": "Khandelwal et al. (2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Greater reliance on nearby context", |
| "sec_num": "4.4" |
| }, |
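| { |
| "text": "A minimal sketch of this windowing procedure (an illustrative reconstruction, not the authors' code): every character outside a fixed-size window around the entity span is replaced before evaluation. The '#' symbol here is an arbitrary stand-in for the model's padding character, and the entity span is chosen for illustration.

```python
def mask_context(text, start, end, window, pad='#'):
    # Keep only the characters within `window` positions of the
    # entity span [start, end); replace everything else with `pad`.
    lo = max(0, start - window)
    hi = min(len(text), end + window)
    return ''.join(c if lo <= i < hi else pad for i, c in enumerate(text))

# Entity 'month' at [9, 14) with 3 characters of context on each side
masked = mask_context('barely a month after Qantas', 9, 14, 3)
```

Sweeping `window` from 0 upward and re-running evaluation produces curves like those in Figure 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Greater reliance on nearby context", |
| "sec_num": "4.4" |
| }, |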
| { |
| "text": "Figures 2a and 2b evaluate the Cont(4096), Rand(4096) and Rand(128) models across different context window sizes on the news dev and test set, respectively. Rand(128) performs similarly across all context sizes, suggesting that it makes little use of context information. Both Rand 4096and Cont(4096) depend heavily of context: without any context information (context size 0), they perform worse than Rand(128). Cont(4096) is sensitive to the nearby context, with a \u223c10 point gain on news dev and \u223c15 point gain on news test from just the first 10 characters of context, putting it easily above Rand(128). Rand(4096) doesn't exceed the performance of Rand(128) until at least 50 characters of context. Figures 2c and 2d shows similar trends in the clinical domain, except that the Rand(128) model now shows a small dependence on context, with a \u223c5 point drop on clinical dev and a \u223c3 drop on clinical test in the no-context setting. Cont(4096) again makes large improvements in just the first 10 characters, and Rand(4096) now takes close to 100 characters of context to reach the performance of Rand(128). We conclude that pre-trained contextual character embeddings make better use of local context, especially within the first 10 characters.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 703, |
| "end": 720, |
| "text": "Figures 2c and 2d", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Greater reliance on nearby context", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We perform a feature ablation to see if pre-trained contextual character embeddings capture basic syntax (e.g., part-of-speech) like pre-trained contextual word embeddings do (Peters et al., 2018; Akbik et al., 2018) . Table 5 shows that removing both part-of-speech and unicode category features from Cont(4096) does not significantly change performance: news dev (p = 0.8813), news test (p = 0.1672), clinical dev (p = 0.5367), clinical test (p = 0.8537). But ablating part-of-speech tags and unicode character categories does decrease per- ", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 196, |
| "text": "(Peters et al., 2018;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 197, |
| "end": 216, |
| "text": "Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 219, |
| "end": 226, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Encoding word categories", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "Dev Test Dev Test Rand(128) C 73.6 56.1 91.9 92.1 Rand(128) C U P 76.5 59.4 92.9 92.8 Rand(4096) C 80.5 62.4 91.7 92.2 Rand(4096) C U P 82.7 64.8 92.6 93.2 Cont(4096) C 87.9 78.1 94.7 95.5 Cont(4096) C U P 87.4 80.3 94.7 95.2 Table 5 : Effect of features on performance: Performance (F 1 ) with different feature sets, including characters (C), part-of-speech tags (P), and unicode character categories (U).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 226, |
| "end": 233, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "News Clinical Set", |
| "sec_num": null |
| }, |
| { |
| "text": "formance for both Rand(128) and Rand(4096) in all cases. For example, Rand(4096) with all features achieves 82.7 F 1 on news dev, significantly better than the 80.5 F 1 of using only characters (p = 0.0467). We conclude that pre-trained contextual character embeddings encode a variety of word category information such as part-of-speech, capitalization, and punctuation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "News Clinical Set", |
| "sec_num": null |
| }, |
| { |
| "text": "We derive pre-trained character-level contextual embeddings from Flair (Akbik et al., 2018) , a word-level embedding model, inject these into a state-ofthe-art time normalization system, and achieve major performance improvements: 51% error reduction in news and 33% in clinical notes. Our detailed analysis concludes that pre-trained contextual character embeddings are more robust to term variations, infrequent terms, and cross-domain changes; that they benefit most from the first 10 characters of context; and that they encode part-of-speech, capitalization, and punctuation information.", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 91, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "6 Acknowledgements", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We upgraded Keras from 1.2 to 2.1 and fixed a code bug that allowed predictions to be made on padding tokens.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We used a paired bootstrap resampling significance test.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Numbers are common in time expressions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for helpful comments on an earlier draft of this paper. This work was supported by National Institutes of Health grants R01GM114355 from the National Institute of General Medical Sciences (NIGMS) and R01LM012918 from the National Library of Medicine (NLM). The computations were done in systems supported by the National Science Foundation under Grant No. 1228509. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or National Science Foundation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| }, |
| { |
| "text": "We analyzed a few examples where Cont(4096) makes correct predictions, but Rand(4096) does not.Robustness to variants \". . . with year-earlier profit of millions. . . \" In this sentence, the Cont(4096) model labeled earlier correctly, while the Rand(4096) model missed it. In the news training set, earlier occurs a few times, but none of them have \"-\" nearby.Robustness to frequency \". . . in the first days after President. . . \" In this sentence, the Cont(4096) model labeled first correctly, while the Rand(4096) model labeled it incorrectly. In the news training set, first only occurred once when followed by another time entity, but there were several similar sentences for second and third in the training set.Robustness to word order \". . . until twenty years after the first astronauts. . . \" \". . . comes barely a month after Qantas. . . \" \". . . Retaliating 13 days after the deadly. . . \" In each of the sentences above, the Cont(4096) model labeled after correctly, while Rand(4096) labeled it incorrectly. In the training set, there were a few examples where after occurred near a time entity, but always before the time entity (e.g., after ten years, after 22 months, after three days, after a 16-hour flight) rather than after it as in the examples above. Cont(4096) may have learned a better representation for after that allows it to be less dependent on exact word order.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.1 Examples of the improvement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Contextual string embeddings for sequence labeling", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Akbik", |
| "suffix": "" |
| }, |
| { |
| "first": "Duncan", |
| "middle": [], |
| "last": "Blythe", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Vollgraf", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1638--1649", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A semantically compositional annotation scheme for time normalization", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Parker", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven Bethard and Jonathan Parker. 2016. A semanti- cally compositional annotation scheme for time nor- malization. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC 2016), Paris, France. European Lan- guage Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Morphosyntactic tagging with a meta-bilstm model over context sensitive token encodings", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Gon\u00e7alo", |
| "middle": [], |
| "last": "Sim\u00f5es", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Andor", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Pitler", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Maynez", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "2642--2652", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd Bohnet, Ryan McDonald, Gon\u00e7alo Sim\u00f5es, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. Morphosyntactic tagging with a meta-bilstm model over context sensitive token encodings. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2642-2652.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "One billion word benchmark for measuring progress in statistical language modeling", |
| "authors": [ |
| { |
| "first": "Ciprian", |
| "middle": [], |
| "last": "Chelba", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Fifteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2014. One billion word benchmark for mea- suring progress in statistical language modeling. In Fifteenth Annual Conference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Universal language model fine-tuning for text classification", |
| "authors": [ |
| { |
| "first": "Jeremy", |
| "middle": [], |
| "last": "Howard", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ruder", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "328--339", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 328-339.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Sharp nearby, fuzzy far away: How neural language models use context", |
| "authors": [ |
| { |
| "first": "Urvashi", |
| "middle": [], |
| "last": "Khandelwal", |
| "suffix": "" |
| }, |
| { |
| "first": "He", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "284--294", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Urvashi Khandelwal, He He, Peng Qi, and Dan Ju- rafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceed- ings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 284-294. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Charner: Character-level named entity recognition", |
| "authors": [ |
| { |
| "first": "Onur", |
| "middle": [], |
| "last": "Kuru", |
| "suffix": "" |
| }, |
| { |
| "first": "Ozan Arkan", |
| "middle": [], |
| "last": "Can", |
| "suffix": "" |
| }, |
| { |
| "first": "Deniz", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "911--921", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Onur Kuru, Ozan Arkan Can, and Deniz Yuret. 2016. Charner: Character-level named entity recognition. In Proceedings of COLING 2016, the 26th Inter- national Conference on Computational Linguistics: Technical Papers, pages 911-921.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "From characters to time intervals: New paradigms for evaluation and neural parsing of time normalizations", |
| "authors": [ |
| { |
| "first": "Egoitz", |
| "middle": [], |
| "last": "Laparra", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongfang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bethard", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Transactions of the Association of Computational Linguistics", |
| "volume": "6", |
| "issue": "", |
| "pages": "343--356", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Egoitz Laparra, Dongfang Xu, and Steven Bethard. 2018a. From characters to time intervals: New paradigms for evaluation and neural parsing of time normalizations. Transactions of the Association of Computational Linguistics, 6:343-356.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Semeval 2018 task 6: Parsing time normalizations", |
| "authors": [ |
| { |
| "first": "Egoitz", |
| "middle": [], |
| "last": "Laparra", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongfang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [], |
| "last": "Elsayed", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "88--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Egoitz Laparra, Dongfang Xu, Ahmed Elsayed, Steven Bethard, and Martha Palmer. 2018b. Semeval 2018 task 6: Parsing time normalizations. In Proceed- ings of The 12th International Workshop on Seman- tic Evaluation, pages 88-96.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "2227--2237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227-2237.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "When and why are pre-trained word embeddings useful for neural machine translation?", |
| "authors": [ |
| { |
| "first": "Ye", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Devendra", |
| "middle": [], |
| "last": "Sachan", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Felix", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "2", |
| "issue": "", |
| "pages": "529--535", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Pad- manabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 2 (Short Pa- pers), volume 2, pages 529-535.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Improving language understanding by generative pre-training", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Narasimhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Salimans", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training. URL https://s3- us-west-2. amazonaws. com/openai-assets/research- covers/language-unsupervised/language under- standing paper. pdf.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis", |
| "authors": [ |
| { |
| "first": "Kelly", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "359--361", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpret- ing Neural Networks for NLP, pages 359-361.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Character-level convolutional networks for text classification", |
| "authors": [ |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Junbo", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "28", |
| "issue": "", |
| "pages": "649--657", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 649-657. Curran Associates, Inc.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Architecture ofLaparra et al. (2018a)'s time identification system. The input is the 4th of May (truncated for space). 4th is a DAY-OF-MONTH, with an implicit LAST over the same span. At the feature layer, 4 is a digit (Nd), t and h are lowercase letters (Ll), and 4th has the cardinal number (CD) part-of-speech tag." |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Effect of the context information on the performances for Cont(4096), Rand(4096) and Rand(128) on the dev and test sets. The dashed lines are the performances of models using the original context setting." |
| }, |
| "TABREF2": { |
| "text": "Performance (F 1 ) of time entity identification.", |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"2\">News</td><td>Clinical</td></tr><tr><td/><td>Dev</td><td>Test Dev Test</td></tr><tr><td>Variation</td><td colspan=\"2\">+var +8.4 +15.0 +1.2 +1.3 -var +1.6 +8.7 +1.2 +1.4</td></tr><tr><td>Frequency</td><td colspan=\"2\">\u226410 +8.1 +17.6 +2.0 +4.2 >10 +2.4 +5.0 +1.1 +1.1</td></tr></table>", |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null |
| }, |
| "TABREF5": { |
| "text": "Effect of domain change on performance: (F 1 ) on News and Clinical datasets.", |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null |
| } |
| } |
| } |
| } |