| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:27:01.589353Z" |
| }, |
| "title": "Less is Better: A cognitively inspired unsupervised model for language segmentation", |
| "authors": [ |
| { |
| "first": "Jinbiao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Radboud University", |
| "location": {} |
| }, |
| "email": "jinbiao.yang@mpi.nl" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [ |
| "L" |
| ], |
| "last": "Frank", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Radboud University", |
| "location": {} |
| }, |
| "email": "s.frank@let.ru.nl" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Language users process utterances by segmenting them into many cognitive units, which vary in their sizes and linguistic levels. Although we can do such unitization/segmentation easily, its cognitive mechanism is still not clear. This paper proposes an unsupervised model, Less-is-Better (LiB), to simulate the human cognitive process with respect to language unitization/segmentation. LiB follows the principle of least effort and aims to build a lexicon which minimizes the number of unit tokens (alleviating the effort of analysis) and the number of unit types (alleviating the effort of storage) at the same time on any given corpus. LiB's workflow is inspired by empirical cognitive phenomena. The design makes the mechanism of LiB cognitively plausible and the computational requirement lightweight. The lexicon generated by LiB performs the best among different types of lexicons (e.g. ground-truth words) both from an information-theoretical view and a cognitive view, which suggests that the LiB lexicon may be a plausible proxy of the mental lexicon.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Language users process utterances by segmenting them into many cognitive units, which vary in their sizes and linguistic levels. Although we can do such unitization/segmentation easily, its cognitive mechanism is still not clear. This paper proposes an unsupervised model, Less-is-Better (LiB), to simulate the human cognitive process with respect to language unitization/segmentation. LiB follows the principle of least effort and aims to build a lexicon which minimizes the number of unit tokens (alleviating the effort of analysis) and the number of unit types (alleviating the effort of storage) at the same time on any given corpus. LiB's workflow is inspired by empirical cognitive phenomena. The design makes the mechanism of LiB cognitively plausible and the computational requirement lightweight. The lexicon generated by LiB performs the best among different types of lexicons (e.g. ground-truth words) both from an information-theoretical view and a cognitive view, which suggests that the LiB lexicon may be a plausible proxy of the mental lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "During language comprehension, we cannot always process an utterance instantly. Instead, we need to segment all but the shortest pieces of text or speech into smaller chunks. Since these chunks are likely the cognitive processing units for language understanding, we call them cognitive units in this paper. A chunk may be any string of letters, characters, or phonemes that occurs in the language, but which chunks serve as the cognitive units? Traditional studies (Chomsky, 1957; Taft, 2013 , for example) often use words as the units in sentence analysis. But speech, as well as some writing systems such as Chinese, lack a clear word boundary. Even for written languages which use spaces as word boundaries, psychological evidence indicates that the morphemes, which are sub-word units, in infrequent or opaque compound words take priority over the whole word (Fiorentino et al., 2014; MacGregor and Shtyrov, 2013) ; at the same time, some supra-word units such as frequent phrases and idioms are also stored in our long-term mental lexicon (Arnon and Snider, 2010; Bannard and Matthews, 2008; Jackendoff, 2002) . The evidence suggests that the cognitive units can be of different sizes; they can be words, or smaller than words, or multi-word expressions.", |
| "cite_spans": [ |
| { |
| "start": 466, |
| "end": 481, |
| "text": "(Chomsky, 1957;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 482, |
| "end": 492, |
| "text": "Taft, 2013", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 864, |
| "end": 889, |
| "text": "(Fiorentino et al., 2014;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 890, |
| "end": 918, |
| "text": "MacGregor and Shtyrov, 2013)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1045, |
| "end": 1069, |
| "text": "(Arnon and Snider, 2010;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1070, |
| "end": 1097, |
| "text": "Bannard and Matthews, 2008;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1098, |
| "end": 1115, |
| "text": "Jackendoff, 2002)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Despite the flexible size of the cognitive units, and the lack of overt segmentation clues, infants are able to implicitly learn the units in their caregivers' speech, and then generate their own utterances. Arguably, children's language intelligence allows them to build their own lexicons from zero knowledge about the basic (cognitive) units in the particular language the child is learning, and then use the lexicon to segment language sequences. Can we mimic this ability of a human language learner in a computer model? This question is often phrased as the task of unsupervised segmentation. Several types of computational models or NLP algorithms have been proposed for segmentation, taking different approaches:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Model the lexicon: A straightforward basis for segmentation is to build a lexicon. One of the lexicon-building algorithms, Byte pair encoding (BPE) (Sennrich et al., 2016) , is popular for NLP preprocessing. It iteratively searches for the most common n-gram pairs and adds them into the n-gram lexicon. Some other models such as the Chunk-Based Learner (McCauley and Christiansen, 2019) and PARSER (Perruchet and Vinter, 1998) are also based on the local statistics of tokens (e.g., token frequency, mutual information, or transitional probability).", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 173, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 356, |
| "end": 389, |
| "text": "(McCauley and Christiansen, 2019)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 401, |
| "end": 429, |
| "text": "(Perruchet and Vinter, 1998)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Model the grammar: Some studies attempted to analyze the grammar patterns of sentences and then parse/segment the sentences based on these patterns. To find the optimal grammar, de Marcken (1996) used Minimum Description Length, and Johnson and Goldwater (2009) used the Hierarchical Dirichlet Process.", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 234, |
| "text": "Marcken (1996) used Minimum Description Length, and", |
| "ref_id": null |
| }, |
| { |
| "start": 235, |
| "end": 246, |
| "text": "Johnson and", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 247, |
| "end": 263, |
| "text": "Goldwater (2009)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Model the sequences: Recurrent neural networks and their variations are able to learn the sequential patterns in language and to perform text segmentation (Chung et al., 2017; Kawakami et al., 2019; Sun and Deng, 2018; Zhikov et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 177, |
| "text": "(Chung et al., 2017;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 178, |
| "end": 200, |
| "text": "Kawakami et al., 2019;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 201, |
| "end": 220, |
| "text": "Sun and Deng, 2018;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 221, |
| "end": 241, |
| "text": "Zhikov et al., 2013)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In general, lexicon models capture only the local statistics of the tokens, so they tend to be short-sighted at the global level (e.g., they miss long-distance dependencies). The other two types of models, in contrast, learn how the tokens co-occur globally. Yet, the ways grammar models and sequence models learn this global information make them more complicated and computation-intensive than the lexicon models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we propose a model that builds a lexicon, but does so by using both local and global information. Our model is not only a computational model but also a cognitive model: it is inspired by cognitive phenomena, and it needs only basic and light-weight computations which makes it cognitively more plausible than the grammar-and sequence-learning models mentioned above. We show that our model can effectively detect the cognitive units in language with an efficient procedure. We also show that our model can detect linguistically meaningful units. We further evaluate our model on traditional word segmentation tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2 The Less-is-better Model", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We want our system to mimic human cognitive processes of language unitization/segmentation by simulating not only the behavioral output, but also the cognitive mechanism. We designed such a computational model by emulating three cognitive phenomena: the principle of least effort, larger-first processing, and passive and active forgetting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognitive principles", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The principle of least effort: The essence of the model is a simple and natural cognitive principle: the principle of least effort (Zipf, 1949) , which says that human cognition and behavior are economical; they prefer to spend the least effort or resources to obtain the largest reward. Since a language sequence can be segmented into different sequences of language chunks, we assume the cognitive units are the language chunks in the sequence which follow the principle of least effort.", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 143, |
| "text": "(Zipf, 1949)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognitive principles", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Larger-first processing: As we mentioned, any language chunk may be the cognitive unit, short or long. A broadly known finding is that global/larger processing has priority over local/smaller processing for visual scene recognition; an effect named \"global precedence\" (Navon, 1977) . This follows from the principle of least effort: the larger the units we process, the fewer processing steps we need to take. For visual word processing, the word superiority effect (Reicher, 1969) shows the precedence of words over recognizing letters. Recent work (Snell and Grainger, 2017; Yang et al., 2020) extends global precedence to the level beyond words, and also shows that we do not process only the larger units: smaller units also have a chance to become the processing units when processing larger units does not aid comprehension. In other words, cognitive units may be of any size, but the larger have priority.", |
| "cite_spans": [ |
| { |
| "start": 269, |
| "end": 282, |
| "text": "(Navon, 1977)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 467, |
| "end": 482, |
| "text": "(Reicher, 1969)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 551, |
| "end": 577, |
| "text": "(Snell and Grainger, 2017;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 578, |
| "end": 596, |
| "text": "Yang et al., 2020)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognitive principles", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Passive and active forgetting: To mimic human cognition, the model should have a flexible memory to store and update information. Forgetting is critical to prevent the accumulation of an extremely large number of memory engrams. It has been commonly held that forgetting is merely the passive decay of the memory engram over time, but recent studies put forward that forgetting can also be an active process (Davis and Zhong, 2017; Gravitz, 2019) . Passive forgetting by decay can clean up the engrams that are no longer used in our brains. However, our brains may sometimes need to suppress counter-productive engrams immediately. Active forgetting may thus be called upon to eliminate the unwanted engram's memory traces, which enhances the memory management system (Davis and Zhong, 2017; Oehrn et al., 2018).", |
| "cite_spans": [ |
| { |
| "start": 432, |
| "end": 446, |
| "text": "Gravitz, 2019)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cognitive principles", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We assume the cognitive units are the chunks in the language sequence which follow the principle of least effort (Section 2.1). In other words, the less information we need to encode the language material, the better cognitive units we have. This less-is-better assumption grounds our model, so we named it Less-is-Better, or LiB for short. The LiB model can segment any string S into a sequence of chunks (c 1 , ..., c N ) based on the lexicon L. The chunk types in L are ordered based on their importance inferred from the segmentation. The lexicon quality and the segmentation result mutually affect each other: LiB learns from its own segmentation results and updates L accordingly, then improves its next segmentation ( Figure 1 ). The bootstrap procedure makes the model unsupervised. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 725, |
| "end": 733, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "General idea", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Larger-first selection: An S can be segmented in different ways. For example, if both \"going\" and \"goingto\" are in L, and the given S is \"goingtorain\", then the first chunk token can be \"going\" or \"goingto\". The Larger-first principle (Section 2.1) dictates that LiB takes the largest substring of S that matches a chunk type in L (in the example case, it is \"goingto\"), i.e. greedy matching, and selects it as a chunk token (segment). If there is no chunk type in L that matches the current S, the first symbol s becomes the selected chunk token.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "Chunk evaluation: In most cases, selecting larger chunk tokens will reduce the number of tokens N in S, but in some cases it will not. Let us continue the example we gave: If \"goingtor\", \"a\", \"in\", and \"rain\" are also in L, the largest chunk token becomes \"goingtor\", resulting in the segmentation \"goingtor/a/in\". If \"goingto\" had been selected, this would result in \"goingto/rain\". Hence, selecting the largest chunk type resulted in a larger N. The average chunk token sizes of the two segmentations are 3.7 letters for \"goingtor/a/in\" and 5.5 letters for \"goingto/rain\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "In order to test whether the selected chunk type c reduces N , LiB compares the proposed segmentation to the segmentation that results if c is not in L, i.e., if the second largest chunk type in L is selected instead of c. In case L cannot provide a second largest chunk token, there is no evaluation and c is selected directly. Otherwise, c is evaluated as \"Good\" if it results in fewer chunk tokens or in the same number of tokens but with lower total ordinal numbers (i.e., chunks that are higher up in the lexicon):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "\\mathrm{segment}(S, L): S \\to (c_1, c_2, \\ldots, c_N), \\qquad \\mathrm{segment}(S, L - c): S \\to (c'_1, c'_2, \\ldots, c'_{N'}), \\qquad \\mathrm{evaluate}(c) = \\begin{cases} \\text{Good} & \\text{if } N < N' \\\\ \\text{Bad} & \\text{if } N > N' \\\\ \\text{Good} & \\text{if } N = N' \\text{ and } \\sum_{i=1}^{N} \\Theta(c_i) \\le \\sum_{i=1}^{N'} \\Theta(c'_i) \\\\ \\text{Bad} & \\text{if } N = N' \\text{ and } \\sum_{i=1}^{N} \\Theta(c_i) > \\sum_{i=1}^{N'} \\Theta(c'_i) \\end{cases}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "If evaluate(c) is Good, c is selected; otherwise, the second largest chunk token is selected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "Memorizing: LiB learns new chunks from the segmentation results. There are two types of new chunks in the results: unknown symbols s \u2209 L and concatenations (c_i, c_{i+1}) of known chunks (with c_i \u2208 L and c_{i+1} \u2208 L) that occur consecutively in S. L starts empty and first learns the single-symbol chunks; the smallest chunks then combine into larger chunks, and those larger chunks combine into even larger ones. Thus, L can contain chunks of different sizes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon update", |
| "sec_num": "2.3.2" |
| }, |
| { |
| "text": "The number of all (c_i, c_{i+1}) in the training corpus can be enormous, and most of them are infrequent chunks. In order to reduce the lexicon size |L|, LiB will memorize all s, but not all (c_i, c_{i+1}). To recognize the frequent chunks, a strategy is to count all chunks' occurrences and delete the infrequent ones (Perruchet and Vinter, 1998) . However, this strategy requires storing all chunks from the beginning, which is memory-inefficient for both a brain and a computer. Thus, LiB adopts a sampling strategy: the model samples from all possible (c_i, c_{i+1}) tokens in the current S and memorizes only the tokens that were sampled at least twice. The probability of sampling a chunk pair is the sampling probability \u03b1. The sampling strategy is implicitly sensitive to the chunk token frequency in the text: even without explicit counting, higher-frequency chunks have a higher probability of being memorized. The at-least-twice strategy is not cognitively inspired but heuristic; it helps to prevent the memorization of many arbitrary chunks.", |
| "cite_spans": [ |
| { |
| "start": 319, |
| "end": 347, |
| "text": "(Perruchet and Vinter, 1998)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon update", |
| "sec_num": "2.3.2" |
| }, |
| { |
| "text": "Re-ranking and active forgetting: To avoid storing the frequencies of all possible chunk types, and to be more efficient, LiB bypasses explicit frequency counting of chunk types. Instead, LiB encodes each type's importance by its ordinal \u0398(c) in L: the lower the ordinal, the more important the type. The importance reflects not only the frequency but also the principle of least effort (a preference for fewer tokens and fewer types). In general, newly memorized chunk types are less frequent than known chunk types, so new chunk types are appended to the tail of L. The ordinals of known chunk types also need to be adjusted as new training text data comes in. The chunk evaluation we described in Section 2.3.1 serves not only segmentation but also importance re-ranking. The \"good\" chunk types, which result in fewer chunk tokens in S, move closer to the lexicon head (i.e., get a lower ordinal); the \"bad\" chunk types, which result in more chunk tokens in S, move closer to the lexicon tail, i.e., they get a higher ordinal number. The updated \u0398(c) of a chunk type is relative to its previous ordinal \u0398'(c) in L:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon update", |
| "sec_num": "2.3.2" |
| }, |
| { |
| "text": "\\Theta(c) = \\begin{cases} \\Theta'(c)(1 - \\Delta) & \\text{if } c \\text{ is good} \\\\ \\Theta'(c)(1 + \\Delta) & \\text{if } c \\text{ is bad} \\end{cases}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon update", |
| "sec_num": "2.3.2" |
| }, |
| { |
| "text": "where 0 < \u2206 < 1 is the re-ranking rate. In case the updated \u0398(c) > |L|, c will be deleted from L.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon update", |
| "sec_num": "2.3.2" |
| }, |
| { |
| "text": "Passive forgetting: Obviously, the re-ranking also influences other chunk types whose ordinals lie between \u0398(c) and \u0398'(c). So even though the sampling strategy of the memorizer may add a few infrequent chunk types to L, the re-ranker will move them closer to the tail of L. Those chunk types, as well as the \"bad\" chunk types, are \"junk chunks\" which increase I(c). The passive forgetter removes them from L to reduce I(c).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon update", |
| "sec_num": "2.3.2" |
| }, |
| { |
| "text": "The junk chunk types tend to be at the tail of L, but the tail may also store some non-junk types. A cognitive strategy to avoid deleting them is to wait for more evidence. So instead of deleting these types immediately, LiB uses a soft-deletion strategy: after each training epoch, LiB selects the last \u03c9|L| (at least one) chunk types in L and assigns them a probation period \u03c4. Here, \u03c9 is the forgetting ratio and \u03c4 is the remaining time until deletion; it is initialized at \u03c4_0 and decreases by one after each training epoch (LiB analyzes one document D in each training epoch). Once the probation time is over, i.e., when \u03c4 = 0, the chunk is forgotten (removed from L). If a chunk type is evaluated as \"good\" during its probation period, its probation is cancelled. Chunk types that occur in fewer documents are therefore more likely to be forgotten.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon update", |
| "sec_num": "2.3.2" |
| }, |
| { |
| "text": "We trained the LiB model on both English and Chinese materials ( Table 1) . The English material is BR-phono, which is a branch of the Brent corpus (Bernstein-Ratner, 1987), containing phonetic transcriptions of utterances directed at children. We used it for testing segmentation of spoken language. LiB accepts the document as an input batch in each training epoch but the utterances in the BR-phono corpus have no document boundaries. We randomly sampled 200 utterances (without replacement) from BR-phono to form one document and repeated this 400 times to create 400 documents for model training. The Chinese materials are taken from Chinese Treebank 8.0 (CTB8) (Xue et al., 2013) , which is a hybrid-domain corpus (news reports, government documents, magazine articles, conversations, web discussions, and weblogs). As preprocessing, we replaced all the Roman letters and Arabic numbers with [X] , and regarded all punctuation as sequence boundaries.", |
| "cite_spans": [ |
| { |
| "start": 667, |
| "end": 685, |
| "text": "(Xue et al., 2013)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 898, |
| "end": 901, |
| "text": "[X]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 73, |
| "text": "Table 1)", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Training", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In order to examine the unsupervised performance of LiB, all spaces in the corpora were removed before training. We trained LiB on BR-phono and on CTB8 separately. The parameter settings are shown in Appendix A. The example segmentations with increasing number of training epochs are shown in Appendix B. The related code and preprocessed corpora are available online 1 . Table 1 reports the corpus statistics for BR-phono, CTB8, MSR, and PKU.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 372, |
| "end": 379, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Training", |
| "sec_num": "3" |
| }, |
| { |
| "text": "4 Model Evaluation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Training", |
| "sec_num": "3" |
| }, |
| { |
| "text": "After training, we evaluated the chunk units in the training corpora from two information-theoretical views that bear a relation to cognitive processing: description length and language model surprisal. We also examined the performance of LiB on word segmentation tasks. However, since LiB can learn new chunks from the concatenation of known chunks, the learned chunks are not only words, but also possible multi-word expressions. For the word segmentation task, we want to know the words in those multi-word expressions, so we had LiB find the subchunks c', which are the chunks inside the original chunks (e.g., \"you\" and \"are\" inside \"youare\"), and regarded the subchunks as the words. LiB defines the subchunks by searching all the potential chunk sequences in the original chunk c_raw and selecting the sequence with the lowest sum of ordinals, unless c_raw itself has the lowest sum:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subchunks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(c'_1, \\ldots, c'_n) = \\arg\\min_{(c_1, \\ldots, c_n)} \\sum_i \\Theta(c_i), \\text{ where } (c_1, \\ldots, c_n) \\text{ concatenates to } c_{\\mathrm{raw}}; \\qquad \\text{Subchunk(s) of } c_{\\mathrm{raw}} = \\begin{cases} (c'_1, \\ldots, c'_n) & \\text{if } \\max_i \\Theta(c'_i) < \\Theta(c_{\\mathrm{raw}}) \\\\ c_{\\mathrm{raw}} & \\text{otherwise} \\end{cases}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subchunks", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Since the LiB lexicon is ordered, we may examine the head of the trained lexicons (Table 9) , which are the highest-ranked chunk units. They show that LiB appears to learn common words and collocations. Among the learned units we observe some collocations (e.g., \"that'sa\") which are not linguistic phrases. The lexicon of LiB trained on CTB8 shows that the high-ranked Chinese chunk units are usually bigrams (Appendix C). The middle and the tail of the trained lexicons are also shown in Appendix C. We present examples of chunk and subchunk segmentation results in Table 3 . The results show the chunk units include common collocations, while the subchunk units are very close to the linguistic words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 82, |
| "end": 91, |
| "text": "(Table 9)", |
| "ref_id": null |
| }, |
| { |
| "start": 568, |
| "end": 575, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Qualitative evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "LiB provides two types of new units to segment language: LiB chunks are the raw segmentation result of LiB, and LiB subchunks are the subchunks inside LiB chunks. In order to examine the encoding efficiency of LiB chunks and LiB subchunks, we compared the description lengths (DL) on different segmentations. The DL is the number of bits required to represent the corpus; it sums the number of bits required to encode the lexicon and the number of bits required to encode the corpus when segmented by the lexicon (Zhikov et al., 2013) :", |
| "cite_spans": [ |
| { |
| "start": 513, |
| "end": 534, |
| "text": "(Zhikov et al., 2013)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description length evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\\mathrm{DL(total)} = \\mathrm{DL(lexicon)} + \\mathrm{DL(corpus)} = - \\sum_{i=1}^{\\#s} \\mathrm{Freq}(s_i) \\log_2 P(s_i) - \\sum_{j=1}^{\\#u} \\mathrm{Freq}(u_j) \\log_2 P(u_j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description length evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Top 50 (translated) lexicon entries for BR-phono: the, yeah, you, what, wanna, can you, two, and, that's, okay, four, now, it, they're, he's, in, look, with, you want, who, he, that, all, your, here, i think, put, that's a, what's, you can, his, my, see, you wanna, no, is that, high, whose, this, good, there's, very, see the, its a, is it, alright, this is, are you, ing, have. Top 50 entries for CTB8: haven't, China, we, economics, already, kid, but, education, can, now, government, country, a, these, self, can't, if, journalist, today, they, although, require, tech, process, this, Xinhua News Agency, wish, issue, is, mainland, because, some, and, all are, so, now, may, Taiwan, should, political, development, also is, also is, society, such, via, continue, isn't, Shanghai, 's. Here, #s denotes the number of unique symbols s in L (either as a single-symbol chunk or as part of a larger chunk); Freq(s_i) and P(s_i) are the occurrence count and ratio of s_i in L; #u denotes the number of unique units u in the corpus; Freq(u_j) and P(u_j) are the occurrence count and ratio of u_j in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description length evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As benchmarks, we used Symbol (the indivisible units; in our two corpora, phonemes and characters respectively), Word (the words presegmented in the corpora), and BPE subword (the Byte Pair generated by SentencePiece (Kudo and Richardson, 2018) with default parameter settings). The DL result (Table 4) shows that LiB chunks result in the shortest DL; they minimize the information and are thus the most concise encoding.", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 244, |
| "text": "(Kudo and Richardson, 2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 293, |
| "end": 303, |
| "text": "(Table 4)", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Description length evaluation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Besides the DL, which compares the information efficiency of different lexicons, we are also interested in whether the LiB lexicon reflects the mental lexicon. We lack a ground truth of what is in the putative mental lexicon. However, we can regard natural language material as a large-scale result of human language use and language behavior. A recent study by Brown et al. (2020) shows that Language Models (LMs) trained on a very large corpus can closely predict human performance on various language tasks. LMs capture the probabilistic constraints in natural language and perform the tasks by making predictions, which is a fundamental cognitive function (Bar, 2007). So, by measuring the prediction surprisal in the corpus segmented by different lexicons, we can evaluate different lexicons from a cognitive view, and we presume that the lexicon that yields the best LM performance is a better approximation of the mental lexicon.", |
| "cite_spans": [ |
| { |
| "start": 399, |
| "end": 418, |
| "text": "Brown et al. (2020)", |
| "ref_id": null |
| }, |
| { |
| "start": 666, |
| "end": 677, |
| "text": "(Bar, 2007)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language model evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Many studies have shown that word surprisal is positively correlated with human word-reading time (Monsalve et al., 2012; Smith and Levy, 2013) and with the size of the N400 component in EEG (Frank et al., 2015). From the cognitive principle of least effort, it follows that readers try to minimize reading time. Table 5: Bits-per-character scores on different segmentations.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 121, |
| "text": "(Monsalve et al., 2012;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 122, |
| "end": 143, |
| "text": "Smith and Levy, 2013)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 178, |
| "end": 202, |
| "text": "EEG (Frank et al., 2015)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 305, |
| "end": 312, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language model evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Hence, readers would try to find lexical units such that total surprisal is also minimized.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language model evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Surprisal, defined as \u2212log2 P(w|context), is not comparable between models with different segmentations. Instead, we use bits per character (BPC) (Graves, 2013), which is the average surprisal per chunk divided by |c|, the average chunk length (in characters) over the whole test set. We tested the segmentations 2 on both bigram and trigram language models, and the results show that the corpora represented by LiB chunks achieve the lowest surprisal (Table 5).", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 164, |
| "text": "(Graves, 2013)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 428, |
| "end": 437, |
| "text": "(Table 5)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language model evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "As already illustrated in Table 3, subchunk units tend to be close to linguistic words. We therefore tested LiB subchunks as a resource for word segmentation. To evaluate LiB on English word segmentation, we compared LiB with Adaptor Grammar (AG) (Johnson and Goldwater, 2009), which achieves state-of-the-art performance on the segmentation task of BR-phono. AG requires grammar construction rules that encode prior linguistic knowledge. These rules presuppose knowledge about unigrams only, unigrams+collocations, or unigrams+collocations+syllables, yielding three versions of AG. Table 6a shows that AG(syllable), whose rules carry extra linguistic knowledge (Johnson and Goldwater, 2009), achieves the highest score. The score of LiB is higher than that of AG(unigram) and slightly lower than that of AG(collocations), the two versions of AG comparable to our approach. AG(syllable) presumes knowledge that our model does not have (and that could possibly benefit LiB).", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 275, |
| "text": "(Johnson and Goldwater, 2009)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 668, |
| "end": 697, |
| "text": "(Johnson and Goldwater, 2009)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 29, |
| "end": 36, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 589, |
| "end": 597, |
| "text": "Table 6a", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word segmentation evaluation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "For the Chinese word segmentation task, we compared LiB with three popular word segmentation toolboxes: Jieba 3 , THULAC (Sun et al., 2016), and pkuseg (Luo et al., 2019). These toolboxes are supervised, learning the ground truth (word boundaries) during training. For comparison, we also created a supervised variant of LiB (LiB(sup)) for the word segmentation task. LiB(sup) skips the training phase. Instead, it counts all the ground-truth words in the training set and adds them as chunk types to L. The higher the frequency of a type in the training set, the smaller its ordinal in L. We trained and tested the models on CTB8. To test the generalization performance of the models in the word segmentation task, we also tested the trained models on two additional corpora: MSR and PKU (Table 1), provided by the Second International Chinese Word Segmentation Bakeoff (Emerson, 2005). The segmentation rules differ slightly among MSR, PKU, and CTB8. MSR and PKU are from the news domain, unlike CTB8. MSR and PKU were preprocessed in the same way as CTB8. Table 6b shows that the scores of the unsupervised original version of LiB are lower than those of the supervised models 4 , but the scores of the supervised version of LiB are close to those of the supervised models and are even higher on MSR. Given the low out-of-vocabulary (OOV) rate of MSR (Emerson, 2005), the good performance on MSR shows that the lexicon is important for LiB. The only difference between the two versions of LiB is in their lexicons: the original LiB learned its lexicon from scratch, whereas the supervised LiB directly uses the ground-truth words in its lexicon. This shows that the segmentation module in LiB is appropriate for the word segmentation task.", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 133, |
| "text": "(Sun et al., 2016)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 147, |
| "end": 165, |
| "text": "(Luo et al., 2019)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 860, |
| "end": 875, |
| "text": "(Emerson, 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1342, |
| "end": 1357, |
| "text": "(Emerson, 2005)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 779, |
| "end": 788, |
| "text": "(Table 1)", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 1064, |
| "end": 1072, |
| "text": "Table 6b", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word segmentation evaluation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "[a] ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word segmentation evaluation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "This paper presented an unsupervised model, LiB, to simulate the human cognitive process of language unitization/segmentation. Following the principles of least effort, larger-first processing, and passive and active forgetting, LiB incrementally builds a lexicon which minimizes the number of unit tokens (alleviating the effort of analysis) and unit types (alleviating the effort of storage) at the same time on any given corpus. Moreover, it is able to segment the corpus, or any other text in the same language, based on the induced lexicon. The computations in LiB are lightweight, which makes it very efficient. The LiB-generated lexicon performs best among different types of lexicons (e.g., ground-truth words) both in terms of description length and in terms of statistical language model surprisal, both of which are associated with cognitive processing. The workflow design and the computational requirements make LiB cognitively plausible, and the results suggest that the LiB lexicon may be a useful proxy of the mental lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Future work includes allowing skip-gram units in the lexicon. Skip-grams may help to capture longer-distance dependencies and further lessen the cognitive effort by reducing the number of unit types/tokens. Furthermore, as the word segmentation results of the current LiB are not ideal, we hypothesize that skip-gram units may also benefit the detection of infrequent named entities (e.g., the skip-gram \"Mr. said\" helps to detect \"Mortimer\" in \"Mr.Mortimersaid\") and thus improve the word segmentation performance. Other future work includes a LiB variant that accepts speech input and a semi-supervised LiB variant that uses semantic knowledge (e.g., word embeddings) to enhance the language unitization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Since BR-phono is a child-directed speech corpus, its chunk types are usually very common, and so they often have much higher document ratios than CTB8 chunks. We used a lower \u03c4 0 , which is related to the document ratio, to balance this corpus difference. The number of training epochs for CTB8, which is large-scale, was set higher than for BR-phono. The epoch numbers are well beyond the convergence points. \u03b1 and \u2206 mainly affect the training speed, while \u03c9 and \u03c4 0 mainly affect |L|. The current parameter settings may not be optimal for end tasks such as word segmentation; in preliminary experiments we optimized for speed 5 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Training parameter settings", |
| "sec_num": null |
| }, |
| { |
| "text": "Corpus \u03b1 \u2206 \u03c9 \u03c4 0 epochs BR-phono 0.25 0.2 0.0001 10 5,000 CTB8 500 50,000 Table 7 : The parameter settings in the training on two corpora. \u03b1 is the sampling probability, \u2206 the re-ranking rate, \u03c9 the forgetting ratio, \u03c4 0 the probation period.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 74, |
| "end": 81, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Training parameter settings", |
| "sec_num": null |
| }, |
| { |
| "text": "The progression in chunking over training epochs before convergence (Table 8) shows that LiB can learn some word chunks even in the very early epochs. Also, Table 8 illustrates that convergence is reached well before the preset number of epochs. Table 8: Example segmentations of strings in the two corpora with an increasing number of training epochs. See Table 3 for the correct word-level segmentation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 68, |
| "end": 77, |
| "text": "(Table 8)", |
| "ref_id": null |
| }, |
| { |
| "start": 152, |
| "end": 159, |
| "text": "Table 8", |
| "ref_id": null |
| }, |
| { |
| "start": 239, |
| "end": 246, |
| "text": "Table 8", |
| "ref_id": null |
| }, |
| { |
| "start": 348, |
| "end": 355, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "B Segmentations with increasing number of training epochs", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/ray306/LiB", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The code for the BPC calculations was modified from a GitHub project: https://github.com/joshualoehr/ngram-language-model. We kept all tokens during training.3 https://github.com/fxsjy/jieba", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The scores of Jieba, THULAC, and pkuseg are provided by https://github.com/lancopku/pkuseg-python", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Training on BR-phono took 57 s and training on CTB8 took 31 min 55 s. The code is written in pure Python 3.7 and was run on a single core of an Intel Core i5-7300HQ.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "C Top, middle and tail entries in lexicon", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| }, |
| { |
| "text": "Entries in Lexicon BRphono (Top 50) D6 the, y& yeah, yu you, WAt what, wan6 wanna, k&nyu can you, tu two, &nd and, D&ts that's, oke okay, f% four, nQ now, It it, D* they're, hiz he's, In in, lUk look, wIT with, yuwant you want, hu who, hi he, D&t that, Ol all, y) your, h( here, 9TINk i think , \u6ca1\u6709 haven't, \u4e2d\u56fd China, \u6211\u4eec we, \u7ecf\u6d4e economics, \u5df2\u7ecf already, \u5b69\u5b50 kid, \u4f46\u662f but, \u6559\u80b2 education, \u53ef\u4ee5 can, \u76ee\u524d now, \u653f\u5e9c government, \u56fd\u5bb6 country, \u4e00\u4e2a a, \u8fd9\u4e9b these, \u81ea\u5df1 self, \u4e0d\u80fd can't, \u5982\u679c if, \u8bb0\u8005 journalist, \u4eca\u5929 today, \u4ed6\u4eec they, \u867d\u7136 although, \u8981\u6c42 require, \u6280\u672f tech, \u8fdb\u884c process, \u8fd9\u4e2a this, \u65b0\u534e\u793e Xinhua News Agency, \u5e0c\u671b wish, \u95ee\u9898 issue, \u5c31\u662f is, \u5927\u9646 mainland, \u56e0\u4e3a because, \u4e00\u4e9b some, \u4ee5\u53ca and, \u90fd\u662f all are, \u56e0\u6b64 so, \u73b0\u5728 now, \u53ef\u80fd may, \u53f0\u6e7e Taiwan, \u5e94\u8be5 should, \u653f\u6cbb political, \u53d1\u5c55 development, \u4e5f\u662f also is, \u8fd8\u662f also is, \u793e\u4f1a society, \u8fd9\u6837 such, \u901a\u8fc7 via, \u7ee7\u7eed continue, \u4e0d\u662f isn't, \u4e0a\u6d77 Shanghai, \u7684 's CTB8 (Middle 20) \u809d\u810f liver, \u519b\u4e8b\u653f\u53d8\u63a8\u7ffb military coup overthrows, \u5728\u5176\u4ed6\u5730\u65b9 in other places, \u5728\u91ce\u52bf\u529b opposition force, \u800c\u4e14\u8fd9\u4e2a and this, \u6cc4\u7684, \u5e2e\u4ed6 help him, \u5b9d\u5e94\u53bf Baoying County, \u653f\u6cbb\u65b0\u95fb political news, \u7ecf\u6d4e\u8d8a economic more, \u5854\u80af, \u8fc5\u901f\u5730 rapidly, \u94c5\u7b14 pencil, \u96c6\u4f53\u7ecf\u6d4e collective economy, \u8d77\u6e90 origin, \u9093\u76f8\u626c\u534f\u52a9 Tang Xiangyang assisted, \u5efa\u5236 establishment, \u5199\u5b8c after writing, \u8bf4\u7684\u90a3\u6837 as said, \u540e \u987e 
look back CTB8 (Tail 20) \u5b58\u5728\u4e3b\u6743 there is sovereignty, \u786e\u6743 confirm rights, \u8349\u6848\u8fd8 the draft also, \u684c\u4f1a\u8bae, \u7b2c\u4e00\u9996\u76f8 the first prime minister, \u8fea\u5965 dior, \u957f\u5927\u4e86 grown up, \u7231\u4ed6 love him, \u8bf4 \u4ed6 say him, \u5b50\u865a\u4e4c, \u6709\u6ca1\u6709\u53c2\u4e0e did you participate, \u4e25\u8c28\u7684 rigorous, \u4ecd\u7136\u662f is still, \u7ad9\u4e0a\u8f66, \u8fd0\u8f93\u7f72 Transport Department, \u6740\u673a murderous, \u51b3 decided, \u5efa\u6210 \u901a\u8f66 completed and opened to traffic, \u4e3b\u8981\u5acc\u7591\u4eba\u8d56\u660c\u661f the main suspect Lai Changxing, \u5df2\u7ecf\u5411\u52a0\u62ff\u5927 has to Canada Table 9 : The top 50 entries, the middle 20 entries and the tail 20 entries in the lexicons. The original results of BRphono are in phonemic characters; we transcribed the entries containing complete words into English words (in bold font) for ease of presentation. The original results of CTB8 are the Chinese characters; we added the English translations (in bold font) with the entries containing complete words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 35, |
| "text": "(Top 50)", |
| "ref_id": null |
| }, |
| { |
| "start": 293, |
| "end": 294, |
| "text": ",", |
| "ref_id": null |
| }, |
| { |
| "start": 832, |
| "end": 843, |
| "text": "(Middle 20)", |
| "ref_id": null |
| }, |
| { |
| "start": 1189, |
| "end": 1198, |
| "text": "(Tail 20)", |
| "ref_id": null |
| }, |
| { |
| "start": 1567, |
| "end": 1574, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "More than words: Frequency effects for multi-word phrases", |
| "authors": [ |
| { |
| "first": "Inbal", |
| "middle": [], |
| "last": "Arnon", |
| "suffix": "" |
| }, |
| { |
| "first": "Neal", |
| "middle": [], |
| "last": "Snider", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "J. Mem. Lang", |
| "volume": "62", |
| "issue": "1", |
| "pages": "67--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Inbal Arnon and Neal Snider. 2010. More than words: Frequency effects for multi-word phrases. J. Mem. Lang., 62(1):67-82.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Stored word sequences in language learning: the effect of familiarity on children's repetition of four-word combinations", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Bannard", |
| "suffix": "" |
| }, |
| { |
| "first": "Danielle", |
| "middle": [], |
| "last": "Matthews", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Psychol. Sci", |
| "volume": "19", |
| "issue": "3", |
| "pages": "241--248", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Bannard and Danielle Matthews. 2008. Stored word sequences in language learning: the effect of familiarity on children's repetition of four-word combinations. Psychol. Sci., 19(3):241-248, March.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The proactive brain: using analogies and associations to generate predictions", |
| "authors": [ |
| { |
| "first": "Moshe", |
| "middle": [], |
| "last": "Bar", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Trends Cogn. Sci", |
| "volume": "11", |
| "issue": "7", |
| "pages": "280--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moshe Bar. 2007. The proactive brain: using analogies and associations to generate predictions. Trends Cogn. Sci., 11(7):280-289.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The phonology of parent-child speech", |
| "authors": [ |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Bernstein-Ratner", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Children's language", |
| "volume": "6", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nan Bernstein-Ratner. 1987. The phonology of parent-child speech. Children's language, 6(3).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Ilya Sutskever, and Dario Amodei. 2020. Language models are Few-Shot learners", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Tom B Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Nick", |
| "middle": [], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "Melanie", |
| "middle": [], |
| "last": "Ryder", |
| "suffix": "" |
| }, |
| { |
| "first": "Jared", |
| "middle": [], |
| "last": "Subbiah", |
| "suffix": "" |
| }, |
| { |
| "first": "Prafulla", |
| "middle": [], |
| "last": "Kaplan", |
| "suffix": "" |
| }, |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "Dhariwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Pranav", |
| "middle": [], |
| "last": "Neelakantan", |
| "suffix": "" |
| }, |
| { |
| "first": "Girish", |
| "middle": [], |
| "last": "Shyam", |
| "suffix": "" |
| }, |
| { |
| "first": "Amanda", |
| "middle": [], |
| "last": "Sastry", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandhini", |
| "middle": [], |
| "last": "Askell", |
| "suffix": "" |
| }, |
| { |
| "first": "Ariel", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Gretchen", |
| "middle": [], |
| "last": "Herbert-Voss", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Krueger", |
| "suffix": "" |
| }, |
| { |
| "first": "Rewon", |
| "middle": [], |
| "last": "Henighan", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Child", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ramesh", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Daniel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Ziegler", |
| "suffix": "" |
| }, |
| { |
| "first": "Clemens", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Winter", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hesse", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mateusz", |
| "middle": [], |
| "last": "Sigler", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Litwin", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2005.14165" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Nee- lakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc- Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are Few-Shot learners. arXiv preprint arXiv:2005.14165, May.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Syntactic Structures. Mouton", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Chomsky. 1957. Syntactic Structures. Mouton.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Hierarchical multiscale recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Junyoung", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungjin", |
| "middle": [], |
| "last": "Ahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "5th International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2017. Hierarchical multiscale recurrent neural networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April. OpenReview.net.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The biology of forgetting -a perspective", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ronald", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhong", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Neuron", |
| "volume": "95", |
| "issue": "3", |
| "pages": "490--503", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronald L Davis and Yi Zhong. 2017. The biology of forgetting -a perspective. Neuron, 95(3):490-503, August.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Unsupervised language acquisition", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Carl De Marcken", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carl de Marcken. 1996. Unsupervised language acquisition. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, USA.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The second international Chinese word segmentation bakeoff", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "Emerson" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the fourth SIGHAN workshop on Chinese language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language processing.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Electrophysiological evidence for the morpheme-based combinatoric processing of English compounds", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Fiorentino", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuka", |
| "middle": [], |
| "last": "Naito-Billen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Bost", |
| "suffix": "" |
| }, |
| { |
| "first": "Ella", |
| "middle": [], |
| "last": "Fund-Reznicek", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Cogn. Neuropsychol", |
| "volume": "31", |
| "issue": "1-2", |
| "pages": "123--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Fiorentino, Yuka Naito-Billen, Jamie Bost, and Ella Fund-Reznicek. 2014. Electrophysiological evidence for the morpheme-based combinatoric processing of English compounds. Cogn. Neuropsychol., 31(1-2):123-146.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The ERP response to the amount of information conveyed by words in sentences", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Stefan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Leun", |
| "suffix": "" |
| }, |
| { |
| "first": "Giulia", |
| "middle": [], |
| "last": "Otten", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriella", |
| "middle": [], |
| "last": "Galli", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Vigliocco", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Brain Lang", |
| "volume": "140", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefan L Frank, Leun J Otten, Giulia Galli, and Gabriella Vigliocco. 2015. The ERP response to the amount of information conveyed by words in sentences. Brain Lang., 140:1-11, January.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Generating sequences with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1308.0850" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, August.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The importance of forgetting", |
| "authors": [ |
| { |
| "first": "Lauren", |
| "middle": [], |
| "last": "Gravitz", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Nature", |
| "volume": "571", |
| "issue": "", |
| "pages": "12--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauren Gravitz. 2019. The importance of forgetting. Nature, 571:S12-S14.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "What's in the lexicon?", |
| "authors": [ |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Jackendoff", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Storage and Computation in the Language Faculty", |
| "volume": "", |
| "issue": "", |
| "pages": "23--58", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ray Jackendoff. 2002. What's in the lexicon? In Sieb Nooteboom, Fred Weerman, and Frank Wijnen, editors, Storage and Computation in the Language Faculty, pages 23-58. Springer Netherlands, Dordrecht.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of Human Language Technologies: The", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "317--325", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317-325.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Learning to discover, ground and use words with segmental neural language models", |
| "authors": [ |
| { |
| "first": "Kazuya", |
| "middle": [], |
| "last": "Kawakami", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "6429--6441", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2019. Learning to discover, ground and use words with seg- mental neural language models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6429-6441, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "66--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "PKUSEG: A toolkit for multi-domain Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "Ruixuan", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingjing", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuancheng", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Xu", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1906.11455" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruixuan Luo, Jingjing Xu, Yi Zhang, Xuancheng Ren, and Xu Sun. 2019. PKUSEG: A toolkit for multi-domain Chinese word segmentation. arXiv preprint arXiv:1906.11455, June.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Multiple routes for compound word processing in the brain: Evidence from EEG", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lucy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yury", |
| "middle": [], |
| "last": "Macgregor", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Shtyrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Brain Lang", |
| "volume": "126", |
| "issue": "2", |
| "pages": "217--229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucy J MacGregor and Yury Shtyrov. 2013. Multiple routes for compound word processing in the brain: Evidence from EEG. Brain Lang., 126(2):217-229.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Language learning as language use: A cross-linguistic model of child language development", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Stewart", |
| "suffix": "" |
| }, |
| { |
| "first": "Morten", |
| "middle": [ |
| "H" |
| ], |
| "last": "Mccauley", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Christiansen", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Psychol. Rev", |
| "volume": "126", |
| "issue": "1", |
| "pages": "1--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stewart M McCauley and Morten H Christiansen. 2019. Language learning as language use: A cross-linguistic model of child language development. Psychol. Rev., 126(1):1-51, January.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Lexical surprisal as a general predictor of reading time", |
| "authors": [ |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Fernandez Monsalve", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [ |
| "L" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriella", |
| "middle": [], |
| "last": "Vigliocco", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "398--408", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Irene Fernandez Monsalve, Stefan L Frank, and Gabriella Vigliocco. 2012. Lexical surprisal as a general pre- dictor of reading time. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 398-408.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Forest before trees: The precedence of global features in visual perception", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Navon", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Cogn. Psychol", |
| "volume": "9", |
| "issue": "3", |
| "pages": "353--383", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Navon. 1977. Forest before trees: The precedence of global features in visual perception. Cogn. Psychol., 9(3):353-383.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Direct electrophysiological evidence for prefrontal control of hippocampal processing during voluntary forgetting", |
| "authors": [ |
| { |
| "first": "Juergen", |
| "middle": [], |
| "last": "Carina R Oehrn", |
| "suffix": "" |
| }, |
| { |
| "first": "Conrad", |
| "middle": [], |
| "last": "Fell", |
| "suffix": "" |
| }, |
| { |
| "first": "Timm", |
| "middle": [], |
| "last": "Baumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Rosburg", |
| "suffix": "" |
| }, |
| { |
| "first": "Henrik", |
| "middle": [], |
| "last": "Ludowig", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Kessler", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikolai", |
| "middle": [], |
| "last": "Hanslmayr", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Axmacher", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Curr. Biol", |
| "volume": "28", |
| "issue": "18", |
| "pages": "3016--3022", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carina R Oehrn, Juergen Fell, Conrad Baumann, Timm Rosburg, Eva Ludowig, Henrik Kessler, Simon Hanslmayr, and Nikolai Axmacher. 2018. Direct electrophysiological evidence for prefrontal control of hippocampal process- ing during voluntary forgetting. Curr. Biol., 28(18):3016-3022.e4, September.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "PARSER: A model for word segmentation", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Perruchet", |
| "suffix": "" |
| }, |
| { |
| "first": "Annie", |
| "middle": [], |
| "last": "Vinter", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "J. Mem. Lang", |
| "volume": "39", |
| "issue": "2", |
| "pages": "246--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre Perruchet and Annie Vinter. 1998. PARSER: A model for word segmentation. J. Mem. Lang., 39(2):246- 263, August.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Perceptual recognition as a function of meaningfulness of stimulus material", |
| "authors": [ |
| { |
| "first": "Gerald", |
| "middle": [ |
| "M" |
| ], |
| "last": "Reicher", |
| "suffix": "" |
| } |
| ], |
| "year": 1969, |
| "venue": "J. Exp. Psychol", |
| "volume": "81", |
| "issue": "2", |
| "pages": "275--280", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerald M. Reicher. 1969. Perceptual recognition as a function of meaningfulness of stimulus material. J. Exp. Psychol., 81(2):275-280, August.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Neural machine translation of rare words with subword units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1715--1725", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany, August. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "The effect of word predictability on reading time is logarithmic", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nathaniel", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Cognition", |
| "volume": "128", |
| "issue": "3", |
| "pages": "302--319", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nathaniel J Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319, September.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "The sentence superiority effect revisited", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Snell", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Grainger", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Cognition", |
| "volume": "168", |
| "issue": "", |
| "pages": "217--221", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joshua Snell and Jonathan Grainger. 2017. The sentence superiority effect revisited. Cognition, 168:217-221, November.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Unsupervised neural word segmentation for Chinese via segmental language modeling", |
| "authors": [ |
| { |
| "first": "Zhiqing", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhi-Hong", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "4915--4920", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiqing Sun and Zhi-Hong Deng. 2018. Unsupervised neural word segmentation for Chinese via segmental lan- guage modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4915-4920, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Thulac: An efficient lexical analyzer for Chinese", |
| "authors": [ |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Xinxiong", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaixu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhipeng", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maosong Sun, Xinxiong Chen, Kaixu Zhang, Zhipeng Guo, and Zhiyuan Liu. 2016. Thulac: An efficient lexical analyzer for Chinese.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Reading and the mental lexicon", |
| "authors": [ |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Taft", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcus Taft. 2013. Reading and the mental lexicon. Psychology Press.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Chinese treebank 8.0 LDC2013T21. Linguistic Data Consortium", |
| "authors": [ |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiuhong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zixin", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Fu-Dong", |
| "middle": [], |
| "last": "Chiou", |
| "suffix": "" |
| }, |
| { |
| "first": "Meiyu", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nianwen Xue, Xiuhong Zhang, Zixin Jiang, Martha Palmer, Fei Xia, Fu-Dong Chiou, and Meiyu Chang. 2013. Chinese treebank 8.0 LDC2013T21. Linguistic Data Consortium, Philadelphia.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "How do we segment text? two-stage chunking operation in reading. eNeuro", |
| "authors": [ |
| { |
| "first": "Jinbiao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qing", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Xing", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinbiao Yang, Qing Cai, and Xing Tian. 2020. How do we segment text? two-stage chunking operation in reading. eNeuro, May.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "An efficient algorithm for unsupervised word segmentation with branching entropy and MDL. Information and Media Technologies", |
| "authors": [ |
| { |
| "first": "Valentin", |
| "middle": [], |
| "last": "Zhikov", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroya", |
| "middle": [], |
| "last": "Takamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Manabu", |
| "middle": [], |
| "last": "Okumura", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "8", |
| "issue": "", |
| "pages": "514--527", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valentin Zhikov, Hiroya Takamura, and Manabu Okumura. 2013. An efficient algorithm for unsupervised word segmentation with branching entropy and MDL. Information and Media Technologies, 8(2):514-527.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Human behavior and the principle of least effort", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Kingsley", |
| "suffix": "" |
| }, |
| { |
| "first": "Zipf", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1949, |
| "venue": "", |
| "volume": "573", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Kingsley Zipf. 1949. Human behavior and the principle of least effort, volume 573. Addison-Wesley Press, Oxford, England.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "LiB model accepts any sequence S of atomic symbols s: S = (s 1 , s 2 , . . .), as the input. A collection of S forms a document D and all D together form the training corpus. S can be segmented into chunk tokens (c 1 , . . . , c N ), where each chunk is a subsequence of S: c = (s i , . . . , s j ) and N is the number of chunk tokens in S. The segmentation is based on a lexicon L (Fig. 1) where all chunk types are stored in order. The ordinal number of chunk type c in L is denoted \u0398(c), and |L| is the number of chunk types in L. Let I(c) be the amount of information (the number of encoding bits) required to identify each chunk type in L, that is, I(c) = log 2 |L|, and I(S) be the amount of information required for the input S, then: I(S) = I(c)N . Our model aims to minimize the expected encoding information to extract the cognitive units in any S, which means minimizing E[I(S)], which is accomplished by simultaneously reducing |L| (smaller |L| means lower I(c)) and E[N ] (the expected number of chunk tokens in S). In practice our model: 1. Starts with an empty L; 2. Randomly selects a D from the corpus and analyzes the S in D; 3. Adds previously unseen symbols s as (atomic) chunk types to L; 4. Recursively combines adjacent chunk tokens into new chunk types, reducing E[N ] but increasing |L|; 5. Removes less useful types from L, reducing |L|; 6. Repeats steps 2 to 5 for a predetermined number of epochs.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "Information flow in the LiB model.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF0": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "The training and test corpus statistics after preprocessing. MSR and PKU are the (Chinese) test corpora which are mentioned in Section 4.5. Word units are presegmented in the CTB8, MSR, and PKU corpora.", |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Corpus</td><td>Level</td><td>Segmentation</td></tr><tr><td colspan=\"2\">BRphono Input</td><td>allrightwhydon'tweputhimawaynow</td></tr><tr><td/><td>Chunks</td><td>allright\u2022whydon't\u2022we\u2022puthimaway\u2022now</td></tr><tr><td/><td colspan=\"2\">Subchunks all\u2022right\u2022why\u2022don't\u2022we\u2022put\u2022him\u2022away\u2022now</td></tr><tr><td/><td>Words</td><td>all\u2022right\u2022why\u2022don't\u2022we\u2022put\u2022him\u2022away\u2022now</td></tr><tr><td>CTB8</td><td>Input</td><td>\u8fd9\u4e2a\u51fa\u53e3\u4fe1\u8d37\u9879\u76ee\u59d4\u6258\u4e2d\u56fd\u94f6\u884c\u4e3a\u4ee3\u7406\u94f6\u884c</td></tr><tr><td/><td>Chunks</td><td>\u8fd9\u4e2a\u2022\u51fa\u53e3\u4fe1\u8d37\u2022\u9879\u76ee\u2022\u59d4\u6258\u2022\u4e2d\u56fd\u94f6\u884c\u2022\u4e3a\u2022\u4ee3\u7406\u2022\u94f6\u884c</td></tr><tr><td/><td colspan=\"2\">Subchunks \u8fd9\u4e2a\u2022\u51fa\u53e3\u2022\u4fe1\u8d37\u2022\u9879\u76ee\u2022\u59d4\u6258\u2022\u4e2d\u56fd\u2022\u94f6\u884c\u2022\u4e3a\u2022\u4ee3\u7406\u2022\u94f6\u884c</td></tr><tr><td/><td>Words</td><td>\u8fd9\u2022\u4e2a\u2022\u51fa\u53e3\u2022\u4fe1\u8d37\u2022\u9879\u76ee\u2022\u59d4\u6258\u2022\u4e2d\u56fd\u2022\u94f6\u884c\u2022\u4e3a\u2022\u4ee3\u7406\u2022\u94f6\u884c</td></tr></table>", |
| "text": "Transliterations/translations into English of the top 50 entries in the lexicons. The original results of BRphono are in phonemic characters, and the original results of CTB8 are the Chinese characters. For completeness, in Appendix C we repeat these results with the original results included.", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Example segmentations of strings in the two corpora. BRphono's results are transcribed into English words for ease of presentation.", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td>Average length</td><td>1</td><td>2.8</td><td>2.9</td><td>2.9</td><td>3.6</td></tr><tr><td/><td>Lexicon size</td><td>50</td><td>5,574</td><td>1,321</td><td>1,119</td><td>1,869</td></tr><tr><td>BRphono</td><td>DL(lexicon)</td><td><1</td><td>173</td><td>28</td><td>24</td><td>47</td></tr><tr><td/><td>DL(corpus)</td><td>490</td><td>278</td><td>262</td><td>258</td><td>233</td></tr><tr><td/><td>DL(total)</td><td>490</td><td>451</td><td>289</td><td>282</td><td>281</td></tr><tr><td/><td>Average length</td><td>1</td><td>1.4</td><td>1.7</td><td>1.7</td><td>1.9</td></tr><tr><td/><td>Lexicon size</td><td>4,697</td><td colspan=\"2\">7,980 65,410</td><td>24,763</td><td>39,320</td></tr><tr><td>CTB8</td><td>DL(lexicon)</td><td>57</td><td>133</td><td>1,767</td><td>621</td><td>1,153</td></tr><tr><td/><td>DL(corpus)</td><td>21,864</td><td colspan=\"2\">18,229 15,669</td><td>16,188</td><td>15,602</td></tr><tr><td/><td>DL(total)</td><td>21,921</td><td colspan=\"2\">18,362 17,436</td><td>16,809</td><td>16,755</td></tr></table>", |
| "text": "SegmentationCorpusEvaluation metric Symbol BPE subword Word LiB subchunk LiB chunk", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Segmentation</td></tr></table>", |
| "text": "Average token lengths, lexicon sizes, and the DL results of different types of segmentation on the two corpora. The unit of Average Length is phoneme (BRphono) or Chinese character (CTB8). The unit of DL is kilobit.", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td/><td/><td/><td colspan=\"3\">Test set scores</td></tr><tr><td/><td>Scores</td><td/><td>Model</td><td colspan=\"3\">CTB8 MSR PKU</td></tr><tr><td>AG (unigram)</td><td>56</td><td/><td>Jieba</td><td>87.1</td><td>82.8</td><td>87.1</td></tr><tr><td>AG (collocations)</td><td>76</td><td>[b]</td><td>THULAC</td><td>94.6</td><td>83.5</td><td>89.1</td></tr><tr><td>AG (syllable)</td><td>87</td><td/><td>pkuseg</td><td>95.7</td><td>83.7</td><td>89.7</td></tr><tr><td>LiB subchunk</td><td>71</td><td/><td>LiB subchunk</td><td>76.1</td><td>78.7</td><td>78.9</td></tr><tr><td/><td/><td/><td>LiB(sup) chunk</td><td>94.7</td><td>84.5</td><td>88.3</td></tr><tr><td>Table 6:</td><td/><td/><td/><td/><td/><td/></tr></table>", |
| "text": "Token F1 scores (%) of segmentations. [a] the scores on BR-phono by three versions of Adaptor Grammar (AG) and LiB subchunks. [b] the scores of Jieba, THULAC, PKUSEG, LiB subchunks, and LiB(sup) chunks. LiB(sup) represents the supervised adaptation of LiB.", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>BRphono</td><td>0 Olr9tW9dontwipUthIm6wenQ</td></tr><tr><td/><td>1 O\u2022l\u2022r\u20229\u2022t\u2022W\u20229\u2022don\u2022t\u2022w\u2022i\u2022pUt\u2022h\u2022I\u2022m\u20226\u2022w\u2022e\u2022nQ</td></tr><tr><td/><td>2 Ol\u2022r\u20229t\u2022W\u20229\u2022dont\u2022wi\u2022pUt\u2022h\u2022I\u2022m\u20226\u2022we\u2022nQ</td></tr><tr><td/><td>10 Olr9t\u2022W9\u2022dont\u2022wi\u2022pUt\u2022hIm\u20226we\u2022nQ</td></tr><tr><td/><td>100 Olr9t\u2022W\u20229dont\u2022wi\u2022pUthIm6we\u2022nQ</td></tr><tr><td/><td>1,000 Olr9t\u2022W\u20229dont\u2022wi\u2022pUthIm6we\u2022nQ</td></tr><tr><td>CTB8</td><td>0 \u8fd9\u4e2a\u51fa\u53e3\u4fe1\u8d37\u9879\u76ee\u59d4\u6258\u4e2d\u56fd\u94f6\u884c\u4e3a\u4ee3\u7406\u94f6\u884c</td></tr><tr><td/><td>1 \u8fd9\u2022\u4e2a\u2022\u51fa\u2022\u53e3\u2022\u4fe1\u2022\u8d37\u2022\u9879\u2022\u76ee\u2022\u59d4\u2022\u6258\u2022\u4e2d\u2022\u56fd\u2022\u94f6\u2022\u884c\u2022\u4e3a\u2022\u4ee3\u2022\u7406\u2022\u94f6\u2022\u884c</td></tr><tr><td/><td>2 \u8fd9\u2022\u4e2a\u2022\u51fa\u2022\u53e3\u2022\u4fe1\u2022\u8d37\u2022\u9879\u2022\u76ee\u2022\u59d4\u2022\u6258\u2022\u4e2d\u56fd\u2022\u94f6\u2022\u884c\u2022\u4e3a\u2022\u4ee3\u2022\u7406\u2022\u94f6\u2022\u884c</td></tr><tr><td/><td>10 \u8fd9\u2022\u4e2a\u2022\u51fa\u53e3\u2022\u4fe1\u2022\u8d37\u2022\u9879\u2022\u76ee\u2022\u59d4\u2022\u6258\u2022\u4e2d\u56fd\u2022\u94f6\u2022\u884c\u2022\u4e3a\u2022\u4ee3\u2022\u7406\u2022\u94f6\u2022\u884c</td></tr><tr><td/><td>100 \u8fd9\u4e2a\u2022\u51fa\u53e3\u2022\u4fe1\u2022\u8d37\u2022\u9879\u76ee\u2022\u59d4\u2022\u6258\u2022\u4e2d\u56fd\u2022\u94f6\u2022\u884c\u2022\u4e3a\u2022\u4ee3\u2022\u7406\u2022\u94f6\u2022\u884c</td></tr><tr><td/><td>1,000</td></tr></table>", |
| "text": "\u8fd9\u4e2a\u2022\u51fa\u53e3\u2022\u4fe1\u8d37\u2022\u9879\u76ee\u2022\u59d4\u2022\u6258\u2022\u4e2d\u56fd\u2022\u94f6\u884c\u2022\u4e3a\u2022\u4ee3\u2022\u7406\u2022\u94f6\u884c 10,000 \u8fd9\u4e2a\u2022\u51fa\u53e3\u4fe1\u8d37\u2022\u9879\u76ee\u2022\u59d4\u6258\u2022\u4e2d\u56fd\u94f6\u884c\u2022\u4e3a\u2022\u4ee3\u7406\u2022\u94f6\u884c", |
| "type_str": "table" |
| } |
| } |
| } |
| } |