{
"File Number": "1",
"Title": "CWSeg: An Efficient and General Approach to Chinese Word Segmentation",
"Limitation": "Limitations\nThis study has potential limitations. When the CWSeg model is applied to a new domain, we assume that words and phrases solely related to the domain are available.",
"abstractText": "In this work, we report our efforts in advancing Chinese Word Segmentation for the purpose of rapid deployment in different applications. The pre-trained language model (PLM) based segmentation methods have achieved state-of-the-art (SOTA) performance, whereas this paradigm also poses challenges in the deployment. It includes the balance between performance and cost, segmentation ambiguity due to domain diversity and vague words boundary, and multi-grained segmentation. In this context, we propose a simple yet effective approach, namely CWSeg, to augment PLMbased schemes by developing cohort training and versatile decoding strategies. Extensive experiments on benchmark datasets demonstrate the efficiency and generalization of our approach. The corresponding segmentation system is also implemented for practical usage and the demo is recorded.",
"1 Introduction": "Chinese word segmentation (CWS) is a preliminary but essential procedure for Chinese language processing tasks, and has been applied in various scenarios (Yang et al., 2018; Zhang et al., 2019; Cui et al., 2020; Han et al., 2020; Zhang et al., 2020; Tan et al., 2020; Lu et al., 2023). Especially for fast complete recall and accurate semantic understanding in search and recommendation scenarios (Bao et al., 2022), CWS is still indispensable. In addition, experiments on Chinese LLaMA and Alpaca show that the token throughput of the model that expands the vocabulary through word segmentation has greatly improved the processing of Chinese text compared with the original model (Cui et al., 2023). Recent deep learning methods have achieved remarkable results on publicly available datasets in this regard (Qiu et al., 2019). Also, the pre-trained language model (PLM) (Liu et al., 2019) further\n+Work was done at SenseTime Research *Corresponding author\nemerges as the paramount foundation of text representation for CWS as seen in other tasks (Tian et al., 2020b; Huang et al., 2020a; Maimaiti et al., 2021).\nCurrent PLM-based approaches, however, pose three hurdles to the production deployment we need to cross: (1) One dilemma is the trade-off between the model performance and inference speed. (2) The lexical diversity and domain gap also jeopardize the fast deployment of a generic model to customized scenarios. (Maimaiti et al., 2021). (3) PLM-based schemes with single granularity are less likely to meet multi-granularity demands of practical relevance.\nTo tackle these issues, we propose an efficient and general approach to augmenting PLM-based Chinese Word Segmentation methods, namely CWSeg. It can extrapolate to different sequence labeling scenarios. Recent studies showed that small models also have the potential to be comparable to large models (Ba and Caruana, 2014; Zhang et al., 2018). We thus introduce a new cohort training strategy to co-train a cohort of multi-scale model artifacts to meet the performance and real-time demands. Specifically, we employ Wasserstein distance (WD) (Rüschendorf, 1985) to orchestrate distributions of model cohorts to enable more robust learning. In addition, we propose to construct the tailored domain-specific lexicon Trie (Liu et al., 2002) and build up a versatile decoding scheme to augment the optimal segmentation path searching on the fly for diverse practical scenarios. It can flexibly adjust the segmentation granularity and benefit customized domains.\nIn summary, our primary goal is to build a versatile framework for strengthening different models simultaneously and then rapidly deploying them into multiple practical scenarios of CWS, which is fundamentally different from existing research works. Essentially, the output models of this framework can be regarded as complements to, not re-\n1\nplacements for, existing SOTA methods. Experimental results on multiple benchmark datasets demonstrate the effectiveness of our approach. Ablation studies confirm the necessity of cohort training strategy and lexicon Trie aided versatile decoding solution. The cross-domain application experiments demonstrate the generalization capacity of our holistic approach.",
"2 Related Work": "Early work in Chinese word segmentation builds upon the statistical assumption (Li and Sun, 2009; Sun et al., 2012a) by modeling rules into the learning process. Recently, PLMs have been introduced (Tian et al., 2020b,a; Maimaiti et al., 2021) and made significant advances in this regard. Our work, however, aims to alleviate their potential challenges involved in the industrial applications as mentioned in Section 1.\nRecent works (Huang et al., 2020b, 2021) distill knowledge from the well-trained teacher model into a student model to balance the model scale and performance. However, it requires multiple finetuning rounds and models can’t learn from each other collaboratively. In this work, we introduce a cohort training based learning strategy to address these two problems for CWS. Different from the pioneering mutual learning (Zhang et al., 2018) in computer vision, we propose Wasserstein distance to better enable the learning as studied in Sec. 4.3. It’s a more carbon-footprint-friendly solution as compared to recent research threads.\nTo mitigate the effects of Chinese lexical diversity, Qiu et al. (Qiu et al., 2019) proposed a concise unified model to extract the criterion-aware representation for multi-criteria corpus, which requires training from scratch on the entire corpus for new criteria or domains. Gong et al. (Gong et al., 2017, 2020) proposed a multi-grained word segmentation by training with large-scale pseudo labels, which is relatively lagging for rapid deployment to new domains. Our work approaches this issue by a lightweight versatile decoding scheme to sidestep heavy training loads.",
"3 Methodology": "As shown in Fig. 1 (a), we formulate CWS as a classical sequence labeling problem as with existing compelling schemes. Concretely, given a text sequence of n characters X = {x1, . . . , xn}, CWS is to tag involved characters sequentially with the\nBIO encoding by maximizing their joint probability p(y1, . . . , yn|X ) where yi ∈ T = {B, I,O}, short for beginning, inside and outside respectively.",
"3.1 Cohort Training": "The cohort training strategy enables multiple student models to teach and learn from each other. The objective function contains supervised loss Lc and mimicry loss Lm. As exemplified by two models in Fig. 2, the overall loss function is:\nL = Lc1 + Lc2 + λ · Lm (1) where λ ∈ [0, 1] is a hyper-parameter. Lc1 and Lc2 guide the model learning under the supervision of real segmentation tags while Lm can encourage different models to learn from each other collaboratively.\nSpecifically, Lc1 and Lc2 refer to the cross entropy (CE) loss. Without loss of generality, Lc1 = −∑Ni=1 ∑|T | t=1 I(yi, t)log(p t 1(xi)) and p t 1(xi) = exp(zt1)∑|T | t=1 exp(z t 1)\nwhere I(·) is an indicator function, pt1(xi) is the prediction probability, z t 1 is the output logit of the model F1. For Lm, Kullback-Leibler (KL) divergence is a naive metric to quantify the\ndistance between two distributions KL(p2||p1) =∑N i=1 ∑|T | t=1 p t 2(xi) pt2(xi)\npt1(xi) . However, KL diver-\ngence is asymmetric and possibly infinite when two distributions are disjoint or there are points such that p1(xi) = 0 and p2(xi) > 0, which is fragile in training (Arjovsky et al., 2017). The symmetric Jensen-Shannon (JS) divergence, suffers from the same problem (See A.1 for more details). Given the above concerns, we introduce the Wasserstein-1 distance (a.k.a. earth mover’s distance):\nW (p2,p1) = inf γ∈∏(p2,p1)\nE(x,y)∼γ [∥x− y∥] (2)\nwhere ∏ (p2,p1) is the set of all joint distributions γ(x,y) whose marginals are p2 and p1, respectively. As shown in Appendix A.1, Wasserstein distance can provide a meaningful and smooth representation of the in-between distance for two distributions in lower dimensional manifolds without overlaps. Eq. (2), however, is highly intractable. We thus resort to Kantorovich-Rubinstein duality:\nW (p2,p1) = sup ∥f∥≤1\nEx∼p2 [f(x)]− Ey∼p1 [f(y)]\n(3) where the supremum is over all the 1-Lipschitz * function f : RK → R, which maps each Kdimensional feature vector in the semantic space to a real number. In practice, f is implemented as a two-layer feed-forward neural network with parameters Θf clipped to [−c, c], where c > 0. Therefore, the mimicry loss Lm can be derived as the dual form of Wasserstein distance:\nLm = max Θf\n∑\n(x,y)\n[f(x)− f(y)] (4)\nExtension to Larger Cohort The cohort training strategy can be easily extended to larger cohorts. For example, given K models (K ≥ 2), the overall loss function L can be formulated as:\nL = K∑\ni=1\nLci + 2 · λ K(K − 1) K∑\ni=1\nK∑\nj=i+1\nW (pj ,pi)\n(5) Obviously, Eq. (1) is a special case of Eq. (5)\nwhen K = 2.",
"3.2 Versatile Decoding": "However, the PLM-based segmentation capacity of single-granularity barely meets diverse real-world\n*f is 1-Lipschitz ⇔ |f(x)− f(x′)| ≤ |x− x′| for all x and x′\napplications. As illustrated in Fig. 3 (a), the model tends to decode the input text as “中国 (China) /科 学技术 (Science and Technology) /大学 (University)”, whereas only the input as a whole “中国科 学技术大学 (University of Science and Technology of China, USTC)” refers to a meaningful entity. Additionally, for large-scale content recommendations, rapidly acquiring as much relevant content as possible is an essential step towards quality candidates on which more sophisticated methods can function. Thus, reasonably splitting the whole entity of “中国科学技术大学 (USTC)” into smaller relevant semantic units “中国 (China) /科学 (Science) /技术 (Technology) /大学 (University)” is crucial in this regard.\nIn this context, we focus on adapting generic models trained on annotated corpora to specific domains and supporting diverse granularity. It includes the construction of lexicon Trie (Liu et al., 2002) and versatile decoding.\nLexicon Trie: The lexicon Trie is designed to store vocabulary in a compressed Trie structure and search for each word efficiently. As illustrated in Fig. 3 (b), the solid node denotes the root node, and each circle denotes a Trie node, which contains a value containing a Chinese token and a label representing whether it is a complete word from the root node so far. Here the red circle indicates that the label is equal to True. Thus, given a collected vocabulary set, we can initialize a lexicon Trie.\nIn the matching stage, given an input text such as “中国科学技术大学”, we apply the matching algorithm to search for all complete words in the input text that can be matched on the lexicon Trie.\nThe matched word list is shown in Fig. 3 (b).\nDiverse Modes: The granularity criterion criteria is roughly determined by the RouteScore, which is the number of chunks in the segmented path regularized by semantic completeness. In total, we have the following four modes:\nNormal Mode: High-probability segmentation that conforms to the statistics of the data.\nFine Mode: RouteScore larger than normal, collecting more semantic units.\nCoarse Mode: RouteScore smaller than normal, perceiving more complete semantics.\nIndex Mode: A segmentation result that combines the above three modes.\nThe whole process can be formulated as Fig. 3 and Algorithm 1 (refer to A.1 for more function details). In addition to the prediction from the finetuned model F , we create a lexicon Trie D from the pre-processed vocabulary set V to capture all candidate phrases C without training. We merge predictions P into candidates set C to construct CWSGraph G, where each node represents a token. Viterbi algorithm is adopted for decoding according to the granularity criteria. In this way, we can flexibly tailor model-based segmentation results to multiple domain-specific scenarios while meeting the multi-granularity requirements.\nAlgorithm 1 Versatile Decoding Input: Text sequence X , fine-tuned model F , lex-\nicon Trie D, granularity mode m. 
Output: Text sequence label: Y .\n1: P = F(X ); C = Matching(X ,D)|P; 2: G = CWSGraph(C); 3: borders = ExtractBorders(P); 4: if m = \"normal\" then 5: Y = P 6: else if m = \"fine\" then 7: cands = CutBorders(G, borders); 8: Y = Viterbi(G, cands, criteriam); 9: else if m = \"coarse\" then\n10: cands = LinkBorders(G, borders); 11: Y = Viterbi(G, cands, criteriam); 12: else if m = \"index\" then 13: for m:[\"normal\", \"fine\", \"coarse\"] do 14: Y |= VersatileDecoding(X ,F ,D,m); 15: end for 16: end if 17: return Y",
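The lexicon Trie and the matching stage described above could look roughly like the following Python sketch. It is an assumption-based illustration (the names TrieNode, LexiconTrie, and match_all are ours, not the paper's code): each node keeps its children and a flag marking whether the path from the root is a complete word.

class TrieNode:
    def __init__(self):
        self.children = {}      # next character -> TrieNode
        self.is_word = False    # True if the path from the root is a complete word

class LexiconTrie:
    def __init__(self, vocabulary):
        self.root = TrieNode()
        for word in vocabulary:
            self.insert(word)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def match_all(self, text):
        """Return (start, end) spans of every complete vocabulary word in text."""
        spans = []
        for start in range(len(text)):
            node = self.root
            for end in range(start, len(text)):
                node = node.children.get(text[end])
                if node is None:
                    break
                if node.is_word:
                    spans.append((start, end + 1))
        return spans

# Example:
# trie = LexiconTrie(["中国", "科学", "技术", "大学", "中国科学技术大学"])
# trie.match_all("中国科学技术大学")
# -> [(0, 2), (0, 8), (2, 4), (4, 6), (6, 8)]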
"4.1 Setup": "Dataset We experiment with six widely-used datasets AS, CityU, CTB6, MSR, PKU, Weibo, from SIGHAN 2005 Bakeoff, Chinese Treebank and NLPCC2016 (SIGHAN2005Bakeoff; Emerson, 2005; Xue et al., 2005; Qiu et al., 2016). The basic statistics and train/dev/test settings are detailed in Table 1.\nBaselines We select baselines both from traditional methods and the well-executed or SOTA methods, such as Jieba (jieba) (Fast CWS tool based on HMM), HanLP (pyhanlp) (CRF-based method), THU (THULAC) (Perceptron-based method), PKU (PKUSeg) (CRF-based CWS tool uses a new training method, namely, the adaptive online gradient descent method based on feature frequency (Sun et al., 2012b)). Since the major architecture of recent competing methods is CRF on top of Transformers (e.g., BERT and its variants), and as mentioned earlier, our flexible framework CWSeg is a complement to, not a replacement for, existing compelling methods, we experiment with our method on BERT-CRF (refer to A.1 for more details), which can be easily applied to other variants. WMSeg (Tian et al., 2020b), another most recent SOTA method based on this architecture utilizing memory networks to incorporate wordhood information, is also used for comparison. To be noted here, the PLMs implemented in BERTCRF and WMSeg are the BERT base model. Since CWSeg adopts the cohort training strategy, we set base versions of BERT and NEZHA as cohorts.\nExperiment Settings The PLMs used in this work are readily available, and are the widely recognized SOTA backbones in the Chinese community. Such as ‘BERT’ for bert-base-chinese (Devlin et al., 2019; bert-base chinese), ‘RoBERTa’ for chinese_roberta_wwm (Liu et al., 2019; chineseroberta wwm), ‘NEZHA’ for NEZHA-Base-WWM\n(Junqiu Wei, 2019; NEZHA-Base-WWM). They are based on Chinese characters (similar to subwords in English). We choose Adam optimizer (Kingma and Ba, 2014) with an initial learning rate as 2e-5 and tuned amongst {1e-4, 5e-5, 2e-5, 1e-5}. We use the early stopping mechanism (Yao et al., 2007) in the model training. The batch size was tuned amongst {32, 64, 128}. The hyper-parameter λ was set as 0.5 and tuned from [0.01, 1], and the clipping threshold c was set as 0.5 and tuned from [0.1, 0.5]. All experiments were run on Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz and NVIDIA V100-32g GPUs. Note here that all these time-cost comparison experiments are tested on the same CPU device, while deep methods run faster on CUDA devices.",
"4.2 Main Results": "Overall Performance Table 2 reports the overall performance. For the sake of fairness, we utilize a unified model and average F1 scores of six individual test sets (Luo et al., 2019). BERT-CRF stands out as compared to traditional methods due to the powerful representation capacity of the pre-trained language model. Following the PLM paradigm, (Tian et al., 2020b,a) further fuses wordhood information into the network, and achieves better performance compared to BERT-CRF. For simplicity, we set the BERT-CRF architecture as the cohort in our implementation to verify the gain effect of our framework. As shown in Table 2, our approach further advances BERT-CRF with cohort training and versatile decoding without reshaping model architecture, which also defeats the most recent SOTA method WMSeg (Tian et al., 2020b).\nMulti-grained Segmentation We evaluate CWSeg on four different segmentation modes. As shown in Table 3, compared to the model without\nversatile decoding, CWSeg can better capture the whole words of the entity. This also illustrates the granularity gap between annotated corpora and the application scenarios. With versatile decoding, CWSeg can generate both fine-grained and coarsegrained labels. And multi-granularity results provide more knowledge and indexing, which is crucial for multiple scenarios such as retrieval, content recommendation, and advertisement.",
"4.3 Ablation Study": "We investigate the impact of versatile decoding, cohort training, and different losses on CWSeg.\nEffect of Versatile Decoding Table 4 details the performance gain of our approach in the domain adaption. It enables models to be readily applied to new domains without training. Take MSR for instance, our approach lifts the model performance by a large margin of 7%. This is reasonable as MSR has significantly different distributions compared\nto others as shown in Table 1, and thus requires the domain-adaptive decoding strategy.\nEffect of Cohort Training Overall, the cohort training outperforms the classical model distillation approach in terms of small models as evidenced by Net2 (94.84 vs 94.04 and 95.37 vs 94.87) in Table 5. It’s worthwhile to note that big models also benefit from the cohort training as compared to the independent training (e.g., Net1: 96.9 vs 96.31 and 97.03 vs 96.83). In this setting, the CH training policy, which is trained only once and converges faster, is about 3 times faster than MD, which requires 3 stages of training (Train Net1, train Net2, Net1 distills Net2).\nEffect of Cohort Settings To study the effect of the cohort settings, we conducted a detailed analysis. As shown in Table 6, we can easily find that: (1) The cohort setting stands out in all trials, and the small model improves more significantly. (2) Larger models improve small models better. (3) Diversity in cohort settings promotes performance.\nEffect of Wasserstein Distance For the cohort training, we further study the impact of mimicry loss. Specifically, we compare WD with KL and\nJS as detailed in Table 7 and Fig. 4. WD is slightly better than both KL and JS in large part due to the performance ceiling, whereas it can significantly accelerate cohort training by multiple folds. This is appealing, especially for multiple large-scale model learning.",
"4.4 Trade-off between Performance and Speed": "We experiment with cohort training (CH) of BERT1, BERT-4, BERT-8, and BERT-12. As a comparison, these 4 single networks (SN) are also finetuned independently. The latency for CH and SN is the same, and the units of latency are defined in Section 4.2. As shown in Fig. 5, overall, CH produces a batch of different model artifacts simultaneously as designed, which outperforms counterparts of SN without inference latency penalty. For example, CH-4 setting has almost the same segmentation performance as SN-12. These artifacts can serve different inference scenarios. Specifically, CH-1 can be used for real-time demanding applications and CH-12 works well on the offline inference scenarios with more tolerance of latency.",
"5 Discussion": "Our latency comparisons are benchmarked on the same CPU device, while deep methods run faster on CUDA devices. Besides, we can resort to a fastcompiling language (e.g., C++) backed platform\nor tailored toolchain (e.g., ONNX) to optimize the serving speed. How to apply diversity modes to different scenarios? Generally speaking, the coarse mode is to perceive complete semantics, and the fine mode is to perceive more extensive concepts. For example, in the scenarios of search and recommendation, the normal or coarse mode is employed to process web pages to build inverted indexes. Index mode is often used for query expansion, where we disassemble queries into multiple granularities to maximize recall of relevant documents.",
"6 Conclusion": "In this work, we develop an efficient and general framework, CWSeg, which enables the state-of-theart schemes of Chinese word segmentation better prepared for industrial deployment scenarios. We present Wasserstein distance-based cohort learning method and versatile decoding to facilitate the trade-off between segmentation performance and serving latency as well as the fast cross-domain adaption. Comprehensive experiments are performed to justify the efficiency and generalization of CWSeg. We believe that our work can be extrapolated to other sequence labeling problems straightforwardly.\nLimitations\nThis study has potential limitations. When the CWSeg model is applied to a new domain, we assume that words and phrases solely related to the domain are available.",
"A Appendix": "A.1 Model Details\nCohort Model We set the SOTA CWS model architecture BERT-CRF as cohort model implementations to exploit the PLM strength and transition patterns of the labeling system.\nFor each character xi is mapped to xi ∈ Rde , where de is the embedding size. The PLM encoder\nextract the contextual features hi ∈ Rdh automatically for each character xi by\n[h1,h2, ...,h|X |] = Encoder(X), (6)\nwhere X ∈ Rde×|X | is the embedding matrix of X , dh is the size of hidden features. There are several prevalent choices for Encoder model, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).\nThere are rules in the labeling systems, such as the I can only be after the B label. We thus utilize the conditional random fields (CRF) (Lafferty et al., 2001) to model the transition patterns, which can be formulated as:\np(yi|xi) = exp(WcW ⊤ o hi + bc)∑\nyi−1yi exp(WcW ⊤ o hi + bc)\n, (7)\nwhere Wo ∈ Rdh×|T |, Wc ∈ R|T |×|T |, and bc ∈ R|T | are training parameters to model the transition from yi−1 to yi.\nWasserstein Distance As shown in Fig. 6, there is no overlap between P and Q when θ ̸= 0, and:\nKL(P ||Q) = ∑\nx=0,y∼U(0,1) 1 · log1 0 = +∞,\nKL(Q||P ) = ∑\nx=θ,y∼U(0,1) 1 · log1 0 = +∞,\nJS(P,Q) =\n1 2 (\n∑\nx=0,y∼U(0,1) 1 · log 11 2\n+ ∑\nx=0,y∼U(0,1) 1 · log 11 2 )\n= log2,\nW (P,Q) = |θ|, (8)\nwhen θ = 0:\nKL(P ||Q) = KL(Q||P ) = JS(P,Q) = 0, W (P,Q) = 0 = |θ|, (9)\nwhere KL(·) gives infinity when two distributions are disjoint, and JS(·) is always a constant. And they are both equal to 0 when θ = 0, so they both have a sudden jump at θ = 0. While the Wasserstein distance provides a smooth measure, which contributes to stable gradient descents.\nVersatile Decoding Pseudocode ExtractBorders aims to obtain the border indices of the prediction, such as the borders of “中国 /科学技术 /大学” is [0, 2, 6, 8]. CutBorders is designed to filter out the candidates in C that cross the borders, such as “中 国科学技术大学” will be filtered out, and “科学” “技术” will be preserved. LinkBorders is designed to obtain all candidates in C that match one-skip or multi-skip borders, such as “中国科学技术大学” will be preserved for it skip two borders [2, 6].\n# extract borders of the segmented token_list def extract_borders(token_list):\nborders = set() for token in token_list:\nborders.add(token.start_offset) borders.add(token.end_offset+1)\nreturn borders\n# find candidates that no-cross borders def cut_borders(token_list, borders):\ncut_borders = [] cross_border = False for token in token_list:\ncross_border = False for idx in range(token.start_offset+1,\ntoken.end_offset+1): if idx in borders:\ncross_border = True break\nif not cross_border: cut_borders.append(token)\nreturn cut_borders\n# find all candidates in token_list that match one-skip or multi-skip borders def link_borders(token_list, borders): link_borders = [] for token in token_list:\nif token.start_offset in borders and (token.end_offset+1) in borders: link_borders.append(token)\nreturn link_borders"
}