{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:24:14.159027Z"
},
"title": "Financial News Annotation by Weakly-Supervised Hierarchical Multi-label Learning",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": "jianghang@cffex.com.cn"
},
{
"first": "Zhongchen",
"middle": [],
"last": "Miao",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": "miaozc@cffex.com.cn"
},
{
"first": "Yuefeng",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Chenyu",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Mengjun",
"middle": [],
"last": "Ni",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Jian",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": "gaojian@cffex.com.cn"
},
{
"first": "Jidong",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Guangwei",
"middle": [],
"last": "Shi",
"suffix": "",
"affiliation": {
"laboratory": "Innovation Lab",
"institution": "Shanghai Financial Futures Information Technology Co",
"location": {
"settlement": "Ltd, Shanghai",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Financial news is an indispensable source for both investors and regulators to conduct research and investment decisions. To focus on specific areas of interest among the massive financial news, there is an urgent necessity of automatic financial news annotation, which faces two challenges: (1) supervised data scarcity for subdivided financial fields; (2) the multifaceted nature of financial news. To address these challenges, we target the automatic financial news annotation problem as a weakly-supervised hierarchical multi-label classification. We propose a method that needs no manual labeled data, but a label hierarchy with one keyword for each leaf label as supervision. Our method consists of three components: word embedding with heterogeneous information, multilabel pseudo documents generation, and hierarchical multi-label classifier training. Experimental results on data from a well-known Chinese financial news website demonstrate the superiority of our proposed method over existing methods.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Financial news is an indispensable source for both investors and regulators to conduct research and investment decisions. To focus on specific areas of interest among the massive financial news, there is an urgent necessity of automatic financial news annotation, which faces two challenges: (1) supervised data scarcity for subdivided financial fields; (2) the multifaceted nature of financial news. To address these challenges, we target the automatic financial news annotation problem as a weakly-supervised hierarchical multi-label classification. We propose a method that needs no manual labeled data, but a label hierarchy with one keyword for each leaf label as supervision. Our method consists of three components: word embedding with heterogeneous information, multilabel pseudo documents generation, and hierarchical multi-label classifier training. Experimental results on data from a well-known Chinese financial news website demonstrate the superiority of our proposed method over existing methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "To target information of concern among massive financial news quickly, there is a natural demand to search and analyze financial news based on topics. To cater for this, most financial news media adopt a manual annotation solution, which is too tedious to cope with rapidly growing financial news. Besides, manual annotation is not intelligent enough to meet the personalized needs of everyone. Therefore, to improve the searching efficiency and analysis accuracy of financial news, a critical step is automatic financial news annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Indeed, the automatic financial news annotation is a classic problem of natural language processing (NLP), that is, text classification. Although related research keeps emerging, however, compared to those common scenarios of fullysupervised flat single-label text classification, our task faces two major challenges. First, supervised model training heavily relies on labeled data, while annotated corpus for each sub-divided financial field is cost expensive, considering the significant professional knowledge requirements for manual annotation. Second, a piece of financial news usually talks about multiple financial products and concepts from multiple levels and perspectives, but it is difficult to apply existing mature neural networks to multi-label and hierarchical text classification simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recognition of the challenges above, we propose a weakly-supervised hierarchical multi-label classification method for financial news. Our method is built upon deep neural networks, while it only requires a label hierarchy and one keyword for each leaf label as supervision, without any labeled data requirements. To leverage user-provided supervised keywords and semantic information in financial news, even though they are unlabeled, our method employs a twostep process of pre-training and self-training. During the pre-training process, we train a classifier with pseudo documents driven by user-provided keywords. Specifically, we model topic distribution for each category with user-provided keywords and generate multi-label pseudo documents from a bag-of-word model guided by the topic distribution. Selftraining is a process of bootstrapping, using the predictions of unlabeled financial news as supervision to guide pre-training classifier fine-tuning iteratively. To ensure the effectiveness of self-training, a novel confidence enhancement mechanism is adopted. Besides, we include multi-modal signals of financial news into the word embedding process by heterogeneous information networks (HIN) [Sun and Han, 2012] encoding algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize, we have the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose a method of weakly-supervised hierarchical multi-label classification for financial news driven by user-provided keywords. With our proposed method, users do need to provide a label hierarchy with one keyword for each leaf label as the supervised source but not any manual labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. To bridge the gap between low-cost weak supervision and expensive labeled data, we propose a multi-label pseudo documents generation module that almost reduces the annotation cost to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. In the hierarchical multi-label classification model training process, we transform the classification problem into a regression problem and introduce a novel confidence enhancement mechanism in the self-training process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. We demonstrate the superiority of our method over var-ious baselines on a dataset from Cailianshe 1 (a wellknown Chinese financial news website), conduct a thorough analysis of each component, and confirm the practical significance of hierarchical multi-label classification by an application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Financial text mining As an important branch of fintech, financial text mining refers to obtaining valuable information from massive unstructured text data, which has attracted the attention of many researchers. The research object of text mining can be a company's financial report [Bai et al., 2019] , as well as self-media content such as Weibo (Chineses twitter) [Wang et al., 2019] . The purpose of the research is also different, for example, studies Seong and Nam, 2019] analyze market prediction using financial news, and study [Kogan et al., 2009] is dedicated to risk discovery. In our work, we take the financial news as the research object, and annotate each piece of news with multiple labels from a label hierarchy automatically.",
"cite_spans": [
{
"start": 283,
"end": 301,
"text": "[Bai et al., 2019]",
"ref_id": "BIBREF0"
},
{
"start": 367,
"end": 386,
"text": "[Wang et al., 2019]",
"ref_id": "BIBREF7"
},
{
"start": 457,
"end": 477,
"text": "Seong and Nam, 2019]",
"ref_id": "BIBREF5"
},
{
"start": 536,
"end": 556,
"text": "[Kogan et al., 2009]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Weakly-supervised text classification Despite the maturity of adopting neural networks in supervised learning, the requirements for labeled data are extremely expensive and full of obstacles, so weakly-supervised learning emerges as the times require. Above all classic works, it can be roughly divided into two directions: extending the topic model in the semantic space by user-provided seed information [Chen et al., 2015; Li et al., 2016] , and transforming weakly-supervised learning to full-supervised learning by generating pseudo documents [Zhang and He, 2013; Meng et al., 2018] .",
"cite_spans": [
{
"start": 406,
"end": 425,
"text": "[Chen et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 426,
"end": 442,
"text": "Li et al., 2016]",
"ref_id": "BIBREF3"
},
{
"start": 548,
"end": 568,
"text": "[Zhang and He, 2013;",
"ref_id": null
},
{
"start": 569,
"end": 587,
"text": "Meng et al., 2018]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hierarchical classification is more complicated than flat one, considering the hierarchy of labels. A lot of research on applying SVM in hierarchical classification [Cai and Hofmann, 2004; Liu et al., 2005] has been started from the first application of [Dumais and Chen, 2000] . Hierarchical dataless classification [Song and Roth, 2014] projects classes and documents into the same semantic space by retrieving Wikipedia concepts. [Meng et al., 2019; Zhang et al., 2019 ] is a continuation of the work in [Meng et al., 2018] , which solves the problem of hierarchical classification through a top-down integrated classification model. To our best knowledge, there is no hierarchical multi-label classification method based on weak supervision so far.",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "[Cai and Hofmann, 2004;",
"ref_id": "BIBREF1"
},
{
"start": 189,
"end": 206,
"text": "Liu et al., 2005]",
"ref_id": "BIBREF3"
},
{
"start": 266,
"end": 277,
"text": "Chen, 2000]",
"ref_id": "BIBREF2"
},
{
"start": 317,
"end": 338,
"text": "[Song and Roth, 2014]",
"ref_id": "BIBREF6"
},
{
"start": 433,
"end": 452,
"text": "[Meng et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 453,
"end": 471,
"text": "Zhang et al., 2019",
"ref_id": "BIBREF5"
},
{
"start": 507,
"end": 526,
"text": "[Meng et al., 2018]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical text classification",
"sec_num": null
},
{
"text": "We take the financial news annotation as a task of weaklysupervised hierarchical multi-label classification. Specifically, each piece of news can be assigned multiple labels, and each category can have more than one children categories but can only belong to at most one parent category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3"
},
{
"text": "To solve our task, we ask users to provide a tree-structured label hierarchy T and one keyword for each leaf label in 1 The website of Cailianshe: https://cls.cn T . Then we propagate the user-provided keywords upwards from leaves to root in T , that is, for each internal category, we aggregate keywords of its all descendant leaf classes as supervision.",
"cite_spans": [
{
"start": 118,
"end": 119,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3"
},
{
"text": "Now we are ready to formulate the problem. Given a class hierarchy tree T with one keyword for each leaf class in T , and news corpora D = {D 1 , D 2 , ..., D N } as well. The weakly-supervised hierarchical multi-label classification task aims to assign the most likely labels set C = {C j1 , C j2 , ..., C jn |C ji \u2208 T } to each D j \u2208 D, where the number of assigned labels is arbitrary and C ji stays for classes at any level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3"
},
{
"text": "The framework of our method is illustrated in Figure 1 , which can be divided into three phases. Because the corpus we use is in Chinese, word segmentation is an essential step before classification. Considering the specificity of the financial corpus, we construct a financial segmentation vocabulary including financial entities, terminologies and English abbreviations by neologism discovery algorithm [Yao et al., 2016] .",
"cite_spans": [
{
"start": 405,
"end": 423,
"text": "[Yao et al., 2016]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "Compared to plain textual data, financial news is a complex object composed of multi-modal signals, including news content, headline, medium, editor, and column. These signals are beneficial to topic classification, for example, editors are indicative because two pieces of news are more likely to share similar topic if they are supplied by the same editor as editors usually have stable specialty and viewpoints. To learn d-dimensional vector representations for each word using such significant multi-modal signals in the corpus, we construct a HIN centered upon words [Zhang et al., 2019] . Specifically, corresponds to heterogeneous information in financial news, we include seven types of nodes: news (N ), columns (C), headlines (H), media (M ), editors (E), words (W ) and labels (L). In which, headlines (H) and words (W ) are tokens segmented from title and content respectively. As a word-centric star schema is adopted, we add an edge between a word node and other nodes if they appear together, thus the weights of edges reflect their co-occurrence frequency.",
"cite_spans": [
{
"start": 572,
"end": 592,
"text": "[Zhang et al., 2019]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding with heterogeneous information",
"sec_num": "4.1"
},
{
"text": "Given a HIN following the above definition of nodes and edges, we can obtain word representations by learning nodes embeddings in this HIN. We use ESIM [Shang et al., 2016], a typical HIN embedding algorithm, to learn nodes representations by restricting the random walk under the guidance of user-specified meta-paths. To guide the random walk, we need to specify meta-paths centered upon words and assign the weights by the importance of meta-path. In our method, we specify meta-paths as W -N -W , W -H-W , W -M -W , W -E-W , W \u2212C \u2212W and W -L-W with empirical weights, modeling the multi-types of second-order proximity [Tang et al., 2015] between words. Furthermore, we perform normal- ",
"cite_spans": [
{
"start": 623,
"end": 642,
"text": "[Tang et al., 2015]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding with heterogeneous information",
"sec_num": "4.1"
},
{
"text": "ization v w \u2190 v w /||v w || on embedding vector v w for each word w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding with heterogeneous information",
"sec_num": "4.1"
},
{
"text": "In this section, we first model class distribution in a semantic space with user-provided keywords, and then generate multilabel pseudo documents as supervised training data based on them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-label pseudo documents generation",
"sec_num": "4.2"
},
{
"text": "Assume that words and documents shared a uniform semantic space, so that we can leverage user-provided keywords to learn a class distribution [Meng et al., 2018] .",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "[Meng et al., 2018]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling class distribution",
"sec_num": null
},
{
"text": "Specifically, we first take the inner product of two embedding vectors v T w1 v w2 as similarity measurement between two words w 1 and w 2 to retrieve top n nearest keywords set K j = {w j0 , w j1 , ..., w jn } in semantic space for each class j based on user-provided keyword w j0 . Remind that we do not specify the parameter n above but terminate the keywords retrieving process when keyword sets of any two classes tend to intersect to ensure the absolute boundary between different classes. Then we fit the expanded keywords distribution f (x|C j ) to a mixture von Mises-Fisher (vMF) distributions [Banerjee et al., 2005 ] to approximate class distribution for each class:",
"cite_spans": [
{
"start": 604,
"end": 626,
"text": "[Banerjee et al., 2005",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling class distribution",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (x|C j ) = m h=1 \u03b1 h f h (x|\u00b5 h , \u03ba h )",
"eq_num": "(1)"
}
],
"section": "Modeling class distribution",
"sec_num": null
},
{
"text": "where f h (x|\u00b5 h , \u03ba h ), as a component in the mixture with a weight \u03b1 h , is the distribution of the h-th child of category C j , m is equal to the number of C j 's children in the label hierarchy. In f h (x|\u00b5 h , \u03ba h ), \u00b5 h is the mean direction vectors and \u03ba h is the concentration parameter of the vMF distribution, which can be derived by Expectation Maximization (EM) [Banerjee et al., 2005] .",
"cite_spans": [
{
"start": 375,
"end": 398,
"text": "[Banerjee et al., 2005]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling class distribution",
"sec_num": null
},
{
"text": "Given distribution for each class, we use a bag-of-words based language model to generate multi-label pseudo documents. We first sample l document vectors d i from various class distribution f (x|C) (l is not specific), and then build a vocabulary V di that contains the top \u03b3 words closest to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "d i in semantic space for each d i . Given a vocabulary set V d = {V d1 , V d2 , ..., V d l },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "we choose a number of words to generate pseudo document with probaliblity p(w|D). Formally,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "p (w|D) = \u03b2p B (w) w / \u2208 V d \u03b2p B (w) + (1 \u2212 \u03b2)p D (w) w \u2208 V d (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "where \u03b2 is a \"noisy\" parameter to prevent overfitting, p B (w) is the background words distribution (i.e., word distribution in the entire corpus), p D (w) is the document-specific distribution, that is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p D (w) = 1 l l i=1 exp d T i v w w \u2208V d i exp d T i v w",
"eq_num": "(3)"
}
],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "where v w is the embedding of word w. Meanwhile, pseudo labels need to be expressed. Suppose existing k document vectors d i are generated from class j, then the label of class j of document D can be represented by,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "label * (D) j = tanh(\u03c3( k(1 \u2212 \u03b2) l + \u03b2 m ))",
"eq_num": "(4)"
}
],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "where \u03c3 is a scale parameter to control the range of label * (D) j , and generally takes an empirical value. Otherwise, if \u2200d i is not generated from class j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "label * (D) j = \u03b2/m",
"eq_num": "(5)"
}
],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "Algorithm 1 Multi-label Pseudo Documents Generation\nInput: Class distribution set {f(x|C_j)}_{j=1}^{m}.\nParameter: number of probability distributions \u03b2 to generate multi-label pseudo documents for each class; number of pseudo documents \u03b3.\nOutput: A set of \u03b3 multi-label pseudo documents D^* and the corresponding label set L^*.\n1: Initialize D^* \u2190 \u2205, L^* \u2190 \u2205, p \u2190 \u2205;\n2: for class index j from 1 to m do\n3: for probability distribution index i from 1 to \u03b2 do\n4: Sample document vector d_i from f(x|C_j);\n5: Calculate probability distribution p(w|d_i) based on Eq 2 // parameter l = 1 in Eq 2;\n6: p \u2190 p \u222a p(w|d_i)\n7: end for\n8: end for\n9: Sample \u03b3 probability distribution combinations from p\n10: for combination index i from 1 to \u03b3 do\n11: D^*_i \u2190 empty string\n12: Calculate probability distribution p(w|D_i) based on Eq 2\n13: Sample w_{ik} ~ p(w|D_i)\n14: D^*_i = D^*_i + w_{ik} // concatenate w_{ik} after D^*_i\n15: Calculate label L^*_i based on Eq 4 and Eq 5\n16: D^* \u2190 D^* \u222a D^*_i\n17: L^* \u2190 L^* \u222a L^*_i\n18: end for\n19: return D^*, L^*",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "where m is the number of children classes related to the local classifier. Algorithm 1 shows the entire process for generating multilabel pseudo-documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo documents generation",
"sec_num": null
},
{
"text": "In this section, we pre-train CNN-based classifiers with pseudo documents and refine it with real unlabeled documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical multi-label classifier training",
"sec_num": "4.3"
},
{
"text": "Hierarchical classification model pre-training can be split into two parts: local classifier training for nodes and global classifier ensembling. We trained a neural classifier M L (\u2022) for each class with two or more children classes. M L (\u2022) has multiscale convolutional kernels in the convolutional layer, ReLU activation in the hidden layer, and Sigmoid activation in the output layer. As the pseudo label is a new distribution instead of binarization vectors, we transform task from multi-label classification to regression and minimizing the mean squared error (MSE) loss from the network outputs to the pseudo labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training with pseudo documents",
"sec_num": null
},
{
"text": "After training a series of local classifiers, we need to build a global classifier G k by integrating all local classifiers from the root node to level k from top to bottom. The multiplication between the output of the parent classifier and child classifier can be explained by conditional probability formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training with pseudo documents",
"sec_num": null
},
{
"text": "p (D i \u2208 C c ) = p (D i \u2208 C c \u2229 D i \u2208 C p ) = p (D i \u2208 C c |D i \u2208 C p ) p (D i \u2208 C p ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training with pseudo documents",
"sec_num": null
},
{
"text": "where, class C c is the child of class C p . When the formula is called recursively, the final prediction can be obtained by the product of all local classifier outputs on the path from the root node to the target node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training with pseudo documents",
"sec_num": null
},
{
"text": "To take advantage of semantic information in the real documents, we utilize the prediction of real documents as supervision in the self-training procedure iteratively. However, if the predictions are used as the supervision for the next iter self-training directly, the self-training can hardly go on because the model has been convergent in pre-training. To obtain more high-confidence training data, we adopt a confidence enhancement mechanism. Specifically, we calculate the confidence of predictions by Eq 7 and only reserve data with high-confidence as training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-training with unlabeled real documents",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "conf (q) = \u2212 log( m i=1 q i + 1) m i=1 q i log q i .",
"eq_num": "(7)"
}
],
"section": "Self-training with unlabeled real documents",
"sec_num": null
},
{
"text": "where m \u2265 2 is the number of children of C j . In addition, we notice the true label of a real document is either zero or one, thus, we conduct a normalization on G k 's predictions by the following formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-training with unlabeled real documents",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "label * * (D i ) j = label * (D i ) j max j (label * (D i ) j )",
"eq_num": "(8)"
}
],
"section": "Self-training with unlabeled real documents",
"sec_num": null
},
{
"text": "When the change rate of G k 's outputs of real documents is lower than \u03b4, the self-training will stop earlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-training with unlabeled real documents",
"sec_num": null
},
{
"text": "Three things will be demonstrated in this section. First, the performance of our method is superior to various baselines for the weakly-supervised hierarchical multi-label financial news classification task (Section 5.2). Second, we carefully evaluate and analyze the components in our method proposed in Section 4(Section 5.3). Third, we reveal the business significance in the task of hierarchical multi-label classification for financial news by an application(Section 5.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We collect a dataset from a well-known Chinese financial news website, Cailianshe, to evaluate the performance of our method. The dataset statistics are provided in Table 1 : the news corpus consists of 7510 pieces of financial news with 2 supercategories and 11 sub-categories, covering the major institutions and product categories in China mainland financial markets. The label hierarchy refers to Figure 2 for details, in which the colored italics are user-provided keywords for leaf labels. It should be noted that we maintained an unbalanced dataset to truly reflect the market size and shares of the Chinese financial market. For example, financial futures account for only 10% but stocks account for 53% in the dataset. This is because there is a mature stock market in China, while the beginning of financial futures in China is late and the initial stage comes into being until China Financial Futures Exchange (CFFEX) launches CSI 300 futures in 2010 to some extent.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 401,
"end": 409,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments setup Dataset",
"sec_num": "5.1"
},
{
"text": "\u2022 WeSHClass [Meng et al., 2019] provides a top-down global classifier for the hierarchical classification, which supports multiple weakly supervised sources.",
"cite_spans": [
{
"start": 12,
"end": 31,
"text": "[Meng et al., 2019]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},
{
"text": "\u2022 HiGitClass [Zhang et al., 2019] utilizes HIN encoding to solve a hierarchical classification task of GitHub repositories, with user-provided keywords as weak seed information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},
{
"text": "Note that WeSHClass and HiGitClass can only output at most a single-label at each level. To compare with our method, we adjust the activation and loss function of baselines to fit a multi-label classification task, but they are still unable to generate multi-label pseudo documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},
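The adaptation described above typically amounts to replacing a softmax with per-label sigmoids and cross-entropy with binary cross-entropy, so each label becomes an independent yes/no decision. A NumPy sketch of this standard recipe (the 0.5 decision threshold and the example logits are assumptions, not details from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(logits, targets):
    """Binary cross-entropy over independent per-label sigmoids,
    averaged over labels: the usual multi-label objective."""
    p = sigmoid(np.asarray(logits, dtype=float))
    t = np.asarray(targets, dtype=float)
    return float(-np.mean(t * np.log(p) + (1 - t) * np.log(1 - p)))

logits = np.array([1.2, -0.5, 2.0])              # hypothetical scores for 3 labels
targets = np.array([1.0, 0.0, 1.0])              # multi-hot ground truth
predicted = (sigmoid(logits) > 0.5).astype(int)  # independent per-label decisions
```

Unlike a softmax, nothing forces the predicted probabilities to sum to one, so any number of labels can fire per level.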
{
"text": "According to the common standards for evaluating classification, we use Micro-F1 and Macro-F1 scores as metrics for classification performances at level 1, level 2, and overall classes respectively. Table 2 demonstrate the superiority of our proposed method over baselines on the financial news dataset. It can be observed from Table 2 that our method has a significant improvement over baselines, whether at level 1, level 2, or overall classes. This is because we borrow the self-training mechanism of WeSHClass and HIN encoding of HIGitClass at the same time, and propose a suitable multi-label pseudo documents generation module in addition. However, for finegrained labels, our method is still far from excellent although the average F1 scores improvement approaches 20% at level 2 comparing to baselines, which reflects the difficulty of this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 328,
"end": 335,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
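Micro-F1 pools true/false positives across all labels (so frequent classes dominate), while Macro-F1 averages per-label F1 scores (so rare classes weigh equally). A self-contained sketch for multi-label predictions given as sets of labels (function and variable names are illustrative):

```python
def micro_macro_f1(true_sets, pred_sets, labels):
    """Compute Micro-F1 (pooled counts) and Macro-F1 (per-label average)
    for multi-label predictions given as per-document label sets."""
    tp = {l: 0 for l in labels}
    fp = {l: 0 for l in labels}
    fn = {l: 0 for l in labels}
    for t, p in zip(true_sets, pred_sets):
        for l in labels:
            tp[l] += int(l in t and l in p)
            fp[l] += int(l in p and l not in t)
            fn[l] += int(l in t and l not in p)

    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0

    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    return micro, macro
```

On an unbalanced dataset like this one, a large gap between the two scores signals that rare fine-grained classes are handled poorly even when overall accuracy looks high.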
{
"text": "To evaluate each components, we carefully analyze performance of models with or without different components in Figure 3 and Qualitatively, the effectiveness of the multi-label pseudo document generation module has been demonstrated in previous training, and its quantitative value will be carefully analyzed by replacing the pseudo documents with manually labeled data. As we can observe in Figure 3, F1 pseudo documents training model is slightly lower than labeled documents training model at level 1, but for level2 and overall classes, the former stays lower than the latter until the number of labeled documents reaches 120 per class. To some extent, this component can save 1560 (120 per class \u00d7 13 classes) pieces of documents labeling cost.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 392,
"end": 404,
"text": "Figure 3, F1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Components Performance Evaluation",
"sec_num": "5.3"
},
{
"text": "To analyze the effect of heterogeneous information and self-training, we conduct model ablation experiments to compare performances of two variants (No heterogeneous information and No self-training) and our Full method. Here, the method of No heterogeneous information means heterogeneous information is not included in the word embedding process, and the method of No self-training means the selftraining process is removed from the complete model. Overall F1 score in Figure 4 illustrates that both No heterogeneous information and No self-training perform are worse than the Full method. Therefore, embedding words with heterogeneous information and self-training with unlabeled real data play essential roles in financial news classification. ",
"cite_spans": [],
"ref_spans": [
{
"start": 471,
"end": 479,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Components Performance Evaluation",
"sec_num": "5.3"
},
{
"text": "A good classification can not only label each document appropriately but also can mine the hidden information behind the corpus. This section gives an example of a practical application, that is, discovering a correlation of business significance behind labels. In brief, we calculate the Pearson coefficients across all labels to draw a label correlation matrix in Figure 5 , whose colors from shallow to deep represent the labels correlation is from weak to strong.",
"cite_spans": [],
"ref_spans": [
{
"start": 366,
"end": 374,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Application",
"sec_num": "5.4"
},
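The correlation matrix can be computed directly from the multi-hot label assignments: with documents as rows and labels as columns, `np.corrcoef(..., rowvar=False)` yields the pairwise Pearson coefficients between labels. A sketch with a purely illustrative toy label matrix:

```python
import numpy as np

# Rows are documents, columns are labels (multi-hot assignments);
# this toy matrix is illustrative, not from the paper's dataset.
label_matrix = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
])
# Pairwise Pearson coefficients between labels (columns).
corr = np.corrcoef(label_matrix, rowvar=False)
```

Labels 0 and 1 co-occur in most documents, so their coefficient is strongly positive, while label 2 correlates negatively with both, mirroring the exchange/product patterns read off Figure 5.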
{
"text": "We only analyze the lower triangular matrix due to its symmetry, observing following two phenomena: (1) Correlations between different exchanges and products are different (e.g., CFFEX has a strong correlation with financial futures) and correlations between different exchanges are different as well (e.g., there is a strong correlation between Shanghai Stock Exchange (SSE) and Shenzhen Stock Exchange (SZSE)). This phenomenon implies the main products of exchanges and their relationships. (2) Commodity futures are highly uncorrelated with stock exchanges or securities products such as stocks, while financial futures are not. This is because commodity futures (e.g., petroleum futures) take spot commodities as subject matter but financial futures (e.g., stock indexes futures) take securities products as subject matter. These phenomena are aligned with the reality of China's financial market, which demonstrates that targeting the financial news annotation task as hierarchical multi-label classification does have its practical application value, such as quickly understanding the relationship between different institutions, products, and concepts in complex financial markets. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application",
"sec_num": "5.4"
},
{
"text": "In this paper, we proposed a weakly-supervised hierarchical multi-label classification method with three modules for financial news, which enables us to effectively overcome challenges of supervision scarcity and the multifaceted nature of financial news. Experiments on a Chinese financial news dataset demonstrate the performance of our near-zero cost solution for hierarchical multi-label classification. Besides, we reveal the practical value and business significance of hierarchical multi-label classification in a real-world application. In the future, we would like to improve the quality of pseudo documents by label promotion methods such as the label propagation mechanism. With more accurate labels for pseudo documents, the performance of the model trained with pseudo documents will be further improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Business taxonomy construction using concept-level hierarchical clustering",
"authors": [
{
"first": "[",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bai",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Machine Learning Research",
"volume": "6",
"issue": "",
"pages": "1345--1382",
"other_ids": {
"arXiv": [
"arXiv:1906.09694"
]
},
"num": null,
"urls": [],
"raw_text": "References [Bai et al., 2019] Haodong Bai, Frank Z Xing, Erik Cambria, and Win-Bin Huang. Business taxonomy construction us- ing concept-level hierarchical clustering. arXiv preprint arXiv:1906.09694, 2019. [Banerjee et al., 2005] Arindam Banerjee, Inderjit S Dhillon, Joydeep Ghosh, and Suvrit Sra. Clustering on the unit hypersphere using von mises-fisher dis- tributions. Journal of Machine Learning Research, 6(Sep):1345-1382, 2005.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hierarchical document categorization with support vector machines",
"authors": [
{
"first": "; Lijuan",
"middle": [],
"last": "Hofmann",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the thirteenth ACM international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "78--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Hofmann, 2004] Lijuan Cai and Thomas Hofmann. Hierarchical document categorization with support vector machines. In Proceedings of the thirteenth ACM interna- tional conference on Information and knowledge manage- ment, pages 78-87, 2004.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dataless text classification with descriptive lda",
"authors": [],
"year": 2000,
"venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "256--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al., 2015] Xingyuan Chen, Yunqing Xia, Peng Jin, and John Carroll. Dataless text classification with descrip- tive lda. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. [Dumais and Chen, 2000] Susan Dumais and Hao Chen. Hi- erarchical classification of web content. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 256-263, 2000.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Effective document labeling with very few seed words: A topic model approach",
"authors": [
{
"first": "",
"middle": [],
"last": "Kogan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "85--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kogan et al., 2009] Shimon Kogan, Dimitry Levin, Bryan R Routledge, Jacob S Sagi, and Noah A Smith. Predicting risk from financial reports with regression. In Proceedings of Human Language Technologies: The 2009 Annual Con- ference of the North American Chapter of the Association for Computational Linguistics, pages 272-280, 2009. [Li et al., 2016] Chenliang Li, Jian Xing, Aixin Sun, and Zongyang Ma. Effective document labeling with very few seed words: A topic model approach. In Proceedings of the 25th ACM international on conference on information and knowledge management, pages 85-94, 2016. [Liu et al., 2005] Tie-Yan Liu, Yiming Yang, Hao Wan, Hua- Jun Zeng, Zheng Chen, and Wei-Ying Ma. Support vector machines classification with a very large-scale taxonomy.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Weakly-supervised neural text classification",
"authors": [
{
"first": "",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
"volume": "7",
"issue": "",
"pages": "983--992",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Acm Sigkdd Explorations Newsletter, 7(1):36-43, 2005. [Meng et al., 2018] Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. Weakly-supervised neural text classi- fication. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 983-992, 2018.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Predicting stock movements based on financial news with systematic group identification",
"authors": [],
"year": 2016,
"venue": "Metapath guided embedding for similarity search in largescale heterogeneous information networks",
"volume": "33",
"issue": "",
"pages": "1--17",
"other_ids": {
"arXiv": [
"arXiv:1610.09769"
]
},
"num": null,
"urls": [],
"raw_text": "et al., 2019] Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. Weakly-supervised hierarchical text clas- sification. In Proceedings of the AAAI Conference on Ar- tificial Intelligence, volume 33, pages 6826-6833, 2019. [Seong and Nam, 2019] NohYoon Seong and Kihwan Nam. Predicting stock movements based on financial news with systematic group identification. Journal of Intelligence and Information Systems, 25(3):1-17, 2019. [Shang et al., 2016] Jingbo Shang, Meng Qu, Jialu Liu, Lance M Kaplan, Jiawei Han, and Jian Peng. Meta- path guided embedding for similarity search in large- scale heterogeneous information networks. arXiv preprint arXiv:1610.09769, 2016.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Trade the tweet: Social media text mining and sparse matrix factorization for stock market prediction",
"authors": [
{
"first": "Roth ; Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Dan Roth ; Yizhou",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th international conference on world wide web",
"volume": "3",
"issue": "",
"pages": "1067--1077",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Roth, 2014] Yangqiu Song and Dan Roth. On dataless hierarchical text classification. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014. [Sun and Han, 2012] Yizhou Sun and Jiawei Han. Min- ing heterogeneous information networks: principles and methodologies. Synthesis Lectures on Data Mining and Knowledge Discovery, 3(2):1-159, 2012. [Sun et al., 2016] Andrew Sun, Michael Lachanski, and Frank J Fabozzi. Trade the tweet: Social media text mining and sparse matrix factorization for stock market predic- tion. International Review of Financial Analysis, 48:272- 281, 2016. [Tang et al., 2015] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large- scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067-1077, 2015.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "User and topic hybrid context embedding for finance-related text data mining",
"authors": [],
"year": 2019,
"venue": "International Conference on Data Mining Workshops (ICDMW)",
"volume": "",
"issue": "",
"pages": "751--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al., 2019] Chenyu Wang, Zhongchen Miao, Yue- feng Lin, and Jian Gao. User and topic hybrid context em- bedding for finance-related text data mining. 2019 Interna- tional Conference on Data Mining Workshops (ICDMW), pages 751-760, 2019.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "; Jason",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6382--6388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Zou, 2019] Jason Wei and Kai Zou. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China, November 2019. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pu Zhang and Zhongshi He. A weakly supervised approach to chinese sentiment classification using partitioned self-training",
"authors": [],
"year": 2013,
"venue": "IEEE International Conference on Data Mining (ICDM)",
"volume": "39",
"issue": "",
"pages": "876--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al., 2016] Rongpeng Yao, Guoyan Xu, and Jian Song. Micro-blog new word discovery method based on improved mutual information and branch entropy. Journal of Computer Applications, pages 2772-2776, 2016. [Zhang and He, 2013] Pu Zhang and Zhongshi He. A weakly supervised approach to chinese sentiment classification us- ing partitioned self-training. Journal of Information Sci- ence, 39(6):815-831, 2013. [Zhang et al., 2019] Yanyong Zhang, Frank F. Xu, Sha Li, Yu Meng, Xuan Wang, Qi Li, and Jiawei Han. Higit- class: Keyword-driven hierarchical classification of github repositories. 2019 IEEE International Conference on Data Mining (ICDM), pages 876-885, 2019.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The framework of proposed method."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The label hierarchy for Chinese financial market."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 4."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performances Comparision of classificaton with pseudo documents and manual annotation documents"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Comparison among No heterogeneous information, No self-training and Full method."
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The labels correlation matrix, reflecting information about relationship between different financial concepts."
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Performance comparison for all method, using Micro-F1 and Macro-F1 scores as metrics at all levels."
}
}
}
}