{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:24:18.133797Z"
},
"title": "DICoE@FinSim-3: Financial Hypernym Detection using Augmented Terms and Distance-based Features",
"authors": [
{
"first": "Lefteris",
"middle": [],
"last": "Loukas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"country": "Greece"
}
},
"email": "eloukas@iit.demokritos.gr"
},
{
"first": "Konstantinos",
"middle": [],
"last": "Bougiatiotis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"country": "Greece"
}
},
"email": ""
},
{
"first": "Manos",
"middle": [],
"last": "Fergadiotis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"country": "Greece"
}
},
"email": "mfergadiotis@iit.demokritos.gr"
},
{
"first": "Dimitris",
"middle": [],
"last": "Mavroeidis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"country": "Greece"
}
},
"email": "dmavroeidis@iit.demokritos.gr"
},
{
"first": "Elias",
"middle": [],
"last": "Zavitsanos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Centre for Scientific Research \"Demokritos\"",
"location": {
"country": "Greece"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the submission of team DICOE for FIN-SIM-3, the 3rd Shared Task on Learning Semantic Similarities for the Financial Domain. The task provides a set of terms in the financial domain and requires to classify them into the most relevant hypernym from a financial ontology. After augmenting the terms with their Investopedia definitions, our system employs a Logistic Regression classifier over financial word embeddings and a mix of handcrafted and distance-based features. Also, for the first time in this task, we employ different replacement methods for out-of-vocabulary terms, leading to improved performance. Finally, we have also experimented with word representations generated from various financial corpora. Our bestperforming submission ranked 4th on the task's leaderboard.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the submission of team DICOE for FIN-SIM-3, the 3rd Shared Task on Learning Semantic Similarities for the Financial Domain. The task provides a set of terms in the financial domain and requires to classify them into the most relevant hypernym from a financial ontology. After augmenting the terms with their Investopedia definitions, our system employs a Logistic Regression classifier over financial word embeddings and a mix of handcrafted and distance-based features. Also, for the first time in this task, we employ different replacement methods for out-of-vocabulary terms, leading to improved performance. Finally, we have also experimented with word representations generated from various financial corpora. Our bestperforming submission ranked 4th on the task's leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Taxonomies constitute the backbone of many knowledge representation schemas, especially in the context of the Semantic web and ontologies. Such hierarchies model among others the hypernymy relation, a significant semantic relation between concepts. It is an asymmetric relation between two concepts, a hyponym (subordinate) and a hypernym (superordinate), as in \"car-vehicle\", where the hyponym necessarily implies the hypernym, but not vice versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the context of hypernym detection, the shared task on Learning Semantic Similarities for the Financial Domain (FINSIM-3) focuses on the evaluation of semantic representations by assessing the classification of a given list of terms from the financial domain against a domain ontology. A list of carefully selected terms from the financial domain is provided, such as \"European depositary receipt\", \"Interest rate swaps\", and others, and the task is to design a system that can classify them into the most relevant hypernym (or toplevel) concept in an external ontology. The referenced ontology is the Financial Industry Business Ontology (FIBO). 1 For instance, given the set of concepts \"Bonds\", \"Unclassified\", \"Share\", \"Loan\", the most relevant hypernym of \"European depositary receipt\" is \"Share\". Figure 1 illustrates hypernym examples based on small parts of the FIBO ontology. Figure 1 : Examples of hypernym relations from the FIBO ontology. Interestingly, \"Bond coupon\" is a kind of \"Bond\", but \"Bond option\" is a kind of \"Option\".",
"cite_spans": [],
"ref_spans": [
{
"start": 805,
"end": 813,
"text": "Figure 1",
"ref_id": null
},
{
"start": 887,
"end": 895,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present our solution to the FINSIM-3 task. Our system starts by augmenting all given terms with their Investopedia definitions. Then, we employ a Logistic Regression classifier over financial word embeddings and a mix of hand-crafted and distance-based features. Moreover, for the first time in this task, we explore various replacement methods for out-of-vocabulary (OOV) terms, leading to improved performance in our experiments. Our best-performing submission ranked 4th on the task's leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In what follows, Section 2 gives a brief overview of the related work in the context of the previous FINSIM tasks. Section 3 presents the data that are provided in this task. Then, Section 4 presents our solution to the task, while Section 5 provides empirical evaluation results. Finally, Section 6 summarizes the paper and presents future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hypernym modeling: Unsupervised hypernym modeling and classification mainly rely on measures assuming a dis-tributional inclusion. This means that if a term c is semantically narrower than term p, then a number of distributional features of c should also be included in the feature vector of p [Lenci and Benotto, 2012] . Similarly, the work in [Santus et al., 2014] is based on the distributional informativeness hypothesis, which assumes that hypernyms tend to be less informative than hyponyms. Such distributional approaches rely on vector semantics and represent words as vectors.",
"cite_spans": [
{
"start": 294,
"end": 319,
"text": "[Lenci and Benotto, 2012]",
"ref_id": null
},
{
"start": 345,
"end": 366,
"text": "[Santus et al., 2014]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Supervised methods are mainly based on word embeddings to represent words as low dimensional vectors in a latent space. Hypernym/hyponym pairs are encoded as combinations of two-word vectors, and hypernym relation classification is usually performed by training a classifier given the latter combinations of vectors as input [Baroni et al., 2012; Roller et al., 2014; Weeds et al., 2014] . Other approaches rely on pre-extracted taxonomic relation data to create word embeddings that are later used as input to a Support Vector Machine (SVM) to learn the hypernym relation [Tuan et al., 2016; Yu et al., 2015] .",
"cite_spans": [
{
"start": 325,
"end": 346,
"text": "[Baroni et al., 2012;",
"ref_id": "BIBREF0"
},
{
"start": 347,
"end": 367,
"text": "Roller et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 368,
"end": 387,
"text": "Weeds et al., 2014]",
"ref_id": "BIBREF9"
},
{
"start": 573,
"end": 592,
"text": "[Tuan et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 593,
"end": 609,
"text": "Yu et al., 2015]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In the context of FINSIM-3, we approach the task as a classification problem, aiming to classify input terms to their correct hypernym. We follow a supervised distributional approach without explicitly modeling hypernym relations or other ontological relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The first task to propose hypernym categorization in the financial domain was FINSIM-1 [Maarouf et al., 2020] , having a total tagset of 8 FIBO classes/hypernyms. The 1st-year winner system [Keswani et al., 2020] combined rules and a Naive Bayes classifier over word2vec embeddings [Mikolov et al., 2013] , overperforming BERT [Devlin et al., 2019] embeddings. The runner-up system [Saini, 2020] augmented all terms with their Investopedia definition and used a linear SVM over some hand-crafted and bi-gram TF-IDF features.",
"cite_spans": [
{
"start": 87,
"end": 109,
"text": "[Maarouf et al., 2020]",
"ref_id": "BIBREF2"
},
{
"start": 190,
"end": 212,
"text": "[Keswani et al., 2020]",
"ref_id": "BIBREF1"
},
{
"start": 282,
"end": 304,
"text": "[Mikolov et al., 2013]",
"ref_id": "BIBREF4"
},
{
"start": 322,
"end": 348,
"text": "BERT [Devlin et al., 2019]",
"ref_id": null
},
{
"start": 382,
"end": 395,
"text": "[Saini, 2020]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FINSIM:",
"sec_num": null
},
{
"text": "One year later, FINSIM-2 [Mansar et al., 2021] held place, expanding the tagset to 10 financial hypernyms. The FINSIM-2 winners [Chersoni and Huang, 2021] used a Logistic Regression classifier over word embeddings, semantic and string similarities, along with BERT-derived masking probabilities to classify each term to a hypernym. The second-place system [Pei and Zhang, 2021] also used a Logistic Regression classifier over fine-tuned word embeddings derived from various financial text corpora.",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "[Mansar et al., 2021]",
"ref_id": "BIBREF3"
},
{
"start": 356,
"end": 377,
"text": "[Pei and Zhang, 2021]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FINSIM:",
"sec_num": null
},
{
"text": "Our system augments all terms with their Investopedia definitions, following [Saini, 2020] , and incorporates handcrafted and distance-based features. However, in contrast to previous works, we also experiment with different OOV replacement methods for unknown terms to deal with the many words in the dataset that are not contained in the vocabulary of the pre-trained word embeddings.",
"cite_spans": [
{
"start": 77,
"end": 90,
"text": "[Saini, 2020]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FINSIM:",
"sec_num": null
},
{
"text": "In this section, we briefly present the data provided by the task organizers, which include the training dataset, the FIBO ontology, the prospectus corpus, and the word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The dataset consists of one-word or multi-word concepts from the financial domain and their labels. It is The training set comprises 1050 examples with their corresponding labels. The unique labels are 17 in total, and their frequencies in the training data are provided in Table 1 in descending order. The most frequent label is Equity Index, which appears in 27% of the training examples, while the rarest labels are Forward and Securities restrictions. Both of them appear less than 10 times in the training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 281,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset:",
"sec_num": null
},
{
"text": "FIBO Ontology: The Financial Industry Business Ontology (FIBO) is a pioneering effort to formalize the semantics of the financial domain using a large number of ontologies. At the time of this writing, FIBO is still a work in progress. However, it already defines large sets of concepts that are of interest in financial business applications and how these concepts relate to one another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset:",
"sec_num": null
},
{
"text": "For each concept, FIBO provides additional textual information, including a definition, a generated description, labels, titles, and in many cases, a small abstract. We combine the FIBO textual information with the prospectus corpus (see below) for training custom embeddings. In particular, we traverse the ontologies and concepts related to the labels in the training set, and we fetch the corresponding textual information. This way, we augment the prospectus corpus with additional concept-specific documents from the ontology that include small snippets, descriptions, and definitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset:",
"sec_num": null
},
{
"text": "Prospectus corpus: A corpus of documents is also provided in the English language for training word embeddings. The corpus has been compiled from various websites, comprising financial prospectuses, and it consists of approximately 14M tokens. Those files are given in pdf format. We have used this corpus and text parts of the FIBO ontology to train custom word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset:",
"sec_num": null
},
{
"text": "Word embeddings: Finally, two sets of pre-trained word embeddings are also provided in the context of the task. Both sets are based on the Word2Vec model and trained in an internal financial corpus by the organizers, which comprises key investor information documents and financial prospectuses. The difference between the two word embeddings is the number of dimensions of the vectors and the vocabulary size. The first comprises 100-dimensional vectors and 17328 words, while the second has 300 dimensions and 34437 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset:",
"sec_num": null
},
{
"text": "Our system extends baseline 2, which is a Logistic Regression classifier over the word2vec embeddings from the FINSIM-3 organizers. Figure 2 illustrates the pipeline of our system. After augmenting all terms with their Investopedia definitions, we concatenate and scale the OOV-aware embeddings with the hand-crafted and distance features. We then use a Logistic Regression classifier over these features. In the following subsections, we present how we worked towards building our pipeline.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System description",
"sec_num": "4"
},
{
"text": "First, we created a set of hand-crafted features that are indicative of specific classes. In order to gain insights on terms that could be indicative of each class, we did a preliminary error analysis using the provided baseline 2 system as a predictor. We devised 7 simple boolean hand-crafted features that denoted the existence of a specific string in the term. These strings were common in miss-classified terms of the baseline model while also being highly indicative of the true class of the term. For example, the occurrence of the string \"Inc.\" was very common in errors of the baseline, while at the same time, almost all terms having this string belonged to the Stock Corporation class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "4.1"
},
{
"text": "Moreover, specific classes such as Credit Index have a lot of upper-case letters or unusual patterns. To capture these intricacies, we have also added 3 features that correspond to the number of characters in the term, the number of uppercase letters, and the ratio of upper-case to lower-case letters. We ended up with 10 such hand-crafted features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "4.1"
},
{
"text": "Furthermore, in order to represent the latent space distance between the terms and the classes, we calculated the cosine distance between the term's embedding and each class' embedding, adding 17 features in total. 2 Last but not least, we calculated the Levenshtein distances between the term and each class label, adding another 17 features. After concatenating all of the features, we scale them to a uniform range of [-1, 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "4.1"
},
{
"text": "Inside the 1050 training terms, we found at least 214 words that were out of vocabulary, using the 300-dimensional organizers' embeddings. Such words were mainly instances of credit indexes and regulatory agencies. Since terms contain a single word or a small number of words, we identified the need to deal with OOV occurrences instead of using zero embeddings. For example, the word \"asiacorporate\" and \"t-bill\" are OOV and they are represented by zero embeddings. Ideally, we would like to match them correctly to the most similar in-vocabulary words \"corporate\" and \"treasury-bill\" in order to retrieve the best vector representations available to help the classification process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Vocabulary Words",
"sec_num": "4.2"
},
{
"text": "First, we tried replacing each OOV word with its closest in-vocabulary word in terms of Levenshtein distance, which helped performance. Then, we replaced this relatively simple mechanism by utilizing the Magnitude toolkit [Patel et al., 2018] . Magnitude works as a wrapper for already-trained word2vec models. It allows advanced OOV lookup as it combines different character n-grams, string similarities, and morphology-aware matching.",
"cite_spans": [
{
"start": 222,
"end": 242,
"text": "[Patel et al., 2018]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Vocabulary Words",
"sec_num": "4.2"
},
{
"text": "We use the 300-dimensional embeddings provided by the organizers for the word vector representations since they perform better in the development set than the 100-dimensional ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In-domain representations",
"sec_num": "4.3"
},
{
"text": "We also experimented with a wide variety of other domainspecific word representations. First, we trained our financial word2vec embeddings (d=200) on the prospectus corpus and the FIBO ontology provided. We extracted the text from the prospectuses PDF files using the provided pdf-totext toolkit 3 . However, they proved to be less beneficial than the provided embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In-domain representations",
"sec_num": "4.3"
},
{
"text": "Furthermore, we extracted the embedding layer of FIN-BERT [Yang et al., 2020] , a financial BERT, and converted it to word2vec format. To our surprise, the provided embeddings outperformed the FINBERT embeddings. The provided embeddings also outperformed our custom word2vec and BERT embeddings, trained on financial documents from the Securities & Exchange Commission.",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "[Yang et al., 2020]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-domain representations",
"sec_num": "4.3"
},
{
"text": "During the later stages of our experimentation, we noticed that many of the misclassifications were attributed to acronyms (e.g. \"FRN\") or common words being present (\"Long call/put\"). In order to alleviate this problem and provide more context for all terms, we utilized Investopedia 4 as a dictionary of definitions for each term. To do this, we built a scrapper that pinpoints the closest match of a given term that has a definition in the terms dictionary of the website. The scrapper first tries to find exact matches of the query term in the dictionary of the website. If this fails, we utilize the search functionality of the site to identify the closest matching term. Having found a corresponding match (exact or approximate), we fetch the definition of the matched term and keep only the first sentence of the definition, as it usually is in the format: \"[Term] is a ..\", where [Term] denotes the matched term for the given query term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentation with Investopedia Terms",
"sec_num": "4.4"
},
{
"text": "This process is followed both for augmenting the initial training data and the final test terms, as it can be easily incorporated at inference time. Following the above process, approximately 70% of both the train and test terms were augmented. In case the augmentation process did not retrieve any definition for a given term (this was common for Credit Indexes for example), the term was left as is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentation with Investopedia Terms",
"sec_num": "4.4"
},
{
"text": "Apart from Logistic Regression, we also evaluated a battery of different classifiers implemented in the scikit-learn library, like the Naive Bayes Classifier, Decision Trees, linear SVMs, Multi-Layer Perceptron, XGBoost, and RUSBoost [Seiffert et al., 2008] , without any improvement in performance. Indicatively, classifiers based on trees (Decision Trees, XGBoost, RUSBoost) scored the lowest, possibly due to the extensive feature space.",
"cite_spans": [
{
"start": 188,
"end": 257,
"text": "Multi-Layer Perceptron, XGBoost, and RUSBoost [Seiffert et al., 2008]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier tuning",
"sec_num": "4.5"
},
{
"text": "Thus, we chose to continue with the simple -yet powerful-Logistic Regression classifier, where we tuned the regularization strength hyperparameter C. We defined a search space of {0.001, 0.01, 0.1, 1.0, 10.0, 100.0}. We found that C=0.1 is the best option in terms of mean rank and accuracy based on a stratified 5-fold cross-validation setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier tuning",
"sec_num": "4.5"
},
{
"text": "We evaluate our performance using accuracy, mean rank and macro-average F1 score, as shown in equations 1, 2 and 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Accuracy = T P + T N T P + T N + F P + F N (1) M eanRank = 1 n * n i=1 rank i (2) M acroF 1 = 1 n * n i=0 F 1 i",
"eq_num": "(3)"
}
],
"section": "Metrics",
"sec_num": "5.1"
},
{
"text": "In the context of the shared task, apart from accuracy, we also had to generate all labels in ranked order and measure 4 https://www.investopedia.com/ the mean rank. For each term x i with a label y i from the n samples in the test set, the expected prediction is a top-3 list of labels ranked from most to least likely to be equal to the ground truth. In equation 2, rank i is the rank of the correct label in the top-3 prediction list. If the ground truth does not appear in the top-3 then rank i is equal to 4. Table 2 presents the experimental results of our system's variations. We used the Logistic Regression classifier in a stratified 5-fold cross-validation setting. Since the training set is small and imbalanced we selected that setting in order to ensure that all classes will be represented in each fold. We provide empirical results using the best hyperparameters after tuning (see Subsection 4.5). In particular, the following variations were evaluated: Investopedia-based augmented terms (Subsection 4.4). This variation constitutes the second submission of our system (DICOE 2) to the shared task. Table 2 shows that incorporating the hand-crafted features improves the baseline by 2.4% in terms of accuracy. The mean rank also improves from 1.196 to 1.166, suggesting that simple substring binary features may indicate the specific class of each term. Then, we first leverage the Levenshtein distance to replace each OOV word found in the terms with its closest in-vocabulary word. This boosts accuracy by 0.5%, while the mean rank is reduced to 1.156. Thus, handling OOV words with replacements allows us to retrieve better vector representations than zero embeddings. Our next improvement combines the Levenshtein character distance between the term's words and the class labels, as well as the cosine distance between their representations in the latent space, improving accuracy to 90.8% and mean rank to 1.147.",
"cite_spans": [],
"ref_spans": [
{
"start": 514,
"end": 521,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1115,
"end": 1122,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.1"
},
{
"text": "Next, we combine all of the above and replace the simple Levenshtein OOV mechanism with the Magnitude toolkit [Patel et al., 2018] . This advanced OOV lookup method scored 91.2% in terms of accuracy and 1.144 in terms of mean rank and represents our first submission to the FIN-SIM-3 shared task (see BL.HF.OOV l .D 2 in Table 2 ).",
"cite_spans": [
{
"start": 110,
"end": 130,
"text": "[Patel et al., 2018]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Our final system extends the first submission by augmenting the financial terms with their Investopedia definitions (Section 4.4) in order to provide more context for the classification. This was our best system and scored a 1.132 mean rank, 91.5% accuracy, and 85.0% Macro F1 Score in the 5fold cross validation (see BL.HF.OOV m .D 2 + in Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "We presented DICOE team's submissions to FINSIM-3. Our Investopedia-augmented system ranked 4th on the leaderboard. We leveraged hand-crafted and distance-based features which led to significant improvements over the baseline. To our surprise, external and modern financial word representations, such as FINBERT, did not contribute positively to the results. Moreover, for the first time in this shared task, we introduced the application of OOV word replacement methods. Using OOV replacements, we can successfully retrieve correct vector representations for unknown tokens that share the same morphology with in-vocabulary words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we plan to investigate other ways of augmenting terms with their definitions and broad context, as well as creating new external financial resources for experimentation. An additional future direction is the direct modeling of the hypernym relation using pairs of tokens and labels to explicitly learn that type of relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Visit https://spec.edmcouncil.org/fibo/ontology for more information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Proceedings of the Third Workshop on Financial Technology and Natural Language Processing (FinNLP@IJCAI 2021), pages 40-45, Online, August 19, 2021.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The vector representation for multi-token terms/classes is the sum of each token's vector representation. We also tried the centroids of the embeddings which reduced the performance.3 Consult https://poppler.freedesktop.org/ for more information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "PolyU-CBS at the FinSim-2 Task: Combining Distributional, String-Based and Transformers-Based Features for Hypernymy Detection in the Financial Domain",
"authors": [
{
"first": "[",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References [Baroni et al., 2012] Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung chieh Shan. Entailment above the word level in distributional semantics. In 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 23-32, Avi- gnon, France, 2012. [Chersoni and Huang, 2021] Emmanuele Chersoni and Chu- Ren Huang. PolyU-CBS at the FinSim-2 Task: Combin- ing Distributional, String-Based and Transformers-Based Features for Hypernymy Detection in the Financial Do- main, page 316-319. New York, NY, USA, 2021. [Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Ken- ton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June 2019.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "IITK at the FinSim task: Hypernym detection in financial domain via context-free and contextualized word embeddings",
"authors": [
{
"first": "[",
"middle": [],
"last": "Keswani",
"suffix": ""
}
],
"year": 2012,
"venue": "SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Proceedings of the Sixth International Workshop on Semantic Evaluation (Se-mEval)",
"volume": "",
"issue": "",
"pages": "75--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Keswani et al., 2020] Vishal Keswani, Sakshi Singh, and Ashutosh Modi. IITK at the FinSim task: Hypernym de- tection in financial domain via context-free and contextu- alized word embeddings. In Proceedings of the Second Workshop on Financial Technology and Natural Language Processing, pages 87-92, Kyoto, Japan, 5 January 2020. [Lenci and Benotto, 2012] Alessandro Lenci and Giulia Benotto. Identifying hypernyms in distributional seman- tic spaces. In SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Proceedings of the Sixth International Workshop on Semantic Evaluation (Se- mEval), pages 75-79, Montreal, Canada, 2012.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The FinSim 2020 shared task: Learning semantic representations for the financial domain",
"authors": [
{
"first": "[",
"middle": [],
"last": "Maarouf",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Financial Technology and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "81--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Maarouf et al., 2020] Ismail El Maarouf, Youness Mansar, Virginie Mouilleron, and Dialekti Valsamou-Stanislawski. The FinSim 2020 shared task: Learning semantic repre- sentations for the financial domain. In Proceedings of the Second Workshop on Financial Technology and Natural Language Processing, pages 81-86, Kyoto, Japan, 5 Jan- uary 2020.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The finsim-2 2021 shared task: Learning semantic similarities for the financial domain",
"authors": [
{
"first": "[",
"middle": [],
"last": "Mansar",
"suffix": ""
}
],
"year": 2021,
"venue": "Companion of The Web Conference 2021, Virtual Event / Ljubljana",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Mansar et al., 2021] Youness Mansar, Juyeon Kang, and Isma\u00efl El Maarouf. The finsim-2 2021 shared task: Learn- ing semantic similarities for the financial domain. In Jure Leskovec, Marko Grobelnik, Marc Najork, Jie Tang, and Leila Zia, editors, Companion of The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, pages 288-292. ACM / IW3C2, 2021.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Magnitude: A fast, efficient universal vector embedding utility package",
"authors": [
{
"first": "[",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "120--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Mikolov et al., 2013] Tom\u00e1s Mikolov, Kai Chen, Greg Cor- rado, and Jeffrey Dean. Efficient estimation of word rep- resentations in vector space. In Yoshua Bengio and Yann LeCun, editors, 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, 2013. [Patel et al., 2018] Ajay Patel, Alexander Sands, Chris Callison-Burch, and Marianna Apidianaki. Magnitude: A fast, efficient universal vector embedding utility pack- age. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing: Sys- tem Demonstrations, pages 120-126, Brussels, Belgium, November 2018.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "GOAT at the FinSim-2 Task: Learning Word Representations of Financial Data with Customized Corpus",
"authors": [
{
"first": "Yulong",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Pei and Zhang, 2021] Yulong Pei and Qian Zhang. GOAT at the FinSim-2 Task: Learning Word Representations of Financial Data with Customized Corpus, page 307-310. New York, NY, USA, 2021.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Inclusive yet selective: Supervised distributional hypernymy detection",
"authors": [
{
"first": "[",
"middle": [],
"last": "Roller",
"suffix": ""
}
],
"year": 2014,
"venue": "25th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "1025--1036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Roller et al., 2014] Stephen Roller, Katrin Erk, and Gemma Boleda. Inclusive yet selective: Supervised distributional hypernymy detection. In 25th International Conference on Computational Linguistics (COLING), pages 1025-1036, Dublin, Ireland, 2014.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Anuj@FINSIM-Learning Semantic Representation of Financial Domain with Investopedia",
"authors": [
{
"first": "Anuj",
"middle": [],
"last": "Saini",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Saini",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Financial Technology and Natural Language Processing",
"volume": "5",
"issue": "",
"pages": "93--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saini, 2020] Anuj Saini. Anuj@FINSIM-Learning Seman- tic Representation of Financial Domain with Investopedia. In Proceedings of the Second Workshop on Financial Tech- nology and Natural Language Processing, pages 93-97, Kyoto, Japan, 5 2020.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Rusboost: Improving classification performance when training data is skewed",
"authors": [
{
"first": "",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2008,
"venue": "14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Santus et al., 2014] Enrico Santus, Alessandro Lenci, Qin Lu, , and Sabine Schulte Im Walde. Chasing hypernyms in vector spaces with entropy. In 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 38-42, Gothenburg, Sweden, 2014. [Seiffert et al., 2008] Chris Seiffert, Taghi M. Khoshgoftaar, Jason Van Hulse, and Amri Napolitano. Rusboost: Im- proving classification performance when training data is skewed. In 2008 19th International Conference on Pattern Recognition, pages 1-4, 2008.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning term embeddings for taxonomic relation identification using dynamic weighting neural network",
"authors": [],
"year": 2006,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1390--1397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "et al., 2016] Luu Anh Tuan, Yi Tay, Siu Cheung Hui, and See Kiong Ng. Learning term embeddings for tax- onomic relation identification using dynamic weighting neural network. In In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 403-413, Austin, Texas, 2016. [Weeds et al., 2014] Julie Weeds, Daoud Clarke, Jeremy Reffin, David J. Weir, and Bill Keller. Learning to dis- tinguish hypernyms and co-hyponyms. In 25th Interna- tional Conference on Computational Linguistics (COL- ING), pages 2249-2259, Dublin, Ireland, 2014. [Yang et al., 2020] Yi Yang, Mark Christopher Siy UY, and Allen Huang. Finbert: A pretrained language model for financial communications. arXiv, abs/2006.08097, 2020. [Yu et al., 2015] Zheng Yu, Haixun Wang, Xuemin Lin, and Min Wang. Learning term embeddings for hypernymy identification. In 24th International Conference on Artifi- cial Intelligence (IJCAI), pages 1390-1397, Buenos Aires, Argentina, 2015.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The pipeline of our best system.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"text": "Class distribution in the training set.",
"content": "<table><tr><td>Class</td><td>Count</td></tr><tr><td>Equity Index</td><td>286</td></tr><tr><td>Regulatory Agency</td><td>205</td></tr><tr><td>Credit Index</td><td>129</td></tr><tr><td>Central Securities Depository</td><td>107</td></tr><tr><td>Debt pricing and yields</td><td>58</td></tr><tr><td>Bonds</td><td>55</td></tr><tr><td>Swap</td><td>36</td></tr><tr><td>Stock Corporation</td><td>25</td></tr><tr><td>Option</td><td>24</td></tr><tr><td>Funds</td><td>22</td></tr><tr><td>Future</td><td>19</td></tr><tr><td>Credit Events</td><td>18</td></tr><tr><td>Stocks</td><td>17</td></tr><tr><td>MMIs</td><td>17</td></tr><tr><td>Parametric schedules</td><td>15</td></tr><tr><td>Forward</td><td>9</td></tr><tr><td>Securities restrictions</td><td>8</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"text": "Experimental results based on stratified 5-fold cross validation. Results are shown using a tuned Logistic Regression Classifier (C=0.1).",
"content": "<table><tr><td>Model</td><td colspan=\"3\">Mean Rank Accuracy Macro F1</td></tr><tr><td>BL</td><td>1.196</td><td>87.6</td><td>80.0</td></tr><tr><td>BL.HF</td><td>1.166</td><td>90.0</td><td>82.0</td></tr><tr><td>BL.HF.OOV l</td><td>1.156</td><td>90.5</td><td>83.0</td></tr><tr><td>BL.HF.OOV l .D</td><td>1.148</td><td>90.7</td><td>84.0</td></tr><tr><td>BL.HF.OOV l .D 2</td><td>1.147</td><td>90.8</td><td>83.8</td></tr><tr><td>BL.HF.OOV m .D 2</td><td>1.144</td><td>91.2</td><td>84.1</td></tr><tr><td>BL.HF.OOV m .D 2 .+</td><td>1.132</td><td>91.5</td><td>85.0</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"text": "The baseline model. Logistic Regression classifier with the given input embeddings as features. \u2022 BL.HF: BL with additional hand-crafted features (Subsection 4.1). \u2022 BL.HF.OOV l : BL.HF plus the Levenhstein-based OOV words handling (Subsection 4.2). \u2022 BL.HF.OOV l .D: BL.HF.OOV l plus additional features based on the cosine distance between term and class embeddings (Subsection 4.1). \u2022 BL.HF.OOV l .D 2 : BL.HF.OOV l plus the character distance between terms and classes (Subsection 4.1). \u2022 BL.HF.OOV m .D 2 : BL.HF plus the Magnitude-based OOV words handling (Subsection 4.2). This variation constitutes the first submission of our system (DICOE 1) to the shared task. \u2022 BL.HF.OOV m .D 2 .+: This is BL.HF.OOV m .D 2 using",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}