| { |
| "paper_id": "Y18-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:35:52.978634Z" |
| }, |
| "title": "Trivia Score and Ranking Estimation Using Support Vector Regression and RankNet", |
| "authors": [ |
| { |
| "first": "Kazuya", |
| "middle": [], |
| "last": "Niina", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Kyushu Institute of Technology", |
| "location": { |
| "addrLine": "680-4 Kawazu Iizuka", |
| "postCode": "820-8502", |
| "settlement": "Fukuoka", |
| "country": "Japan" |
| } |
| }, |
| "email": "kniina@pluto.ai.kyutech.ac.jp" |
| }, |
| { |
| "first": "Kazutaka", |
| "middle": [], |
| "last": "Shimada", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Kyushu Institute of Technology", |
| "location": { |
| "addrLine": "680-4 Kawazu Iizuka", |
| "postCode": "820-8502", |
| "settlement": "Fukuoka", |
| "country": "Japan" |
| } |
| }, |
| "email": "shimada@pluto.ai.kyutech.ac.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Dialogue systems have been increasingly important these days. In particular, non-taskoriented dialogue systems are studied because of the success of neural network approaches such as seq2seq models. However, these models tend to generate simple responses such as \"yes\" and \"ok.\" To construct a dialogue system that holds users' attention continuously, we need to generate utterances that capture the interest of the user. In this paper, we propose a method to extract trivia sentences for the purpose. Trivia information perhaps adds a surprise to users. Therefore, capturing trivia information is beneficial for dialogue systems. We estimate a trivia score of a sentence by using machine learning approaches, Support Vector Regression (SVR) and RankNet. We obtained 0.79 and 0.78 on SVR for the nDCG@5 and RankNet for the nDCG@10, respectively. We focus on the subject word in each sentence. The method with subject information outperformed that without subject information; 0.79 with subject information vs. 0.64 without subject information on the SVR for the nDCG@5.", |
| "pdf_parse": { |
| "paper_id": "Y18-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Dialogue systems have been increasingly important these days. In particular, non-taskoriented dialogue systems are studied because of the success of neural network approaches such as seq2seq models. However, these models tend to generate simple responses such as \"yes\" and \"ok.\" To construct a dialogue system that holds users' attention continuously, we need to generate utterances that capture the interest of the user. In this paper, we propose a method to extract trivia sentences for the purpose. Trivia information perhaps adds a surprise to users. Therefore, capturing trivia information is beneficial for dialogue systems. We estimate a trivia score of a sentence by using machine learning approaches, Support Vector Regression (SVR) and RankNet. We obtained 0.79 and 0.78 on SVR for the nDCG@5 and RankNet for the nDCG@10, respectively. We focus on the subject word in each sentence. The method with subject information outperformed that without subject information; 0.79 with subject information vs. 0.64 without subject information on the SVR for the nDCG@5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Dialogue systems, such as Siri and Alexa, have been increasingly important these days. In particular, non-task-oriented dialogue systems are studied because of the success of neural networks or reinforcement learning approaches such as seq2seq models (Vinyals and Le, 2015; Li et al., 2016) . However, these models tend to generate simple responses such as \"yes\" and \"ok\". Therefore, users often get bored with the conversation based on such dialogue systems. To solve this problem, we need to generate utterances that stimulate users' interest.", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 273, |
| "text": "(Vinyals and Le, 2015;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 274, |
| "end": 290, |
| "text": "Li et al., 2016)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we focus on trivia to solve the problem. Here, we define trivia information as a lesserknown and interesting fact. For example, the following sentence is trivia information; \"If red swamp crayfishes eat mackerels, the body color becomes blue.\" We believe that the trivia sentences can capture the interest of the users and are beneficial to construct a dialogue system that holds users' attention continuously.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To identify trivia information, we propose a method for estimating a trivia score of each sentence. In the method, we focus on a relation between the main topic and words in each sentence. We extract the main topic from a target sentence and then identify an important noun for estimating the trivia score. We compute feature values from the word pair and then apply them to machine learning approaches. We use the Support Vector Regression (SVR) and the RankNet as the machine learning approach. Prakash et al. (2015) have proposed a mining method for interesting trivia. They used trivia information in IMDB as the training data. They extracted named entities and superlative words as features for a machine learning method. They estimated interestingness as trivia by using SVMrank. Fatma et al. (2017) have proposed a method based on deep learning for a trivia classification task. They handled Trivia Score Maximum A statue of Buddha with the Afro haircut exists. 77 100 Largehead hairtails swim with the standing style 93 100 Tyrannosaurus cannot run 66 100 The blood type of all gorillas is B 110 200 Table 1 : Examples of trivia sentences from \"Hey! Spring of Trivia.\"", |
| "cite_spans": [ |
| { |
| "start": 497, |
| "end": 518, |
| "text": "Prakash et al. (2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 786, |
| "end": 805, |
| "text": "Fatma et al. (2017)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 899, |
| "end": 1131, |
| "text": "Trivia Score Maximum A statue of Buddha with the Afro haircut exists. 77 100 Largehead hairtails swim with the standing style 93 100 Tyrannosaurus cannot run 66 100 The blood type of all gorillas is B 110 200 Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "RDF triples from DBpedia 1 as the target data. They evaluated a Fusion Based CNN which learns combinations of features obtained by convolution and hand-crafted features. Tsurel et al. (2017) have extracted trivia information from Wikipedia by a scoring method. They focused on categories on each Wikipedia page and estimated a trivia score based on similarity and cohesiveness measures. Ota et al. (2009) have proposed a method for extracting sentences with a surprise for a dialogue system. They computed the TFIDF value, the cooccurrence frequency, and the sentence length and extracted sentences with surprise from Wikipedia by using some rules. Niina and Shimada (2017) have proposed a method for extracting unusual facts from Wikipedia for a dialogue system. They also computed some scores from each sentence in Wikipedia and then detected unusual facts in a similar way to (Ota et al., 2009) .", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 190, |
| "text": "Tsurel et al. (2017)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 387, |
| "end": 404, |
| "text": "Ota et al. (2009)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 649, |
| "end": 673, |
| "text": "Niina and Shimada (2017)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 879, |
| "end": 897, |
| "text": "(Ota et al., 2009)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this paper, we use a Japanese supervised dataset which contains trivia sentences with a trivia score. Our purpose is to estimate the trivia score and the ranking of each trivia sentence by using machine learning approaches based on word pair features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For the estimation of a trivia score of a sentence, we need a training dataset. We use sentences that appeared in \"Hey! Spring of Trivia\" that was a Japanese TV show. Most of the trivia on the show was sent by viewers. We extract trivia sentences from the Wikipedia page of the TV show. Table 1 shows some examples of trivia sentences in the dataset. Each trivia sentence was evaluated by some judges by pushing a \"hey 2 \" button on the TV show. The judges pushed the button when they Trivia score # of sentences 0.0 -0.1 3 0.1 -0.2 1 0.2 -0.3 3 0.3 -0.4 6 0.4 -0.5 17 0.5 -0.6 105 0.6 -0.7 258 0.7 -0.8 323 0.8 -0.9 251 0.9 -1.0 64 Total 1031 felt that the trivia sentence was interesting. In Table 1 , the score denotes the number of \"hey\", namely the interestingness of the trivia. Since the maximum value of \"hey\" depends on the TV episodes, the table contains the maximum values for each trivia sentence. In this paper, we normalize the score by the maximum value. We regard the normalized score as the trivia score in this paper. Table 2 shows the distribution of the trivia score from \"Hey! Spring of Trivia.\" The distribution is unbalanced because the trivia sentences on the show were submitted by viewers as trivia and were selected by the production team.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 287, |
| "end": 294, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 694, |
| "end": 702, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1037, |
| "end": 1044, |
| "text": "Table 2", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The purpose of this paper is to estimate a trivia score of a sentence by using machine learning. Therefore, we need features for the machine learning methods. In this section, we describe our feature extraction process. The outline of our method is shown in Figure ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 258, |
| "end": 264, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this section, we describe the word pair extraction for the calculation of feature values. This process is divided into two processes on the basis of one rule; presence of the subject word in a sentence. We select feature candidates via this process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Pair Extraction", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Each trivia sentence contains a topic. People find interesting and humorous if a sentence contains a gap between the topic and the mention in the sentence. Assume that the following trivia sentence, \"If red swamp crayfishes eat mackerels, the body color becomes blue.\" The interesting point of this trivia sentence is the unexpected fact, \"blue\", against the common sense, \"Crayfishes are red.\" The important point, in this case, is a relation between \"crayfishes\" and \"blue.\" This point is what makes it trivia. On the other hand, there is no surprise for a relation between \"mackerels\" and \"blue.\" These results indicate the significance of the main topic in the sentence for understanding the trivia. The topic of this sentence is crayfishes, namely the subject word in the sentence. Therefore, extraction of the subject word has a significant role to estimate a trivia score of each sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subject Extraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "Our target in this paper is Japanese. In Japanese, the subject word in sentences is often omitted, zeropronouns. Thus, we need to identify the subject word in each sentence. We use a Japanese Predicate Argument Structure Analyzer, ChaPAS 3 , for the subject extraction. ChaPAS is a modified model of (Watanabe et al., 2010) . In this paper, we extract a nominative case, ga-case in Japanese, as the subject.", |
| "cite_spans": [ |
| { |
| "start": 300, |
| "end": 323, |
| "text": "(Watanabe et al., 2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subject Extraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "The pair of the subject word and other words in a sentence is important for the calculation of feature values. On the other hand, there are many combinations of the subject and words in a sentence. In this process, we handle nouns and verbs for generating word pair candidates. For example, we obtain a word pair (mailbox, bottom-of-the-ocean) 4 from the trivia sentence There is a mailbox on the bottom of the ocean.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Pair Candidates", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "As mentioned above, zero-pronouns frequently appear in Japanese sentences. In other words, sentences do not always contain the subject. In this situation 5 , we create all pairs about (noun, noun) and (noun, verb) from the sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Pair Candidates", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "In the previous sub-section, we obtain some (subject, word) pairs from each sentence. Hence, we need to determine the feature pair for the calculation of feature values that are used in machine learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Pair Determination", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "We assume that a good feature for the trivia score estimation rarely appears in real-world texts because the important point about trivia is a gap between words. Therefore, we compute a co-occurrence value, coF req, as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Pair Determination", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "coF req(s, w) = pair-f req(s, w) f req(s)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Feature Pair Determination", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "where s is the subject extracted Section 4.1.1. w is the pair word with s. pair-f req(s, w) is the cooccurrence frequency of the pair (s, w). f req(s) is the frequency of s. pair-f req (s, w) and f req(s) are computed from 7-grams of Japanese Google Ngrams (Kudo and Kazawa, 2007) . As mentioned above, trivia sentences in the dataset perhaps do not contain the subject word. In this situation (non-subject situation), we do not determine the feature pair from (noun, noun) and (noun, verb) in this process. On the basis of some rules in the calculation process (the next section), we determine which pair should be used for the machine learning.", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 191, |
| "text": "(s, w)", |
| "ref_id": null |
| }, |
| { |
| "start": 257, |
| "end": 280, |
| "text": "(Kudo and Kazawa, 2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Pair Determination", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "In this paper, we compute the following feature values for machine learning methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Value Calculation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 IDF \u2022 Similarity \u2022 Inverse Entity Frequency (IEF) \u2022 Word embeddings 4.2.1 IDF", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Value Calculation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We assume that a trivia sentence with well-known words contains a higher trivia score than that with less-known words. Therefore, we use the IDF value of the subject as the feature value. In other words, we assume that the trivia score of a sentence is high in the case that the IDF value of the subject in the sentence is low, namely a well-known word. The IDF value is computed from all Wikipedia pages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Value Calculation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the non-subject situation, we compute the IDF values for all nouns in the target trivia sentence and then use the minimum value as the feature value.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Value Calculation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Trivia sentences tend to contain word combinations that are rare. For example, (grave-marker, printer) for the trivia sentence \"There is a dedicated printer for grave makers.\" Generally, the combination is rare. This implies that the similarity of words in the feature pair is small, as compared with ordinary word pairs. Therefore, we apply a word similarity measure to the feature values. We generate a word-embedding model by using word2vec (Mikolov et al., 2013) . Then, we compute the cosine similarity of words in the target feature pair, as the feature value.", |
| "cite_spans": [ |
| { |
| "start": 444, |
| "end": 466, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "In the non-subject situation, we compute the cosine for all (noun, noun) pairs in the target trivia sentence, and then use the minimum value as the feature value.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "In the previous sub-section, \"Similarity\", we compute a similarity value between words. In this feature, we extend this point to a category-level. In a similar way to the word similarity, we assume that a word in the feature pair rarely appears in documents that related to another word. For example, we obtain (mummy, fuel) from \"Mummies were used as a fuel for the train in the 18th century.\" Here we obtain documents that related to mummies, such as \"corpse\" and \"ancient Egypt.\" There is obviously a gap between the word \"fuel\" and these documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inverse Entity Frequency (IEF)", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "We use the categories on Wikipedia for this process. We compute the Inverse Entity Frequency (IEF) value from the category of the subject on Wikipedia and another word (pair-word) in the feature pair. The process is as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inverse Entity Frequency (IEF)", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "1. Extract the Wikipedia page of the subject 2. Extract the category of the Wikipedia page 3. Extract the page set which belongs to the category 4. Compute the IDF of the pair-word in the page set 5. Compute the IEF by using the following equations", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inverse Entity Frequency (IEF)", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "IEF (s, w) = IDF Cs (w) log(|C s | + 1) (2) IDF Cs (w) = log |C s | + 1 df Cs (w) + 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inverse Entity Frequency (IEF)", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "where s and w are the subject and the pair-word, respectively. C s is the page set of the category that s belongs to. IDF Cs (w) is the IDF of w in C s . df Cs (w) is the document frequency of w in C s .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inverse Entity Frequency (IEF)", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "In the non-subject situation, we compute the IEF value of all pairs in the target trivia sentence and then use the maximum value as the feature value.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inverse Entity Frequency (IEF)", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "In recent years, word embedding is often used as a feature for machine learning. Hence, we also apply a word embedding model into the feature set. We use the embedding of the subject as the feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word embeddings", |
| "sec_num": "4.2.4" |
| }, |
| { |
| "text": "In the non-subject situation, we select three words from the beginning of the sentence and use the embedding of them as the features. If the number of target words is less than 3, we add the zero vector to the feature space 6 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word embeddings", |
| "sec_num": "4.2.4" |
| }, |
| { |
| "text": "In this section, we evaluate our features with the dataset described in Section 3. We apply the features into two machine learning methods, a regression model and a ranking model. First, we evaluate the trivia score estimation with the regression model. Then, we compare the ranking model with the regression model and a baseline based on the previous work, in terms of ranking estimation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As common settings of the experiment, we used the data dump provided by Wikimedia Foundation on May 21, 2017 7 . We also used Japanese DBpedia 8 for the IEF calculation. We generated a word embedding model by using Word2Vec 9 with skipgram. The number of dimensions was 200.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In the experiment, we evaluated a regression model based on features extracted in Section 4 for the trivia score estimation. We used Support Vector Regression (SVR) (Drucker et al., 1997) with the RBF kernel. SVR is a linear regression method based on SVM. We implemented the model with default parameters by scikit-learn. We used the RBF kernel. The parameters, C and \u03b3, were default values.", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 187, |
| "text": "(Drucker et al., 1997)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trivia Score Estimation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Here we have a problem with the dataset. As mentioned in Section 3, the distribution of the dataset was unbalanced. The model generated from the dataset probably estimates approximately 0.75 as the trivia score of many instances. It is not suitable for the estimation model. Therefore, we reconstructed the dataset by the following process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trivia Score Estimation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "MAE MSE R2 Proposed 0.2490 0.0835 -0.0108 Baseline 0.2503 0.0827 0.0000 As a result, we obtained a balanced dataset. In other words, the trivia score of the top 103 trivia sentences 10 was 0.95 and that of the next 103 trivia sentences was 0.85. Likewise, a new pseudo trivia score is assigned to trivia sentences. For evaluation criteria, we used Mean Absolute Error (MAE), Mean Squared Error (MSE) and Coefficient of Determination (R2) in 10-fold crossvalidation. As a naive baseline, we use the model that regarded the trivia score as the average value on the dataset, namely approximately 0.5. Table 3 shows the estimation result by SVR and the baseline. We found that the proposed method just slightly exceeded the baseline in terms of MAE. Although the baseline obtained better results about MSE and R2, they were also very few differences. In addition, our method estimated that the trivia scores of most sentences were within the range of 0.45 to 0.65. In other words, the range that our method can estimate is insufficient. Therefore, we need to discuss new features for identifying trivia sentences with high trivia scores (0.85 to 0.95).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 598, |
| "end": 605, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "Our final goal is to apply trivia sentences into our dialogue system. Thus, our motivation is to extract 10 10% of 1031 instances. sentences with high trivia scores for the purpose. In other words, we need trivia sentences in the higher rank in all sentences. Therefore we evaluated our features with a ranking task of trivia sentences. We used RankNet that was proposed by (Burges et al., 2005) . RankNet is a gradient descent method for learning ranking functions based on the pairwise approach. In this experiment, the number of hidden layers was 2 and the number of units was 1024. The activation function was ReLU and the loss function was cross entropy. We compared the RankNetbased method with the regression model (SVR) described in Section 5.1 and a baseline. The baseline proposed by (Niina and Shimada, 2017) was based on a scoring function to extract unusual facts from sentences in Wikipedia because the task was similar to our task, namely trivia sentence extraction.", |
| "cite_spans": [ |
| { |
| "start": 374, |
| "end": 395, |
| "text": "(Burges et al., 2005)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 794, |
| "end": 819, |
| "text": "(Niina and Shimada, 2017)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ranking Estimation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In the experiment, we randomly divided the dataset into two parts; training and test. We used 90% as the training data and 10% as the test data. As a criterion, we use nDCG@k (k = 5, 10). Table 4 shows the experimental result of the ranking task. The RankNet-based method and SVR outperformed the baseline from the related work in terms of both settings, k = 5, 10. This result shows the effectiveness of our methods. In addition, the SVR described in Section 5.1 also outperformed the baseline based on a scoring function although the result of the SVR was not enough in the trivia score estimation task. This result shows that the proposed features were effective for the ranking task, as compared with a scoring function from the related work. In other words, the proposed features were effective to recognize a relationship between magnitudes of trivia scores although those were not sufficient to estimate actual trivia scores of sentences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 188, |
| "end": 195, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ranking Estimation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "First, we evaluated the RankNet-based method by ablation test. the result of the method with all features, namely the same as Table 4 . OUT-IDF denotes the result of the method without the IDF feature. IEF(s, n) and IEF(s, v) are IEF(subject, noun) and IEF(subject, verb), respectively. From the table, IDF, IEF(subject, verb), and vectors from the word embedding model were effective to recognize the ranks of each trivia sentence because deleting these features led to decrease of the accuracy. However, the results were based on our small dataset, namely 90% as training data and 10% as test data from 1031 instances. Increasing the dataset and evaluating the larger dataset are important for a reliable experiment. Next, we discuss some features of our methods in detail. Here we focused on the IDF and IEF(s, v) features. Table 6 shows the feature values computed from the dataset for the actual top/bottom 5 and 10 trivia sentences. The IDF values in Table 6 were clearly arranged in descending order. This is one reason that the feature generated the good performance in Table 5 . However, it should be in ascending order from the assumption described in Section 4.2.1. In other words, this was a reversal phenomenon; we expected that the values in the top ranks and the bottom ranks were low and high, respectively. Therefore the assumption itself might not be correct for the trivia identification although the IDF feature was effective. The IEF values were ex- pected in descending order from the assumption in Section 4.2.3. However, the values in Table 6 were out of order although the feature contributed to generating a little better performance in Table 5 . In addition, the results, especially the trivia score estimation, were not always sufficient as compared with a naive baseline. To accomplish the higher accuracy, we need to consider new features and the combinations of the current features and them. 
Moreover, we need to apply other machine learning methods to the task. The main contribution of our method is to handle a relationship with the subject in each trivia sentence. For the validation of this contribution, we compared our methods based on Figure 1 and methods that did not handle the relation, namely methods with only the \"NO\" process in Figure 1 . Table 7 shows the experimental result of the ranking task. The methods with subject information outperformed those without subject information for both criteria; nDCG@5 and nDCG@10. This result shows the effectiveness of the features that incorporated the relation with the subject.", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 225, |
| "text": "IEF(s, v)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 126, |
| "end": 133, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 827, |
| "end": 834, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 957, |
| "end": 965, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 1079, |
| "end": 1086, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 1560, |
| "end": 1567, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 1664, |
| "end": 1671, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 2176, |
| "end": 2184, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 2276, |
| "end": 2284, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 2287, |
| "end": 2295, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion about Features", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In this paper, we proposed some features for estimating a trivia score of each sentence. We focused on a relation between the main topic and the words in each sentence. We extracted the main topic, namely the subject word, from a target sentence and then identified an important noun for estimating the trivia score. We computed feature values from the word pair and then applied them to machine learning approaches. We used the Support Vector Regression (SVR) and the RankNet as the machine learning approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In the experiment on trivia score estimation with SVR, we did not always outperform the naive baseline. On the other hand, in the experiment on the ranking task, the SVR obtained 0.791 on nDCG@5 and the RankNet-based method obtained 0.782 on nDCG@10. Both methods outperformed the baseline from the related work. The IDF feature, the IEF feature, and the vectors from the word embedding model were effective for recognizing the rank of each trivia sentence. In addition, the method with subject information outperformed that without subject information. These results show the effectiveness of the proposed features with subject information. However, our subject extraction method relied on a simple rule and an existing tool; improving this process is important future work. Moreover, the dataset was not large enough for machine learning techniques, so enlarging it is also important future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
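The conclusions report ranking quality as nDCG@5 and nDCG@10. A minimal sketch of nDCG@k, under the assumption that the paper uses the standard log2-discounted formulation (it is not restated in this section): `predicted_order` holds the gold trivia scores arranged in the order the model ranked the sentences, and the ideal ordering is that same list sorted descending.

```python
import math

def dcg_at_k(scores, k):
    # Discounted cumulative gain over the top-k relevance scores.
    return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:k]))

def ndcg_at_k(predicted_order, k):
    # Normalize by the DCG of the ideal (descending) ordering.
    ideal_dcg = dcg_at_k(sorted(predicted_order, reverse=True), k)
    return dcg_at_k(predicted_order, k) / ideal_dcg if ideal_dcg > 0 else 0.0
```

A perfect ranking yields 1.0; misordering high-scored trivia sentences near the top of the list lowers the value, which is why nDCG@5 is the stricter of the two reported cutoffs.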
| { |
| "text": "Our final goal is to incorporate trivia sentences into a dialogue system that holds users' attention continuously. Therefore, we need not only to estimate the trivia score of a sentence but also to extract trivia sentences from a massive number of sentences. For this purpose, we need to incorporate non-trivia features, although we focused on trivia features in this paper. In addition, we need to discuss how the extracted trivia sentences are used in the dialogue system, such as selecting trivia sentences for the output process and controlling when a trivia sentence is output in a dialogue.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "https://wiki.dbpedia.org/ 2 The meaning of this word in English is similar to really in this context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://sites.google.com/site/yotarow/chapas 4 Note that we ignore some verbs, such as do (suru in Japanese) and be (desu in Japanese), as stop-words. 5 Hereinafter, this is called \"non-subject situation.\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong, 1-3 December 2018. Copyright 2018 by the authors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that we always add zero vectors for two words in the subject situation because we use just one subject in this process. 7 https://dumps.wikimedia.org/jawiki/ 8 http://ja.dbpedia.org/ 9 https://radimrehurek.com/gensim/models/word2vec.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Learning to rank using gradient descent", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Burges", |
| "suffix": "" |
| }, |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Shaked", |
| "suffix": "" |
| }, |
| { |
| "first": "Erin", |
| "middle": [], |
| "last": "Renshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Lazier", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Deeds", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicole", |
| "middle": [], |
| "last": "Hamilton", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Hullender", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 22nd International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "89--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, pages 89-96. ACM.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Support vector regression machines", |
| "authors": [ |
| { |
| "first": "Harris", |
| "middle": [], |
| "last": "Drucker", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "J", |
| "C" |
| ], |
| "last": "Burges", |
| "suffix": "" |
| }, |
| { |
| "first": "Linda", |
| "middle": [], |
| "last": "Kaufman", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [ |
| "J" |
| ], |
| "last": "Smola", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "155--161", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harris Drucker, Christopher JC Burges, Linda Kaufman, Alex J Smola, and Vladimir Vapnik. 1997. Support vector regression machines. In Advances in neural in- formation processing systems, pages 155-161.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The Unusual Suspects: Deep Learning Based Mining of Interesting Entity Trivia from Knowledge Graphs", |
| "authors": [ |
| { |
| "first": "Nausheen", |
| "middle": [], |
| "last": "Fatma", |
| "suffix": "" |
| }, |
| { |
| "first": "Manoj", |
| "middle": [], |
| "last": "Kumar Chinnakotla", |
| "suffix": "" |
| }, |
| { |
| "first": "Manish", |
| "middle": [], |
| "last": "Shrivastava", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Thirty-First AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1107--1113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nausheen Fatma, Manoj Kumar Chinnakotla, and Man- ish Shrivastava. 2017. The Unusual Suspects: Deep Learning Based Mining of Interesting Entity Trivia from Knowledge Graphs. In Thirty-First AAAI Con- ference on Artificial Intelligence, pages 1107-1113.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Web Japanese ngram version 1", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideto", |
| "middle": [], |
| "last": "Kazawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo and Hideto Kazawa. 2007. Web Japanese n- gram version 1. published by Gengo Shigen Kyokai.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "CoRR", |
| "volume": "abs/1301.3781", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781:1-12.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A dialogue system with miscellaneous knowledge from sizzle words", |
| "authors": [ |
| { |
| "first": "Kazuya", |
| "middle": [], |
| "last": "Niina", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazutaka", |
| "middle": [], |
| "last": "Shimada", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IEICE Tech. Rep", |
| "volume": "117", |
| "issue": "", |
| "pages": "77--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kazuya Niina and Kazutaka Shimada. 2017. A dia- logue system with miscellaneous knowledge from siz- zle words. In IEICE Tech. Rep, NLC2017-42 (in Japanese), volume 117, pages 77-82.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Sentence extraction from Wikipedia for making utterance for dialogue system", |
| "authors": [ |
| { |
| "first": "Tomohiro", |
| "middle": [], |
| "last": "Ota", |
| "suffix": "" |
| }, |
| { |
| "first": "Fujio", |
| "middle": [], |
| "last": "Toriumi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenichiro", |
| "middle": [], |
| "last": "Ishii", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "The 23rd Annual Conference of the Japanese Society for Articial Intel-ligence", |
| "volume": "", |
| "issue": "", |
| "pages": "2--3", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomohiro Ota, Fujio Toriumi, and Kenichiro Ishii. 2009. Sentence extraction from Wikipedia for making utter- ance for dialogue system. In The 23rd Annual Confer- ence of the Japanese Society for Articial Intel-ligence (in Japanese), number 2G1-NFC5-11.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Did you know? -mining interesting trivia for entities from wikipedia", |
| "authors": [ |
| { |
| "first": "Abhay", |
| "middle": [], |
| "last": "Prakash", |
| "suffix": "" |
| }, |
| { |
| "first": "Manoj", |
| "middle": [], |
| "last": "Kumar Chinnakotla", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhaval", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "Puneet", |
| "middle": [], |
| "last": "Garg", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "3164--3170", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhay Prakash, Manoj Kumar Chinnakotla, Dhaval Pa- tel, and Puneet Garg. 2015. Did you know? - mining interesting trivia for entities from wikipedia. In Twenty-Fourth International Joint Conference on Artificial Intelligence, pages 3164-3170.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Fun facts: Automatic trivia fact extraction from wikipedia", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Tsurel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Pelleg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Guy", |
| "suffix": "" |
| }, |
| { |
| "first": "Dafna", |
| "middle": [], |
| "last": "Shahaf", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Tenth ACM International Conference on Web Search and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "345--354", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Tsurel, Dan Pelleg, Ido Guy, and Dafna Shahaf. 2017. Fun facts: Automatic trivia fact extraction from wikipedia. In Proceedings of the Tenth ACM Interna- tional Conference on Web Search and Data Mining, pages 345-354. ACM.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A neural conversational model", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1506.05869" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A structured model for joint learning of argument roles and predicate senses", |
| "authors": [ |
| { |
| "first": "Yotaro", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Masayuki", |
| "middle": [], |
| "last": "Asahara", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACL 2010 Conference Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "98--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yotaro Watanabe, Masayuki Asahara, and Yuji Mat- sumoto. 2010. A structured model for joint learning of argument roles and predicate senses. In Proceed- ings of the ACL 2010 Conference Short Papers, pages 98-102. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "The outline of the feature selection." |
| }, |
| "TABREF0": { |
| "num": null, |
| "text": "The distribution of normalized trivia scores.", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "text": "The result of the trivia score estimation task by SVR.", |
| "content": "<table><tr><td>1. Sort the dataset in descending order by the original trivia score.</td></tr><tr><td>2. Set the new pseudo trivia score to 0.95.</td></tr><tr><td>3. Assign the new pseudo trivia score to the top 10% of sentences.</td></tr><tr><td>4. Delete the current top 10% of sentences.</td></tr><tr><td>5. Decrease the pseudo trivia score by 0.1.</td></tr><tr><td>6. Repeat steps 3 to 5 until the pseudo trivia score reaches 0.05.</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
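The six steps listed in the table above amount to binning ranked sentences into deciles and assigning each decile a fixed pseudo score from 0.95 down to 0.05. A minimal sketch of that procedure, assuming the input is a list of (sentence, original score) pairs; the function name and input shape are illustrative, not from the paper.

```python
def assign_pseudo_scores(sentences_with_scores):
    # Step 1: sort by the original trivia score, descending.
    ranked = sorted(sentences_with_scores, key=lambda p: p[1], reverse=True)
    n = len(ranked)
    pseudo = []
    for i, (sentence, _orig) in enumerate(ranked):
        # Steps 2-6 collapsed: decile 0 (top 10%) gets 0.95, decile 1
        # gets 0.85, ..., decile 9 (bottom 10%) gets 0.05.
        decile = min(i * 10 // n, 9)
        pseudo.append((sentence, 0.95 - 0.1 * decile))
    return pseudo
```

Computing the index arithmetic directly replaces the literal delete-and-repeat loop of steps 4-6 but yields the same decile assignment.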
| "TABREF4": { |
| "num": null, |
| "text": "The result of the ranking task.", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "num": null, |
| "text": "", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "num": null, |
| "text": "The result of the ablation test by RankNet.", |
| "content": "<table><tr><td>Range</td><td>IDF</td><td>IEF(s, v)</td></tr><tr><td>top 5</td><td>7.594</td><td>0.1000</td></tr><tr><td>top 10</td><td>6.231</td><td>0.3195</td></tr><tr><td>bottom 10</td><td>3.371</td><td>0.5105</td></tr><tr><td>bottom 5</td><td>3.019</td><td>0.3309</td></tr><tr><td>average</td><td>4.556</td><td>0.3821</td></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "num": null, |
| "text": "The IDF and IEF values in the dataset.", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF9": { |
| "num": null, |
| "text": "The effectiveness of the proposed features with the subject information for SVR and RankNet (RN).", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |