| { |
| "paper_id": "Y08-1043", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:38:04.390067Z" |
| }, |
| "title": "Extracting Troubles from Daily Reports based on Syntactic Pieces", |
| "authors": [ |
| { |
| "first": "Kakimoto", |
| "middle": [], |
| "last": "Yoshifumi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nagaoka University of Technology", |
| "location": { |
| "postCode": "1603-1, 940-2188", |
| "settlement": "Kamitomioka, Nagaoka", |
| "region": "Niigata", |
| "country": "Japan" |
| } |
| }, |
| "email": "kakimoto@nlp.nagaokaut.ac.jp" |
| }, |
| { |
| "first": "Kazuhide", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nagaoka University of Technology", |
| "location": { |
| "postCode": "1603-1, 940-2188", |
| "settlement": "Kamitomioka, Nagaoka", |
| "region": "Niigata", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "It is expensive for companies to browse daily reports. Our aim is to create a system that extracts information about problems from reports. This system operates in two steps. First, it records expressions involving troubles in a dictionary from training data. Second, it expands the dictionary to include information not included in the training data. We experimentally tested this extraction system; in the tests, a two-values classifier attained an F-value of 0.772, and experimental extraction of troubles attained a precision of 0.400 and a recall of 0.827.", |
| "pdf_parse": { |
| "paper_id": "Y08-1043", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "It is expensive for companies to browse daily reports. Our aim is to create a system that extracts information about problems from reports. This system operates in two steps. First, it records expressions involving troubles in a dictionary from training data. Second, it expands the dictionary to include information not included in the training data. We experimentally tested this extraction system; in the tests, a two-values classifier attained an F-value of 0.772, and experimental extraction of troubles attained a precision of 0.400 and a recall of 0.827.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In resent years, many companies request daily reports of text data from company members. The text data is Email and web form. However, daily reports are browsed by human, it is expensive for companies. Our aim is to cut back the workload of daily reports browsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper discusses a type of information extraction. Ichimura et al. (2001) developed a system that reduces the cost of access to reports and assists decision-making for users. This system extracts 'good' or 'bad' practices using a human-created knowledge dictionary. However, the knowledge dictionary depends on the specific enterprise because it is constructed using reports from that enterprise. Saito and Watabe (2001) developed a system that extracts and visualizes information indicating troubles using extraction rules defined by humans. They chose the extraction categories 'trouble', 'causality' and 'countermeasure', which their system evaluated with a precision of 0.878, 0.701 and 0.703, respectively. This system depends on the specific domain, because they considered only printer problems. Both systems produce rules or a dictionary only from training data. This approach has two problems. First, because the dictionary and rules are made by humans, the cost is considerable. Second, this system cannot extract troubles that is not included in the training data. Our approach corrects these problems. Our system automatically creates a dictionary, and can extract troubles by expanding the dictionary to add data not included in the training data.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 77, |
| "text": "Ichimura et al. (2001)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 401, |
| "end": 424, |
| "text": "Saito and Watabe (2001)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we define the trouble as 'content regarding some problem in daily reports'. Troubles must take into account the context of the problem. A single word does not provide adequate troubles because it is too short. For example, when the word '\u58ca\u308c\u308b(break)' occurs in daily reports, we can identify a trouble word. But '\u6905\u5b50\u304c\u58ca\u308c\u308b(The chair breaks)' and '\u30b5\u30fc \u30d0\u30fc\u304c\u58ca\u308c\u308b(The server breaks)' have different meanings, calling for different responses and involving different degrees of risk. Aoki and Yamamoto (2007) investigate syntactic pieces that mine units of syntactic structure. Syntactic pieces are pairs consisting of a modifier and a modificand, as shown in Figure 1 . \u2027They are easy to extract \u2027They yield statistical information readily \u2027They are amenable to matching analysis \u2027They can deal with a chunk of meaning We can apply these characteristics to troubles. Therefore, we found it more comfortable to deal with syntactic pieces than with words. Furthermore, we dealt with continuous modification of the syntactic pieces because troubles must take into account the context. Figure 2 shows an overview of the system. This paper defines weblogs and bulletin boards as daily reports. Our system uses weblogs and bulletin boards as sources of training data. First, we extract syntactic pieces from the training data, compute scores and construct a dictionary of troubles (the trouble dictionary). Second, we expand this dictionary because we must process troubles that do not appear in the training data. Third, we extract troubles from new reports using the trouble dictionary.", |
| "cite_spans": [ |
| { |
| "start": 486, |
| "end": 510, |
| "text": "Aoki and Yamamoto (2007)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 662, |
| "end": 670, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1085, |
| "end": 1093, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Definition of the Trouble", |
| "sec_num": "2" |
| }, |
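The modifier-modificand pairs described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the system parses Japanese with CaboCha, whereas here the chunk-level dependency structure is hand-supplied with English glosses, and all names are hypothetical.

```python
# Sketch of extracting syntactic pieces (modifier -> modificand pairs)
# from a chunk-level dependency parse. Hypothetical data structure; the
# paper's system obtains real dependencies from CaboCha.

def extract_pieces(chunks):
    """chunks: list of (chunk_text, head_index); head_index is the index
    of the chunk being modified, or -1 for the sentence root."""
    return [(text, chunks[head][0])        # (modifier, modificand)
            for text, head in chunks if head >= 0]

# "The battery of the old computer exploded suddenly." (Figure 1, glossed)
sentence = [
    ("old",      1),   # old      -> computer
    ("computer", 2),   # computer -> battery (simplified nominal chain)
    ("battery",  4),   # battery  -> explode
    ("suddenly", 4),   # suddenly -> explode
    ("explode", -1),   # root
]
print(extract_pieces(sentence))
```

Each pair keeps a chunk of meaning while staying easy to count, which is what the scoring in section 3.3 relies on.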
| { |
| "text": "The weblogs we used are part of the 'livedoor blog' (2) , and have original tags and titles supplied by each author. We accumulated reports that mentioned problems (trouble reports) in the tag and title text. We defined trouble reports as those with the term '\u30c8\u30e9\u30d6\u30eb(trouble)' in tags or titles. We defined no-trouble reports as those that did not include '\u30c8\u30e9\u30d6\u30eb(trouble)' in tags, titles or sentences.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 55, |
| "text": "(2)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Bulletin boards are 'kakaku.com review boards' (3) . These boards have context tags supplied by users. We defined trouble reports as those with the term '\u60aa\u3044(bad)' in the tags. No-trouble reports did not include '\u60aa\u3044(bad)' or '\u8cea\u554f(question)' in the tags.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 50, |
| "text": "(3)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We constructed a trouble dictionary using these reports. We collected syntactic pieces from trouble and no-trouble reports and assigned a score to each one as troubles. The score observes deviation between trouble and no-trouble reports. Common pieces appear with similar frequency in trouble and no-trouble reports. Troubles has a higher score because it occurs more often in trouble reports. Therefore, we employed the method of Fujimura et al. (2004) . We calculate the trouble score as follows:", |
| "cite_spans": [ |
| { |
| "start": 431, |
| "end": 453, |
| "text": "Fujimura et al. (2004)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trouble Dictionary", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "1. Prepare the corpus, classified as trouble reports or no-trouble reports 2. Extract the syntactic piece 3. Count the frequency of each piece in each reports 4. Compute the trouble score with the following equation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trouble Dictionary", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where w i is a syntactic piece, P(w i ) is the frequency of trouble reports containing w i , N(w i ) is the frequency of no-trouble reports containing w i , P doc is the total number of trouble reports and N doc is the total number of no-trouble reports. Expression (2) defines a population parameter because there really is difference of frequency between P doc and N doc . If the score computed by expression (1) is a positive number, the syntactic piece occurs frequently in trouble reports. However, expression (1) does not consider the frequency of syntactic pieces in the training data. For example, in the two cases Case1. frequency: 100, score: 0.9 and Case2. frequency: 10000, score: 0.9, One may regard the second case is more reliable than the first. We applied the confidence interval estimation method to fix this problem. Expression (3) shows this method (Agresti and Coull, 1998) .", |
| "cite_spans": [ |
| { |
| "start": 869, |
| "end": 894, |
| "text": "(Agresti and Coull, 1998)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trouble Dictionary", |
| "sec_num": "3.3" |
| }, |
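Since expressions (1)-(3) themselves did not survive extraction, here is a hedged sketch of a score of this shape: the raw score is the difference between the piece's relative frequencies in trouble and no-trouble reports, and an Agresti-Coull-style adjustment subtracts a confidence margin so that low-frequency pieces are penalised. The exact formulas in the paper (and in Fujimura et al., 2004) may differ; the confidence level `Z` is an assumption.

```python
import math

Z = 1.96  # 95% confidence level (assumed; the paper does not state it)

def trouble_score(p_w, n_w, p_doc, n_doc):
    """Raw score in the spirit of expression (1): positive when piece w
    occurs relatively more often in trouble reports."""
    return p_w / p_doc - n_w / n_doc

def adjusted_score(p_w, n_w, p_doc, n_doc, z=Z):
    """Lower confidence bound on the raw score (Agresti-Coull-style).
    A plausible reading of expression (3), not a verbatim reconstruction."""
    p1 = (p_w + z * z / 2) / (p_doc + z * z)   # adjusted proportions
    p2 = (n_w + z * z / 2) / (n_doc + z * z)
    margin = z * math.sqrt(p1 * (1 - p1) / (p_doc + z * z)
                           + p2 * (1 - p2) / (n_doc + z * z))
    return (p1 - p2) - margin

# Same raw score, different frequencies: the larger sample is trusted more.
low = adjusted_score(90, 10, 100, 100)
high = adjusted_score(9000, 1000, 10000, 10000)
```

With these inputs both raw scores are 0.8, but the adjusted score of the high-frequency case is larger, matching the Case 1 / Case 2 intuition in the text.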
| { |
| "text": "The second term of expression (3) is the confidence interval. This paper considers only negative values for the confidence interval because we treat syntactic pieces with positive values for score(w i ). The second term of expression (3) is the doubled confidence interval, because score(w i ) is the computed difference of two probabilities. This paper considers only negative values for the confidence interval, because we treat syntactic pieces with positive values for score(w i ). As mentioned previously, we computed the trouble score of syntactic pieces, and added syntactic pieces having positive scores to the trouble dictionary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trouble Dictionary", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Extraction of troubles from new input reports was conducted by comparison with the trouble dictionary created as described in section 3.3. However, we cannot extract all troubles in the new reports using the trouble dictionary because it contains troubles from the training data only. Therefore, we expanded the trouble dictionary, tackling troubles not included in the training data, as described in the following sections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expansion of Trouble Dictionary with Syntactic Pieces", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We considered syntactic pieces that only included verbal nouns, and we expanded only verbal nouns in syntactic pieces. Verbal nouns means nouns that used in the same manner as verbs. We consider that if a verbal noun changes, the meaning of the syntactic piece changes too. For example, '\u30e1\u30c3\u30bb\u30fc\u30b8\u304c\u51fa\u306a\u3044(don't output message)' is a trouble, but '\u30af\u30ec\u30fc\u30e0\u304c\u51fa\u306a\u3044 (don't output complaint)' is not a trouble. Therefore, this paper uses only verbal nouns for expanded objects. We do not expand verbs because verbs have many conjugations, and handling them would be too complex. Below, 'expansion' refers only to verbal nouns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expansion of Target", |
| "sec_num": "3.4.1" |
| }, |
| { |
| "text": "As we expand the trouble dictionary, the plausibility of troubles in syntactic pieces in the dictionary must not change. We must find verbal nouns that easily apply in the context of the expanded object. Figure 3 outlines the expansion method.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 204, |
| "end": 212, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Expansion Method", |
| "sec_num": "3.4.2" |
| }, |
| { |
| "text": "First, we arrange a large web corpus that does not include the training data. Second, we extract syntactic pieces from this corpus, making a piece list of syntactic pieces. In the piece list, we note the frequency of syntactic pieces in the web corpus. Third, we use one of two methods to expand the dictionary. One method is to expand the modifiers (modifier expansion); the other is to expand the modificands (modificand expansion). Following are details of modifier expansion. For modificand expansion, susbstitute the word 'modifier' for 'modificand' and vice versa.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3: Expansion overview", |
| "sec_num": null |
| }, |
| { |
| "text": "1. Search the piece list for a modifier (motion), and compute the frequency of each modificand (sluggish, wrong, fickleness, and slow). The 10 most frequent modificands are added to the 'top modificand list'. 2. Search the piece list for each modificand (sluggish, wrong, fickleness, and slow) in the top modificand list, and compute the frequency of each modifier (search, display, response, and business). The 10 most frequent modifiers are added to the 'top modifier list'. 3. Modifiers on the top modifier list are considered highly likely to occur along with a modifier of the expansion target. We connect the modifiers on that list with a modificand of the expansion target, and add these to the dictionary as new syntactic pieces (search is slow, display is slow, response is slow, and business is slow) having the same trouble score as the expansion target piece. In this section, we construct the trouble dictionary. We extract troubles from new reports using this dictionary. Following are details of extracting troubles. 1. Extract syntactic pieces form the input report.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3: Expansion overview", |
| "sec_num": null |
| }, |
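The three numbered expansion steps can be sketched as follows. This is an illustrative reading with hypothetical English glosses, not the authors' code; `piece_list` stands in for the piece list mined from the web corpus.

```python
from collections import Counter

def expand_modifiers(piece_list, target, k=10):
    """Modifier expansion, steps 1-3. piece_list maps
    (modifier, modificand) -> web-corpus frequency; target is the
    dictionary piece being expanded. Returns candidate pieces that keep
    the target's modificand but swap in co-occurring modifiers."""
    t_modifier, t_modificand = target

    # Step 1: the k most frequent modificands of the target's modifier.
    heads = Counter()
    for (mod, head), freq in piece_list.items():
        if mod == t_modifier:
            heads[head] += freq
    top_heads = {h for h, _ in heads.most_common(k)}

    # Step 2: the k most frequent modifiers of those modificands.
    mods = Counter()
    for (mod, head), freq in piece_list.items():
        if head in top_heads:
            mods[mod] += freq

    # Step 3: connect each such modifier with the target's modificand.
    return [(m, t_modificand) for m, _ in mods.most_common(k)]

pieces = {("motion", "sluggish"): 5, ("motion", "slow"): 8,
          ("search", "sluggish"): 3, ("display", "slow"): 4,
          ("response", "sluggish"): 2}
print(expand_modifiers(pieces, ("motion", "slow")))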
| { |
| "text": "2. Set thresholds in the dictionary, and used syntactic pieces that scored higher than the thresholds as troubles. 3. If syntactic pieces in the input report occur in pieces of dictionary, we correct input pieces as troubles.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3: Expansion overview", |
| "sec_num": null |
| }, |
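A minimal sketch of the extraction steps above, assuming the trouble dictionary is a mapping from syntactic pieces to scores (all names and data are hypothetical):

```python
def extract_troubles(report_pieces, trouble_dict, threshold):
    """Steps 2-3: keep the input report's pieces whose dictionary score
    exceeds the threshold. Step 1 (parsing the report into syntactic
    pieces) is assumed to have happened already."""
    return [p for p in report_pieces
            if trouble_dict.get(p, float("-inf")) > threshold]

trouble_dict = {("server", "break"): 0.90, ("chair", "break"): 0.30}
report = [("server", "break"), ("chair", "break"), ("meeting", "attend")]
print(extract_troubles(report, trouble_dict, 0.780))
```

Pieces absent from the dictionary default to an impossibly low score, so only dictionary pieces above the threshold (0.780 echoes the paper's best operating point) are extracted.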
| { |
| "text": "We extracted troubles from new input reports with the trouble dictionary described above. We used two evaluation methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We designed a two-values classifier that differentiated trouble reports from no-trouble reports. We decided that input reports that extracted troubles were trouble reports, and other cases were no-trouble reports. We classified input reports as having troubles or no-troubles, and evaluated this result.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of a two-values classifier", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We evaluated troubles for plausibility.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of troubles extracted from input reports", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We prepared reports that did not include training data, and chose three human evaluators. Evaluators sorted reports into trouble reports and no-trouble reports. The classification basis is whether a report includes expressions indicating troubles. Classification reports include 400 reports having '\u30c8\u30e9\u30d6\u30eb(trouble)' in the title, and 400 not having the word '\u30c8\u30e9\u30d6\u30eb (trouble)' in the title or main text. These 800 reports are judged in trouble reports or no-trouble reports by evaluators. The evaluators classified 133 as trouble reports and 253 as no-trouble reports. Therefore we used 133 trouble reports and 253 no-trouble reports as evaluation data. Figure 4 shows the result of analysis using the dictionary created in section 3.3. Evaluation of the two-values classifier used precision, recall and F-value. The highest F-value is 0.757, which occurs when the threshold is 0.5. In this case, the precision and recall are 0.665 and 0.880, respectively. Figure 5 shows the result using the expanded dictionary created in section 3.4. The highest Fvalue is 0.772 when the threshold is 0.780. In this case, the precision and recall are 0.724 and 0.827, respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 650, |
| "end": 658, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 953, |
| "end": 961, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation Data", |
| "sec_num": "4.1" |
| }, |
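The F-value used here is the standard harmonic mean of precision and recall; as a quick check, the reported operating points are consistent with it:

```python
def f_value(precision, recall):
    """F-value (F1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported operating points from Figures 4 and 5:
print(round(f_value(0.665, 0.880), 3))  # unexpanded dictionary
print(round(f_value(0.724, 0.827), 3))  # expanded dictionary
```

Both come out within rounding of the reported 0.757 and 0.772 (the precision and recall values are themselves rounded).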
| { |
| "text": "This section evaluates troubles extracted from the evaluation reports. We evaluate the result obtained using the expanded dictionary. The threshold of the expanded dictionary is 0.780. This score is the highest point in Figure 5 . Four hundred seven pieces of troubles were extracted from the evaluation reports. We evaluated the results by hand, based on the following:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 220, |
| "end": 228, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation of Troubles", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "1. The piece is about a problem. 2. The piece is not about a problem, but we can associate it with a problem. 3. The piece is not about a problem, and we cannot associate it with a problem. The number of pieces corresponding to (1), (2) and (3) is 116, 47 and 244, respectively, indicating that the system can identify troubles but with a precision of only about 0.300. If we consider both (1) and (2) as being the right answer, the precision is about 0.400. The precision is shown expression (5). We cannot evaluate the recall of the extracted troubles, but section 4.2 indicates a recall of 0.827. We believe it represents the recall of extracted troubles. This value of recall is difficult to attain with heuristic rules and a dictionary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of Troubles", |
| "sec_num": "4.3" |
| }, |
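The precision figures quoted above follow directly from the counts; a one-line check (note that the text's "about 0.300" is a loose rounding of 116/407):

```python
counts = {1: 116, 2: 47, 3: 244}           # pieces judged under each basis
total = sum(counts.values())               # 407 extracted pieces
strict = counts[1] / total                 # basis (1) only
lenient = (counts[1] + counts[2]) / total  # bases (1) and (2) as correct
print(total, round(strict, 3), round(lenient, 3))
```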
| { |
| "text": "We show part of the extracted troubles from the evaluation data in Table 1 . Table 1 : Example of extracted troubles Basis (1) included lots of troubles that had a word meaning '\u306a\u3044(not)'. Syntactic pieces including '\u306a\u3044(not)' are easily identified as troubles. The syntactic piece ' \u21d2 \u97f3\u304c \u9014\ufa00\u308c\u308b (interrupt the sound)' clearly indicates the nature of the troubles. In Basis (2), we regarded terms as indicating troubles when they occur with Basis (1) terms. In fact, ' \u21d2 \u30b5\u30dd\u30fc\u30c8\u306b \u96fb\u8a71\u3059\u308b (call for support)' and ' \u21d2 \u753b\u9762\u304c \u8868\u793a\u3055\u308c\u306a\u3044(do not appear on the screen)' occur in the same reports. Therefore, if we connect Basis (2) and Basis (1), we can look on Basis (2) as belonging to Basis (1), with Basis (2) terms clarifying the meaning. Basis (3) includes terms that cannot be considered troubles. As a result, the rate of Basis (3) terms is higher than that of Basis (1) and (2). But there are valid pieces for which the task of two-values classification would work if the pieces cannot be judged as troubles manually. However, we must consider a score that can separate Basis (1) and (2) from Basis (3).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 67, |
| "end": 74, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 77, |
| "end": 84, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion 5.1 Extracted Troubles", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We show part of the extracted troubles obtained using the expanded dictionary. In Basis (1) and (2), we obtain the pairs '\u30b5\u30fc\u30d3\u30b9(service)\u2192\u30a4\u30e1\u30fc\u30b8(imagery)', '\u691c\uf96a\u3059\u308b(search)\u2192\u8868\u793a\u3059 \u308b(display)', '\uf99a\u7d61\u3059\u308b(inform)\u2192\u76f8\u8ac7\u3059\u308b(consult)' and '\u30a8\u30e9\u30fc\u304c(error)\u2192\u30de\u30fc\u30af\u304c (mark)'. These word usages resemble each other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expanded Dictionary", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Also, troubles in Basis (1) and (2) retains the attribution of troubles in both the unexpanded and expanded dictionaries. In Basis (3), our system can expand the pieces '\u5831\u544a\u3059\u308b(inform)\u2192 \u53d6\u5f15\u3059\u308b(have a deal)' and '\uf99a\u7d61\u3092(contact)\u2192\u8fd4\u4e8b\u3092(reply)'; these words resemble each other. However, the expanded pieces are not troubles because the pieces of the expansion target are not troubles. Most pieces in Basis (3) belong to this case. This result indicates that the expansion is good. In other words, our expansion method by syntactic pieces is found to be useful for this task. In this paper, the expanded pieces are few because we expanded only verbal nouns. We must consider other parts of speech: verbs, nouns and adjectives. If other parts of speech are expanded, we can create bigger dictionary. Table 2 : Example of troubles extracted using expanded dictionary", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 787, |
| "end": 794, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Expanded Dictionary", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In summary, we developed a system that extracts troubles from reports. Our dictionary is constructed using training data involving syntactic pieces, and is expanded to accommodate unknown troubles. We extract troubles from input reports with the dictionary. We evaluated our system using two methods. The two-values classifier had an F-value of 0.772, and the extracted troubles had a precision of 0.400 and recall of 0.827.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We are currently working on two future projects. The first problem involves the treatment of Basis (2) in section 5.1. These syntactic pieces can be considered troubles but do not directly indicate a problem. We think that if we can connect Basis (2) and Basis (1) pieces, Basis (2) pieces can be called troubles. The second problem is about the expanded dictionary. The expansion method cannot add many pieces because we consider only verbal nouns. We must consider expanding other parts of speech.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "(1) CaboCha, Ver.0.53, Matsumoto Lab., Nara Institute of Science and Technology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tools and language resources", |
| "sec_num": null |
| }, |
| { |
| "text": "http://chasen.org/\u02dctaku/software/cabocha/ (2) livedoor Blog, http://blog.livedoor.com/ (3) kakaku.com review boards, http://bbs.kakaku.com/bbs/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tools and language resources", |
| "sec_num": null |
| }, |
| { |
| "text": "22nd Pacific Asia Conference on Language, Information and Computation, pages 411-417", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Approximate is better than \"exact\" for interval estimation of binomial proportion", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Agresti", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Brent", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Coull", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "The American Statistician", |
| "volume": "52", |
| "issue": "", |
| "pages": "119--126", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Agresti and Brent A. Coull. 1998. Approximate is better than \"exact\" for interval estimation of binomial proportion. The American Statistician, 52:119-126.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Opinion Extraction based on Syntactic Pieces", |
| "authors": [ |
| { |
| "first": "Suguru", |
| "middle": [], |
| "last": "Aoki", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuhide", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "The 21st Pacific Asia Conference on Language, Information and Computation", |
| "volume": "", |
| "issue": "", |
| "pages": "76--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Suguru Aoki and Kazuhide Yamamoto. 2007. Opinion Extraction based on Syntactic Pieces. The 21st Pacific Asia Conference on Language, Information and Computation, pages 76- 86.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A Consideration of Extracting Reputations and Evaluative Expressions from the Web", |
| "authors": [ |
| { |
| "first": "Shigeru", |
| "middle": [], |
| "last": "Fujimura", |
| "suffix": "" |
| }, |
| { |
| "first": "Masashi", |
| "middle": [], |
| "last": "Toyota", |
| "suffix": "" |
| }, |
| { |
| "first": "Masaru", |
| "middle": [], |
| "last": "Kitsuregawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "104", |
| "issue": "", |
| "pages": "141--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shigeru Fujimura, Masashi Toyota, and Masaru Kitsuregawa. 2004. A Consideration of Extracting Reputations and Evaluative Expressions from the Web. Technical Report of The Institute of Electronics, Information and Communication Engineers, 104:141-146.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Text Mining System for Analysis of a Salesperson's Daily Reports", |
| "authors": [ |
| { |
| "first": "Yumi", |
| "middle": [], |
| "last": "Ichimura", |
| "suffix": "" |
| }, |
| { |
| "first": "Yasuko", |
| "middle": [], |
| "last": "Nakamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshio", |
| "middle": [], |
| "last": "Akahane", |
| "suffix": "" |
| }, |
| { |
| "first": "Miyoko", |
| "middle": [], |
| "last": "Miyosi", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshikazu", |
| "middle": [], |
| "last": "Sekiguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Yousuke", |
| "middle": [], |
| "last": "Fujiwara", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "127--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yumi Ichimura, Yasuko Nakamura, Toshio Akahane, Miyoko Miyosi, Toshikazu Sekiguchi, and Yousuke Fujiwara. 2001. Text Mining System for Analysis of a Salesperson's Daily Reports. Pacific Association for Computational Linguistics, pages 127-135.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The mining from trouble control data", |
| "authors": [ |
| { |
| "first": "Takahiro", |
| "middle": [], |
| "last": "Saito", |
| "suffix": "" |
| }, |
| { |
| "first": "Isamu", |
| "middle": [], |
| "last": "Watabe", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "93", |
| "issue": "", |
| "pages": "145--152", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Takahiro Saito and Isamu Watabe. 2001. The mining from trouble control data(in Japanese). Information Processing Society of Japan SIGNL Note, 93:145-152.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "\u5165\u529b\u6587( input sentence ): \u51fa\u529b\u69cb\u6587\u7247( output syntactic pieces ): \u53e4\u3044\u30d1\u30bd\u30b3\u30f3\u306e\u30d0\u30c3\u30c6\u30ea\u30fc\u304c\u3044\u304d\u306a\u308a\u7206\u767a\u3057\u305f\u3002 ( The battery of the old computer exploded suddenly. ) \u53e4\u3044( old ) \u21d2 \u30d1\u30bd\u30b3\u30f3( computer ) \u30d0\u30c3\u30c6\u30ea\u30fc\u304c( battery ) \u21d2 \u7206\u767a\u3059\u308b( explode ) \u3044\u304d\u306a\u308a( suddenly ) \u21d2 \u7206\u767a\u3059\u308b( explode ) Example of syntactic pieses Syntactic pieces have the following characteristics:" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "System overview" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Two-values classifier results (unexpanded dictionary)Figure 5: Two-values classifier results (expanded dictionary)" |
| } |
| } |
| } |
| } |