{
"paper_id": "Y03-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:34:33.055311Z"
},
"title": "A Word Selection Model Based On Lexical Semantic Knowledge In English Generation)",
"authors": [
{
"first": "Yi-Dong",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University Xiamen",
"location": {
"postCode": "361004",
"settlement": "Fujian",
"country": "China"
}
},
"email": "ydchen@xrnu.edu.cn"
},
{
"first": "L",
"middle": [
"I"
],
"last": "Tang-Qiu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University Xiamen",
"location": {
"postCode": "361004",
"settlement": "Fujian",
"country": "China"
}
},
"email": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Xu-Ling",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University Xiamen",
"location": {
"postCode": "361004",
"settlement": "Fujian",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word selection is an vital factor to improve the quality of machine translation. This paper introduces a new model for word selection based on lexical semantic knowledge, which could deal with the problem significantly better. Meanwhile, the construction of the English lexical semantic knowledge base required for the model in our Chinese-English machine translation system is also discussed in detail.",
"pdf_parse": {
"paper_id": "Y03-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "Word selection is an vital factor to improve the quality of machine translation. This paper introduces a new model for word selection based on lexical semantic knowledge, which could deal with the problem significantly better. Meanwhile, the construction of the English lexical semantic knowledge base required for the model in our Chinese-English machine translation system is also discussed in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of Vocabularies Handling in machine translation is to map source language words or phrases to their corresponding ones in target language. The task should be performed in almost every stage of machine translation, since words are basic elements of a sentence. A word in a source language can be translated into many different ones in the corresponding target language, since there exist 1 to N mapping between words in different languages due to the homophony and synonyms. But only one of them should be chosen according to the context. Such work is called Word selection. It is common practice that if one target word is selected improperly during the word selection, the sentence of the translation becomes quite unreadable, or even its meaning is much different from the source sentence. Word selection is regarded as one of the most important and difficult problem in machine translation. (Liu Xiaohu et al., 1998) .",
"cite_spans": [
{
"start": 903,
"end": 928,
"text": "(Liu Xiaohu et al., 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word selection Methods Based on Lexical Semantic Knowledge in Generation",
"sec_num": "1"
},
{
"text": "With the development of machine translation, researchers realized that it is more important to consider its semantic constraints in dealing with the problem of word selection than syntax constraints of each word candidates, and are now paying more and more attention to applying of semantic knowledge in machine translation. The following (in 1.1 and 1.2) are two frequently used methods of this kind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word selection Methods Based on Lexical Semantic Knowledge in Generation",
"sec_num": "1"
},
{
"text": "In this method, a semantic pattern consists of a headword and its one or more slots of semantic constraints. The semantic pattern base with a great number of such patterns should be constructed first. In word selection, the probability of each candidate can be calculated by comparing the semantic slot constraints of the pattern with the actual semantic environment of a concept, the interlingua structure. The interlingua structure is structurally similar to the pattern but contains the concept to be expressed with proper target word. Finally, one pattern with the highest probability will be chosen as the base of the word selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Pattern Based Method",
"sec_num": "1.1"
},
{
"text": "This method is usually referred to as Rationalist Method and was first used in DOGENES (Nirenburg et al., 1998) developed in Carnegie Mellon University.",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Nirenburg et al., 1998)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Pattern Based Method",
"sec_num": "1.1"
},
{
"text": "There are a few main weak points of this method. First, the pattern base is usually constructed manually, and it is hard to construct a good one without losses. Also, subjective factors will be introduced while constructing such a pattern base. Secondly, the semantic slot constraints in patterns manually made are usually high level concepts, so the variety and particularity in the natural language could not be reflected easily. Therefore, there will often be more than one result chosen after this stage, because many candidates have the same probability. Third, the semantic slot constraints are qualitative constraints and the quantitative differences of language phenomena could not be embodied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Pattern Based Method",
"sec_num": "1.1"
},
{
"text": "Example based method is an Empiricist Method. It was proposed as a new model of machine translation at first. In performing a sentence translation in example based method, the most similar example to the input sentence, together with its corresponding translation, will be found out from a large scale bilingual corpus. Then the corresponding translation will be used as the result, or with some necessary adjustments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example-Based Method",
"sec_num": "1.2"
},
{
"text": "This same idea could also be used in many stages of machine translation, especially those involved with disambiguation. For example, this idea was used to deal with word sense disambiguation in a Chinese-English machine translation system (Yang Xiaofeng et al., 2001) . Similarly, the idea could also be used to deal with the word selection problem. The key advantage of using this idea is that the variety and particularity in the natural language could be taken account of in the course of word selection.",
"cite_spans": [
{
"start": 245,
"end": 267,
"text": "Xiaofeng et al., 2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example-Based Method",
"sec_num": "1.2"
},
{
"text": "There are also some main problems with the method. First, the examples in the example base should be selected carefully, and should be somewhat representative. Otherwise the result of the word selection using this example base would become unreliable. The problem could be resolved by constructing example base as large as possible with examples extracted from a real corpus randomly. Meanwhile, the \"Combination Explosion\" problem will likely to occur in the process of word selection, if the scare of the example base becomes very large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example-Based Method",
"sec_num": "1.2"
},
{
"text": "Although the two methods of word selection mentioned above have their merits and demerits respectively, they complement each other well: The Semantic-Pattern-Based Method is somehow simpler and has a smaller computation complexity compared to Example-Based Method. On the other hand, the Example-Based Method can take into account the variety and particularity of languages while the Semantic-Pattern-Based Method does it relatively deficient in this aspect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hybrid Method",
"sec_num": "1.3"
},
{
"text": "In order to make use of the complement-ness of these two methods, we put forward a Hybrid method. In this method, word selection is divided into two stages. In first stage, the Semantic-Pattern-Based Method is used, and many improper candidates will be got rid of in this stage, hence the sides of the candidate set will become smaller. Then, in second stage, the Example Based Method is used, and the quantitative language knowledge will be utilized to select the best result (Chen Yidong et al., 2001) . Figure 1 shows the process briefly. constructing such a knowledge base may be different from system to system, due to the different language pairs and different semantic representation adopted etc. But the main idea should be common. The design of the structure and organization of the base is very important to realize our goal. The rest of this paper will introduce in detail the construction of the lexical semantic knowledge base used in our interlingua-based Chinese-English machine translation system. Here, the interlingua is a frame-like representation which utilizes HowNet (Dong Zhendong) as its semantic basics.",
"cite_spans": [
{
"start": 483,
"end": 503,
"text": "Yidong et al., 2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 506,
"end": 514,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Hybrid Method",
"sec_num": "1.3"
},
{
"text": "2 Construction of the Example Base",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Hybrid Method",
"sec_num": "1.3"
},
{
"text": "In order to fulfil the need of natural language translation, the corpus should be large enough and come from a real corpus. The examples used in our system are extracted from a real corpus named SEMCOR (Miller et al., 1993) , which is a text corpus and is normally released with WordNet (Chen Qunxiu, 1998) . The text of SEMCOR stems from Brown Corpus (Francis et al., 1982) and is semantically tagged according to the WordNet. SEMCOR is tagged mainly by hand with the help of some tagging tools.",
"cite_spans": [
{
"start": 202,
"end": 223,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF5"
},
{
"start": 287,
"end": 306,
"text": "(Chen Qunxiu, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 352,
"end": 374,
"text": "(Francis et al., 1982)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Knowledge Source",
"sec_num": "2.1"
},
{
"text": "In SEMCOR released with WordNet 1.6, there are about 352 tagged SMGL-like files, among which about 183 ones are entirely tagged and others only have their verbs tagged. In these 183 entirelytagged files there are altogether 359,732 words or so, with around 193,373 ones semantically-tagged. The proper name in SEMCOR such as names of person, group and location and etc. are tagged with additional taggers according to their types. For instance, names of person are tagged with \"person\", names of group are tagged with \"group\" and names of location are tagged with \"location\" and etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Knowledge Source",
"sec_num": "2.1"
},
{
"text": "To build an example base, we must acquire required knowledge from the knowledge source and organize them into a proper form. To do so, two inconsistent problems should be resolved in our system:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Organization",
"sec_num": "2.2"
},
{
"text": "First, the tag information of knowledge source is inconsistent with the semantic tag information required in the semantic disambiguation: as mentioned above, SEMCOR is tagged with tags used in WordNet, while the structure of our interlingua is constructed according to HowNet. The difference should be resolved one way or the other so that the knowledge acquired from the corpus can be utilized in reasoning. In aur system, we adopt the tag system used in HowNet and all the examples extracted from the source must be converted into a proper form accordingly. Secondly, there are another inconsistence, the inconsistence of relation names and structures. SEMCOR is neither syntactically nor semantically analyzed, only linear collocations could be extracted from the corpus. This kind of structure is not easily utilized in the disambiguation process. To facilitate the reasoning process, the examples should be constructed as frame-like structure, which is similar to the structures adopted in our interlingua. In other word, we must build frame-like examples from linear collocation relations abstracted from the corpus. It means that there is much information to be resolved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Organization",
"sec_num": "2.2"
},
{
"text": "To ensure that the examples can be easily be utilized for disambiguation, its structure should be similar to the structure of our interlingua. Following is its formal defmition as shown in Figure 2 :",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 197,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach to Construct the Example Base",
"sec_num": "2.3"
},
{
"text": "The steps of the construction of the example base can be described informally as follows: First, to change the corpus tagged in WordNet formalism into one tagged in HowNet formalism. To accomplish the step, we designed an algorithm that can map WordNet senses in the corpus to HowNet concepts effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Construct the Example Base",
"sec_num": "2.3"
},
{
"text": "The main idea of the algorithm is that: Each sense in the WordNet is represented as a set of synonym, called Senset. Given a WordNet Senet, each word in the Senset has a series of possible corresponding words, each of which has series corresponding concepts in the HowNet. All the probable combinations will be enumerated and the common concept occurred with in each word or the one occurred most often will be chosen as the meaning representation of the sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Construct the Example Base",
"sec_num": "2.3"
},
{
"text": "Using the algorithm, a mapping list from WordNet senses to HowNet concepts was constructed. Further, this list is used to transform SEMCOR, the corpus tagged with tags used in WordNet, to the corpus, tagged with tags used in the HowNet. Secondly, linear collocation relations would be extracted from the corpus already tagged in HowNet formalism, and their semantic relationships among the words in each collocation be inferred and then transformed into frame-like forms. To do this, we designed an example transforming procedure which can transform the collocations automatically into frame-like forms. After a process of automatic transformation, a manual adjustment process is performed to correct some errors in the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Construct the Example Base",
"sec_num": "2.3"
},
{
"text": "Thirdly, to reorganize the examples and make the example base optimized, two procedures are performed: The first is to delete the redundant entries and count the frequency. The second is to merge the examples that has identical headword. Doing so, the redundancy can be reduced and the base be packed into a smaller size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Construct the Example Base",
"sec_num": "2.3"
},
{
"text": "Following the approach described in 2.3, an example base with 4362 examples was constructed. To list a few, some examples are shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Some Instances",
"sec_num": "2.4"
},
{
"text": "((keep (SENSE keeplig14) (CAT V)) ( (THEME (((lead (SENSE surpass! a) (CAT N)) 0.1) ((fashion (SENSE attribute)att,SocialModelg,&entitylV2) (CAT N)) 0.1) ((faith (SENSE experienceiSlt,believeltig) (CAT N)) 0.1) )) )) ((keep (SENSE SetAsideli) (CAT V)) ( (THEME (((moisture (SENSE attributeNt,dampnessirigt&physicallitrffl) (CAT N)) 0.1) ((package (SENSE physical! J) (CAT N)) 0.1) ( (letter (SENSE letterriW (CAT N)) 0.1) )) )) ( (reserve (SENSE SetAsidem) (CAT V)) ( (THEME (((complaint (SENSE thoughtiz -t3k,different14,#opposetKR-4) (CAT N)) 0.1) ((power (SENSE attibutaft,abilitylftn,&physicaliTtE) (CAT N)) 0.1) ((right (SENSE rightsitOU) (CAT N)) 0.1) )) )) ((conserve (SENSE SetAsidelt) (CAT v) ) ((THEME (( (energy (SENSE attribute)Nit,strengthl)ji,$function)&AnimalHumanktit) (CAT N)) 0.1) ((resources (SENSE material)f7M,genericlgt*) (CAT N)) 0.1) )) )) 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Some Instances",
"sec_num": "2.4"
},
{
"text": "Construction of the Semantic-Pattern Base",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Some Instances",
"sec_num": "2.4"
},
{
"text": "As mentioned in the first part of the paper, the pattern based method plays an important part in the first stage of the hybrid method. But the traditional semantic pattern based method has some demerits and it may influence the performance of the whole system. Two important improvements are proposed to overcome its difficulties and to improve its performance. One is in the way to build semantic pattern base. The other is the improvement of the structure of the pattern representation itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improvement of Selection Method Based on Semantic Pattern",
"sec_num": "3.1"
},
{
"text": "With the traditional semantic pattern based method, the most difficulty is how to build the pattern base. Manual approach to build vast number of semantic patterns not only results in a big workload but also leads to the introduction of subjectivity of the author who builds the patterns. So the way to construct the pattern base has to be changed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Approach to construct the Pattern Base",
"sec_num": "3.1.1"
},
{
"text": "In our system, the semantic pattern base will be constructed automatically from the example base mentioned in 2.4. Since the examples in the base have been semantically tagged and the relations among words in them have been well determined, it is not difficult to utilize the example base as the training knowledge source to extract the semantic patterns. Obviously the automatic approach to construct the pattern base well overcome the short-comes of great workload and subjectivity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Approach to construct the Pattern Base",
"sec_num": "3.1.1"
},
{
"text": "The semantic patterns used in the traditional method are all yes or no rules. If the corresponding semantic pattern to a valid collocation is not included in the pattern base, the relevant candidate will be rejected incorrectly. It lacks the flexibility characterized by a quantitative matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fuzzy Semantic Pattern",
"sec_num": "3.1.2"
},
{
"text": "To solve the problem, the structure and content of the semantic pattern should be improved. In our system, an additional field, Probability, is introduced into a semantic slot constraint, and the amended semantic patterns are so called Fuzzy Semantic Patterns (Chen Yidong et al., 2002) . By introducing this additional field, the semantic pattern will be able to support inexact match. Obviously it makes the method become more flexible. Since our pattern is extracted from the real example base, it is possible for the probability field to be calculated from the corpus statistically without difficulty.",
"cite_spans": [
{
"start": 260,
"end": 286,
"text": "(Chen Yidong et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fuzzy Semantic Pattern",
"sec_num": "3.1.2"
},
{
"text": "As described in 3.1.2, fuzzy semantic patterns consist of semantic slot constraints that indicate the collocation relation of a headword. The structure of fuzzy semantic patterns is shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Organization of the Semantic Pattern Base",
"sec_num": "3.2"
},
{
"text": "It can be seen that, similar to the example base, the semantic slot constraints with the same headword will be merged into the same pattern, and similarly, the values of the semantic slot constraints with the same name in a pattern will be merged into the same value list. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Organization of the Semantic Pattern Base",
"sec_num": "3.2"
},
{
"text": "As we can see in Figure 2 and Figure 3 , the structure of the semantic patterns and examples in our system are very similar to each other. So it's easy to train fuzzy semantic patterns from the example base. The algorithm to train fuzzy semantic patterns could be described as follows (Chen Yidong et al., 2002) :",
"cite_spans": [
{
"start": 285,
"end": 311,
"text": "(Chen Yidong et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 2",
"ref_id": null
},
{
"start": 30,
"end": 38,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Train Algorithm of Fuzzy Semantic Patterns",
"sec_num": "3.3"
},
{
"text": "Given an example (CH (SSi SS2 SSO), where CH is the concept of the headword and each SS in the list stands for a semantic slot of this headword respectively (the detail definition is shown in Figure 2 ), a fuzzy semantic pattern with a structure that meets with the definition shown in Figure 3 will be trained using the following steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 2",
"ref_id": null
},
{
"start": 286,
"end": 294,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Train Algorithm of Fuzzy Semantic Patterns",
"sec_num": "3.3"
},
{
"text": "1\u00b0 For each semantic slot SS, which is of the form (SSN (SSV1 SSV2 SSVm)), with SSN as its name and the SSVs list as the list of collocation instances of CH in it, construct a corresponding semantic slot constraint, SSC, whose form is (SSN SSCV2 SSCVm)). In this step, two sub-steps are to be performed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Train Algorithm of Fuzzy Semantic Patterns",
"sec_num": "3.3"
},
{
"text": "1.1\u00b0 Get each SSCV from the corresponding SSV and form the list of SSCVs. Meanwhile, the probability field of each SSCV will be calculated respectively. (See the more detail description in Chen Yidong et al., 2002) 1.2\u00b0 Use the value list, (SSCV1 SSCV2 SSCVm), and the semantic slot name, SSN, to construct a semantic slot constraint.",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "Yidong et al., 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Train Algorithm of Fuzzy Semantic Patterns",
"sec_num": "3.3"
},
{
"text": "2\u00b0 Construct the final fuzzy semantic pattern using CH and SS that is formed in 1\u00b0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Train Algorithm of Fuzzy Semantic Patterns",
"sec_num": "3.3"
},
{
"text": "\u2022 Figure 4 . The algorithm to train fuzzy semantic patterns",
"cite_spans": [],
"ref_spans": [
{
"start": 2,
"end": 10,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Train Algorithm of Fuzzy Semantic Patterns",
"sec_num": "3.3"
},
{
"text": "Using the algorithm above, 4362 semantic patterns were trained automatically from the example base described in 2.4. As is mentioned above, when constructing the fuzzy semantic pattern base, all the collocation information related to the same headword will be merged into the same pattern, and so is the construction of the example base, hence, although the number of entries in the pattern base and in the example base is identical, the actual scale of them are not the same. Some instances of the semantic patterns are shown below. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Some Instances",
"sec_num": "3.4"
},
{
"text": "V) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Some Instances",
"sec_num": "3.4"
},
{
"text": "1.1) (attributeNt 1.1) (rightsiVfli 1.1) (thinkingIZZ 0.14) 0.12) (thingWr00.021) (entity{ i* 0.0041) ((conserve (SENSE SetAsideR) (CAT V) ) ( (THEME (attributel)'t 1.1) (materiall#g 1.1) (artifactlAilt 0.092) (inanimateiXtAt 0.031) (physical)ttif 0.0076) (thingWit0.0013) (entitylM* 0.0003)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Some Instances",
"sec_num": "3.4"
},
{
"text": "......)) ......))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Some Instances",
"sec_num": "3.4"
},
{
"text": "In order to improve the quality of machine translation, semantic knowledge should be utilized, especially in word selection, an important process in machine translation. Based on the investigation and analysis of several commonly used methods, a hybrid method is proposed in this paper. The method combines the advantages of the two traditional methods and shows its extra flexibility. In implementing such a method, how to obtain lexical semantic knowledge becomes the key. Therefore, the main part of this paper focuses on the construction of the example base and semantic pattern base used. Using the algorithms presented in this paper, a lexical semantic knowledge base with considerable scale was constructed successfully in our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "This paper was supported by the Chinese 863 High Tech Research Fund (2001AA114110), and the Fund of Key Research Project of Fujian Province (2001H023)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An On-line Thesaurus: WordNet. Application of Language and Literary",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Qunxiu",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "93--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Qunxiu. 1998. An On-line Thesaurus: WordNet. Application of Language and Literary, (2):93-99",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Designing of a Model of Word selection in English Generation",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yidong",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Tangqiu",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Qingyang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Xuling",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Chinese Information Processing",
"volume": "15",
"issue": "6",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yidong, Li Tangqiu, Hong Qingyang and Zheng Xuling. 2001. Designing of a Model of Word selection in English Generation. Journal of Chinese Information Processing, 15(6): 19-26.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fuzzy Semantic Pattern and Its Application in the Word selection for English Generation",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yidong",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Tangqiu",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Xuling",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Chinese Information Processing",
"volume": "16",
"issue": "5",
"pages": "15--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yidong, Li Tangqiu and Zheng Xuling. 2002. Fuzzy Semantic Pattern and Its Application in the Word selection for English Generation. Journal of Chinese Information Processing, 16(5): 15-22.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Frequency Analysis of English Usage: Lexicon and Grammar",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Zhendong",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Qiang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hownet",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Zhendong and Dong Qiang. HowNet. http://www.keenage.com W. N., and Kucera H. 1982. Frequency Analysis of English Usage: Lexicon and Grammar. Houghton Mifflin Company, Boston.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Target Word Selection in Machine Translation Based on Machine Learning",
"authors": [
{
"first": "Liu",
"middle": [],
"last": "Xiaohu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Sheng",
"suffix": ""
}
],
"year": 1998,
"venue": "Computer Research & Development",
"volume": "35",
"issue": "10",
"pages": "946--950",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu Xiaohu and Li Sheng. 1998. Target Word Selection in Machine Translation Based on Machine Learning. Computer Research & Development, 35(10):946-950",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Semantic Concordance",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the ARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller G.A., Leacock C., Tengi R. and Bunker R.T. 1993. A Semantic Concordance. In: Proceedings of the ARPA Workshop on Human Language Technology.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A framework for word selection in NLG",
"authors": [
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Nirenburg",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 12th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nirenburg, Sergei and Nirenburg, Irene. 1998. A framework for word selection in NLG. In: Proceedings of the 12th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word Sense Disambiguation Method in Chinese -English Machine Translation System",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Xiaofeng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Tangqiu",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Qingyang",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Chinese Information Processing",
"volume": "15",
"issue": "3",
"pages": "22--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Xiaofeng, Li Tangqiu and Hong Qingyang. 2001. Word Sense Disambiguation Method in Chinese -English Machine Translation System. Journal of Chinese Information Processing, 15(3):22-28.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "The diagram of the hybrid method It is clear that the basic component of the Hybrid Method is a well-organized Lexical Semantic Knowledge Base, which consists of a semantic pattern base and an example base. The details of",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "The structure definition of fuzzy semantic patterns",
"type_str": "figure",
"uris": null
}
}
}
}