| { |
| "paper_id": "1992", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:45:27.098733Z" |
| }, |
| "title": "Example-Based Machine Translation using Connectionist Matching", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mclean", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ian@ccl.umist.ac.uk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper proposes an alternative approach to matching input text with example text in an Example-Based Machine Translation system. The approach employs a connectionist network to compute a measure of distance between the input text and the source members of source / target text pairs contained in a bilingual corpus.", |
| "pdf_parse": { |
| "paper_id": "1992", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper proposes an alternative approach to matching input text with example text in an Example-Based Machine Translation system. The approach employs a connectionist network to compute a measure of distance between the input text and the source members of source / target text pairs contained in a bilingual corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "A framework for Example-Based Machine Translation (EBMT) was first proposed by Nagao [9] where he suggests that rather than requiring deep linguistic analysis, translation is achieved by \"... properly decomposing an input sentence into certain fragmental phrases [...] , then by translating these fragmental translations into other language phrases and finally by properly composing these fragmental phrases into one long sentence.\" (p. 179) This paper will address a problem inherent in the example-based approach to MT: the selection from a set of examples (a bilingual corpus) the most suitable translation pair(s) given an input in the source language, a process commonly referred to as matching. This selection is often performed by first computing a distance measurement between the input to be translated and the source language examples. Figure 1 illustrates the architecture within which this process operates. The matching process selects from a set of translation pairs the pairs whose source sentence is the by the matching process, constituents from the matched examples are recombined in order to more closely match the input sentence. Finally, a translation is generated using the translations of the recombined source sentence constituents. For further explanation see Hutchins & Somers [5] (pp. 125-130).", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 88, |
| "text": "[9]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 263, |
| "end": 268, |
| "text": "[...]", |
| "ref_id": null |
| }, |
| { |
| "start": 433, |
| "end": 441, |
| "text": "(p. 179)", |
| "ref_id": null |
| }, |
| { |
| "start": 1303, |
| "end": 1306, |
| "text": "[5]", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 846, |
| "end": 854, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "A number of approaches to distance measurement have been proposed which employ both statistical and heuristic techniques. Sato and Nagao [12] combine two heuristics in a similarity measure which accounts for both the size of a translation unit and the 'environmental similarity' 1 of the input and source example. The latter is computed with the aid of a thesaurus which specifies similarity values between words in the same language e.g. the similarity between book and notebook is defined to be 0.8.", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 141, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Similarly, Sumita et al. [14] employ a thesaurus containing an abstraction hierarchy to which semantic distances (relative to the root) are attached at the nodes. Attributes from the input are matched to attributes in the thesaurus and for each pair a common level of semantic abstraction is identified. The semantic distance from that level is taken as a measure of semantic distance between the two attributes. A weight derived from this distance is combined with a frequency based syntactic distance measure and the sum of all of the attribute level products provides the final distance measurement for the whole translation.", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 29, |
| "text": "[14]", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
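The hierarchy-based semantic distance described above can be sketched in a few lines. This is a toy illustration under assumed node names and level distances, not Sumita et al.'s actual thesaurus: each word lists its chain of abstraction levels up to the root, a distance value is attached at each level, and the distance between two words is the value at their most specific common level.

```python
# Toy abstraction hierarchy: each word's ancestors, most specific first.
# Node names and distances are hypothetical, chosen only for illustration.
ANCESTORS = {
    "book":     ["publication", "artifact", "entity"],
    "notebook": ["publication", "artifact", "entity"],
    "hammer":   ["tool", "artifact", "entity"],
}

# Semantic distance attached at each level of abstraction (relative to the root).
LEVEL_DISTANCE = {"publication": 0.2, "tool": 0.2, "artifact": 0.6, "entity": 1.0}

def semantic_distance(a, b):
    """Distance at the most specific common level of abstraction of a and b."""
    if a == b:
        return 0.0
    for ancestor in ANCESTORS[a]:           # walk up from the most specific level
        if ancestor in ANCESTORS[b]:        # first shared level of abstraction
            return LEVEL_DISTANCE[ancestor]
    return 1.0                              # no common level below the root
```

Under these assumed values, 'book' and 'notebook' meet at a low level and so are close, while 'book' and 'hammer' meet only at 'artifact' and so are further apart.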
| { |
| "text": "The 'Bilingual Knowledge Bank' proposed in [11] following the DLT project also uses a bilingual thesaurus, two interconnected Textual Knowledge Banks' (TKBs), which contains aligned 'translation units' generated by a conventional parsing process. Matching in this approach originally involved the rule-based parsing of an input text into a dependency tree although this process was subsequently developed to become purely analogical.", |
| "cite_spans": [ |
| { |
| "start": 43, |
| "end": 47, |
| "text": "[11]", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "The approaches taken by Sumita et al., Sato and Nagao and Sadler rely upon the use of extensive thesauri, requiring that formal linguistic information be encoded in a 'rationalist' manner and thus retaining the problem of rigidity implicit in such static definitions. It may also be argued (aside from any practical considerations) that the 'formal' embodiment of this type of information in a thesaurus runs contrary to the ethos of the example-based paradigm which they employ smallest distance from an input sentence. Once suitable translation pairs have been selected elsewhere in their work: the decomposition of input sentences into word dependency trees in [12] , the analysis phase mentioned in [14] and the parsing process required for the initial stages of TKB construction in [11] .", |
| "cite_spans": [ |
| { |
| "start": 664, |
| "end": 668, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 703, |
| "end": 707, |
| "text": "[14]", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 787, |
| "end": 791, |
| "text": "[11]", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Carroll [1] introduces the concept of an angle of similarity as a measure of distance between sentences. This angle is calculated using a triangle whose three points represent the two sentences being compared and a 'null sentence'. The length of the sides from this null point to the points representing the two sentences are the respective sizes of those sentences and the length of the third side is the difference between the two. The size of a sentence is calculated by costing the add, delete and replace operations necessary to derive one sentence from the other using costs from a set of 'rules' embodied in the system. Carroll shows that the angle at the null sentence point (between the adjacent and hypotenuse sides) provides a measure of distance which reduces 'undesirable' length effects whereby a sentence may be selected from a set of examples by virtue of the proximity of its length to the input sentence length rather than its qualitative or directional similarity to other, longer, examples. However, the derivation process requires the formal specification of ad hoc cost measurement rules which define the cost of the basic operations it employs.", |
| "cite_spans": [ |
| { |
| "start": 8, |
| "end": 11, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
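The triangle construction above can be made concrete with a small sketch. Here the system's ad hoc cost rules are replaced by a unit-cost edit distance over words (an assumption for illustration, not Carroll's actual cost rules): the two sentence sizes and their difference form the three sides, and the angle at the null-sentence vertex follows from the law of cosines.

```python
import math

def edit_cost(xs, ys):
    """Unit-cost add/delete/replace edit distance between word sequences."""
    m, n = len(xs), len(ys)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                          # delete everything
    for j in range(n + 1):
        d[0][j] = j                          # add everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,   # delete
                          d[i][j - 1] + 1,   # add
                          d[i - 1][j - 1] + (xs[i - 1] != ys[j - 1]))  # replace
    return d[m][n]

def angle_of_similarity(s1, s2):
    """Angle (degrees) at the null-sentence vertex of Carroll's triangle."""
    a = edit_cost([], s1)    # size of s1: cost of deriving it from the null sentence
    b = edit_cost([], s2)    # size of s2
    c = edit_cost(s1, s2)    # difference between the two sentences
    # Edit distance is a metric, so a, b, c satisfy the triangle inequality
    # and the law of cosines gives a valid angle at the null point.
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
```

Identical sentences give an angle of zero; sentences of similar length but different content give a wide angle, which is the length-bias reduction the measure is designed for.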
| { |
| "text": "Jones [6] proposes an approach employing an analogical modelling technique whereby the distance between an input and example is calculated by a comparison of feature vectors attached to the input and examples. A number of these vectors may be attached to each example, describing features at different levels: morphological and clausal, for example. From the example database, examples are selected for both the similarity of their feature vectors with the input and for the similarity between their 'outcomes' 2 and the outcome predicted by the closest analogy to the input. The probability that an example will serve as the analogical model is calculated based upon its similarity and frequency of occurrence in the example database (for a complete explanation of analogical modelling see Skousen [13] ). While this approach does not require an extensive thesaurus, it is necessary for a corpus to be augmented with the feature information required by the analogical modelling process in order to produce the example database. However, the extraction of this linguistic information may be readily automated and is not as extensive a problem as either the encoding of linguistic rules or generation of ad hoc thesaurus entries.", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 9, |
| "text": "[6]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 799, |
| "end": 803, |
| "text": "[13]", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "In the following sections a connectionist 3 alternative to the above which avoids the use of extensive thesauri and alleviates the extent of corpus augmentation will be outlined. It will be shown experimentally that connectionism offers an alternative to the above solutions to distance measurement in that it may account for length, frequency, syntactic and semantic contextual effects and it will be concluded that on the basis these results the application of connectionism to MT should be pursued further.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "The proposed approach employs a two layer connectionist architecture (figure 2). As its various names suggest, the connectionist paradigm is based upon the use of a large number of very simple interconnected processing units (these are sometimes, rather controversially, referred to as neurons). A pattern of activation (often binary) is presented to the source units and this activation is propagated through weighted connections (the weights of which are initially random) to the units at the end of the connections which combine the weighted activations from their incoming connections by way of a combination function (usually summation) to produce a net value for that unit. The net value is processed by an activation function associated with the unit to produce its new activation. The key to the operation of such a network is the relative strengths of its interconnecting weights. These are modified during an initial learning phase where the network is provided with not only a source pattern but the target pattern which it is expected to make in response 4 . By comparing the actual and desired target responses, a learning algorithm adjusts the weights of the connections in proportion to a learning rate and, given a suitably stable training set, the weights stabilize. Once the weights have become stable the network has learned its training set and with the learning 'turned off (i.e. learning rate = 0) can be presented with patterns which it will attempt to classify but not learn. In the proposed architecture, units on the input are grouped so that for each word position in the input sentence there are 30 units, each unit corresponding to a word. Consequently only one unit in each group may be active at any given time as only a single word can occupy a position in a sentence at one time. Output units are encoded differently with a single unit corresponding to a translation. 
Thus the network is configured to map from source sentences with word-level resolution to targets with sentence-level resolution which enables it to learn to select a translation by virtue of detailed source-level information. Although the detail employed in this particular model is at the word level, the architecture allows for the use of encoded morphological, clausal and semantic information (see discussion in the summary). The network weights fully interconnect the input units to the output units in a unidirectional manner and are adjusted using the Delta learning algorithm described by Rumelhart & McClelland [10] (chapter 11).", |
| "cite_spans": [ |
| { |
| "start": 1067, |
| "end": 1068, |
| "text": "4", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 2522, |
| "end": 2526, |
| "text": "[10]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architecture", |
| "sec_num": null |
| }, |
| { |
| "text": "In each of the following experiments the same network and training set was employed. The training set comprised 32 phrases taken from business letters and was of the form {<source sentence , <unique example identifier>} The unique example identifier can be considered to be a pointer to the corresponding translation pair in the example set of the form <unique example identifier> -> {e s , e t ) e.g.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": null |
| }, |
| { |
| "text": "1 -> {[the, cat, sat, on, the, mat], [le, chat, assis, sur, le, tapis]}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": null |
| }, |
| { |
| "text": "The learning rate used in the Delta learning algorithm was 0.5 and the activations contained on the input and target vectors were binary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": null |
| }, |
| { |
| "text": "The network was trained by presenting source sentence and unique translation pair identifier patterns to the input and output units of the network respectively and applying the learning algorithm to adjust the weights of the connections. This was iterated over each of the 32 entries in the training set and the network was exposed to 65 sets of iterations before learning was complete and the tests performed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": null |
| }, |
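The set-up described above (one-hot word units per position, one output unit per example, delta-rule learning with rate 0.5 over 65 passes) can be sketched as follows. The two-phrase vocabulary and corpus here are toy assumptions standing in for the 32-phrase business-letter training set, and the logistic activation function is an assumed choice; the encoding, combination function (summation) and learning rule follow the description in the text.

```python
import math
import random

random.seed(0)

VOCAB = ["we", "cannot", "accept", "your", "conditions", "of", "payment", "delivery"]
POSITIONS = 7
N_IN = POSITIONS * len(VOCAB)          # one unit per (position, word) pair

TRAINING_SET = [                       # {<source sentence>, <unique example identifier>}
    ("we cannot accept your conditions of payment".split(), 0),
    ("we cannot accept your conditions of delivery".split(), 1),
]
N_OUT = len(TRAINING_SET)

def encode(sentence):
    """Binary input pattern: one active unit per position group."""
    x = [0.0] * N_IN
    for pos, word in enumerate(sentence):
        x[pos * len(VOCAB) + VOCAB.index(word)] = 1.0
    return x

# Fully interconnected, unidirectional, initially random weights.
weights = [[random.uniform(-0.1, 0.1) for _ in range(N_IN)] for _ in range(N_OUT)]

def respond(x):
    """Combination function: summation; activation function: logistic squash."""
    return [1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(row, x))))
            for row in weights]

RATE = 0.5                             # learning rate used in the experiments
for _ in range(65):                    # 65 iterations over the training set
    for sentence, ident in TRAINING_SET:
        x = encode(sentence)
        out = respond(x)
        for j in range(N_OUT):
            target = 1.0 if j == ident else 0.0
            delta = RATE * (target - out[j])   # delta learning rule
            for i in range(N_IN):
                weights[j][i] += delta * x[i]
```

After training, presenting either source sentence activates its own example-identifier unit most strongly; the oppositely signed weights that develop on the payment/delivery units in the final position play the salient-feature role illustrated in figure 3.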
| { |
| "text": "The first of the experiments demonstrates the network's ability to identify salient features of the input sentences. Using the sentences (1) and (2) the ability of the network to identify salient features (words) from an input pattern (sentence) can be demonstrated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Salient Feature Identification", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) We cannot accept your conditions of payment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Salient Feature Identification", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(2) We cannot accept your conditions of delivery.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Salient Feature Identification", |
| "sec_num": "1" |
| }, |
| { |
| "text": "After training, the weights connecting units representing words in position 7 in the source sentence pattern 5 to units representing the unique example identifier are as shown in figure 3. It can be seen that there is a strong positive weight from the payment unit and a strong negative weight from the delivery unit into the target unit representing the example (1). In a similar manner, the converse is true for the weights into the target unit representing the example (2). (2) . The size of the weight is proportional to the size of the square representing it and its sign indicated by the square's colour.", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 110, |
| "text": "5", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 477, |
| "end": 480, |
| "text": "(2)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Salient Feature Identification", |
| "sec_num": "1" |
| }, |
| { |
| "text": "An inherent property of connectionist networks is their ability to make educated guesses and is exploited in models which need to generalize about pattern classifications or withstand noisy data or even physical damage. In this particular test the network, having been taught the unique example identifier for the sentence (3) We acknowledge receipt of your letter.", |
| "cite_spans": [ |
| { |
| "start": 323, |
| "end": 326, |
| "text": "(3)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graceful Degradation -'Best Guess'", |
| "sec_num": "2" |
| }, |
| { |
| "text": "is able to make a generalization to the effect that any pattern similar to (3) should invoke the same response as (3) . So, for example, when shown (but not taught) the sentence (4) We acknowledge receipt of your memo.", |
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 117, |
| "text": "(3)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 178, |
| "end": 181, |
| "text": "(4)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graceful Degradation -'Best Guess'", |
| "sec_num": "2" |
| }, |
| { |
| "text": "the response of the target unit representing the example containing sentence (3) is reduced from 0.91 to 0.82 reflecting the slight dissimilarity between the two. However, where the network has been taught two similar sentences, for example the response of the target unit representing the example containing sentence (5), for example, drops from 0.84 to 0.36. This drop is more significant than that exhibited when sentence (4) was shown to the network as sentence (7) differs from two sentences by only one word. Thus the network is 'hedging its bets' by activating both of the target units corresponding to the examples containing sentences (5) and (6) respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graceful Degradation -'Best Guess'", |
| "sec_num": "2" |
| }, |
| { |
| "text": "It can also be shown that the network is able to reflect differences in sentence length in the responses it makes. Table 1 shows the responses produced by target units corresponding to the examples containing sentences (8) and (9) when subsequently shown sentences (8) and (9) which were contained in the training set and sentence (10) which was not. The responses to sentence (10) illustrate two points: first the network is attempting to select the most likely translation pair for a sentence which it has never seen before and is consequently making an informed guess (see previous experiment); and second there is a stronger response from the target unit corresponding to the example containing sentence (8) . This is because the lengths of sentences (8) and (10) differ by only one word whereas those for sentences (9) and (10) differ by two. ", |
| "cite_spans": [ |
| { |
| "start": 219, |
| "end": 222, |
| "text": "(8)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 227, |
| "end": 230, |
| "text": "(9)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 708, |
| "end": 711, |
| "text": "(8)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 755, |
| "end": 758, |
| "text": "(8)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 763, |
| "end": 767, |
| "text": "(10)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 820, |
| "end": 823, |
| "text": "(9)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 115, |
| "end": 122, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Length Effects", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this experiment, the original corpus is augmented with two extra occurrences of the sentence (11) we do not need these items at present. and the network re-trained. The post-learning responses to this sentence is shown in table 2 together with those for the sentence (12) we do not stock these items at present. which occurred only once in the corpus and the sentence (13) we do not have these items at present. which did not occur in the corpus at all. The table shows that although the previously unseen Table 2 Frequency effects upon example selection. sentence (13) differs from sentences (14) and (15) by a single word, the response representing the example containing source sentence (14) is more highly active because of sentence (16)'s higher frequency of occurrence. Although the network shows a bias in favour of the response unit representing example (11), it does not reflect the 3:1 relationship implied by the frequency with which the examples are presented to the network during training (see discussion in the summary below).", |
| "cite_spans": [ |
| { |
| "start": 596, |
| "end": 600, |
| "text": "(14)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 509, |
| "end": 516, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Frequency Effects", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The results above provide a simple illustration of the suitability of connectionist networks for the matching process required in EBMT. They show that a network may be taught a set of example translations and from these select the most appropriate translation for a previously unseen source sentence. The model used in the experiments above accounts for a number of the factors employed in existing measurements.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Results", |
| "sec_num": null |
| }, |
| { |
| "text": "In common with the approaches taken by Sato & Nagao [12] , Sumita et a/. [14] and Jones [6] , the connectionist model reflects a degree of confidence in the translation which it has selected (the reciprocal of which is referred to as a distance). As noted by Skousen [13] (p.81) connectionist networks do not produce probabilities which directly reflect the frequency at which a given response is produced because of 'interference' between weight changes made for different source/target pairs 6 . However, as a means of establishing the suitability of one translation over another, connectionism remains a useful paradigm to employ.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 56, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 73, |
| "end": 77, |
| "text": "[14]", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 88, |
| "end": 91, |
| "text": "[6]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 267, |
| "end": 271, |
| "text": "[13]", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 494, |
| "end": 495, |
| "text": "6", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Results", |
| "sec_num": null |
| }, |
| { |
| "text": "Despite the probability problem outlined above, it was shown in experiment 4 that the connectionist model also exhibits frequency effects. The bias illustrated by the data in table 2 6 This is a fundamental feature of connectionism which gives rise to emergent properties such as noise tolerance and multiple constraint satisfaction.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 175, |
| "end": 186, |
| "text": "table 2 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Summary of Results", |
| "sec_num": null |
| }, |
| { |
| "text": "is similar in nature to the frequency effect of the analogical modelling approach in that as the frequency of occurrence of a feature in the training set increases, so does the likelihood that its corresponding translation will be selected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Results", |
| "sec_num": null |
| }, |
| { |
| "text": "The connectionist approach is also sensitive to sentence length in a similar way to that employed in Sato & Nagao's [12] distance measurement. However, it is clear that using the architecture proposed above this length sensitivity relies upon the absolute positioning of words in the sentence. Indeed, the correct operation of the whole network is dependent upon this absolute positioning (see Conclusion section). The length effects shown by the model in its current form differ from those outlined in [1] in that equal favour is given to sentence length and qualitative similarity. This may be achieved in a connectionist model by the introduction of semantic data (see below) to provide a qualitative bias or by changes in the combination function to provide an explicit bias in favour of words occurring at the beginning of a sentence. Thus a previously unseen short sentence would carry almost as much weight as a similar long one of which it was a fragment.", |
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 120, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 503, |
| "end": 506, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Results", |
| "sec_num": null |
| }, |
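The suggested change to the combination function can be illustrated with a small sketch. The decay schedule (1/(1+position)) is an assumption chosen purely for illustration; the point is only that a front-loaded weighting lets a short sentence which is a prefix fragment of a longer example score almost as highly as the full example, whereas uniform weighting penalizes it in proportion to the missing length.

```python
def match_score(input_words, example_words, biased):
    """Fraction of the example matched at each position, with optional
    bias in favour of words occurring at the beginning of the sentence."""
    score = total = 0.0
    for pos, word in enumerate(example_words):
        weight = 1.0 / (1.0 + pos) if biased else 1.0   # assumed decay schedule
        total += weight
        if pos < len(input_words) and input_words[pos] == word:
            score += weight
    return score / total

example = "we cannot accept your conditions of payment".split()
fragment = example[:3]                                   # a short prefix fragment
uniform = match_score(fragment, example, biased=False)   # 3/7 of the example
front = match_score(fragment, example, biased=True)      # much closer to 1
```

Under uniform weighting the three-word fragment matches only 3/7 of the example; with the front-loaded combination function it captures roughly 70% of the total weight, carrying "almost as much weight" as the full sentence.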
| { |
| "text": "The architecture employed in the above experiments is structured to process data at word and clausal levels. In reality, however, the network is simply learning to map one pattern to another and consequently these patterns may be augmented with encoded syntactic features or semantic microfeatures like those used by McClelland & Kawamoto [7] (Ch.19) to enable the network to select translations based upon syntactic and semantic cues in a similar way to Sato & Nagao's [12] syntactic context and the semantic distance of Sumita et al. [14] .", |
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 342, |
| "text": "[7]", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 470, |
| "end": 474, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 536, |
| "end": 540, |
| "text": "[14]", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Results", |
| "sec_num": null |
| }, |
| { |
| "text": "A simple connectionist architecture for distance measurement has been proposed and the results presented above illustrate that the approach is capable of reproducing some of the desirable features of existing measurements. Connectionism is inherently empirical and its application to EBMT seems inevitable. Furthermore, the approach not only represents a shift away from inflexible rule-based rationalist techniques but also from conventional iterative computation with all the subsequent advantages that parallel computation has to offer 7 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": null |
| }, |
| { |
| "text": "However, there exists one major problem with the model described in this paper, which is its total reliance upon the absolute positioning of words in a sentence. For example it would fail to recognize any similarity between the following sentences (given that, say, sentence (19) was included in the training set and (20) was not):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": null |
| }, |
| { |
| "text": "(19) John walked through the door. (20) Mr. Smith walked through the door.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": null |
| }, |
| { |
| "text": "The system's reliance upon absolute positioning means that having been trained to recognize (19) and subsequently shown (20) it will attempt to match John with Mr., walked with Smith and so on. Thus the fact that the syntax of the two sentences is virtually identical would not be reflected by the system's response to the previously unseen sentence (20). A solution to this problem lies in the move from the spatial domain to the temporal domain. It is clear that language processing in humans whether it be spoken or written, comprehension or translation, is in the latter of the two domains as speech is obviously time based as is the scanning of the words on a written page. Having demonstrated that the application of the connectionist paradigm to the problem of EBMT shows potential it is now possible to look to more complex models [3] [2][4] [8] to address the outstanding problems faced by connectionist machine translation.", |
| "cite_spans": [ |
| { |
| "start": 839, |
| "end": 842, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 850, |
| "end": 853, |
| "text": "[8]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": null |
| }, |
| { |
| "text": "Environmental similarity is a measure of the syntactic context in which a translation unit may occur.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For example at the morphological level the outcome of the would be Det and at the clausal level the outcome of the dog would be NP.3 The connectionist paradigm is also referred to as Parallel Distributed Processing (PDP) or the use of neural networks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The words which may appear in position 7 of the sentence are encoded on units 180 to 209 of the network.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "An understanding of parallel computation is not required to appreciate this point. The only model we have of a fully operational translation system (the human brain) employs massive parallelism.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Repetitions processing using a Metric Space and the Angle of Similarity", |
| "authors": [ |
| { |
| "first": "Jeremy", |
| "middle": [ |
| "J" |
| ], |
| "last": "Carroll", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeremy J. Carroll. Repetitions processing using a Metric Space and the Angle of Similarity. Technical Report No. 90/3. Centre for Computational Linguistics, UMIST, Manchester, 1990.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Finite state automata and simple recurrent networks", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Cleeremans", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "Servan" |
| ], |
| "last": "Schreiber", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mcclelland", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Neural Computation", |
| "volume": "1", |
| "issue": "", |
| "pages": "372--381", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Cleeremans, D. Servan Schreiber, and J.L. McClelland. Finite state automata and simple recurrent networks. Neural Computation, 1:372-381, 1989.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Finding structure in time", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "L" |
| ], |
| "last": "Elman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Cognitive Science", |
| "volume": "14", |
| "issue": "", |
| "pages": "179--211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The problem of serial order: A neural network of sequence learning and recall", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Houghton", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Current Research in Natural Language Generation", |
| "volume": "", |
| "issue": "", |
| "pages": "287--320", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Houghton. The problem of serial order: A neural network of sequence learning and recall. In R. Dale, C. Melish, and N. Zock, editors, Current Research in Natural Language Generation, pages 287-320. London Academic Press, 1990.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "An Introduction to Machine Translation", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "J" |
| ], |
| "last": "Hutchins", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Somers", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W.J. Hutchins and H.L Somers. An Introduction to Machine Translation. Academic Press, 1992.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The Processing of Natural Language by Analogy with Specific Reference to Machine Translation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "B" |
| ], |
| "last": "Jones", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.B. Jones. The Processing of Natural Language by Analogy with Specific Reference to Machine Translation. PhD thesis, UMIST, Manchester, 1991.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Parallel Distributed Processing: Explorations in the Microstructure of Cognition", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mcclelland", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "E" |
| ], |
| "last": "Rumelhart", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.L. McClelland and D.E. Rumelhart. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models. MIT Press, 1986.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A Study of Recurrent Connectionist Architectures for Unsupervised Temporal Pattern Recognition. Master's thesis", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mclean", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I.J. McLean. A Study of Recurrent Connectionist Architectures for Unsupervised Temporal Pattern Recognition. Master's thesis, University of Manchester, 1991.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A Framework of Mechanical Translation between Japanese and English by Analogy Principle", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagao", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Artificial and Human Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "173--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Nagao. A Framework of Mechanical Translation between Japanese and English by Analogy Principle. In A. Elithorn, editor, Artificial and Human Intelligence, pages 173-180. Elsevier, 1984.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Parallel Distributed Processing: Explorations in the Microstructure of Cognition", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "E" |
| ], |
| "last": "Rumelhart", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mcclelland", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Foundations", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.E. Rumelhart and J.L. McClelland. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The textual knowledge bank: Design, construction and applications", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Sadler", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "International Workshop on Fundamental Research for the Future Generation of Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "17--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Sadler. The textual knowledge bank: Design, construction and applications. In International Workshop on Fundamental Research for the Future Generation of Natural Language Processing, pages 17-32. ATR Telephony Laboratories, 1991.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Towards memory based translation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sato", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagao", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "COLING 90 (Helsinki)", |
| "volume": "3", |
| "issue": "", |
| "pages": "247--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Sato and M. Nagao. Towards memory based translation. In COLING 90 (Helsinki), volume 3, pages 247-252, 1990.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Analogical Modeling of Language", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Skousen", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Skousen. Analogical Modeling of Language. Kluwer Academic Publishers, 1989.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Translating with Examples: A New Approach to Machine Translation", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Kohyama", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "The Third Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages. Linguistics Research Center", |
| "volume": "", |
| "issue": "", |
| "pages": "203--212", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Sumita, H. lida, and H. Kohyama. Translating with Examples: A New Approach to Machine Translation. In The Third Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages. Linguistics Research Center, University of Texas at Austin, pages 203-212, 1990.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "EBMT Architecture. The figure shows EBMT architecture with conventional terminology shown in parentheses.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "text": "Network Architecture.", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "text": "Some networks are unsupervised and do not require a target pattern to learn but this type of network is beyond the scope of this paper.e.g.{[the, cat, sat, on, the, mat], 1).", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "uris": null, |
| "text": "Weights indicating salient features. The figure shows the weights of the connections from units 180 through 194 (which encode words in the source sentences occupying the 7th position in those sentences) to the units encoding the translation pairs (1) and", |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "text": "The delivery time is 4 months.(6) The delivery time is 8 months.and subsequently shown a third(7)The delivery time is 7 months.", |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": "", |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |