| { |
| "paper_id": "2010", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:15:53.902753Z" |
| }, |
| "title": "Source Text Characteristics and Technical and Temporal Post-Editing Effort: What is Their Relationship?", |
| "authors": [ |
| { |
| "first": "Midori", |
| "middle": [], |
| "last": "Tatsumi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "midori.tatsumi2@mail.dcu.ie" |
| }, |
| { |
| "first": "Johann", |
| "middle": [], |
| "last": "Roturier", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "johann_roturier@symantec.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper focuses on the relationship between source text characteristics (ambiguity, complexity and style compliance) and machine-translation post-editing effort (both temporal and technical). Post-editing data is collected in a traditional translation environment and subsequently plotted against textual scores produced by a range of systems. Our findings show some strong correlation between ambiguity and complexity scores and technical post-editing effort, as well as moderate correlation between one of the style guide compliance scores and temporal postediting effort.", |
| "pdf_parse": { |
| "paper_id": "2010", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper focuses on the relationship between source text characteristics (ambiguity, complexity and style compliance) and machine-translation post-editing effort (both temporal and technical). Post-editing data is collected in a traditional translation environment and subsequently plotted against textual scores produced by a range of systems. Our findings show some strong correlation between ambiguity and complexity scores and technical post-editing effort, as well as moderate correlation between one of the style guide compliance scores and temporal postediting effort.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In the last few years, Machine-Translation postediting has clearly become mainstream with more and more language service providers offering this type of activity as part of their range of services. However, the Post-Editing (PE) work to be performed is not yet fully understood, as shown by the recent creation of a dedicated Post-Editing Service Level user group. 1 Besides, production-ready postediting environments are currently not optimized. 2 They tend to fall into two categories: recycled translation editors or native Machine Translation (MT) application clients. At best, the former connect to MT system(s) and retrieve raw translated strings, while the latter display system-specific information without necessarily interfacing with other systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the first type of post-editing environment, there is a clear lack of source-related knowledge for post-editors apart from a traditional fuzzy match metric, the presence of inline tags, and potential terminology hits. Everything else has to be \"computed\" implicitly by the post-editor on-the-fly when glancing at or reading the text; for example, is this a long sentence? Does it contain multiple clauses? Does it contain spell-checking errors?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "While adding extra information about a given translation unit may clutter the second type of postediting environment (especially if it is not configurable), Blanchon et al. (2009) report that finding \"good ways to compute scores reflecting the usefulness for post-edition of individual pretranslations of the text to translate\" is an open research issue. Such scores may indeed help posteditors prioritize their work especially when working under severe time constraints. Rather than simply sorting on the status of a given translation unit (for-review, raw, verified, etc\u2026), it may be useful to present scores (possibly using source text characteristics) to estimate how much time would have to be spent working on a particular segment (as long as a correlation exists between these source characteristics and the time spent post-editing).", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 179, |
| "text": "Blanchon et al. (2009)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Depending on their level of experience, posteditors may also prefer working on certain types of segments (short segments or segments that are not too complex), rather than working from a complete document from start to finish (which is less fre-quent nowadays with the fragmentation and parallelisation of translation tasks).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Several reasons exist for the lack of data on source text characteristics in post-editing environment:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Lack of tool interoperability: some values generated by one system may not be exportable, importable or visualizable. \u2022 Lack of system openness: it may be difficult to re-generate some of the values produced by specific systems (some of them may be proprietary and therefore inaccessible to users; some of them may not have APIs).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Lack of transparency: it may be decided that certain values generated by one system should not be presented to other stakeholders.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Some of these gaps, which have been described in Lewis et al. (2009) , suggest that more research work is required in identifying source text characteristics that can be linked to post-editing activity in order to make the post-editing task more efficient (and possibly enjoyable).", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 68, |
| "text": "Lewis et al. (2009)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we report the results of an analysis which aimed at exploring the relationship between source text characteristics and post-editing effort. We report findings on whether characteristics such as ambiguity, complexity and style guide compliance correlate with a traditional MT evaluation metric as well as post-editing time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of the paper is organized as follows: we briefly introduce related research in Section 2, and describe the methodology of our user study in Section 3. We present and discuss the results in Sections 4 and 5 respectively. Section 6 concludes and points out avenues for future research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Several strands of research are related to the present work. The first one concerns the identification of translatability indicators (or negative translatability indicators) and their impact on post-editing activity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In Underwood and Jongejan's implementation of a translatability index (2001), two sets of translatability indicators are used: a set of phenomena identified in others' work on translatability, \"including a) structural ambiguity caused by PPattachment, relative and other sub-clause attachment and multiple coordination b) compounds comprising 3 or more nouns, c) \"sentences\" with-out (finite) verbs, d) lexical ambiguity and e) sentence length (both very long and very short sentences)\", as well as a set of MT system-specific indicators. However, no empirical results have been published to indicate a potential correlation between the score computed from these phenomena and the subsequent post-editing effort.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This contrasts with O'Brien's study, who used an IT user guide translated into German by the IBM WebSphere MT engine (O'Brien, 2006) to measure the effect of CL rules on temporal, technical and cognitive post-editing effort (cf. Krings, 2001) , using professional translators. Her findings were that post-editing effort can be decreased by suppressing Negative Translatability Indicators (NTIs) from the source text. This study also found that the removal of some NTIs had a greater impact on post-editing effort than the removal of others.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 132, |
| "text": "(O'Brien, 2006)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 229, |
| "end": 242, |
| "text": "Krings, 2001)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Another strand of research concerns the modeling of translation recommendations, such as the approach proposed in (He et al., 2010a) . This method is based on a \"Support Vector Machine classifier using features from the SMT system, the TM and additional linguistic features to estimate whether the SMT output is better than the hit from the TM\" (and therefore easier to post-edit). Evaluation results for the English-French language pair will be presented in (He et al., 2010b) . This work differs from the present work because the linguistic features used are limited to \"source-side language model score and perplexity and a pseudo-source fuzzy match score\".", |
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 132, |
| "text": "(He et al., 2010a)", |
| "ref_id": null |
| }, |
| { |
| "start": 459, |
| "end": 477, |
| "text": "(He et al., 2010b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A final strand of research concerns the design of MT confidence estimation measures that should be useful in a TM environment, such as (Specia et al., 2009a) , by improving confidence measures for MT by training regression models to perform confidence estimation on scores assigned by posteditors. While the method described in this work has not been directly tested using post-editors, it has shown that its predicted quality estimate correlates better with human scores than referencebased MT evaluation metrics (Specia et al., 2009b) .", |
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 157, |
| "text": "(Specia et al., 2009a)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 514, |
| "end": 536, |
| "text": "(Specia et al., 2009b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The present study, which focuses on the English-Japanese language pair, does not try to predict whether some sentences are going to take less time to post-edit. Rather it tries to analyze the relationship between the post-editing effort and source text characteristics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The source text chosen for this study was extracted from a user manual of a software publisher (Symantec), consisting of 3,916 English words in 269 sentences, which was machine translated into Japanese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test set", |
| "sec_num": null |
| }, |
| { |
| "text": "The English source text was in XML format, and written according to the controlled language rules used at Symantec, though the possibility of having uncontrolled sentences cannot be ruled out.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test set", |
| "sec_num": null |
| }, |
| { |
| "text": "Machine translation was performed in three steps: 1) pre-processing by using pre-processing scripts, 2) translating using Systran version 6, and 3) postprocessing by using post-processing scripts. The pre-processing scripts included commands to make the source text more amenable to machine translation, such as protection of XML tags. The postprocessing scripts included mainly commands that perform repetitive editing in the target text including the deletion of unnecessary spaces and personal pronouns, correction of style and expressions, such as inappropriate endings and misuse of polite and non-polite forms, and replacement of punctuations, counters, and other lexical items that are constantly inappropriately translated and difficult to be controlled by user dictionaries. In using Systran, general dictionaries and Symantec's product-specific user dictionaries were activated to ensure customised translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test set", |
| "sec_num": null |
| }, |
| { |
| "text": "The MT output was post-edited by means of SDL Trados Translator's Workbench and TagEditor by nine Japanese professional translators; seven of them had experience in post-editing IT-related documentation, one in non-IT-related documentation, and one had no experience in post-editing. Participant post-editors were provided with brief PE guidelines that emphasized that PE should only be performed to make the MT output convey the correct meaning of the source text, and conform to Japanese grammar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Post-editing", |
| "sec_num": null |
| }, |
| { |
| "text": "The scoring of source text characteristics was performed using the following three software programs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring systems", |
| "sec_num": null |
| }, |
| { |
| "text": "Systran version 6 offers a function that measures the syntactic complexity and lexical ambiguity of the source sentences. These metrics are provided to help the authors of the documentation to produce source text well-suited for translation by Systran.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systran: complexity and ambiguity", |
| "sec_num": null |
| }, |
| { |
| "text": "The complexity metric takes into account a number of aspects of the source text, including \"the number of clauses, conjunctions, phrases in parentheses, prepositional phrases, sentence length, sentence type (question or declarative sentence) as well as multiple additional language-specific criteria\" (SYSTRAN: p.141), and calculates the scores for each sentence; the lowest score is 1, and the higher it becomes the more complex the sentence is.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systran: complexity and ambiguity", |
| "sec_num": null |
| }, |
| { |
| "text": "The ambiguity score is given based on the number of ambiguous words in a sentence. A word is considered ambiguous if it has a) multiple meanings, or b) multiple parts of speech, and the latter criteria has higher significance in Systran's scoring system. According to the user guide, a high ambiguity score \"reflects poor User Dictionary coverage\" and adding user dictionary entries help to reduce the ambiguity in most cases. (SYSTRAN: p.141).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Systran: complexity and ambiguity", |
| "sec_num": null |
| }, |
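As a purely hypothetical illustration of the ambiguity scoring described above (Systran's actual metric is proprietary, so the lexicon, weights, and function below are invented for the example), one could count words with multiple meanings or multiple parts of speech, weighting the part-of-speech case more heavily:

```python
# Hypothetical ambiguity scorer (illustration only; Systran's real algorithm
# is proprietary). Words with multiple parts of speech are weighted more
# heavily than words that merely have multiple senses, mirroring the
# description in the text. The tiny lexicon is invented for this sketch.
LEXICON = {
    # word: (number of senses, number of parts of speech)
    "run": (4, 2),     # noun and verb, several senses
    "file": (2, 2),    # noun and verb
    "the": (1, 1),
    "item": (1, 1),
    "delete": (1, 1),
}

def ambiguity_score(sentence: str, pos_weight: int = 2, sense_weight: int = 1) -> int:
    """Sum a per-word penalty: POS-ambiguous words count pos_weight,
    sense-ambiguous words count sense_weight, unambiguous words count 0."""
    score = 0
    for token in sentence.lower().split():
        senses, pos_tags = LEXICON.get(token.strip(".,"), (1, 1))
        if pos_tags > 1:
            score += pos_weight
        elif senses > 1:
            score += sense_weight
    return score
```

Words absent from the lexicon are treated as unambiguous, which is one reason the text notes that better user-dictionary coverage lowers the score.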
| { |
| "text": "After the Deadline is an open-source technology offered by Automattic Inc. We used the version of its API 3 available on August 10 th 2010 with no customization. It offers three language checking functions: spelling, style, and grammar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "After the Deadline: style", |
| "sec_num": null |
| }, |
| { |
| "text": "The spell checker finds misspellings as well as the words whose spelling is correct but possibly inappropriate in the context. The style checker reviews the document against Plain English 4 and detects complex phrases, passive voice, nominalisations, phrasal redundancy, etc. to help the author write clearly and concisely. The grammar checker spots repetition, disagreement of auxiliary verbs, disagreement of determiners, etc. to prevent common grammatical errors. 5 In the present study, only the style checker was employed for the following reasons. 1) The test set did not include any true spelling errors; the spell checker detected XML placeholders, legitimate IT terms, for example, \"Ctrl\", and 'misused' words that were appropriate in the specific context of our test set. 2) No grammatical errors were detected.", |
| "cite_spans": [ |
| { |
| "start": 467, |
| "end": 468, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "After the Deadline: style", |
| "sec_num": null |
| }, |
| { |
| "text": "acrolinx IQ supports controlled authoring by checking the document against a defined set of terms and rules to minimize ambiguity and promote consistency in English source content. We used the full set of controlled language rules used at Symantec. acrolinx IQ reviews the source text in terms of grammar and stylistic appropriateness (Bredenkamp et al., 2000) , and assigns flags to indicate the absolute number of detected problems, and scores, which is the normalized values of flags in relation to the sentence length. We employed the flag count for two reasons: 1) ease of analysis, since scores were distributed in a heavily skewed manner, as 84% of the sentences were problem free (score 0), and the rest was scattered in the range from 250 to 5,000, while even though the distribution of flags was also skewed, it was milder compared to the scores as flag counts fell in the range from only zero to four, 2) suitability for sentence level analysis, since the scoring mechanism is designed to be more appropriate for document level analysis. 6", |
| "cite_spans": [ |
| { |
| "start": 335, |
| "end": 360, |
| "text": "(Bredenkamp et al., 2000)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acrolinx IQ: grammar and style", |
| "sec_num": null |
| }, |
| { |
| "text": "We examined the relationship between these scores and the amount of PE effort from two aspects: technical and temporal, following Krings' three aspects of PE effort: technical, temporal, and cognitive (Krings, 2001) . We employed the textual difference between MT output and the post-edited product as a proxy for technical PE effort, and measured it using GTM (General Text Matcher) (Melamed et al., 2003 version 1.3 with exponent set at 1.2, which mildly penalizes the word order difference (Callison-Burch et al., 2007). 7 GTM was chosen among other auto-matic evaluation metrics as it proved to have higher correlation with Japanese PE speed than BLEU, NIST, and TER in a related study (Tatsumi, 2009) . As the Japanese writing system does not insert spaces to mark the boundary of words, the text was tokenised by means of Me-Cab. 8 Temporal PE effort is represented by the PE speed (words/minute); the word count was provided by acrolinx IQ and the time data was obtained by means of SDL Trados Translator's Workbench with a custom macro. GTM and PE speed data were obtained for each sentence.", |
| "cite_spans": [ |
| { |
| "start": 201, |
| "end": 215, |
| "text": "(Krings, 2001)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 384, |
| "end": 405, |
| "text": "(Melamed et al., 2003", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 690, |
| "end": 705, |
| "text": "(Tatsumi, 2009)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis method", |
| "sec_num": null |
| }, |
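To make the two effort measures concrete, here is a minimal sketch, not the study's actual pipeline: PE speed as source words per minute, and, as a simplified stand-in for GTM (which uses maximum bitext matching with a block-move penalty and an exponent of 1.2), a plain unigram F-measure between the raw MT output and its post-edited version. Whitespace tokenisation is assumed purely for illustration; the study tokenised Japanese with MeCab.

```python
# Minimal effort-proxy sketch (illustrative assumptions, not the study's code).
from collections import Counter

def pe_speed(source_word_count: int, edit_seconds: float) -> float:
    """Temporal effort proxy: source words post-edited per minute."""
    return source_word_count / (edit_seconds / 60.0)

def unigram_f1(mt_output: str, post_edited: str) -> float:
    """Crude technical-effort proxy: unigram F1 between MT output and its
    post-edit. Higher = fewer textual changes (analogous to a higher GTM
    score); real GTM additionally rewards contiguous matched runs."""
    mt, pe = Counter(mt_output.split()), Counter(post_edited.split())
    overlap = sum((mt & pe).values())  # multiset intersection of tokens
    if not mt or not pe or overlap == 0:
        return 0.0
    precision = overlap / sum(mt.values())
    recall = overlap / sum(pe.values())
    return 2 * precision * recall / (precision + recall)
```

Both values are computed per sentence, matching how GTM and PE speed data were collected in the study.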
| { |
| "text": "In analysing the results, we took into account the difference in sentence structures. All sentences in the test set were classified into three categories: simple sentence, complex/compound sentence, and incomplete sentence. Simple and complex/compound sentences were identified according to Leech's definition: a simple sentence contains only one clause, a compound sentence contains two or more clauses linked by coordination, such as 'and' and 'but', while a complex sentence contains one or more subordinate clauses (Leech, 2006) . Additionally, an incomplete sentence was defined for the purpose of this study: textual fragments consisting of words and phrases that cannot stand alone as a complete sentence. Examples of each category taken from the test corpus are shown below.", |
| "cite_spans": [ |
| { |
| "start": 519, |
| "end": 532, |
| "text": "(Leech, 2006)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis method", |
| "sec_num": null |
| }, |
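The study classified sentences by hand following Leech's definitions; the sketch below is only a hypothetical surface heuristic (the word lists and the finite-verb pattern are invented) that illustrates the three categories on the example sentences:

```python
# Crude, hypothetical sentence-structure heuristic (the study used manual
# classification). A sentence with no apparent finite verb is "incomplete";
# one containing a coordinator or subordinator is "complex/compound";
# anything else is "simple".
import re

COORDINATORS = {"and", "but", "or"}
SUBORDINATORS = {"that", "which", "who", "if", "when", "while", "because", "although"}
# Very partial finite-verb hint list, invented for this illustration.
FINITE_VERB_HINT = re.compile(
    r"\b(is|are|was|were|has|have|had|does|did|can|will|select|delete|put|click)\b",
    re.IGNORECASE,
)

def classify(sentence: str) -> str:
    tokens = [t.strip(".,:;").lower() for t in sentence.split()]
    if not FINITE_VERB_HINT.search(sentence):
        return "incomplete"          # fragment: no finite verb found
    if COORDINATORS & set(tokens) or SUBORDINATORS & set(tokens):
        return "complex/compound"    # clauses linked by (co|sub)ordination
    return "simple"
```

A real classifier would need a parser; this sketch merely mirrors the categories used in the analysis.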
| { |
| "text": "-Delete the item from the vault.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simple sentence:", |
| "sec_num": null |
| }, |
| { |
| "text": "-An envelope with a paperclip indicates an email with one or more attachments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Simple sentence:", |
| "sec_num": null |
| }, |
| { |
| "text": "-The shortcut is a direct link to the archived item, and it has the following icon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Compound sentence:", |
| "sec_num": null |
| }, |
| { |
| "text": "-Select the items that XXX is processing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complex sentence:", |
| "sec_num": null |
| }, |
| { |
| "text": "-Put the item in the Restored Items folder in the mailbox that is specified in the Settings dialog box.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complex sentence:", |
| "sec_num": null |
| }, |
| { |
| "text": "-File size -For a file system vault: -If there is more than one page of search results:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incomplete sentence:", |
| "sec_num": null |
| }, |
| { |
| "text": "The box plots in Figures 1 to 8 represent the distribution of average GTM scores or PE speed of nine post-editors by score categories. Although GTM scores can range from 0 to 1, average scores for individual post-editors all fell within the range of 0.4 to 1, thus the y-axes for GTM scores show only the applicable range. For PE speed, the average speed for post-editors fell within the range of 0 to 80 words/min.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 17, |
| "end": 31, |
| "text": "Figures 1 to 8", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incomplete sentence:", |
| "sec_num": null |
| }, |
| { |
| "text": "The white line in each box shows the median value among nine post-editors, and the box represents the range of distribution in the interquartile range (IQR, the range between 25th and 75th percentile), which shows approximately the middle 50% of the data. The horizontal lines above and below each box show the highest and the lowest values within the range of 1.5 IQR above and below the IQR respectively. Any values outside this range are shown by dots. The number in parentheses under each category indicates the number of observations found in the category. We excluded from the analysis the categories that have only one observation for statistical validity reasons. The Spearman correlation coefficient (Woods et al., 1986) is shown in the upper right corner in each figure. The Systran complexity scores have a clear negative relationship with the average GTM scores except for the score 1 category, while they have an indefinite relationship with the PE speed. The PE speed can be divided into three groups: the sentences in the score 1 category are slowest to post-edit, those in the score 2 to 5 categories are fastest, and the sentences in the score 6 to 11 categories are in the middle. This may partly be explained by the proportion of sentence structures in each score category; all 27 sentences in the score 1 category are incomplete sentences, and as the score increases, the proportion of simple sentences increases, and as the score increases further, the proportion of complex/compound sentences increases. Table 1 shows the overall average GTM and PE speed for all post-edited MT sentences by sentence structure. As can be seen, average GTM score is highest for simple sentences, and lowest for complex/compound sentences, while average PE speed is fastest for simple sentences and slowest for incomplete sentences.", |
| "cite_spans": [ |
| { |
| "start": 709, |
| "end": 729, |
| "text": "(Woods et al., 1986)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1526, |
| "end": 1533, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incomplete sentence:", |
| "sec_num": null |
| }, |
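The Spearman coefficient reported in each figure can be computed directly by ranking both variables (averaging ranks over ties) and taking Pearson's r on the ranks; this self-contained sketch avoids external dependencies, though scipy.stats.spearmanr gives the same result:

```python
# Spearman rank correlation, implemented from first principles:
# rank each variable (ties get the average rank), then compute
# Pearson's correlation on the two rank vectors.
def _ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it operates on ranks, the statistic captures any monotone relationship, not just a linear one, which suits ordinal score categories like those used here.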
| { |
| "text": "!\"#$! !\"#$!! !\"#$%&'(!\"#$\")*+! !\"#$! !!\"#!! Table 1 . The effect of sentence structures on GTM and PE speed", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Systran: complexity", |
| "sec_num": null |
| }, |
| { |
| "text": "Systran: ambiguity Figures 3 and 4 show the relationship between Systran ambiguity scores and the GTM scores and the PE speed, respectively. The Systran ambiguity scores, similar to the complexity scores, have a clear negative relationship with the GTM scores, and somewhat quadratic relationship with the PE speed. This, similar to the complexity scores, may have a relationship with the sentence structures; 92% of score 0 and 1 items are incomplete sentences, as the score becomes higher, the proportion of simple sentences gradually increases, and 96% of the items with score 10 or higher are complex/compound sentences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 34, |
| "text": "Figures 3 and 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "4.2", |
| "sec_num": null |
| }, |
| { |
| "text": "After the Deadline: style Figures 5 and 6 show the relationship between the After the Deadline style flag scores and the GTM scores and the PE speed, respectively. The After the Deadline style flag scores correlate somewhat negatively with both GTM scores and PE speed.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 26, |
| "end": 41, |
| "text": "Figures 5 and 6", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "4.3", |
| "sec_num": null |
| }, |
| { |
| "text": "One of the reasons for the relatively slow PE speed for the score zero category may again be the ratio of sentence structures in each category. Table 2 shows the proportion of sentences in each structure in each score category. As can be seen, the ratio of incomplete sentences is exceptionally high for the score 0 items, which might have slowed down PE for the sentences in this category. Table 2 . Distribution of sentence structure by score categories Figures 7 and 8 show the relationship between the acrolinx IQ flag scores and the GTM scores and the PE speed, respectively The biggest reason that over 80% of the sentences fall in the score zero category is the fact that the controlled authoring at Symantec is aided by acrolinx IQ, and in theory, all sentences should have been checked beforehand. Both the GTM scores and the PE speed have somewhat negative relationships with acrolinx IQ flag scores, though the distribution is small and the differences between the categories are rather small.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 144, |
| "end": 152, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 392, |
| "end": 399, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 457, |
| "end": 472, |
| "text": "Figures 7 and 8", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "4.3", |
| "sec_num": null |
| }, |
| { |
| "text": "Among all relationships examined, the strongest correlation can be observed between Systran complexity and ambiguity scores and the GTM scores, both of which are negative (Spearman correlation coefficient: !=-0.75 and !=-0.79 respectively). This relationship, however, may be related to the sentence lengths. Both Systran complexity and ambiguity scores have high correlations with the source sentence length (!=0.90 and !=0.87 respectively). Figure 9 shows the distribution of the average GTM scores for nine post-editors by sentence length categories. The sentences were categorised into groups according to the number of words contained: 1-5, 6-10, 11-15, 16-20, 21-25 , and over 25 words. As can be seen, the source sentence length and GTM scores have a clear negative correlation (! =-0.75). The After the Deadline and acrolinx IQ scores also show negative relationships with GTM scores (!=-0.61 and !=-0.54, respectively), though the evidence is moderate with the values for most categories close to the overall average. This may partly be because most of the sentences are categorised as 'error free' by these scoring systems, and a few are given a small number of flags, thus the difference in average GTM scores or PE speed were not large enough to be clearly seen.", |
| "cite_spans": [ |
| { |
| "start": 641, |
| "end": 671, |
| "text": "1-5, 6-10, 11-15, 16-20, 21-25", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 443, |
| "end": 451, |
| "text": "Figure 9", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
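The length binning used for Figure 9 (bin edges taken from the text: 1-5, 6-10, 11-15, 16-20, 21-25, and over 25 source words) can be sketched as a simple bucketing of per-sentence scores; the function names here are illustrative:

```python
# Bucket sentences by source word count and average a per-sentence score
# (e.g. a GTM score) within each bucket, mirroring the Figure 9 grouping.
from collections import defaultdict

def length_bin(word_count: int) -> str:
    """Map a word count to its bin label: '1-5', '6-10', ..., '21-25', '>25'."""
    if word_count > 25:
        return ">25"
    lo = ((word_count - 1) // 5) * 5 + 1
    return f"{lo}-{lo + 4}"

def mean_score_by_length(word_counts, scores):
    """Average the parallel `scores` list within each length bin."""
    buckets = defaultdict(list)
    for n, s in zip(word_counts, scores):
        buckets[length_bin(n)].append(s)
    return {b: sum(v) / len(v) for b, v in buckets.items()}
```

Plotting the resulting per-bin means against the bin order is what reveals the negative length/GTM relationship discussed above.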
| { |
| "text": "While the GTM scores had moderate to strong correlation with these scores, the PE speed did not show direct linear relationships with any of the tested scores except with acrolinx IQ (!=-0.59). This may suggest that, though we are more interested in predicting the amount of temporal PE effort, these scores are more capable in predicting the amount of technical PE effort. However, the GTM scores and the PE speed have been proven to have moderate correlation (Pearson correlation coefficient r=0.56) (Tatsumi, 2009) . A detailed an-alysis revealed that some of the variance between the two can be explained by taking into consideration other source text characteristics, including the aforementioned sentence structures, the document component parts (for example. procedural sentences are faster to post-edit than other types of sentences), and the presence or absence of user interface terms. In addition, we cannot ignore the post-editors' individual differences. We found that both within and between post-editor variance is much higher in terms of PE speed compared to the amount of technical PE effort. This means that the amount of textual changes made during PE is more or less similar within and between post-editors, while the time taken to make the changes varies greatly both within and between post-editors.", |
| "cite_spans": [ |
| { |
| "start": 502, |
| "end": 517, |
| "text": "(Tatsumi, 2009)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This work investigated the relationship between source text characteristics and technical and temporal post-editing effort for the English-Japanese language pair. Despite being limited to a small number of segments and one language pair, strong correlation was found between SYSTRAN's complexity and ambiguity scores and technical postediting effort (using GTM scores), as well as moderate correlation between acrolinx IQ scores and temporal post-editing effort. This work could be extended by looking at larger data sets, more varied types of sentences (from a controlled language compliance perspective), additional source text characteristics (such as those described in Section 5), additional language pairs and possibly other types of systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In terms of future work, we suggest conducting studies to investigate how post-editors would interact with these scores if they were presented in their post-editing environment-would they find them useful? Would their usage vary based on the posteditor's experience? As discussed in the previous section, PE speed varies from one post-editor to the next, so a specific category of post-editors may benefit more from these scores (for example, posteditors with little experience).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We feel it would also be worthwhile to investigate whether these characteristics could be included as features in a translation recommendation system, such as the one mentioned in Section 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally we would like to make some recommendations for developers: tools or systems gener-ating such scores should be designed in such a way that these values are leveraged by other systems. Existing standards (such as XLIFF) could easily accommodate such values using extension points to include application-specific information. Besides, future user interfaces should be flexible enough to allow users to display such scores in an intuitive manner, possibly using bookmarklets or extensions if working in a Web-based environment, or allowing for custom plug-ins to be easily created or extended if working in a desktop application scenario.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
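The XLIFF recommendation above can be illustrated with a short sketch. The namespace and attribute names below are invented for illustration (they are not part of the XLIFF specification or any real tool); the point is only that sentence-level scores can ride along in a standard file as namespaced extension attributes that other systems can read or ignore.

```python
# Hypothetical sketch: attaching source-text scores to an XLIFF trans-unit
# via a custom namespace. "urn:example:st-scores" and the attribute names
# are invented for this example.
import xml.etree.ElementTree as ET

NS = "urn:example:st-scores"  # hypothetical extension namespace
ET.register_namespace("stq", NS)

unit = ET.Element("trans-unit", {"id": "42"})
ET.SubElement(unit, "source").text = "Click the Scan button."
# Namespaced extension attributes carrying the scores discussed above.
unit.set(f"{{{NS}}}complexity", "0.31")
unit.set(f"{{{NS}}}ambiguity", "0.12")

print(ET.tostring(unit, encoding="unicode"))
```

A post-editing environment could then surface these attributes in its UI, while XLIFF tools that do not know the namespace would simply pass them through.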
| { |
| "text": "http://www.linkedin.com/groups?home=&gid=30564232 Some experimental research prototypes also exist (such as Caitra (http://tool.statmt.org/) or SECTra_w (http://eolss.imag.fr/xwiki/bin/view/Main/)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://service.afterthedeadline.com/ 4 http://www.plainlanguage.gov/ 5 http://www.afterthedeadline.com/features.slp", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.acrolinx.com/uploads/documents/ doc-center/acrolinxIQSuite1.0/Plug-inUserGuides/ EN/acrocheck%20for%20Word%20Plug-in%20User%20Guide.pdf7 While it has been reported that the smaller exponent results in a better correlation with human evaluation in terms of adequacy and the larger exponent results in a better correlation with human evaluation in terms of with fluency(Lin and Och, 2004), as a result of testing with different settings in the present study, it was found that exponent 1.2 had the highest correlation with PE speed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Developed by Kyoto University and NTT. Accessible from:http://mecab.sourceforge.net/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was made possible thanks to funding received from Enterprise Ireland and Symantec Corporation. The authors would also like to acknowledge the comments made by Sharon O'Brien on an earlier version of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Web Service Enabling Gradable Post-edition of Pre-translations Produced by Existing Translation Tools: Practical Use to Provide Highquality Translation of an Online Encyclopedia", |
| "authors": [ |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Herv\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Cong-Phap", |
| "middle": [], |
| "last": "Boitet", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Huynh", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of MT Summit XII 2009, Beyond Translation Memories: New Tools for Translators Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "20--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "acrolinx IQ: grammar and style References Blanchon, Herv\u00e9, Christian Boitet and Cong-Phap Huynh. 2009. A Web Service Enabling Gradable Post-edition of Pre-translations Produced by Existing Translation Tools: Practical Use to Provide High- quality Translation of an Online Encyclopedia. In Proceedings of MT Summit XII 2009, Beyond Translation Memories: New Tools for Translators Workshop. pp. 20-27. Ottawa, Ontario, Canada.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Looking for Errors: A Declarative Formalism for Resource-Adaptive Language Checker", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Bredenkamp", |
| "suffix": "" |
| }, |
| { |
| "first": "Berthold", |
| "middle": [], |
| "last": "Crysmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirela", |
| "middle": [], |
| "last": "Petrea", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 2nd International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "667--673", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bredenkamp, Andrew, Berthold Crysmann, and Mirela Petrea. 2000. Looking for Errors: A Declarative Formalism for Resource-Adaptive Language Checker. In Proceedings of the 2nd International Conference on Language Resources and Evaluation, pp. 667-673. Athens, Greece.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Meta-) Evaluation of Machine Translation", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Cameron", |
| "middle": [], |
| "last": "Fordyce", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| }, |
| { |
| "first": "Josh", |
| "middle": [], |
| "last": "Schroeder", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of The Second Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "136--158", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Callison-Burch, Chris, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) Evaluation of Machine Translation. In Proceedings of The Second Workshop on Statistical Machine Translation, pp. 136-158. Prague, 2007. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Bridging SMT and TM with Translation Recommendation", |
| "authors": [ |
| { |
| "first": "Yifan", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanjun", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Way", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL '10)", |
| "volume": "", |
| "issue": "", |
| "pages": "622--630", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He, Yifan, Yanjun Ma, Josef van Genabith and Andy Way. 2010. Bridging SMT and TM with Translation Recommendation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL '10), pp. 622-630. Uppsala, Sweden.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Repairing Texts: Empirical Investigations of Machine Translation Post-Editing Processes", |
| "authors": [ |
| { |
| "first": "Hans", |
| "middle": [ |
| "P" |
| ], |
| "last": "Krings", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Krings, Hans P. 2001. Repairing Texts: Empirical Investigations of Machine Translation Post-Editing Processes. The Kent State University Press, Kent, OH. Leech, Geoffrey. 2006. A Glossary of English Grammar. Edinburgh University Press Ltd., Edinburgh.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Supporting Flexibility and Awareness in Localisation Workflows", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Curran", |
| "suffix": "" |
| }, |
| { |
| "first": "Gavin", |
| "middle": [], |
| "last": "Doherty", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Feeney", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikiforos", |
| "middle": [], |
| "last": "Karamanis", |
| "suffix": "" |
| }, |
| { |
| "first": "Saturnino", |
| "middle": [], |
| "last": "Luz", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "LRC XIV \"Localisation in The Cloud\": The 14th Annual Internationalisation and Localisation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lewis, David, Stephen Curran, Gavin Doherty, Kevin Feeney, Nikiforos Karamanis and Saturnino Luz. 2009. Supporting Flexibility and Awareness in Localisation Workflows. In LRC XIV \"Localisation in The Cloud\": The 14th Annual Internationalisation and Localisation Conference, Limerick, Ireland.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "ORANGE: A Method for Evaluating Automatic Evaluation Metrics for Machine Translation", |
| "authors": [ |
| { |
| "first": "Chin-Yew", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz Josef", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of CoLing 2004: 20th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "501--507", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Chin-Yew and Franz Josef Och. 2004. ORANGE: A Method for Evaluating Automatic Evaluation Metrics for Machine Translation. In Proceedings of CoLing 2004: 20th International Conference on Computational Linguistics, University of Geneva, Switzerland, pp. 501-507.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Precision and Recall of Machine Translation", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Melamed", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Dan", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "P" |
| ], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of HLT-NAACL 2003: Conference combining Human Language Technology conference series and the North American Chapter of the Association for Computational Linguistics conference series", |
| "volume": "", |
| "issue": "", |
| "pages": "61--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Melamed, I. Dan, Ryan Green, and Joseph P. Turian. 2003. Precision and Recall of Machine Translation. In Proceedings of HLT-NAACL 2003: Conference combining Human Language Technology conference series and the North American Chapter of the Association for Computational Linguistics conference series, Edmonton, Canada, pp. 61-63.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Machine-Translatability and Post-Editing Effort: An Empirical Study Using Translog and Choice Network Analysis", |
| "authors": [ |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "O'brien", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O'Brien, Sharon. 2006. Machine-Translatability and Post-Editing Effort: An Empirical Study Using Translog and Choice Network Analysis. PhD Dissertation. Dublin City University.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Improving the Confidence of Machine Translation Quality Estimates", |
| "authors": [ |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Saunders", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Turchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhuoran", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Shawe-Taylor", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of MT Summit XII", |
| "volume": "", |
| "issue": "", |
| "pages": "136--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Specia, Lucia, Craig Saunders, Marco Turchi, Zhuoran Wang and John Shawe-Taylor. 2009a. Improving the Confidence of Machine Translation Quality Estimates. In Proceedings of MT Summit XII 2009, Ottawa, Ontario, Canada, pp. 136-143.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Estimating the Sentence-Level Quality of Machine Translation Systems", |
| "authors": [ |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Cancedda", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Dymetman", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Turchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Nello", |
| "middle": [], |
| "last": "Cristianini", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 13th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Specia, Lucia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009b. Estimating the Sentence-Level Quality of Machine Translation Systems. In Proceedings of the 13th", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Annual Conference of the European Association for Machine Translation (EAMT '09)", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "28--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Conference of the European Association for Machine Translation (EAMT '09), Barcelona, Spain, pp. 28-35.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "SYSTRAN SYSTRAN 6 Desktop User Guide. SYSTRAN", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "SYSTRAN SYSTRAN 6 Desktop User Guide. SYSTRAN.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Correlation between Automatic Evaluation Metric Scores, Post-Editing Speed, and some other Factors", |
| "authors": [ |
| { |
| "first": "Midori", |
| "middle": [], |
| "last": "Tatsumi", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of MT Summit XII 2009", |
| "volume": "", |
| "issue": "", |
| "pages": "332--339", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tatsumi, Midori. 2009. Correlation between Automatic Evaluation Metric Scores, Post-Editing Speed, and some other Factors. In Proceedings of MT Summit XII 2009, Ottawa, Ontario, Canada, pp. 332-339.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Evaluation of Machine Translation and its Evaluation", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [ |
| "P" |
| ], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [ |
| "Dan" |
| ], |
| "last": "Melamed", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of MT Summit IX", |
| "volume": "", |
| "issue": "", |
| "pages": "386--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Turian, Joseph P., Luke Shen, and I. Dan Melamed. 2003. Evaluation of Machine Translation and its Evaluation. In Proceedings of MT Summit IX, New Orleans, USA, pp. 386-393.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Translatability Checker: A Tool to Help Decide Whether to Use MT", |
| "authors": [ |
| { |
| "first": "Nancy", |
| "middle": [ |
| "L" |
| ], |
| "last": "Underwood", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Jongejan", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of MT Summit VII: Machine Translation in the Information Age", |
| "volume": "", |
| "issue": "", |
| "pages": "363--368", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Underwood, Nancy L. and Bart Jongejan. 2001. Translatability Checker: A Tool to Help Decide Whether to Use MT. In Proceedings of MT Summit VII: Machine Translation in the Information Age, ed. Bente Maegaard, Santiago de Compostela, Spain, pp. 363-368.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Statistics in Language Studies", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Woods", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Fletcher", |
| "suffix": "" |
| }, |
| { |
| "first": "Arthur", |
| "middle": [], |
| "last": "Hughes", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Woods, Anthony, Paul Fletcher, and Arthur Hughes. 1986. Statistics in Language Studies. Cambridge University Press, Cambridge.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "and 2 show the distribution of average GTM scores and PE speed of nine post-editors by Systran complexity score categories.", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF1": { |
| "text": "Systran complexity scores and GTM scores", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF2": { |
| "text": "Systran complexity scores and PE speed", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF3": { |
| "text": "Systran ambiguity scores and GTM scores", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF4": { |
| "text": "Systran ambiguity scores and PE speed", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF5": { |
| "text": "After the Deadline styles flag and GTM scoresFigure 6. After the Deadline styles flag and PE speed", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF6": { |
| "text": "acrolinx IQ flag and GTM scoresFigure 8. acrolinx IQ flag and PE speed", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF7": { |
| "text": "Source sentence length and GTM scores", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| } |
| } |
| } |
| } |