{
"paper_id": "A94-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:55.272650Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Industrial applications of a reversible, string-based unification approach called Humor (High-speed Unification Morphology) are introduced in this paper. It has been used to create a variety of proofing tools and dictionaries, such as spelling checkers, hyphenators, lemmatizers, inflectional thesauri, intelligent bilingual dictionaries and, of course, full morphological analysis and synthesis. The first industrialized versions of all of the above modules work and are licensed by well-known software companies for the Hungarian versions of their products. Development of the same modules for other agglutinative languages (e.g. Turkish, Estonian) and other (highly) inflectional languages (e.g. Polish, French, German) has also begun.",
"pdf_parse": {
"paper_id": "A94-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Industrial applications of a reversible, string-based unification approach called Humor (High-speed Unification Morphology) are introduced in this paper. It has been used to create a variety of proofing tools and dictionaries, such as spelling checkers, hyphenators, lemmatizers, inflectional thesauri, intelligent bilingual dictionaries and, of course, full morphological analysis and synthesis. The first industrialized versions of all of the above modules work and are licensed by well-known software companies for the Hungarian versions of their products. Development of the same modules for other agglutinative languages (e.g. Turkish, Estonian) and other (highly) inflectional languages (e.g. Polish, French, German) has also begun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Supported Morphological Processes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supported Morphological Processes",
"sec_num": "1"
},
{
"text": "The morphological analyzer is the kernel module of the system: almost all of the applications derived from Humor are based on it. It provides all the possible segmentations of the word-form in question, covering inflection, derivation, prefixation and compounding, and creates the basic lexical forms of the stems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis/Synthesis and Lemmatizing",
"sec_num": "1.1"
},
{
"text": "Morphological synthesis is based on analysis, that is, all the possible morphemic combinations built by the core synthesis module are filtered by the analyzer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis/Synthesis and Lemmatizing",
"sec_num": "1.1"
},
{
"text": "The lemmatizer is a simplified version of the morphological analyzer. It provides all the possible lexical stems of a word-form, but does not provide inflectional or derivational information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis/Synthesis and Lemmatizing",
"sec_num": "1.1"
},
{
"text": "Spelling checking of agglutinative languages cannot be based on a simple wordlist-based method because of the extremely high number of possible word-forms in these languages. Algorithmic solutions, that is, morphology-based applications, are the only way to solve the problem (Solak and Oflazer 1992). The spelling checker based on our unification morphology method provides a logical answer as to whether or not the word-form in question can be constructed according to the current morphological descriptions of the system. In the case of a negative answer, a correction strategy starts to work. It is based on the orthographic, morphophonological, morphological and lexical properties of the words. This strategy also works in real corpus applications, where automatic corrections of some typical mistypings have to be made.",
"cite_spans": [
{
"start": 276,
"end": 300,
"text": "(Solak and Oflazer 1992)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spelling Checking and Correction",
"sec_num": "1.2"
},
{
"text": "There are languages in which 100% correct hyphenation cannot be achieved without exact morphological segmentation of the words. Hungarian is a language of this type: boundaries between prefixes and stems, or between the components of compounds, override the main hyphenation rules, which cover around 85% of the hyphenation points. Our unification-based hyphenator guarantees, in principle, perfect hyphenation (including the critical Hungarian hyphenation of long double consonants, where new letters have to be inserted when a word is hyphenated).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyphenation",
"sec_num": "1.3"
},
{
"text": "Besides the well-known types of applications described above, there are two new tools based on the same strategy: the inflectional thesaurus called Helyette (Prószéky and Tihanyi 1993), and the series of intelligent bilingual dictionaries called MoBiDic (MorphoLogic's Bi-lingual Dictionary). Both are dictionaries with morphological knowledge: Helyette is monolingual, while MoBiDic, as its name suggests, is bilingual. Having analyzed the input word, both systems look for the lemma in the main dictionary. The inflectional thesaurus stores the information encoded in the analyzed affixes and adds it to the synonym chosen by the user. The morphological synthesis module starts to work here and provides the user with the adequate inflected form of the word in question. This procedure is of great importance in the case of highly inflectional languages.",
"cite_spans": [
{
"start": 157,
"end": 184,
"text": "(Prószéky and Tihanyi 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mono- and Bi-lingual Dictionaries",
"sec_num": "1.4"
},
{
"text": "Humor unification morphology systems have been fully implemented for Hungarian for several purposes: lemmatizers, hyphenators, and spelling checkers and correctors (called HelyesLem, Helyesel and Helyes-e?, respectively) have been built into several word-processing and full-text retrieval systems. They are available either as part of Microsoft Word for Windows, Works, Excel, Lotus 1-2-3 and AmiPro, Aldus PageMaker, WordPerfect, etc., or in stand-alone form for DOS, Windows and Macintosh; Microsoft and Lotus have licensed these proofing tool packages for all of their localized Hungarian products. The same packages for Polish, Turkish, German and French are under development. The whole software package is written in standard C using C++-like objects. It runs on any platform where a C compiler is available (up to now, DOS, Windows, OS/2, UNIX and Macintosh environments have been tested).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "2"
},
{
"text": "The Hungarian morphological analyzer, which is the largest and most precise implementation, needs around 100 Kbytes of core memory and 600 Kbytes of disk space for spell-checking and hyphenation (plus 300 Kbytes for full analysis and synthesis). The stem dictionary contains more than 90,000 stems, which cover all (approx. 70,000) lexemes of the Concise Explanatory Dictionary of the Hungarian Language. The suffix dictionaries contain all the inflectional suffixes and the productive derivational morphemes of present-day Hungarian. With the help of these dictionaries, Humor is able to analyze and/or generate around 2,000,000,000 well-formed Hungarian word-forms. Its speed is between 50 and 100 words/s on an average 40 MHz 386 machine. The whole system can be tuned (even by the end-users) according to the speed requirements: the required RAM size can be between 50 and 900 Kbytes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "2"
},
{
"text": "The synonym system of Helyette contains 40,000 headwords. The first version of the inflectional thesaurus Helyette needs 1.6 Mbytes of disk space and runs under MS-Windows. The sizes of the MoBiDic packages vary depending on the terminological collection applied; e.g., the Hungarian-English Business Dictionary needs 1.8 Mbytes of space (its language-specific, as opposed to application-specific, parts need not be duplicated if other vocabularies also require Hungarian and/or English).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "2"
},
{
"text": "Humor-based lemmatizers support free-text search in Verity's Topic and in Oracle, and they are used by the lexicographers of the Institute of Linguistics of the Hungarian Academy of Sciences in their everyday work. In particular, the corpus used in the creation of the Historical Dictionary of Hungarian has been lemmatized by tools based on our unification morphology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "2"
},
{
"text": "Numerous versions of other Humor-based applications run under DOS, OS/2 and UNIX and on Macintosh systems; for OEM partners there is a well-defined API to Humor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "2"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Fast Morphological Analyzer for Lemmatizing Corpora of Agglutinative Languages",
"authors": [
{
"first": "G",
"middle": [],
"last": "Prószéky",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Tihanyi",
"suffix": ""
}
],
"year": 1992,
"venue": "Papers in Computational Lexicography (COMPLEX 92). Linguistics Institute, Budapest",
"volume": "",
"issue": "",
"pages": "265--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prószéky, G., Tihanyi, L. A Fast Morphological Analyzer for Lemmatizing Corpora of Agglutinative Languages. In: Kiefer, F., Kiss, G. and Pajzs, J. (eds.) Papers in Computational Lexicography (COMPLEX 92). Linguistics Institute, Budapest: 265-278. (1992) Prószéky, G., Tihanyi, L. Helyette: Inflectional Thesaurus for Agglutinative Languages. Proceedings of the 6th Conference of EACL, Utrecht: 473. (1993)",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Parsing Agglutinative Word Structures and Its Application to Spelling Checking for Turkish",
"authors": [
{
"first": "A",
"middle": [],
"last": "Solak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Oflazer",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of COLING-92, Nantes",
"volume": "",
"issue": "",
"pages": "39--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Solak, A. and K. Oflazer. Parsing Agglutinative Word Structures and Its Application to Spelling Checking for Turkish. Proceedings of COLING-92, Nantes: 39-45. (1992)",
"links": null
}
},
"ref_entries": {}
}
}