
# Introduction

Hindi is written in the Devanagari script, an abugida: an orthographic system whose basic unit consists of a consonant and an optional vowel diacritic, or a single vowel. Devanagari is fairly regular, but a Hindi word's actual pronunciation can differ from what is literally written in the Devanagari script.[^1] For instance, the Hindi word $\langle$pep@R@$\rangle$ 'paper' contains three units, $\langle$pe$\rangle$, $\langle$p@$\rangle$, and $\langle$R@$\rangle$, corresponding to the pronounced forms /pe/, /p@/, and /R/. The second unit's inherent schwa is retained in the pronounced form, but the third unit's inherent schwa is deleted.

Predicting whether a schwa will be deleted from a word's orthographic form is generally difficult. Some reliable rules can be stated, e.g. 'delete any schwa at the end of the word', but these do not perform well enough for use in an application that requires schwa deletion, like a text-to-speech synthesis system.

This work approaches the problem of predicting schwa deletion in Hindi with machine learning techniques, achieving high accuracy with minimal human intervention. We also successfully apply our Hindi schwa deletion model to a related language, Punjabi. Our scripts for obtaining machine-readable versions of the Hindi and Punjabi pronunciation datasets are published to facilitate future comparisons.[^2]

Previous approaches to schwa deletion in Hindi broadly fall into two classes.

The first class is characterized by its use of rules given in the formalism of The Sound Pattern of English [@spe]. Drawing on analyses of schwa deletion produced by linguists in this framework [e.g., @ohala_1983], researchers built schwa deletion systems by implementing their rules. For example, this is a rule used by @narasimhan_schwa-deletion_2004, describing schwa deletion for words like $\langle$@Ng@li:$\rangle$:

::: center
V C C a C V $\rightarrow$ V C C C V

@ N g @ l i: $\rightarrow$ @ N g l i:
:::

Paraphrasing, this rule could be read, "if a schwa occurs with a vowel and two consonants to its left, and a consonant and a vowel to its right, it should be deleted." A typical system of this class would apply many of these rules to reach a word's output form, sometimes along with other information, like the set of allowable consonant clusters in Hindi. These systems were able to achieve fair accuracy (@narasimhan_schwa-deletion_2004 achieve 89%), but were ill-equipped to deal with cases that seemed to rely on detailed facts about Hindi morphology and prosody.
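The mechanics of applying such a context rule can be sketched as follows. The rule implemented is the V C C _ C V pattern paraphrased above; the simplified phone inventory and function names are our own illustrative assumptions, not those of @narasimhan_schwa-deletion_2004.

```python
# Sketch of one rule from a rule-based schwa deleter: delete a schwa that
# has a vowel and two consonants to its left and a consonant and a vowel
# to its right (V C C _ C V). The phone inventory is a simplified assumption.
VOWELS = {"@", "a", "aa", "i", "i:", "u", "u:", "e", "o", "ai", "au"}

def is_vowel(phone):
    return phone in VOWELS

def delete_schwas(phones):
    """Apply the V C C @ C V -> V C C C V deletion rule left to right."""
    out = list(phones)
    i = 3  # need three phones of left context
    while i < len(out) - 2:  # and two phones of right context
        if (out[i] == "@"
                and is_vowel(out[i - 3])
                and not is_vowel(out[i - 2]) and not is_vowel(out[i - 1])
                and not is_vowel(out[i + 1]) and is_vowel(out[i + 2])):
            del out[i]  # re-check the same index after deletion
        else:
            i += 1
    return out
```

On $\langle$@Ng@li:$\rangle$ this rule fires once, but it leaves word-final schwas (as in $\langle$pep@R@$\rangle$) untouched; a full system of this class chains many such rules.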

Figure 1: A representative example of the linguistic representations used by @tyson_prosodic_2009. Proceeding from top to bottom, a prosodic word (PrWd) consists of feet, syllables (which have weights), and syllable templates. {#fig:pstructure width="50%"}

Systems of the second class make use of linguistically richer representations of words. Typical of this class is the system of @tyson_prosodic_2009, which analyzes each word into a hierarchical phonological representation (see Figure 1). These same representations had been used in linguistic analyses: @pandey, for instance, as noted by @tyson_prosodic_2009, "claimed that schwas in Hindi cannot appear between a strong and weak rhyme[^3] within a prosodic foot." Systems using prosodic representations perform fairly well, achieving accuracies ranging from 86% to 94%, but prosody proved not to be a silver bullet; @tyson_prosodic_2009 remark, "it appears that schwa deletion is a phenomenon governed by not only prosodic information but by the observance of the phonotactics of consonant clusters."

There are other approaches to subsets of the schwa-deletion problem. One is the diachronic analysis applied by @choudhury, which achieved 99.80% word-level accuracy on native Sanskrit-derived terms.

Machine learning had not been applied to schwa deletion in Hindi prior to our work. @johny_brahmic_2018 used neural networks to model schwa deletion in Bengali (where the task is not a binary classification problem, as it is in Hindi) and achieved substantial gains in accuracy. We employ a similar approach for Hindi, but go further by applying gradient-boosting decision trees to the problem, which are more easily interpreted in linguistic terms.

Similar research has been undertaken for other Indo-Aryan languages that undergo schwa deletion, albeit to a lesser extent than in Hindi. @wasala-06, for example, proposed a rigorous rule-based G2P system for Sinhala.

# Method

We frame schwa deletion as a binary classification problem: orthographic schwas are either fully retained or fully deleted when spoken. Previous work has shown that even with rich linguistic representations of words, it is difficult to discover categorical rules that can predict schwa deletion. This led us to approach the problem with machine learning, which we felt would stand a better chance at attaining high performance.

We obtained training data from digitized dictionaries hosted by the University of Chicago's Digital Dictionaries of South Asia project. The Hindi data, comprising the original Devanagari orthography and a phonemic transcription, was parsed out of @mcgregor and @bahri and transcribed into an ASCII format. The Punjabi data was similarly processed from @singh. Table 1 gives an example entry from the @mcgregor Hindi dataset.

To find all instances of schwa retention and schwa deletion, we force-aligned the orthographic and phonemic representations of each dictionary entry using a linear-time algorithm. In cases where force-alignment failed due to idiosyncrasies in the source data (typos, OCR errors, etc.), we discarded the entire word. We provide statistics about our datasets in Table 2. We primarily used the dataset from @mcgregor to train our Hindi models due to its comprehensiveness and high quality.
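The alignment step can be illustrated with a minimal sketch. It assumes, as a simplification, that the only permitted mismatch between the two transcriptions is a deleted orthographic schwa; any other mismatch causes the word to be discarded, as described above. The function name and conventions are illustrative, not those of our published scripts.

```python
def align_schwas(orth, phon, schwa="a"):
    """Linear-time forced alignment of an orthographic transcription to a
    phonemic one, assuming the only permitted mismatch is a deleted
    orthographic schwa. Returns (index, retained) pairs for every
    orthographic schwa, or None when alignment fails (e.g. because of
    typos or OCR errors in the source entry)."""
    labels = []
    j = 0  # position in the phonemic sequence
    for i, sym in enumerate(orth):
        if j < len(phon) and sym == phon[j]:
            if sym == schwa:
                labels.append((i, True))   # schwa retained
            j += 1
        elif sym == schwa:
            labels.append((i, False))      # schwa deleted
        else:
            return None                    # unexplained mismatch: discard word
    return labels if j == len(phon) else None
```

On the Table 1 entry, aligning `a ~ k a rr aa h a tt a` against `a ~ k rr aa h a tt` labels the schwas at positions 0 and 7 as retained and those at positions 3 and 9 as deleted.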

:::: center
::: {#table:entry-example}

    **Devanagari**   akwAhV
  **Orthographic**   `a ~ k a rr aa h a tt a`
     **Phonemic**    `a ~ k      rr aa h a tt`

: An example entry from the Hindi training dataset.
:::
::::

:::: center
::: {#table:datasets}

  **Hindi Dict.**     **Entries**   **Schwas**   **Deletion Rate**
  McGregor                 34,952       36,183              52.94%
  Bahri                     9,769       14,082              49.41%
  Google                      847        1,098              56.28%
  **Punjabi Dict.**   **Entries**   **Schwas**   **Deletion Rate**
  Singh                    28,324       34,576              52.25%

: Statistics about the datasets used. The deletion rate is the percentage of schwas that are deleted in the phonemic representation. The Google dataset, taken from @johny_brahmic_2018, was not considered in our final results due to its small size and over-representation of proper nouns.
:::
::::

Each schwa instance was one example in our training set. The output was a boolean value indicating whether the schwa was retained. The input features were a one-hot encoding of a variable window of phones to the left ($c_{-n}, \dots, c_{-1}$) and right ($c_{+1}, \dots, c_{+m}$) of the schwa instance ($c_0$) under consideration. The length of the window on either side was treated as a hyperparameter and tuned. We also tested whether including phonological features (for vowels: height, backness, roundedness, and length; for consonants: voicing, aspiration, and place of articulation) of the adjacent graphemes affected the accuracy of the model.
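A minimal sketch of this featurization, assuming an illustrative phone inventory and window sizes (`window_features` and `PAD` are hypothetical names, not from our released code):

```python
PAD = "#"  # sentinel for context positions past the word edge (our own convention)

def window_features(phones, idx, n=2, m=2, inventory=None):
    """One-hot encode the n phones left and m phones right of the schwa at
    position idx. The inventory defaults to the phones of this word plus the
    pad symbol; in practice it would be fixed over the whole training set."""
    if inventory is None:
        inventory = sorted(set(phones)) + [PAD]
    context = []
    for offset in range(-n, m + 1):
        if offset == 0:
            continue  # skip the schwa itself (c_0)
        pos = idx + offset
        context.append(phones[pos] if 0 <= pos < len(phones) else PAD)
    # one block of len(inventory) indicators per context slot
    return [1 if phone == p else 0 for phone in context for p in inventory]
```

Widening $n$ and $m$ trades richer context against sparser feature vectors, which is one reason the window length is worth tuning as a hyperparameter.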

We trained three models on each dataset: logistic regression from scikit-learn, MLPClassifier (multilayer perceptron neural network) from scikit-learn, and XGBClassifier (gradient-boosting decision trees) from XGBoost. We varied the size of the window of adjacent phonemes and trained with and without phonological feature data.