| { |
| "paper_id": "D13-1011", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:43:25.012957Z" |
| }, |
| "title": "Learning to Freestyle: Hip Hop Challenge-Response Induction via Transduction Rule Segmentation", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Karteek", |
| "middle": [], |
| "last": "Addanki", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Saers", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Meriem", |
| "middle": [], |
| "last": "Beloucif", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a novel model, Freestyle, that learns to improvise rhyming and fluent responses upon being challenged with a line of hip hop lyrics, by combining both bottomup token based rule induction and top-down rule segmentation strategies to learn a stochastic transduction grammar that simultaneously learns both phrasing and rhyming associations. In this attack on the woefully under-explored natural language genre of music lyrics, we exploit a strictly unsupervised transduction grammar induction approach. Our task is particularly ambitious in that no use of any a priori linguistic or phonetic information is allowed, even though the domain of hip hop lyrics is particularly noisy and unstructured. We evaluate the performance of the learned model against a model learned only using the more conventional bottom-up token based rule induction, and demonstrate the superiority of our combined token based and rule segmentation induction method toward generating higher quality improvised responses, measured on fluency and rhyming criteria as judged by human evaluators. To highlight some of the inherent challenges in adapting other algorithms to this novel task, we also compare the quality of the responses generated by our model to those generated by an out-ofthe-box phrase based SMT system. We tackle the challenge of selecting appropriate training data for our task via a dedicated rhyme scheme detection module, which is also acquired via unsupervised learning and report improved quality of the generated responses. Finally, we report results with Maghrebi French hip hop lyrics indicating that our model performs surprisingly well with no special adaptation to other languages.", |
| "pdf_parse": { |
| "paper_id": "D13-1011", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a novel model, Freestyle, that learns to improvise rhyming and fluent responses upon being challenged with a line of hip hop lyrics, by combining both bottomup token based rule induction and top-down rule segmentation strategies to learn a stochastic transduction grammar that simultaneously learns both phrasing and rhyming associations. In this attack on the woefully under-explored natural language genre of music lyrics, we exploit a strictly unsupervised transduction grammar induction approach. Our task is particularly ambitious in that no use of any a priori linguistic or phonetic information is allowed, even though the domain of hip hop lyrics is particularly noisy and unstructured. We evaluate the performance of the learned model against a model learned only using the more conventional bottom-up token based rule induction, and demonstrate the superiority of our combined token based and rule segmentation induction method toward generating higher quality improvised responses, measured on fluency and rhyming criteria as judged by human evaluators. To highlight some of the inherent challenges in adapting other algorithms to this novel task, we also compare the quality of the responses generated by our model to those generated by an out-ofthe-box phrase based SMT system. We tackle the challenge of selecting appropriate training data for our task via a dedicated rhyme scheme detection module, which is also acquired via unsupervised learning and report improved quality of the generated responses. Finally, we report results with Maghrebi French hip hop lyrics indicating that our model performs surprisingly well with no special adaptation to other languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The genre of lyrics in music has been severely understudied from the perspective of computational linguistics despite being a form of language that has perhaps had the most impact across almost all human cultures. With the motivation of spurring further research in this genre, we apply stochastic transduction grammar induction algorithms to address some of the modeling issues in song lyrics. An ideal starting point for this investigation is hip hop, a genre that emphasizes rapping, spoken or chanted rhyming lyrics against strong beats or simple melodies. Hip hop lyrics, in contrast to poetry and other genres of music, present a significant number of challenges for learning as it lacks well-defined structure in terms of rhyme scheme, meter, or overall meaning making it an interesting genre to bring to light some of the less studied modeling issues.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The domain of hip hop lyrics is particularly unstructured when compared to classical poetry, a domain on which statistical methods have been applied in the past. Hip hop lyrics are unstructured in the sense that a very high degree of variation is permitted in the meter of the lyrics, and large amounts of colloquial vocabulary and slang from the subculture are employed. The variance in the permitted meter makes it hard to make any assumptions about the stress patterns of verses in order to identify the rhyming words used when generating output. The broad range of unorthodox vocabulary used in hip hop make it difficult to use off-the-shelf NLP tools for doing phonological and/or morphological analysis. These problems are further exacerbated by differences in intonation of the same word and lack of robust transcription (Liberman, 2010) .", |
| "cite_spans": [ |
| { |
| "start": 828, |
| "end": 844, |
| "text": "(Liberman, 2010)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We argue that stochastic transduction grammars, 1 given their success in the area of machine translation and efficient unsupervised learning algorithms, are ideal for capturing the structural relationship between lyrics. Hence, our Freestyle system models the problem of improvising a rhyming response given any hip hop lyric challenge as transducing a challenge line into a rhyming response. We use a stochastic transduction grammar induced in a completely unsupervised fashion using a combination of token based rule induction and segmenting as the underlying model to fully-automatically learn a challenge-response system and compare its performance against a simpler token based transduction grammar model. Both our models are completely unsupervised and use no prior phonetic or linguistic knowledge whatsoever despite the highly unstructured and noisy domain.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 49, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We believe that the challenge-response system based on an interpolated combination of token based rule induction and rule segmenting transduction grammars will generate more fluent and rhyming responses compared to one based on token based transduction grammars models. This is based on the observation that token based transduction grammars suffer from a lack of fluency; a consequence of the degree of expressivity they permit. Therefore, as a principal part of our investigation we compare the quality of responses generated using a combination of token based rule induction and top-down rule segmenting transduction grammars to those generated by pure token based transduction grammars.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We also hypothesize that in order to generate fluent and rhyming responses, it is not sufficient to train the transduction grammars on all adjacent lines of a hip hop verse. Therefore, we propose a data selection scheme using a rhyme scheme detector acquired through unsupervised learning to generate the training data for the challenge-response systems. The rhyme scheme detector segments each verse of a hip hop song into stanzas and identifies the lines in each stanza that rhyme with each other which are then added as training instances. We demonstrate the superiority of our training data selection method by comparing the quality of the responses generated by the models trained on data selected with and without 1 Also known in SMT as \"synchronous grammars\". using the rhyme scheme detector.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Unlike conventional spoken and written language, disfluencies and backing vocals 2 occur very frequently in the domain of hip hop lyrics which affect the performance of NLP models designed for processing well-formed sentences. We propose two strategies to mitigate the effect of disfluencies on our model performance and compare their efficacy using human evaluations. Finally, in order to illustrate the challenges faced by other NLP algorithms, we contrast the performance of our model against a conventional, widely used phrase-based SMT model. A brief terminological note: \"stanza\" and \"verse\" are frequently confused and sometimes conflated. Worse yet, their usage for song lyrics is often contradictory to that for poetry. To avoid ambiguity we consistently follow these technical definitions for segments in decreasing size of granularity:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "verse a large unit of a song's lyrics. A song typically contains several verses interspersed with choruses. In the present work, we do not differentiate choruses from verses. In song lyrics, a verse is most commonly represented as a separate paragraph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "stanza a segment within a verse which has a meter and rhyme scheme. Stanzas often consist of 2, 3, or 4 lines, but stanzas of more lines are also common. Particularly in hip hop, a single verse often contains many stanzas with different rhyme schemes and meters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "line a segment within a stanza consisting of a single line. In poetry, strictly speaking this would be called a \"verse\", which however conflicts with the conventional use of \"verse\" in song lyrics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In Section 2, we discuss some of the previous work that applies statistical NLP methods to less conventional domains and problems. We describe our experimental conditions in Section 3. We compare the performance of token and segment based transduction grammar models in Section 4. We compare our data selection schemes and disfluency handling strategies in Sections 5 and 6. Finally, in Section 7 we describe some preliminary results obtained using our approach on improvising hip hop responses in French and conclude in Section 8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although a few attempts have been made to apply statistical NLP learning methods to unconventional domains, Freestyle is among the first to tackle the genre of hip hop lyrics Wu et al., 2013a,b) . Our preliminary work suggested the need for further research to identify models that capture the correct generalizations to be able to generate fluent and rhyming responses. As a step towards this direction, we contrast the performance of interpolated bottom-up token based rule induction and top-down segmenting transduction grammar models and token based transduction grammar models. We briefly describe some of the past work in statistical NLP on unconventional domains below.", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 194, |
| "text": "Wu et al., 2013a,b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Most of the past work either uses some form of prior linguistic knowledge or enforces harsher constraints such as set number of words in a line, or a set meter which are warranted by more structured domains such as poetry. However, in hip hop lyrics it is hard to make any linguistic or structural assumptions. For example, words such as sho, flo, holla which frequently appear in the lyrics are not part of any standard lexicon and hip hop does not require a set number of syllables in a line, unlike poems. Also, surprising and unlikely rhymes in hip hop are frequently achieved via intonation and assonance, making it hard to apply prior phonological constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A phrase based SMT system was trained to \"translate\" the first line of a Chinese couplet or duilian into the second by Jiang and Zhou (2008) . The most suitable next line was selected by applying linguistic constraints to the n best output of the SMT system. However in contrast to Chinese couplets, which adhere to strict rules requiring, for example, an identical number of characters in each line and one-to-one correspondence in their metrical length, the domain of hip hop lyrics is far more unstructured and there exists no clear constraint that would ensure fluent and rhyming responses to hip hop challenge lyrics. Barbieri et al. (2012) use controlled Markov processes to semi-automatically generate lyrics that satisfy the structural constraints of rhyme and meter.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 140, |
| "text": "Jiang and Zhou (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 623, |
| "end": 645, |
| "text": "Barbieri et al. (2012)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Tamil lyrics were automatically generated given a melody using conditional random fields by A. et al. (2009) . The lyrics were represented as a sequence of labels using the KNM system where K, N and M represented the long vowels, short vowels and consonants respectively. Genzel et al. (2010) used SMT in conjunction with stress patterns and rhymes found in a pronunciation dictionary to produce translations of poems. Although many constraints were applied in translating full verses of poems, it was challenging to satisfy all the constraints. Stress patterns were assigned to words given the meter of a line in Shakespeare's sonnets by Greene et al. (2010) , which were then combined with a language model to generate poems. Sonderegger (2011) attempted to infer the pronunciation of words in old English by identifying the rhyming patterns using graph theory. However, their heuristic of clustering words with similar IPA endings resulted in large clusters of false positives such as bloom and numb. A language-independent generative model for stanzas in poetry was proposed by Reddy and Knight (2011) via which they could discover rhyme schemes in French and English poetry.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 108, |
| "text": "(2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 272, |
| "end": 292, |
| "text": "Genzel et al. (2010)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 639, |
| "end": 659, |
| "text": "Greene et al. (2010)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 728, |
| "end": 746, |
| "text": "Sonderegger (2011)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1082, |
| "end": 1105, |
| "text": "Reddy and Knight (2011)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Before introducing our Freestyle models, we first detail our experimental assumptions and the evaluation scheme under which the responses generated by different models are compared against one another. We describe our training data as well as a phrasebased SMT (PBSMT) contrastive baseline. We also define the evaluation scheme used to compare the responses of different systems on criteria of fluency and rhyming.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We used freely available user generated hip hop lyrics on the Internet to provide training data for our experiments. We collected approximately 52,000 English hip hop song lyrics amounting to approximately 800Mb of raw HTML content. The data was cleaned by stripping HTML tags, metadata and normalized for special characters and case differences. The processed corpus contained 22 million tokens with 260,000 verses and 2.7 million lines of hip hop lyrics. As human evaluation using expert hip hop listeners is expensive, a small subset of 85 lines was chosen as the test set to provide challenges for comparing the quality of responses generated by different systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The performance of various Freestyle versions was evaluated on the task of generating a improvised fluent and rhyming response given a single line of a hip hop verse as a challenge. The output of all the systems on the test set was given to three independent frequent hip hop listeners for manual evaluation. They were asked to evaluate the system outputs according to fluency and the degree of rhyming. They were free to choose the tune to make the lyrics rhyme as the beats of the song were not used in the training data. Each evaluator was asked to score the response of each system on the criterion of fluency and rhyming as being good, acceptable or bad.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation scheme", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In order to evaluate the performance of an out-ofthe-box phrase-based SMT (PBSMT) system toward this novel task of generating rhyming and fluent responses, a standard Moses baseline (Koehn et al., 2007) was also trained in order to compare its performance with our transduction grammar induction model. A 4-gram language model which was trained on the entire training corpus using SRILM (Stolcke, 2002) was used to generate responses in conjunction with the phrase-based translation model. As no automatic quality evaluation metrics exist for hip hop responses analogous to BLEU for SMT, the model weights cannot be tuned in conventional ways such as MERT (Och, 2003) . Instead, a slightly higher than typical language model weight was empirically chosen using a small development set to produce fluent outputs.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 202, |
| "text": "(Koehn et al., 2007)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 387, |
| "end": 402, |
| "text": "(Stolcke, 2002)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 656, |
| "end": 667, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-based SMT baseline", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We compare the performance of transduction grammars induced via interpolated token based and rule segmenting (ISTG) versus token based transduction grammars (TG) on the task of generating a rhyming and fluent response to hip hop challenges. We use the framework of stochastic transduction grammars, specifically bracketing ITGs (inversion transduction grammars) (Wu, 1997) , as our translation model for \"transducing\" any given challenge into a rhyming and fluent response. Our choice is motivated by the significant amount of empirical evidence for the representational capacity of transduction grammars across a spectrum of natural language tasks such as textual entailment (Wu, 2006) , mining parallel sentences (Wu and Fung, 2005) and machine translation (Zens and Ney, 2003) . Further, existence of efficient learning algorithms (Saers et al., 2012; Saers and Wu, 2011) that make no language specific assumptions, make inversion transduction grammars a suitable framework for our modeling needs. Examples of lexical transduction rules can be seen in Tables 3 and 5. In addition, the grammar also includes structural transduction rules for the straight case A \u2192 [A A] and also the inverted case A \u2192 <A A>.", |
| "cite_spans": [ |
| { |
| "start": 362, |
| "end": 372, |
| "text": "(Wu, 1997)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 676, |
| "end": 686, |
| "text": "(Wu, 2006)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 715, |
| "end": 734, |
| "text": "(Wu and Fung, 2005)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 759, |
| "end": 779, |
| "text": "(Zens and Ney, 2003)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 834, |
| "end": 854, |
| "text": "(Saers et al., 2012;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 855, |
| "end": 874, |
| "text": "Saers and Wu, 2011)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpolated segmenting model vs. token based model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The degenerate case of ITGs are token based ITGs wherein each translation rule contains at most one token in input and output languages. Efficient induction algorithms with polynomial run time exist for token based ITGs and the expressivity they permit has been empirically determined to capture most of the word alignments that occur across natural languages. The parameters of the token based ITGs can be estimated using expectation maximization through an efficient dynamic programming algorithm in conjunction with beam pruning (Saers and Wu, 2011) . In contrast to token based ITGs, each rule in a segmental ITG grammar can contain more than one token in both input and output languages. In machine translation applications, segmental models produce translations that are more fluent as they can capture lexical knowledge at a phrasal level. However, only a handful of purely unsupervised algorithms exist for learning segmental ITGs under matched training and testing assumptions. Most other approaches in SMT use a variety of ad hoc heuristics for extracting segments from token alignments, justified purely by short term improvements in automatic MT evaluation metrics such as BLEU (Papineni et al., 2002) which cannot be transferred to our current task. Instead, we use a completely unsupervised learning algorithm for segmental ITGs that stays strictly within the transduction grammar optimization framework for both training and testing as proposed in Saers et al. (2013) . induce a phrasal inversion transduction grammar via interpolating the bottomup rule chunking approach proposed in Saers et al. (2012) with a top-down rule segmenting approach driven by a minimum description length objective function (Solomonoff, 1959; Rissanen, 1983 ) that trades off the maximum likelihood against model size. Saers et al. (2013) report improvements in BLEU score (Papineni et al., 2002) on their translation task. 
In our current approach instead of using a bottom-up rule chunking approach we use a simpler token based grammar instead. Given two grammars (G a and G b ) and an interpolation parameter \u03b1 the probability function of the interpolated grammar is given by:", |
| "cite_spans": [ |
| { |
| "start": 532, |
| "end": 552, |
| "text": "(Saers and Wu, 2011)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1190, |
| "end": 1213, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1476, |
| "end": 1482, |
| "text": "(2013)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1599, |
| "end": 1618, |
| "text": "Saers et al. (2012)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1718, |
| "end": 1736, |
| "text": "(Solomonoff, 1959;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1737, |
| "end": 1751, |
| "text": "Rissanen, 1983", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1826, |
| "end": 1832, |
| "text": "(2013)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1867, |
| "end": 1890, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token based vs. segmental ITGs", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "p a+b (r) = \u03b1p a (r) + (1 \u2212 \u03b1)p b (r)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token based vs. segmental ITGs", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "for all rules r in the union of the two rule sets, and where p a+b is the rule probability function of the combined grammar and p a and p b are the rule probability functions of G a and G b respectively. The pseudocode for the top-down rule segmenting algorithm is shown in 1. The algorithm uses the methods collect_biaffixes, eval_dl, sort_by_delta and make_segmentations. These methods collect all the biaffixes in an ITG, evaluate the difference in description length, sort candidates by these differences, and commit to a given set of candidates, respectively. The suitable interpolation parameter is chosen empirically based on the responses generated on a small development set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token based vs. segmental ITGs", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We compare the performance of inducing a token based ITG versus inducing a segmental ITG using interpolated bottom-up token based rule induction and top-down rule segmentation. To highlight some of the inherent challenges in adapting other algorithms to this novel task, we also compare the quality of the responses generated by our model to those generated by an off-the-shelf phrase based SMT system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token based vs. segmental ITGs", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We use our in-house ITG decoder implemented according to the algorithm mentioned in Wu (1996) for the generating responses to challenges by decoding with the trained transduction grammars. The decoder uses a CKY-style parsing algorithm (Cocke, Algorithm 1 Iterative rule segmenting learning driven by minimum description length.", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 93, |
| "text": "(1996)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 236, |
| "end": 243, |
| "text": "(Cocke,", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding heuristics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "1: \u03a6 \u25b7 The ITG being induced 2: repeat 3: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding heuristics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u03b4 sum \u2190 0 4: bs \u2190 collect_biaffixes(\u03a6) 5: b\u03b4 \u2190 [] 6: for all b \u2208 bs do 7: \u03b4 \u2190 eval_dl(b, \u03a6) 8: if \u03b4 < 0 then 9: b\u03b4 \u2190 [b\u03b4,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding heuristics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u03b4 \u2032 \u2190 eval_dl(b, \u03a6) 13: if \u03b4 \u2032 < 0 then 14: \u03a6 \u2190 make_segmentations(b, \u03a6) 15: \u03b4 sum \u2190 \u03b4 sum + \u03b4 \u2032 16: until \u03b4 sum \u2265 0 17: return \u03a6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding heuristics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "1969) with cube pruning (Chiang, 2007) . The decoder builds an efficient hypergraph structure which is then scored using the induced grammar. The trained transduction grammar model was decoded using the 4-gram language model and the model weights determined as described in 3.3.", |
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 38, |
| "text": "(Chiang, 2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding heuristics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In our decoding algorithm, we restrict the reordering to only be monotonic as we want to produce output that follows the same rhyming order of the challenge. Interleaved rhyming order is harder to evaluate without the larger context of the song and we do not address that problem in our current model. We also penalize singleton rules to produce responses of similar length as successive lines in a stanza are typically of similar length. Finally, we add a penalty to reflexive translation rules that map the same surface form to itself such as A \u2192 yo/yo. We obtain these rules with a high probability due to the presence of sentence pairs where both the input and output are identical strings as many stanzas in our data contain repeated chorus lines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding heuristics", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Results in Table 1 indicate that the ISTG outperforms the TG model towards the task of generating fluent and rhyming responses. On the criterion of fluency, Both TG and ISTG model perform significantly better than the PBSMT baseline. Upon inspecting the learned rules, we noticed that the ISTG models capture rhyming correspondences both at the token and segmental levels. Table 2 shows some examples of the transduction rules learned by ISTG grammar trained using rhyme scheme detection.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 18, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 373, |
| "end": 380, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results: Rule segmentation improves responses", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We now compare two data selection approaches for generating the training data for transduction grammar induction via a rhyme scheme detection module and choosing all adjacent lines in a verse. We also briefly describe the training of the rhyme scheme detection module and determine the efficacy of our data selection scheme by training the ISTG model, TG model and the PBSMT baseline on training data generated with and without employing the rhyme scheme detection module. As the rule segmenting approach was intended to improve the fluency as opposed to the rhyming nature of the responses, we only train the rule segmenting model on the randomly chosen subset of all adjacent lines in the verse. Further, adding adjacent lines as the training data to the segmenting model maintains the context of the responses generated thereby producing higher quality responses. The segmental transduction grammar model was combined with the token based transduction grammar model trained on data selected with and without using rhyme scheme detection model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data selection via rhyme scheme detection vs. adjacent lines", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Although our approach adapts a transduction grammar induction model to the problem of generating fluent and rhyming hip hop responses, it would be undesirable to train the model directly on all successive lines of the verses, as done by Jiang and Zhou (2008), due to the variance in hip hop rhyming patterns. For example, adding successive lines of a stanza that follows an ABAB rhyme scheme as training instances causes the transduction grammar to learn incorrect rhyme correspondences. The fact that a verse (usually represented as a separate paragraph) may contain multiple stanzas of varying length and rhyme scheme worsens this problem. Adding all possible pairs of lines in a verse as training examples not only introduces a great deal of noise but also blows up the size of the training data, since verses are typically long.", |
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 265, |
| "text": "Jiang and Zhou (2008)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rhyme scheme detection", |
| "sec_num": "5.1" |
| }, |
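The mismatch between naive adjacent-line pairing and an ABAB stanza can be made concrete with a small sketch (function names are ours, purely illustrative):

```python
def rhyme_pairs(scheme):
    # line indices that actually rhyme under a scheme such as "ABAB"
    return [(i, j) for i in range(len(scheme))
            for j in range(i + 1, len(scheme)) if scheme[i] == scheme[j]]

def adjacent_pairs(scheme):
    # line indices paired by the naive adjacent-lines heuristic
    return [(i, i + 1) for i in range(len(scheme) - 1)]
```

Under ABAB the true rhyme pairs are (0, 2) and (1, 3), while the adjacent-lines heuristic produces (0, 1), (1, 2), and (2, 3): every pair it adds is a non-rhyming correspondence.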
| { |
| "text": "We employ a rhyme scheme detection model (Addanki and Wu, 2013) in order to select training instances that are likely to rhyme. Lines belonging to the same stanza and marked as rhyming according to the rhyme scheme detection model are added to the training corpus. We believe that this data selection scheme improves the rhyming associations learned during transduction grammar induction, thereby biasing the model towards producing fluent and rhyming output.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rhyme scheme detection", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The rhyme scheme detection model is an HMM based generative model for a verse of hip hop lyrics, similar to Reddy and Knight (2011). However, owing to the lack of well-defined verse structure in hip hop, a number of hidden states corresponding to stanzas of varying length are used to automatically obtain a soft segmentation of the verse. Each state in the HMM corresponds to a stanza with a particular rhyme scheme, such as AA, ABAB, or AAAA, while the emissions correspond to the final words of the lines in the stanza. We restrict the maximum length of a stanza to four to keep the number of states tractable, and further use only states representing stanzas whose rhyme schemes cannot be partitioned into smaller schemes without losing a rhyme correspondence.", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 135, |
| "text": "Reddy and Knight (2011)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rhyme scheme detection", |
| "sec_num": "5.1" |
| }, |
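As a rough sketch of how such a state inventory could be enumerated (this is our reconstruction of the stated criterion, not the authors' code; all names are illustrative), one can keep only canonical rhyme schemes up to length four in which every cut point is crossed by at least one rhyme correspondence:

```python
from itertools import product

def canonical(scheme):
    # relabel letters in order of first occurrence, e.g. "BAB" -> "ABA"
    mapping, out = {}, []
    for ch in scheme:
        mapping.setdefault(ch, chr(ord("A") + len(mapping)))
        out.append(mapping[ch])
    return "".join(out)

def is_irreducible(scheme):
    # reducible = can be cut into two parts sharing no rhyme letter,
    # i.e. no rhyme correspondence crosses the cut
    for k in range(1, len(scheme)):
        if not set(scheme[:k]) & set(scheme[k:]):
            return False
    return True

def hmm_state_schemes(max_len=4):
    # enumerate canonical, irreducible schemes up to max_len lines
    states = set()
    for n in range(1, max_len + 1):
        for combo in product("ABCD"[:n], repeat=n):
            s = canonical("".join(combo))
            if is_irreducible(s):
                states.add(s)
    return sorted(states)
```

Under this criterion AABB is excluded (it splits into AA + BB without losing a correspondence), while AA, ABAB, and AAAA survive, matching the examples given in the text.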
| { |
| "text": "The parameters of the HMM are estimated with the EM algorithm (Devijer, 1985) on a corpus generated by taking the final word of each line of the hip hop lyrics. The lines from each stanza that rhyme with each other according to the Viterbi parse under the trained model are added as training instances for transduction grammar induction. As the source and target languages are identical, each selected pair generates two training instances: a challenge-response and a response-challenge pair.", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 78, |
| "text": "(Devijer, 1985)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rhyme scheme detection", |
| "sec_num": "5.1" |
| }, |
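A minimal sketch of this pair extraction step, assuming a stanza's lines and its rhyme scheme label from the Viterbi parse (the function name is ours, purely illustrative):

```python
def rhyming_pairs(lines, scheme):
    # lines: the lyric lines of one stanza
    # scheme: its Viterbi rhyme scheme label, e.g. "ABAB";
    # lines sharing a scheme letter are taken to rhyme
    pairs = []
    for i in range(len(scheme)):
        for j in range(i + 1, len(scheme)):
            if scheme[i] == scheme[j]:
                # source and target languages are identical, so each
                # rhyming pair yields two training instances
                pairs.append((lines[i], lines[j]))
                pairs.append((lines[j], lines[i]))
    return pairs
```

For an ABAB stanza of four lines this yields four training instances: lines 1 and 3 in both orders, and lines 2 and 4 in both orders.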
| { |
| "text": "The training data for the rhyme scheme detector was obtained by extracting the end-of-line tokens from each verse. However, upon inspecting the data we noticed that shorter lines in hip hop stanzas are typically joined with commas and represented as a single line of text, so all tokens immediately preceding commas were also added to the training corpus. We obtained a corpus of 4.2 million tokens corresponding to potential rhyming candidates, comprising around 153,000 unique token types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rhyme scheme detection", |
| "sec_num": "5.1" |
| }, |
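The extraction heuristic described above can be sketched as follows (an assumption-laden sketch; the exact tokenization used by the authors is not specified):

```python
def rhyme_candidates(line):
    # split a lyric line on commas (short lines are often joined with
    # commas into one line of text) and take the final token of each
    # piece as a potential rhyming candidate
    cands = []
    for piece in line.split(","):
        tokens = piece.split()
        if tokens:
            cands.append(tokens[-1].lower())
    return cands
```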
| { |
| "text": "We evaluated the performance of our rhyme scheme detector on the task of correctly labeling a given verse with rhyme schemes. As our model is completely unsupervised, we chose a random sample of 75 verses from our training data as our test set. Two native English speakers who are frequent hip hop listeners were asked to partition each verse into stanzas and assign a gold standard rhyme scheme. Precision and recall were aggregated over the Viterbi parse of each verse against this gold standard, and the f-score was calculated. The rhyme scheme detection module employed in our data selection obtained a precision of 35.81% and a recall of 57.25%, giving an f-score of 44.06%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rhyme scheme detection", |
| "sec_num": "5.1" |
| }, |
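The reported f-score is the standard harmonic mean of the aggregated precision and recall:

```python
def f_score(precision, recall):
    # balanced F1: harmonic mean of precision and recall (in percent)
    return 2 * precision * recall / (precision + recall)

# f_score(35.81, 57.25) -> 44.06 (to two decimal places)
```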
| { |
| "text": "Extracting a training corpus using the rhyme scheme detection module as described in Section 5.1 yielded around 600,000 training instances. Restricting these to lines that were both adjacent and labeled as rhyming by the rhyme scheme detector resulted in a training corpus of 200,000 instances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training data selection via rhyme scheme detection", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Considering all adjacent lines in a verse resulted in a corpus of over 5 million training instances. To ensure a fair comparison with models trained on data selected using rhyme scheme detection, we randomly chose 200,000 training instances from the generated corpus. The training corpus thus generated shared around 15% of its training instances with the corpus generated through our proposed data selection scheme.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training data selection via adjacent lines", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Results in Table 1 indicate that using the rhyme scheme detector for training data selection helps produce significantly more fluent responses compared to using adjacent lines. A possible explanation is that adding all adjacent lines as training instances introduces a great deal of noise into the model, which hurts the fluency of the generated responses. Also, the cumulative fraction of sentences labeled good or \u2265acceptable on the criterion of rhyming is larger when rhyme scheme detection was used to generate the training data (although the TG model trained on the corpus generated using adjacent lines produces a higher percentage of rhyming responses rated good). Given the significantly higher rate of response fluency when using rhyme scheme detection, we argue that using the rhyme scheme detector for data selection is beneficial. It is also interesting to note from Table 1 that ISTG+RS performs better than TG+RS, indicating that a transduction grammar induced by interpolating token based induction and rule segmentation produces better responses than a token based transduction grammar under both data selection schemes. Although the average fraction of responses rated good on fluency is slightly lower for ISTG+RS than for TG+RS (30.98% vs. 34.12%), the fraction rated \u2265acceptable is higher (61.18% vs. 57.64%). It is important to note that the fractions of sentences rated good and \u2265acceptable on rhyming are much larger for the ISTG+RS model. Although the fluency of the responses generated by PBSMT+RS drastically improves compared to PBSMT, it still lags behind the TG+RS and ISTG+RS models on both fluency and rhyming. The results in Table 1 confirm our hypothesis that off-the-shelf SMT systems are not guaranteed to be effective on our novel task. Table 3 shows some of the challenges and the corresponding responses of the PBSMT+RS, TG+RS and TG models. While the PBSMT+RS and TG+RS models generate responses exhibiting a high degree of fluency, the output of the TG model contains many spurious articles. It is interesting to note that TG+RS produces responses comparable to PBSMT+RS despite being a token based transduction grammar. However, PBSMT tends to produce responses that are too similar to the challenge. Moreover, the TG models produce responses that indeed rhyme better (shown in boldface in Table 3). In fact, TG tries to rhyme words not only at the end but also in the middle of lines, as our transduction grammar model captures structural associations more effectively than the phrase-based model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 18, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 1732, |
| "end": 1739, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 2511, |
| "end": 2518, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 2627, |
| "end": 2634, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results: Rhyme scheme detection helps", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this section, we compare the effect of two disfluency mitigation strategies on the quality of the responses generated by the PBSMT baseline and the token based transduction grammar model, with and without rhyme scheme detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Disfluency handling via disfluency correction and filtering", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Error analysis of our initial runs showed that a disturbingly high proportion of responses generated by our system contained disfluencies, with successive repetitions of words such as the and I. Upon inspecting the data, we noticed that the training lyrics themselves contained such disfluencies and backing vocal lines, amounting to 10% of our training data. We therefore compared two alternative strategies to tackle this problem. The first strategy filtered out all lines from our training corpus that contained such disfluencies. The second strategy applied a disfluency detection and correction algorithm (for example, the the the, which frequently occurred in the training corpus, was corrected to simply the). The PBSMT baseline and the TG model were trained on both the filtered and corrected versions of the training corpus, and the quality of the responses was compared.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Correction vs. filtering", |
| "sec_num": "6.1" |
| }, |
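A minimal version of the repetition-collapsing correction (our sketch of the idea described above, not the authors' exact algorithm):

```python
from itertools import groupby

def correct_disfluencies(line):
    # collapse runs of an immediately repeated token,
    # e.g. "the the the end" -> "the end"
    tokens = line.split()
    return " ".join(tok for tok, _ in groupby(tokens))
```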
| { |
| "text": "The results in Table 4 indicate that the disfluency correction strategy outperforms the filtering strategy for both the TG and TG+RS models. For the TG+RS model, disfluency correction yielded 34.12% responses rated good on fluency, while the filtering strategy yielded only 28.63%. Similarly, for the TG model, disfluency correction produced 21.8% responses with good fluency versus only 17.25% for the filtering strategy. The disfluency correction strategy also produces a higher fraction of responses with \u2265acceptable fluency than the filtering strategy for both the TG and TG+RS models. This result is not surprising, as harshly pruning the training corpus loses useful word association information necessary for rhyming. Surprisingly, for both the PBSMT and PBSMT+RS models, disfluency correction has a negative effect on the fluency of the responses, which still fall behind those of the TG and TG+RS models. As disfluency correction yields more fluent responses for the TG and TG+RS models, the results for the ISTG and ISTG+RS models in Table 1 were obtained using the disfluency correction strategy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 22, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 1051, |
| "end": 1059, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results: Disfluency correction helps", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We have begun to apply Freestyle to rap in languages other than English, taking advantage of the language independence and linguistics-light approach of our unsupervised transduction grammar induction methods. With no special adaptation, our transduction grammar based model performs surprisingly well, even with significantly smaller and noisier training data. These results across different languages are encouraging, as they can help identify truly language-independent modeling assumptions. We briefly describe our initial experiments on Maghrebi French hip hop lyrics below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maghrebi French hip hop", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We collected freely available French hip hop lyrics for approximately 1300 songs. About 85% of the songs were by Maghrebi French artists of Algerian, Moroccan, or Tunisian cultural background, while the rest were by artists from elsewhere in the Francophonie. As the large majority of songs are in Maghrebi French, the lyrics are sometimes interspersed with romanized Arabic, such as \"De la travers\u00e9e du d\u00e9sert au bon couscous de Y\u00e9ma\" (Y\u00e9ma means my mother). Some songs also contain Berber phrases, for instance \"a yemmi ino, a thizizwith\" (which means my son, a bee). Furthermore, some songs contain English phrases in the style of gangster rap, such as \"T'es game over, game over... Le son de Chicken wings\". As mentioned earlier, it is complexity like this that dissuaded us from making language specific assumptions in our model. We extracted the end-of-line words and obtained a corpus of 120,000 tokens corresponding to potential rhyming candidates, with around 29,000 unique token types, which was used as training data for the rhyme scheme detector module. For the transduction grammar induction, the training data contained about 47,000 sentence pairs selected using rhyme scheme detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "According to human evaluation by native French speakers who are frequent hip hop listeners, our transduction grammar based model generates responses rated good on the criteria of fluency and rhyming about 9.2% and 14.5% of the time, respectively. About 30.2% and 38% of the responses are rated \u2265acceptable. These numbers are encouraging given the noisy lyrics and the much smaller amount of training data. Some examples of challenge-response pairs and learned transduction rules in French are shown in Tables 5 and 6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 522, |
| "end": 536, |
| "text": "Tables 5 and 6", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "From Table 5, we can see that the responses generated by the system rhyme with the challenges. The first response is fluent, rhymes perfectly with the challenge, and is also semantically valid. In the second example, the model realizes a less common AABA rhyme scheme through its response. The response in the third example exhibits strong rhyming with the challenge, and both the challenge and the response contain related words such as souffrance, combat and d\u00e9cadence. Similarly, in the fourth example, the challenge and response contain semantically related tokens which also rhyme. These examples illustrate that our transduction grammar formalism, coupled with our rhyme scheme detection module, captures the necessary correspondences between lines of hip hop lyrics without assuming any language specific resources.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "We presented a new machine learning approach to improvising hip hop responses to challenge lyrics by inducing stochastic transduction grammars, and demonstrated that inducing the transduction rules by interpolating bottom-up token based rule induction and top-down rule segmentation strategies outperforms a token based baseline. We compared the performance of our Freestyle model against a widely used off-the-shelf phrase-based SMT model, showing that PBSMT falls short in tackling the noisy and highly unstructured domain of hip hop lyrics. We showed that the quality of responses improves when the training data for transduction grammar induction is selected using a rhyme scheme detector. We identified several domain related oddities, such as disfluencies and backing vocals, and compared strategies for alleviating their effects. We also reported results on Maghrebi French hip hop lyrics which indicate that our model works surprisingly well with no special adaptation for languages other than English. In the future, we plan to investigate alternative training data selection techniques, disfluency handling strategies, search heuristics, and novel transduction grammar induction models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Particularly the repetitive chants, exclamations, and interjections in hip hop \"hype man\" style backing vocals.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This material is based upon work supported in part by the Hong Kong Research Grants Council (RGC) research grants GRF620811, GRF621008, GRF612806; by the Defense Advanced Research Projects Agency (DARPA) under BOLT contract no. HR0011-12-C-0016, and GALE contract nos. HR0011-06-C-0022 and HR0011-06-C-0023; and by the European Union under the FP7 grant agreement no. 287658. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the RGC, EU, or DARPA.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Automatic generation of Tamil lyrics for melodies", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ananth Ramakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lalitha Devi", |
| "middle": [], |
| "last": "Sankar Kuppan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sobha", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Workshop on Computational Approaches to Linguistic Creativity (CALC-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ananth Ramakrishnan A., Sankar Kuppan, and Lalitha Devi Sobha. \"Automatic generation of Tamil lyrics for melodies.\" Workshop on Computa- tional Approaches to Linguistic Creativity (CALC-09).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Unsupervised rhyme scheme identification in hip hop lyrics using hidden Markov models", |
| "authors": [ |
| { |
| "first": "Karteek", |
| "middle": [], |
| "last": "Addanki", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "1st International Conference on Statistical Language and Speech Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karteek Addanki and Dekai Wu. \"Unsupervised rhyme scheme identification in hip hop lyrics using hidden Markov models.\" 1st International Conference on Sta- tistical Language and Speech Processing (SLSP 2013).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Markov constraints for generating lyrics with style", |
| "authors": [ |
| { |
| "first": "Gabriele", |
| "middle": [], |
| "last": "Barbieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Pachet", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirko Degli", |
| "middle": [], |
| "last": "Esposti", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "20th European Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabriele Barbieri, Fran\u00e7ois Pachet, Pierre Roy, and Mirko Degli Esposti. \"Markov constraints for gen- erating lyrics with style.\" 20th European Conference on Artificial Intelligence, (ECAI 2012). 2012.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Hierarchical phrase-based translation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Chiang. \"Hierarchical phrase-based translation.\" Computational Linguistics, 33(2), 2007.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Programming languages and their compilers: Preliminary notes", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Cocke", |
| "suffix": "" |
| } |
| ], |
| "year": 1969, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Cocke. Programming languages and their compil- ers: Preliminary notes. Courant Institute of Mathemat- ical Sciences, New York University, 1969.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Baum's forward-backward algorithm revisited", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "A" |
| ], |
| "last": "Devijer", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Pattern Recognition Letters", |
| "volume": "3", |
| "issue": "6", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P.A. Devijer. \"Baum's forward-backward algorithm re- visited.\" Pattern Recognition Letters, 3(6), 1985.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Poetic statistical machine translation: rhyme and meter", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Genzel", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP 2010). Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Genzel, J. Uszkoreit, and F. Och. \"Poetic statisti- cal machine translation: rhyme and meter.\" 2010 Con- ference on Empirical Methods in Natural Language Processing (EMNLP 2010). Association for Computa- tional Linguistics, 2010.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Automatic analysis of rhythmic poetry with applications to generation and translation", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Greene", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Bodrumlu", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP 2010). Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Greene, T. Bodrumlu, and K. Knight. \"Auto- matic analysis of rhythmic poetry with applications to generation and translation.\" 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010). Association for Computational Lin- guistics, 2010.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Generating Chinese couplets using a statistical MT approach", |
| "authors": [ |
| { |
| "first": "Long", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "22nd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Long Jiang and Ming Zhou. \"Generating Chinese couplets using a statistical MT approach.\" 22nd In- ternational Conference on Computational Linguistics (COLING 2008). 2008.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Moses: Open source toolkit for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Bertoldi", |
| "suffix": "" |
| }, |
| { |
| "first": "Brooke", |
| "middle": [], |
| "last": "Cowan", |
| "suffix": "" |
| }, |
| { |
| "first": "Wade", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Moran", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ondrej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Constantin", |
| "suffix": "" |
| }, |
| { |
| "first": "Evan", |
| "middle": [], |
| "last": "Herbst", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Interactive Poster and Demonstration Sessions of the 45th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. \"Moses: Open source toolkit for statistical machine translation.\" Interactive Poster and Demonstration Sessions of the 45th Annual Meeting of the Association for Computa- tional Linguistics (ACL 2007). June 2007.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Rap scholarship, rap meter, and the anthology of mondegreens", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Liberman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "2013--2019", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Liberman. \"Rap scholarship, rap meter, and the an- thology of mondegreens.\" http://languagelog.ldc. upenn.edu/nll/?p=2824, December 2010. Accessed: 2013-06-30.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Minimum error rate training in statistical machine translation", |
| "authors": [ |
| { |
| "first": "Franz Josef", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "41st Annual Meeting of the Association for Computational Linguistics (ACL-2003)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och. \"Minimum error rate training in sta- tistical machine translation.\" 41st Annual Meeting of the Association for Computational Linguistics (ACL- 2003). July 2003.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "40th Annual Meeting of the Association for Computational Linguistics (ACL-02)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. \"BLEU: a method for automatic evalu- ation of machine translation.\" 40th Annual Meeting of the Association for Computational Linguistics (ACL- 02). July 2002.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Unsupervised discovery of rhyme schemes", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL HLT 2011)", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Reddy and K. Knight. \"Unsupervised discovery of rhyme schemes.\" 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies (ACL HLT 2011), vol. 2. Association for Computational Linguistics, 2011.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A universal prior for integers and estimation by minimum description length", |
| "authors": [ |
| { |
| "first": "Jorma", |
| "middle": [], |
| "last": "Rissanen", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "The Annals of Statistics", |
| "volume": "11", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jorma Rissanen. \"A universal prior for integers and es- timation by minimum description length.\" The Annals of Statistics, 11(2), June 1983.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "From finite-state to inversion transductions: Toward unsupervised bilingual grammar induction", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Saers", |
| "suffix": "" |
| }, |
| { |
| "first": "Karteek", |
| "middle": [], |
| "last": "Addanki", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "24th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus Saers, Karteek Addanki, and Dekai Wu. \"From finite-state to inversion transductions: Toward un- supervised bilingual grammar induction.\" 24th In- ternational Conference on Computational Linguistics (COLING 2012). December 2012.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Combining top-down and bottom-up search for unsupervised induction of transduction grammars", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Saers", |
| "suffix": "" |
| }, |
| { |
| "first": "Karteek", |
| "middle": [], |
| "last": "Addanki", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Seventh Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-7)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus Saers, Karteek Addanki, and Dekai Wu. \"Combining top-down and bottom-up search for un- supervised induction of transduction grammars.\" Sev- enth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-7). June 2013.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Reestimation of reified rules in semiring parsing and biparsing", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Saers", |
| "suffix": "" |
| }, |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-5)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus Saers and Dekai Wu. \"Reestimation of reified rules in semiring parsing and biparsing.\" Fifth Work- shop on Syntax, Semantics and Structure in Statistical Translation (SSST-5). Association for Computational Linguistics, June 2011.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A new method for discovering the grammars of phrase structure languages", |
| "authors": [ |
| { |
| "first": "Ray", |
| "middle": ["J."], |
| "last": "Solomonoff", |
| "suffix": "" |
| } |
| ], |
| "year": 1959, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ray J. Solomonoff. \"A new method for discov- ering the grammars of phrase structure languages.\" International Federation for Information Processing Congress (IFIP). 1959.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Applications of graph theory to an English rhyming corpus", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sonderegger", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Computer Speech & Language", |
| "volume": "25", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Sonderegger. \"Applications of graph theory to an English rhyming corpus.\" Computer Speech & Lan- guage, 25(3), 2011.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "SRILM -an extensible language modeling toolkit", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "7th International Conference on Spoken Language Processing (ICSLP2002 -INTER-SPEECH", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke. \"SRILM -an extensible language modeling toolkit.\" 7th International Conference on Spoken Language Processing (ICSLP2002 -INTER- SPEECH 2002). September 2002.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A polynomial-time algorithm for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "34th Annual Meeting of the Association for Computational Linguistics (ACL96)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. \"A polynomial-time algorithm for statisti- cal machine translation.\" 34th Annual Meeting of the Association for Computational Linguistics (ACL96).", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "23", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. \"Stochastic inversion transduction grammars and bilingual parsing of parallel corpora.\" Computa- tional Linguistics, 23(3), 1997.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Textual entailment recognition using inversion transduction grammars", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop (MLCW 2005)", |
| "volume": "3944", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. \"Textual entailment recognition using inver- sion transduction grammars.\" Joaquin Qui\u00f1onero- Candela, Ido Dagan, Bernardo Magnini, and Flo- rence d'Alch\u00e9 Buc (eds.), Machine Learning Chal- lenges, Evaluating Predictive Uncertainty, Visual Ob- ject Classification and Recognizing Textual Entail- ment, First PASCAL Machine Learning Challenges Workshop (MLCW 2005), vol. 3944 of Lecture Notes in Computer Science. Springer, 2006.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "FREESTYLE: A challenge-response system for hip hop lyrics via unsupervised induction of stochastic transduction grammars", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Karteek", |
| "middle": [], |
| "last": "Addanki", |
| "suffix": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Saers", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "14th Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu, Karteek Addanki, and Markus Saers. \"FREESTYLE: A challenge-response system for hip hop lyrics via unsupervised induction of stochastic transduction grammars.\" 14th Annual Conference of the International Speech Communication Association (Interspeech 2013). 2013a.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Modeling hip hop challenge-response lyrics as machine translation", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Karteek", |
| "middle": [], |
| "last": "Addanki", |
| "suffix": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Saers", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "14th Machine Translation Summit (MT Summit XIV)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu, Karteek Addanki, and Markus Saers. \"Modeling hip hop challenge-response lyrics as ma- chine translation.\" 14th Machine Translation Summit (MT Summit XIV). 2013b.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Inversion transduction grammar constraints for mining parallel sentences from quasi-comparable corpora", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascale", |
| "middle": [], |
| "last": "Fung", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Second International Joint Conference on Natural Language Processing (IJCNLP 2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu and Pascale Fung. \"Inversion transduc- tion grammar constraints for mining parallel sentences from quasi-comparable corpora.\" Second Interna- tional Joint Conference on Natural Language Process- ing (IJCNLP 2005). Springer, 2005.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A comparative study on reordering constraints in statistical machine translation", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "41st Annual Meeting of the Association for Computational Linguistics (ACL-2003)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Zens and Hermann Ney. \"A comparative study on reordering constraints in statistical machine trans- lation.\" 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003). Association for Computational Linguistics, 2003.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "text": "Percentage of \u2265good and \u2265acceptable (i.e., either good or acceptable) responses on fluency and rhyming criteria. PBSMT, TG and ISTG models trained using corpus generated from all adjacent lines in a verse. PBSMT+RS, TG+RS, ISTG+RS are models trained on rhyme scheme based corpus selection strategy. Disfluency correction strategy was used in all cases.", |
| "content": "<table><tr><td>model</td><td colspan=\"4\">fluency ( \u2265good) fluency (\u2265acceptable) rhyming ( \u2265good) rhyming (\u2265acceptable)</td></tr><tr><td>PBSMT</td><td>3.14%</td><td>4.70%</td><td>1.57%</td><td>4.31%</td></tr><tr><td>TG</td><td>21.18%</td><td>54.51%</td><td>23.53%</td><td>39.21%</td></tr><tr><td>ISTG</td><td>26.27%</td><td>57.64%</td><td>27.45%</td><td>48.23%</td></tr><tr><td colspan=\"2\">PBSMT+RS 30.59%</td><td>43.53%</td><td>1.96%</td><td>9.02%</td></tr><tr><td>TG+RS</td><td>34.12%</td><td>60.39%</td><td>20.00%</td><td>42.74%</td></tr><tr><td>ISTG+RS</td><td>30.98%</td><td>61.18%</td><td>30.98%</td><td>53.72%</td></tr></table>", |
| "html": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "num": null, |
| "text": "Transduction rules learned by ISTG model.", |
| "content": "<table><tr><td>transduction grammar rule</td><td>log prob.</td></tr><tr><td>A \u2192 long/wrong</td><td>-11.6747</td></tr><tr><td>A \u2192 rhyme/time</td><td>-11.6604</td></tr><tr><td colspan=\"2\">A \u2192 felt bad/couldn't see what i really had -11.3196</td></tr><tr><td colspan=\"2\">A \u2192 matter what you say/leaving anyway -11.8792</td></tr><tr><td>A \u2192 arhythamatic/this rhythm is sick</td><td>-12.3492</td></tr></table>", |
| "html": null |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "num": null, |
| "text": "English hip hop challenge-response examples.", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "num": null, |
| "text": "Effect of the disfluency correction strategies on fluency of the responses generated for the TG induction models vs PBSMT baselines using both rhyme scheme detection and adjacent lines as the corpus selection method.", |
| "content": "<table><tr><td>model+disfluency strat.</td><td colspan=\"4\">fluency (good) fluency (\u2265acceptable) rhyming (good) rhyming (\u2265acceptable)</td></tr><tr><td>PBSMT+filtering</td><td>4.3%</td><td>13.72%</td><td>3.53%</td><td>7.06%</td></tr><tr><td>PBSMT+correction</td><td>3.14%</td><td>4.70%</td><td>1.57%</td><td>4.31%</td></tr><tr><td>PBSMT+RS+filtering</td><td>31.76%</td><td>43.91%</td><td>12.15%</td><td>21.17%</td></tr><tr><td colspan=\"2\">PBSMT+RS+correction 30.59%</td><td>43.53%</td><td>1.96%</td><td>9.02%</td></tr><tr><td>TG+filtering</td><td>17.25%</td><td>46.27%</td><td>18.04%</td><td>33.33%</td></tr><tr><td>TG+correction</td><td>21.18%</td><td>54.51%</td><td>23.53%</td><td>39.21%</td></tr><tr><td>TG+RS+filtering</td><td>28.63%</td><td>56.86%</td><td>14.90%</td><td>34.51%</td></tr><tr><td>TG+RS+correction</td><td>34.12%</td><td>60.39%</td><td>20.00%</td><td>42.74%</td></tr></table>", |
| "html": null |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "num": null, |
| "text": "French hip hop challenge-response examples. Si je me trompe response faut que je raconte challenge Un jour je suis un livre response et ce que je de vivre challenge Pacha mama ils ne voient pas ta souffrance response Combat ni leur de voulait de la d\u00e9cadence challenge la palestine n'etait pas une terre sans peuple. response le darfour d'autre de la guerre on est challenge Une banlieue qui meut", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "num": null, |
| "text": "Transduction rules for Maghrebi French hip hop.", |
| "content": "<table><tr><td>transduction grammar rule</td><td>log prob.</td></tr><tr><td>A \u2192 terre/la guerre</td><td>-9.4837</td></tr><tr><td>A \u2192 haine/peine</td><td>-9.77056</td></tr><tr><td>A \u2192 mal/pays natal</td><td>-10.6877</td></tr><tr><td colspan=\"2\">A \u2192 je frissonne/mi corazon -11.0931</td></tr><tr><td>A \u2192 gratteurs/rappeurs</td><td>-11.7306</td></tr></table>", |
| "html": null |
| } |
| } |
| } |
| } |