{
"paper_id": "2019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:37:29.491228Z"
},
"title": "Efficient text generation of user-defined topic using generative adversarial networks",
"authors": [
{
"first": "Chenhan",
"middle": [],
"last": "Yuan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Virginia Tech VA",
"location": {
"country": "USA"
}
},
"email": "chenhan@vt.edu"
},
{
"first": "Yi-Chin",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {},
"email": "ychin.huang@gmail.com"
},
{
"first": "Cheng-Hung",
"middle": [],
"last": "Tsai",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This study focused on efficient text generation using generative adversarial networks (GAN). Assuming that the goal is to generate a paragraph of a user-defined topic and sentimental tendency, conventionally the whole network has to be retrained to obtain new results each time the user changes the topic. This would be time-consuming and impractical. Therefore, we propose a User-Defined GAN (UD-GAN) with two-level discriminators to solve this problem. The first discriminator, constructed with multiple LSTMs, aims to guide the generator to learn paragraph-level information and sentence syntactic structure. The second one copes with higher-level information, such as the user-defined sentiment and topic for text generation. The cosine similarity based on TF-IDF and a length penalty are adopted to determine the relevance to the topic. The second discriminator is then retrained with the generator whenever the topic or sentiment for text generation is modified. System evaluations are conducted to compare the performance of the proposed method with other GAN-based ones. The objective results showed that the proposed method is capable of generating texts in less time than the others, and that the generated texts are related to the user-defined topic and sentiment. We will further investigate the possibility of incorporating more detailed paragraph information, such as semantics, into text generation to enhance the results.",
"pdf_parse": {
"paper_id": "2019",
"_pdf_hash": "",
"abstract": [
{
"text": "This study focused on efficient text generation using generative adversarial networks (GAN). Assuming that the goal is to generate a paragraph of a user-defined topic and sentimental tendency, conventionally the whole network has to be retrained to obtain new results each time the user changes the topic. This would be time-consuming and impractical. Therefore, we propose a User-Defined GAN (UD-GAN) with two-level discriminators to solve this problem. The first discriminator, constructed with multiple LSTMs, aims to guide the generator to learn paragraph-level information and sentence syntactic structure. The second one copes with higher-level information, such as the user-defined sentiment and topic for text generation. The cosine similarity based on TF-IDF and a length penalty are adopted to determine the relevance to the topic. The second discriminator is then retrained with the generator whenever the topic or sentiment for text generation is modified. System evaluations are conducted to compare the performance of the proposed method with other GAN-based ones. The objective results showed that the proposed method is capable of generating texts in less time than the others, and that the generated texts are related to the user-defined topic and sentiment. We will further investigate the possibility of incorporating more detailed paragraph information, such as semantics, into text generation to enhance the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text generation, as a basic natural language processing task, has many applications, such as dialogue robots, machine translation (Hu et al., 2017) , paraphrasing (Power and Scott, 2005) , and so on. With the rise of deep learning, different neural networks have been introduced to generate text. For example, researchers use the recurrent neural network (RNN) (Mikolov et al., 2010) to train the language model because of its capability to process sequential data. However, the RNN suffers from the gradient vanishing problem (Hochreiter, 1998) when the sequence becomes longer. To address this problem, Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) is further adopted as a sequential neural network model to generate sentences.",
"cite_spans": [
{
"start": 131,
"end": 148,
"text": "(Hu et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 164,
"end": 186,
"text": "(Power and Scott, 2005",
"ref_id": "BIBREF19"
},
{
"start": 356,
"end": 378,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF15"
},
{
"start": 522,
"end": 540,
"text": "(Hochreiter, 1998)",
"ref_id": "BIBREF7"
},
{
"start": 630,
"end": 664,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lately, the Generative Adversarial Networks (GAN) framework (Goodfellow et al., 2014) has been introduced into the NLP community. GAN has two different models for completing the data-generating task. One of them is the generator G, which is responsible for generating data, and the other is the discriminator D, which determines whether the input data is real or not. The generator G continuously optimizes the generated data based on the judgment of the discriminator D. After several epochs, the generated data becomes more realistic.",
"cite_spans": [
{
"start": 60,
"end": 85,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, GAN was originally designed to process continuous data, and using discrete data as input would make it impossible to update the gradients of the GAN framework (Husz\u00e1r, 2015) . To process discrete data, several variants of the GAN model for generating text have been proposed. These GAN variants achieve good performance in the text generation task, such as MaskGAN (Fedus et al., 2018) , RankGAN , and TextGAN (Zhang et al., 2016) .",
"cite_spans": [
{
"start": 168,
"end": 182,
"text": "(Husz\u00e1r, 2015)",
"ref_id": "BIBREF10"
},
{
"start": 377,
"end": 397,
"text": "(Fedus et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 422,
"end": 442,
"text": "(Zhang et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to make these models fit the distribution of real text data better, the number of parameters of neural-network-based text generation models is increased, which means that training these models often takes a long time, even on a GPU. Conventionally, topic-related text generation models incorporate an arbitrary topic as an input by adopting mechanisms such as attention (Feng et al., 2018) . Therefore, each time the user wants to generate new sentences with another topic or sentimental tendency, the text generation models have to be retrained with all parameters to satisfy the new requirements. In some scenarios, e.g., news generation, spending a long time retraining the model is not practical, and the user wants new responses quickly.",
"cite_spans": [
{
"start": 396,
"end": 415,
"text": "(Feng et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle this problem, a novel text generation model based on GAN is proposed, called the User-Defined Generative Adversarial Network (UD-GAN). The key idea is to separate the sentence syntax model, as the basic model, from the topic-related model, as a higher-level model, so that the two can be trained independently of each other. When the topic or other user-defined information, e.g., sentimental tendency, is modified, only one of the two models needs to be retrained. In this way, once the basic syntax model is established, the subsequent training becomes much faster, since only the higher-level model has to be retrained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our proposed method, the discriminators are constructed based on this idea. One of the discriminators, called the discriminator-general, learns to determine the proper context information and whether the input sentence has a valid syntactic structure. The other discriminator, called the discriminator-special, ensures the output matches the user-defined requirements. Inspired by SeqGAN (Yu et al., 2017) , we use the evaluation results of the generated text from the discriminators as a reward to guide the generator in selecting future actions, i.e., generating the next word.",
"cite_spans": [
{
"start": 373,
"end": 390,
"text": "(Yu et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For training the discriminator-special, it takes feature vectors as input instead of sentences. The feature vector is defined based on the sentiment detection and topic relevance of the generated sentence. The cosine similarity based on TF-IDF and a length penalty are jointly adopted to represent topic relevance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Note that UD-GAN is designed to generate short paragraphs practically, which means the sentences it generates should be context-aware and, together with the surrounding sentences, behave like a paragraph. To achieve this, the discriminator-general is designed with hierarchical multiple LSTM layers. The LSTM at the top of the network processes paragraph-level information, while the bottom LSTMs process sentence-level information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The organization of the paper is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, we discuss the related work in Section 2. The proposed method is described in Section 3, including the feature extraction, model definition, and training. In Section 4, the experimental settings and the evaluation results of the compared methods are presented. Finally, concluding remarks and future work are given in Section 5. Text generation is a basic task in natural language processing (NLP). In previous works, many researchers (Power and Scott, 2005) extracted grammar rules from text to generate new texts. These works are capable of generating semantically rich and grammatically correct text, but due to the fixed grammar rules, the generated sentences lack diversity.",
"cite_spans": [
{
"start": 480,
"end": 503,
"text": "(Power and Scott, 2005)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[Algorithm 1, fragment: for l \u2190 1 to T do \u2026]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[Algorithm 1, fragment: generate feature vectors corresponding to negative samples generated by G\u03b8; train D\u03b3 with the negative and synthetic feature vectors via Eq. 6.] As neural networks can fit the distribution of real data better, some researchers design GAN-based models as language models to generate text. Unlike the standard GAN, the loss function or the training method of the generator is modified to enable GAN to process discrete data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For example, in TextGAN (Zhang et al., 2016) , researchers apply feature matching with the standard GAN loss function to train the generator. Reinforcement learning (Sutton et al., 2000) is another useful machine learning technique for training models with unlabeled data. The trained model chooses its next actions to maximize the expected reward given by the environment. Yu et al. proposed SeqGAN (Yu et al., 2017) , which combines reinforcement learning with GAN. In SeqGAN, the generator uses the result of the discriminator as a reward and chooses its next actions, i.e., generates the next words in the text generation task. To generate longer text, LeakGAN (Guo et al., 2018) is introduced, in which the discriminator leaks features extracted from its input to the generator, which then uses this signal to guide the output at each generation step before the entire sentence is generated.",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "(Zhang et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 161,
"end": 182,
"text": "(Sutton et al., 2000)",
"ref_id": "BIBREF22"
},
{
"start": 392,
"end": 409,
"text": "(Yu et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 648,
"end": 666,
"text": "(Guo et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another vital application of NLP is sentiment analysis (Pang et al., 2008; Wilson et al., 2005) . Generally, the sentiment analysis task measures the emotional tendency of a whole sentence based on the usage of emotion-bearing words in that sentence. Therefore, the establishment of an emotional word dictionary is essential. The Affective Norms for English Words (ANEW) (Bradley and Lang, 1999) lexicon rates all words on a scale from 1 to 9, where the highest score means the word conveys a very positive emotion and the lowest represents the most negative one. Based on that, some researchers (Hutto and Gilbert, 2014) construct a gold-standard list of lexical features and combine them with five general rules to represent the sentiment of a sentence. The resulting VADER algorithm is a rule-based sentiment analyzer that has outperformed other machine-learning-based algorithms.",
"cite_spans": [
{
"start": 59,
"end": 78,
"text": "(Pang et al., 2008;",
"ref_id": "BIBREF16"
},
{
"start": 79,
"end": 99,
"text": "Wilson et al., 2005)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 Proposed Method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As shown in Fig.1 , UD-GAN contains a generator G \u03b8 that is capable of generating context-dependent sentences, and two-level discriminators. The discriminator-general D \u03c6 guides the generator to learn paragraph-level information and correct syntactic structure, while the discriminator-special D \u03b3 determines whether the generated text is related to the user-defined topic and sentiment. The discriminator-special D \u03b3 is trained with synthetic perfect data and generated text data, while the discriminator-general D \u03c6 is trained with real text data and generated text data.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 17,
"text": "Fig.1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Basic Structure of UD-GAN",
"sec_num": "3.1"
},
{
"text": "As we apply reinforcement learning with policy gradient to train the generator, the outputs of the two discriminators for the generated text are combined and serve as a reward to train the generator. The generator G \u03b8 chooses the best next actions based on the reward it receives. After the initial training via Algorithm 1, the discriminator-general parameters are saved as the pre-trained model. In subsequent training, we only train the parameters of the generator G \u03b8 and the discriminator-special D \u03b3 via Algorithm 2. The details of the training method and the structures of the discriminators and the generator are described as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Structure of UD-GAN",
"sec_num": "3.1"
},
{
"text": "The Feature Vector of D-Special. The discriminator-special D \u03b3 takes a vector containing 5 elements as input, which represents the sentimental and topical relevance of each sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
{
"text": "In our model, users can describe the cause and effect of an event in one sentence, which is used as the topic for generating sentences. We use the first element to represent the similarity between the sentence entered by the user and the generated sentence, which represents the user-defined topic relevance of the generated text. Based on the TF-IDF (Sparck Jones, 1972) value of each word in the user-entered sentence and in the generated sentence, the cosine similarity between these two sentences is calculated as a measure of the user-defined topic relevance of the generated sentence. A larger cosine similarity means that the generated sentence is more related to the user-defined topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
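The topic-relevance feature described above can be sketched as follows. This is a minimal pure-Python illustration assuming whitespace tokenization and a smoothed IDF; the paper does not specify its exact TF-IDF variant, so the weighting details here are assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF vectors (as sparse dicts) for a small corpus of token lists."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    # Smoothed IDF so terms present in every document still get a small weight.
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vecs

def cosine_similarity(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

topic = "the attack in douma occurred days after trump indicated he wanted to pull troops out".split()
generated = "the country trains troops for the government after the attack".split()
unrelated = "he gave economic law".split()

v_topic, v_gen, v_unrel = tfidf_vectors([topic, generated, unrelated])
# A generated sentence sharing topical words scores higher than an unrelated one.
assert cosine_similarity(v_topic, v_gen) > cosine_similarity(v_topic, v_unrel)
```

In the full system this scalar becomes the first element of the discriminator-special's feature vector.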
{
"text": "However, if only this element is used to instruct the generator G \u03b8 to generate topic-related sentences, the resulting sentences will be substantially as long as the user-defined topic sentence. More importantly, the generated sentences will lack diversity and all convey the same meaning. Therefore, we propose the second element, a length penalty, to reduce the negative impact of the first element. The difference between the length of the generated sentence and the length of the user-defined topic sentence is mapped into [0, 1] via Eq. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
{
"text": "penalty_g = (len_g \u2212 len_i) / (max_{g\u2208G} |len_g \u2212 len_i| \u2212 min_{g\u2208G} |len_g \u2212 len_i|)    (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
{
"text": "where i is the input sentence, g is the evaluated generated sentence, and G is the set of generated sentences. We set the optimal length penalty to 0.5, which means that a sentence whose length is either very close to or very far from the length of the topic sentence is considered unqualified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
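The length penalty amounts to a min-max normalization of the length difference over the generated batch; the following sketch normalizes the absolute difference, which is an assumption since the extracted Eq. 1 is ambiguous about the numerator.

```python
def length_penalty(gen_lengths, topic_len):
    """Min-max normalize |len_g - len_i| over the generated batch (Eq. 1 sketch)."""
    diffs = [abs(lg - topic_len) for lg in gen_lengths]
    lo, hi = min(diffs), max(diffs)
    if hi == lo:  # degenerate batch: all sentences equally far from the topic length
        return [0.5 for _ in diffs]
    return [(d - lo) / (hi - lo) for d in diffs]

# Topic sentence of 18 words; a batch of generated sentence lengths.
penalties = length_penalty([18, 10, 30, 22], topic_len=18)
assert penalties[0] == 0.0  # identical length maps to the minimum of the range
assert penalties[2] == 1.0  # farthest length maps to the maximum of the range
assert all(0.0 <= p <= 1.0 for p in penalties)
```

The 0.5 target described in the text then penalizes both extremes of this range.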
{
"text": "We implemented the VADER algorithm to calculate the probability that a generated sentence belongs to the positive, negative, or neutral emotion class. As VADER gives three values corresponding to the probabilities of the three sentiment categories, whose sum is 1, these three values are saved in the third to fifth elements. The optimal sentiment is defined by the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
{
"text": "In conventional GAN training, the discriminator treats real text data as the positive sample and generated text as the negative sample. However, no sentence in the real corpus has exactly the same features as the positive sample, since a real sentence's feature vector is constructed by applying the above-mentioned algorithms, while the user-defined feature vector takes specific values. Therefore, we train the discriminator-special D \u03b3 with synthetic data, which is treated as the positive sample. For example, supposing that the user would like to generate an essay with a positive emotion, UD-GAN will generate [1, 0.5, 1, 0, 0] vectors corresponding to the number of generated sentences, which are combined with the vectors of the generated sentences as the input of the discriminator-special.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
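Under these definitions, the discriminator-special input and the synthetic positive sample can be sketched as below; the helper names are hypothetical, and the sentiment probabilities stand in for VADER's output.

```python
def make_feature_vector(topic_sim, len_penalty, sentiment_probs):
    """Assemble the 5-element discriminator-special input:
    [topic similarity, length penalty, P(positive), P(negative), P(neutral)]."""
    p_pos, p_neg, p_neu = sentiment_probs
    assert abs(p_pos + p_neg + p_neu - 1.0) < 1e-6  # the three probabilities sum to 1
    return [topic_sim, len_penalty, p_pos, p_neg, p_neu]

def synthetic_positive_target(n_sentences):
    """Synthetic 'perfect' vectors used as the positive sample for a user
    who wants on-topic, optimally-long, positive-sentiment text."""
    return [[1.0, 0.5, 1.0, 0.0, 0.0] for _ in range(n_sentences)]

real = make_feature_vector(0.42, 0.37, (0.55, 0.10, 0.35))
targets = synthetic_positive_target(3)
assert len(real) == 5 and len(targets) == 3
```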
{
"text": "The Structure of D-Special. Two linear layers with ReLU as the activation function are used as the discriminator-special D \u03b3 . The output of this network, after passing through a softmax layer, serves as part of the reward to train the generator G \u03b8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
{
"text": "We explain here why multiple fully connected layers are implemented as the discriminator-special.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
{
"text": "The first reason is that after the discriminator-general is constructed, the discriminator-special will be retrained repeatedly in subsequent training whenever the user's demands change. This requires spending as little time as possible to train a good discriminator-special. (Figure 2 : The proposed framework for the discriminator-general.) Multiple fully connected layers have fewer parameters, which means this network converges faster than the alternatives. Another reason is that the aim of training the discriminator-special is to distinguish whether the input vector corresponds to the user-defined one. For an input with only five variables, a neural network with two fully connected layers is sufficient to determine the class of the input vector correctly.",
"cite_spans": [],
"ref_spans": [
{
"start": 280,
"end": 288,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Framework of D-Special",
"sec_num": "3.2"
},
{
"text": "Unlike the conventional idea of using classifier-based models as a discriminator, the discriminator-general D \u03c6 needs to process sequential data and context information, such as the paragraph information of each sentence, to generate paragraph-level text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-General",
"sec_num": "3.3"
},
{
"text": "Therefore, as shown in Fig.2 , we designed a hierarchical multiple-LSTM neural network as the discriminator-general D \u03c6 . The bottom multi-layer LSTM takes an embedding vector for each word in a sentence as input and outputs a feature matrix representing the corresponding sentence. The top bidirectional LSTM (Graves and Schmidhuber, 2005) takes as input the feature matrices of the sentences belonging to the same paragraph and outputs a feature matrix representing that paragraph. After being transformed through two different linear layers respectively, the two feature matrices are combined. Finally, the discriminator-general calculates the score of the input sentence via Eq. 2.",
"cite_spans": [
{
"start": 317,
"end": 347,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 23,
"end": 28,
"text": "Fig.2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Framework of D-General",
"sec_num": "3.3"
},
{
"text": "R(Y) = softmax[(1 \u2212 \u03b2) LSTM_\u03b1 + \u03b2 LSTM_\u03b7]    (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-General",
"sec_num": "3.3"
},
{
"text": "(2) where \u03b2 is a trainable parameter ranging 0-1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Framework of D-General",
"sec_num": "3.3"
},
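Eq. 2's combination of the sentence-level and paragraph-level outputs can be illustrated with plain floats standing in for the two LSTMs' linear-layer outputs over the classes (real, generated); this is a sketch of the mixing step only, not the actual network.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def discriminator_general_score(sent_logits, para_logits, beta):
    """Eq. 2 sketch: R(Y) = softmax[(1 - beta) * LSTM_sentence + beta * LSTM_paragraph],
    where beta is a trainable scalar in [0, 1]."""
    mixed = [(1 - beta) * s + beta * p for s, p in zip(sent_logits, para_logits)]
    return softmax(mixed)

score = discriminator_general_score(sent_logits=[2.0, -1.0], para_logits=[0.5, 0.0], beta=0.3)
assert abs(sum(score) - 1.0) < 1e-9
assert score[0] > score[1]  # both levels favor 'real', so the combined score does too
```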
{
"text": "The generator G \u03b8 is designed with GRU (Chung et al., 2014) . In UD-GAN, due to the excessive parameters of the two discriminators, the generator is prone to over-fitting. As a commonly used variant of LSTM, GRU helps avoid this over-fitting problem. In addition, having fewer parameters than a conventional LSTM allows GRU to converge in less time, which is the first priority in UD-GAN.",
"cite_spans": [
{
"start": 35,
"end": 55,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generator",
"sec_num": "3.4"
},
{
"text": "Reinforcement learning is incorporated to enable GAN to process discrete data. In this scenario, the generator G \u03b8 uses the discriminators' judgments of the generated text as a reward for generating the next words. In UD-GAN, the reward is calculated from the results of the two discriminators. The generator G \u03b8 tries to maximize the expected reward from the initial state to the end state via Eq. 3 (the loss function).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "J(\u03b8) = \u03a3_{t=1}^{T} E(R_t | S_{t\u22121}, \u03b8) = \u03a3_{t=1}^{T} G_\u03b8(y_t | Y) [\u03bb D_\u03c6(Y) + (1 \u2212 \u03bb) D_\u03b3(Y)]    (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "where \u03bb is a manually set weight, Y is a complete sequence, and R_t is the reward for the whole sequence. In our experiments, we set \u03bb to 0.8 to give more weight to the discriminator-general D \u03c6 , so as to generate sentences with better syntactic structure. Note that since the discriminators can only make a judgment on a complete sequence, Monte Carlo search (Silver et al., 2016) is adopted to sample some of the possible complete sequences from each state. The average judgment of the discriminators over these sequences is used as the reward of that state.",
"cite_spans": [
{
"start": 360,
"end": 380,
"text": "(Silver et al., 2016",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
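The Monte Carlo reward estimation described above can be sketched as follows, with \u03bb = 0.8 as in the paper; `rollout_policy` and the two discriminator functions are caller-supplied stand-ins, not the actual trained models.

```python
import random

LAMBDA = 0.8  # weight of the discriminator-general, as set in the paper

def combined_reward(d_general_score, d_special_score, lam=LAMBDA):
    """Mix the two discriminators' judgments (Eq. 5 sketch)."""
    return lam * d_general_score + (1 - lam) * d_special_score

def state_reward(prefix, rollout_policy, d_general, d_special, n_rollouts=16):
    """Monte Carlo estimate of a partial sequence's reward: complete the prefix
    n_rollouts times and average the discriminators' combined judgments."""
    total = 0.0
    for _ in range(n_rollouts):
        full_sequence = rollout_policy(prefix)
        total += combined_reward(d_general(full_sequence), d_special(full_sequence))
    return total / n_rollouts

# Toy stand-ins: rollout appends random tokens; discriminators return fixed scores.
random.seed(0)
rollout = lambda p: p + [random.randint(0, 9) for _ in range(3)]
r = state_reward([1, 2], rollout, d_general=lambda s: 0.9, d_special=lambda s: 0.5)
assert abs(r - (0.8 * 0.9 + 0.2 * 0.5)) < 1e-9  # constant judgments -> exact mix
```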
{
"text": "In this paper, we implemented the policy gradient method. The gradient of Eq. 3 can be approximated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2207_\u03b8 J(\u03b8) \u2248 \u03a3_{t=1}^{T} E_{y_t \u223c G_\u03b8} [\u2207_\u03b8 log G_\u03b8(y_t | Y) Q^{G_\u03b8}_{D_\u03c6, D_\u03b3}(y_t | Y)]",
"eq_num": "(4)"
}
],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "where Q^{G_\u03b8}_{D_\u03c6, D_\u03b3}(y_t | y_{1:t\u22121}) can be derived via Eq. 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q^{G_\u03b8}_{D_\u03c6, D_\u03b3}(y_t | y_{1:t\u22121}) = \u03bb D_\u03c6(Y) + (1 \u2212 \u03bb) D_\u03b3(Y)",
"eq_num": "(5)"
}
],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "The loss function of both discriminators is introduced as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J = \u2212(E_{Y \u223c P_data}[R(Y)] \u2212 E_{Y \u223c G_\u03b8}[1 \u2212 R(Y)])",
"eq_num": "(6)"
}
],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "where R(Y ) is the reward from two discriminators for a whole sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reward and Policy Gradient Training",
"sec_num": "3.5"
},
{
"text": "We crawled nearly 10,000 press releases from the opinion section of Newsweek as the training corpus. The opinion section of Newsweek was selected because the paragraphs of its essays are generally closely related and not long. The other reason is that in opinion articles, authors often convey their own sentimental tendencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Named-entity recognition (NER) is used to replace named entities with their entity tags to reduce the vocabulary size. After tokenizing the corpus, sentences longer than 45 words were removed. The final training corpus has 425K sentences and 103K paragraphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
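The preprocessing steps above can be sketched as follows; the regex-based masks are a hypothetical stand-in for the real NER tagger, and the 45-word cutoff matches the paper.

```python
import re

def preprocess(paragraphs, max_len=45):
    """Corpus cleanup sketch: mask named entities with tags and drop sentences
    longer than max_len words; paragraphs left empty are discarded."""
    # Hypothetical minimal masks; the paper uses a full NER tagger instead.
    masks = [(re.compile(r"\b(Trump|Obama)\b"), "PERSON"),
             (re.compile(r"\b(Douma|Syria)\b"), "LOCATION")]
    kept = []
    for para in paragraphs:
        sents = []
        for sent in para:
            for pat, tag in masks:
                sent = pat.sub(tag, sent)
            if len(sent.split()) <= max_len:
                sents.append(sent)
        if sents:
            kept.append(sents)
    return kept

corpus = [["Trump indicated he wanted to pull troops out of Syria .",
           "word " * 50]]  # second sentence exceeds the 45-word limit
clean = preprocess(corpus)
assert clean == [["PERSON indicated he wanted to pull troops out of LOCATION ."]]
```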
{
"text": "SeqGAN and LeakGAN are used as the baseline systems to evaluate UD-GAN. We train SeqGAN and LeakGAN for 20 epochs, the same number of times UD-GAN is trained. The other parameters of the baselines remain unchanged from their original papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.2"
},
{
"text": "The bottom of the discriminator-general consists of three layers of LSTM. (Table 1 : The ROUGE-L score for each system. UD-GAN(G+S) represents the initial training and UD-GAN(S) represents the subsequent training; UD-GAN(G) has only the discriminator-general and the generator. Note that this score is the sum of the ROUGE-L results of all generated sentences.)",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.2"
},
{
"text": "The hidden dimension of the discriminator-general's bidirectional LSTM for UD-GAN and of the bottom LSTMs is set to 64. Besides, the hidden dimension of the discriminator-special linear layers and of the generator's GRU units is set to 32. In each epoch of the initial training, the generator G is trained once, the discriminator-general is trained four times, and the discriminator-special is trained twice. To evaluate the effectiveness of the proposed method, we first compare the generated sentences' relevance to the user-defined topic and sentimental tendency, then compare the training time of each system. Finally, the fluency and correctness of UD-GAN and the baselines were evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.2"
},
{
"text": "As a widely used objective evaluation method for summary accuracy, ROUGE (Lin, 2004) is adopted here to evaluate whether the generated sentences are related to the user-defined topic. The generated sentences are treated as summaries to be evaluated, and the user-defined topic sentence is used as the reference summary. Note that even if the ROUGE scores of the generated sentences are not high, it does not necessarily mean that these sentences are not closely related to the user-defined topic; the generated sentences may use other words or syntactic structures to describe the topic sentence.",
"cite_spans": [
{
"start": 78,
"end": 89,
"text": "(Lin, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance of Topic",
"sec_num": null
},
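ROUGE-L as used above can be computed from the longest common subsequence; a minimal sketch, with the recall-weighting parameter \u03b2 chosen arbitrarily here (the standard formulation leaves it as a free parameter).

```python
def lcs_len(a, b):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L F-score between a generated sentence (candidate) and the
    user-defined topic sentence (reference)."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return (1 + beta**2) * prec * rec / (rec + beta**2 * prec)

topic = "the attack in douma occurred days after trump indicated that he wanted to pull us troops out"
gen = "during these attack things occurred days i say just PERSON who pulls troops out"
assert rouge_l(topic, topic) == 1.0
assert rouge_l(gen, topic) > rouge_l("he gave economic law", topic)
```

A large \u03b2 makes the score recall-oriented, which matches the paper's description of ROUGE-L.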
{
"text": "In this paper, we report the sum of the ROUGE-L scores of all sentences. Based on the longest common subsequence, ROUGE-L is a recall-oriented score. As shown in Table 1, the ROUGE-L scores of UD-GAN(G+S) and UD-GAN(S) are slightly higher than those of the baseline systems and UD-GAN(G).",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 171,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relevance of Topic",
"sec_num": null
},
{
"text": "The VADER algorithm is used to calculate the probability that the sentimental tendency of a generated sentence is positive, negative, or neutral. Here, we evaluated the system performance by setting the target sentimental tendency to positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance of Sentimental Tendency",
"sec_num": null
},
{
"text": "As shown in Table 2, the average probability of each sentimental-tendency category over all sentences is calculated. With the trained discriminator-special, UD-GAN(G+S) and UD-GAN(S) are more likely to generate positive sentences than the baselines, which shows that the proposed method is capable of generating sentences with the desired sentiment. However, since the total number of sentences expressing a positive sentimental tendency in the training corpus is quite low, the probability of UD-GAN generating positive sentiment is still not higher than 0.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relevance of Sentimental Tendency",
"sec_num": null
},
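The per-category averaging behind Table 2 can be sketched as follows. This is an illustrative snippet, assuming each sentence has already been scored into a VADER-style dict with 'pos', 'neu', and 'neg' probabilities (the shape returned by VADER's `polarity_scores`); the sentence scores shown are hypothetical:

```python
def average_sentiment(score_dicts):
    """Average VADER-style category probabilities over all generated sentences."""
    n = len(score_dicts)
    return {k: sum(d[k] for d in score_dicts) / n for k in ("pos", "neu", "neg")}


# Hypothetical VADER scores for two generated sentences
scores = [
    {"pos": 0.40, "neu": 0.50, "neg": 0.10},
    {"pos": 0.20, "neu": 0.70, "neg": 0.10},
]
avg = average_sentiment(scores)  # avg["pos"] is about 0.30
```

A corpus-level average like `avg["pos"]` is what the evaluation compares across systems when the target tendency is positive.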
{
"text": "To demonstrate that UD-GAN can generate context-dependent sentences, we show sentences generated by UD-GAN and baselines. As shown in Table 3 , one can see that the proposed UD-GAN does generate sentences related to the user-defined topic. UD-GAN tries to add some conjunctions when generating sentences so that the sentences seem to be related, and each sentence is extended with other related words based on the topic. Note that there are some Name-Entity (NE) tags generated by the models because the NE tagging has been done for simplifying the corpus lexicon.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Generate Context-dependent Sentences",
"sec_num": null
},
{
"text": "However, semantically, these sentences are not intrinsically related to each other, which is a problem we will address in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generate Context-dependent Sentences",
"sec_num": null
},
{
"text": "The time spending on gradient propagation and update of UD-GAN and baselines are compared, instead of the time spending on loading and saving data. Our platform is a workstation with a GeForce topic: the attack in douma occurred days after trump indicated that he wanted to pull us troops out UD-GAN(S): 1. the country contacts to the u.s. and trains troops for government living on the federal system in LOCATION . 2. we are discussed actively : if u.s. is the facts that citizens in the country will likely vote for type elections ? 3. during these attack things occurred days , i say just PERSON who pulls in the exchange best troops out as trade in LOCATION . 4. and he often enthusiastic , telling only having heard nothing happened while you can indicate to pull out from country . 5. but these generations in LOCATION can predict the next five attacks occur. LeakGAN: 1. it prompted the opposition during a \" real \" of subtlety , and video straws . 2. but if PERSON know that we serve the best drives these country purposes . \" 3. besides disarming our administration and pricing and its traditional views . 4. with her contempt for all enough neighbors . \" 5. one day i 'd go beyond my candor . SeqGAN: 1. we do n't mean . 2. you should be \" changed \" that you know . 3. i 've always been proposing the findings . 4. in other words , he 's because you have a testament to his goodness -not a result . 5. he gave economic law . (Paszke et al., 2017) framework to eliminate the impact of different frameworks on time consumption.",
"cite_spans": [
{
"start": 1435,
"end": 1456,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Time Evaluation",
"sec_num": "4.4"
},
{
"text": "As shown in Table. 4, because the structure of discriminator-general is more complex than the structure of discriminator D of baselines, initial training of UD-GAN takes the longest time. However, in the subsquent trainings, due to the gradient propagation and parameter update of discriminator-special is quite fast, the time required to train UD-GAN (S) is the shortest. The UD-GAN (S) takes only about an hour and a half to complete training, which is much less than the nearly eight hours of training time for baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Time Evaluation",
"sec_num": "4.4"
},
{
"text": "As shown in table 5, we report BLEU (Papineni et al., 2002) scores of UD-GAN and baselines to compare the fluency and accuracy of text they generate. The BLEU we use here is the average value of 1-gram BLEU, 2-gram BLEU and 3-gram BLEU, which are given the same weights .",
"cite_spans": [
{
"start": 36,
"end": 59,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Accuracy",
"sec_num": "4.5"
},
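An equally weighted cumulative BLEU up to 3-grams can be sketched as below. This is an illustrative reimplementation under the usual BLEU definition (modified n-gram precision, geometric mean, brevity penalty), since the paper does not specify which toolkit computed its scores:

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def modified_precision(cand, ref, n):
    """Clipped n-gram precision of the candidate against one reference."""
    cand_counts = Counter(ngrams(cand, n))
    ref_counts = Counter(ngrams(ref, n))
    overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    return overlap / max(sum(cand_counts.values()), 1)


def bleu(cand, ref, max_n=3):
    """Cumulative BLEU with equal weights 1/max_n per n-gram order (no smoothing)."""
    precisions = [modified_precision(cand, ref, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_avg)
```

With equal weights of 1/3, a candidate identical to the reference scores 1.0, and any candidate with no n-gram overlap at some order scores 0.0 in this unsmoothed sketch.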
{
"text": "In the case of training the discriminator-general only, the BLEU score of the UD-GAN (G) is between SeqGAN and LeakGAN. Therefore, the accuracy and fluency evaluation of using multi-layer LSTMs as a discriminator is comparable to that of using a classifier-based model, such as CNN, as the discriminator. When the discriminatorgeneral and discriminator-special are simultaneously trained (initial training), UD-GAN (G+S) has a slightly higher BLEU score than UD-GAN (G). That is to say, even if discriminator-special is added and the result of discriminator-general, which can distinguish the correctness of the sentence, is less weighted, the resultant generator of UD-GAN (G+S) can still learn how to generate a sentence with the correct syntax. Then we change the user-defined topic and sentimental tendency to train the discriminator-special only (subsequent training). The results showed that the BLEU score of the UD-GAN(S) is still between LeakGAN and SeqGAN. It means that retraining the discriminator-special has no effect on whether the generator can learn the correct syntax without changing the weights of rewards generated by discriminator-general and discriminator-special.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Accuracy",
"sec_num": "4.5"
},
{
"text": "GAN-based models BLEU score UD-GAN(G+S) 0.6412 UD-GAN(S) 0.6409 UD-GAN(G) 0.6357 SeqGAN 0.6303 LeakGAN 0.7161 Table 5 : The average BLEU score for each system. Note that UD-GAN(S) achieves comparable BLEU performance with baselines, whose training needs far less time than baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fluency and Accuracy",
"sec_num": "4.5"
},
{
"text": "In this paper, we propose a UD-GAN method to re-train text generation model more efficiently to generate sentences that are consistent with the new user-defined topic and sentimental tendency. We compared the accuracy and fluency of sentences generated by UD-GAN with other GAN-based text generation models. The experimental results showed that sentences generated by UD-GAN are competent. Meanwhile, UD-GAN takes much less time in the re-train stage than other models. According to experimental results, UD-GAN can also successfully generate sentences related to the user-defined topic and sentimental tendency, while baselines does not have this capability. Besides, UD-GAN can also generate paragraph-level text. However, the sentences generated by UD-GAN are still inferior to the state-of-the-art method, i.e., LeakGAN, in terms of fluency. And the current paragraph-level information used here does not include complex linguistic information, such as the order of sentences. In future work, we will try to maintain the existing advantages of UD-GAN while improving the readability of generated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Affective norms for english words (anew): Instruction manual and affective ratings",
"authors": [
{
"first": "M",
"middle": [],
"last": "Margaret",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Bradley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret M Bradley and Peter J Lang. 1999. Affective norms for english words (anew): Instruction manual and affective ratings. Technical report, Citeseer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Maskgan: Better text generation via filling in the",
"authors": [
{
"first": "William",
"middle": [],
"last": "Fedus",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Andrew M",
"middle": [],
"last": "Dai",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.07736"
]
},
"num": null,
"urls": [],
"raw_text": "William Fedus, Ian Goodfellow, and Andrew M Dai. 2018. Maskgan: Better text generation via filling in the . arXiv preprint arXiv:1801.07736.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Topic-to-essay generation with neural networks",
"authors": [
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiahao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Yibo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "In IJCAI",
"volume": "",
"issue": "",
"pages": "4078--4084",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, and Ting Liu. 2018. Topic-to-essay generation with neural networks. In IJCAI, pages 4078-4084.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In Advances in neural information processing systems, pages 2672-2680.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "5-6",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional lstm and other neural network architectures. Neural Net- works, 18(5-6):602-610.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long text generation via adversarial training with leaked information",
"authors": [
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In Thirty-Second AAAI Conference on Artificial Intelli- gence.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 1998,
"venue": "International Journal of Uncertainty",
"volume": "6",
"issue": "02",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter. 1998. The vanishing gradient prob- lem during learning recurrent neural nets and prob- lem solutions. International Journal of Uncer- tainty, Fuzziness and Knowledge-Based Systems, 6(02):107-116.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Controllable text generation",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.00955"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Controllable text generation. arXiv preprint arXiv:1703.00955, 7.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "How (not) to train your generative model: Scheduled sampling, likelihood",
"authors": [
{
"first": "Ferenc",
"middle": [],
"last": "Husz\u00e1r",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.05101"
]
},
"num": null,
"urls": [],
"raw_text": "Ferenc Husz\u00e1r. 2015. How (not) to train your genera- tive model: Scheduled sampling, likelihood, adver- sary? arXiv preprint arXiv:1511.05101.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Vader: A parsimonious rule-based model for sentiment analysis of social media text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Clayton",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Hutto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "Eighth international AAAI conference on weblogs and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clayton J Hutto and Eric Gilbert. 2014. Vader: A par- simonious rule-based model for sentiment analysis of social media text. In Eighth international AAAI conference on weblogs and social media.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adversarial learning for neural dialogue generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Tianlin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.06547"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, Tianlin Shi, S\u00e9bastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversar- ial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adversarial ranking for language generation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Dianqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zhengyou",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming-Ting",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3155--3165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Advances in Neural Infor- mation Processing Systems, pages 3155-3165.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ja\u0148",
"middle": [],
"last": "Cernock\u1ef3",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Eleventh annual conference of the international speech communication association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech com- munication association.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Opinion mining and sentiment analysis. Foundations and Trends R in Information Retrieval",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends R in In- formation Retrieval, 2(1-2):1-135.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic generation of large-scale paraphrases",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Power",
"suffix": ""
},
{
"first": "Donia",
"middle": [],
"last": "Scott",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Power and Donia Scott. 2005. Automatic gen- eration of large-scale paraphrases.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mastering the game of go with deep neural networks and tree search",
"authors": [
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "Aja",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"J"
],
"last": "Maddison",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Guez",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Sifre",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Driessche",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Schrittwieser",
"suffix": ""
},
{
"first": "Veda",
"middle": [],
"last": "Antonoglou",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Panneershelvam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lanctot",
"suffix": ""
}
],
"year": 2016,
"venue": "nature",
"volume": "529",
"issue": "7587",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Ju- lian Schrittwieser, Ioannis Antonoglou, Veda Pan- neershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A statistical interpretation of term specificity and its application in retrieval",
"authors": [
{
"first": "Karen Sparck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1972,
"venue": "Journal of documentation",
"volume": "28",
"issue": "1",
"pages": "11--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 28(1):11-21.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Policy gradient methods for reinforcement learning with function approximation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcallester",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Satinder",
"suffix": ""
},
{
"first": "Yishay",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mansour",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1057--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradi- ent methods for reinforcement learning with func- tion approximation. In Advances in neural informa- tion processing systems, pages 1057-1063.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Recognizing contextual polarity in phraselevel sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase- level sentiment analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Pro- cessing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Seqgan: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Generating text via adversarial training",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2016,
"venue": "NIPS workshop on Adversarial Training",
"volume": "21",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Zhe Gan, and Lawrence Carin. 2016. Generating text via adversarial training. In NIPS workshop on Adversarial Training, volume 21.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The framework of the proposed UD-GAN Algorithm 2 Following training generator G\u03b8, discriminator-special D\u03b3 1: Initialize G\u03b8, D\u03b3 with random weights\u03b8, D\u03b3 2: Load trained D\u03c6 3: Do 2 5 steps in Algorithm 1 4: for i \u2190 1 to M do",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "The probability of sentiment tendency of generated sentences",
"html": null,
"content": "<table/>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "An example of the generated sentences from different systems",
"html": null,
"content": "<table><tr><td>GAN-based models</td><td>Time s</td></tr><tr><td>UD-GAN(GS)</td><td>29061.48</td></tr><tr><td>UD-GAN(S)</td><td>4841.99</td></tr><tr><td>UD-GAN(G)</td><td>29036.65</td></tr><tr><td>SeqGAN</td><td>27011.08</td></tr><tr><td>LeakGAN</td><td>30471.95</td></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "Time spending on training of each models GTX 1080 Ti graphics card with 11G RAM. All GAN-based models compared here are implemented in pytorch",
"html": null,
"content": "<table/>"
}
}
}
}