{
"paper_id": "W16-0204",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:47:53.588361Z"
},
"title": "Gender-Distinguishing Features in Film Dialogue",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Schofield",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14850",
"region": "NY"
}
},
"email": ""
},
{
"first": "Leo",
"middle": [],
"last": "Mehr",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14850",
"region": "NY"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. Author's Note (July 2020) The subsequent work below makes gender determinations based on a binary assignment assessed using statistics from most common baby names. We regret and recommend against this heuristic for several reasons:",
"pdf_parse": {
"paper_id": "W16-0204",
"_pdf_hash": "",
"abstract": [
{
"text": "Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. Author's Note (July 2020) The subsequent work below makes gender determinations based on a binary assignment assessed using statistics from most common baby names. We regret and recommend against this heuristic for several reasons:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "1. Acceptance of a 6% error rate promotes misgendering as an acceptable consequence of analysis instead of an act of systemic violence against those who would be misgendered by this system (Hamidi et al., 2018; Keyes, 2018; Cao and Daum\u00e9 III, 2020).",
"cite_spans": [
{
"start": 189,
"end": 210,
"text": "(Hamidi et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 211,
"end": 223,
"text": "Keyes, 2018;",
"ref_id": "BIBREF13"
},
{
"start": 224,
"end": 224,
"text": "",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "2. Even for characters in this dataset with binary gender, this particular labeling strategy is systematically biased against individuals with non-Western or traditionally gender-neutral names (Larson, 2017).",
"cite_spans": [
{
"start": 192,
"end": 206,
"text": "(Larson, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "3. Were the system theoretically 100% correct on the predictions it did make, the structure of this task still performs erasure by not considering gender as something that can be nonbinary, fluid, and private (Keyes, 2018). We affirm that it can be all these things.",
"cite_spans": [
{
"start": 209,
"end": 222,
"text": "(Keyes, 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Though we considered two ideas for recreating this work - using database entries for the actors and actresses playing these roles, and automatically extracting pronouns from plot synopses - we have decided that neither option is sufficient. First, casting choices in films have often used actors of a gender other than their character (including the damaging practice of casting cis actors in trans roles, e.g., Eddie Redmayne in The Danish Girl and Jared Leto in Dallas Buyers Club (Ford, 2016; Reitz, 2017)). Second, the proxy of the labels \"actor\" and \"actress\" would still cause nonbinary erasure: at the time of writing, IMDb apparently distinguishes \"actresses\" as those who use she/her pronouns and \"actors\" as those who do not or for whom it is unknown (imd, ). Third, the trope of trans identity as a plot device for the narrative of a cis character (Ford, 2016) can mean that pronouns from synopses may reflect either a character's or writer's transphobia.",
"cite_spans": [
{
"start": 481,
"end": 493,
"text": "(Ford, 2016;",
"ref_id": "BIBREF6"
},
{
"start": 494,
"end": 506,
"text": "Reitz, 2017)",
"ref_id": "BIBREF21"
},
{
"start": 858,
"end": 870,
"text": "(Ford, 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The authors sincerely apologize for the harm this paper may have caused. We have left this note instead of retracting the paper in the hope that those interested in using computational methods to understand film dialogue will consider these concerns and put forth a more inclusive theory of gender in their own analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Film characterizations often rely on archetypes as shorthand to conserve narrative space. This effect comes out strongly when examining gender representations in films: assumptions about stereotypical gender roles can help establish expectations for characters and tension. It is also worth examining whether the gendered behaviors in film reflect known language differences across gender lines, such as women's tendency towards speaking less or more politely (Lakoff, 1973), or the phenomenon of \"troubles talk,\" a ritual in which women build relationships through talking about frustrating experiences or problems in their lives (Jefferson, 1988), in contrast to a more male process of using language primarily as a means of retaining status and attention (Tannen, 1991). We look at a large sample of scripts from well-known films to try to better understand how features of conversation vary with character gender.",
"cite_spans": [
{
"start": 460,
"end": 474,
"text": "(Lakoff, 1973)",
"ref_id": "BIBREF14"
},
{
"start": 632,
"end": 649,
"text": "(Jefferson, 1988)",
"ref_id": "BIBREF12"
},
{
"start": 758,
"end": 772,
"text": "(Tannen, 1991)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We begin by examining utterances made by individual characters across a film, focusing on the classification task of identifying whether a speaker is male or female. We hypothesize that in film, speech between the two gender classes differs significantly. We isolate interesting lexical and structural features from the language models associated with male and female speech, subdividing to examine particular film genres to evaluate whether features are systematically different across all genres or whether distinguishing features differ on a per-genre basis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then focus on the text of conversations between two characters to identify whether the two speakers are both male, both female, or of opposite genders. One belief about gendered conversation expressed in films is that women and men act fundamentally differently around each other than around people of the same gender, due partly to differences in the function of speech as perceived by men and women (Tannen, 1991). We look into features that explore the hypothesis that there are significant differences in how men and women speak to each other that are not accounted for merely by the combination of a male and a female language model, and find distinguishing features in each of these three classes of language. Finally, we look at whether these conversation features have predictive power on the duration of a relationship in a film.",
"cite_spans": [
{
"start": 404,
"end": 418,
"text": "(Tannen, 1991)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our dataset comes from the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), a collection of dialogues from 617 film scripts. Of the characters in the corpus, 3015 have pre-existing gender labels. We obtain another 1358 gender labels for the remaining characters by taking the top 1000 US baby names for boys and girls and treating any character whose first listed name is on only one of these two lists as having the respective gender of the list. Based on hand-verification of a sample of 100 of these newly-added labels, we achieved 94% labeling accuracy, implying that the 4373 character labels have about 98% accuracy. In practice, many of the mislabeled names seem to be from characters named for their job title or last name, suggesting that these characters contribute fairly little to the dialogue. We investigated using IMDb data as an additional resource but discovered that variations in character naming make this task complex.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "2"
},
{
"text": "Women are less prominent than men across all films, both possessing fewer roles (30% of all roles in major films in 2014) and a smaller proportion of lead roles within them (Lauzen, 2015). This observation is matched quite well in the Movie-Dialogs corpus, where after supplementing gender labels, only 33.1% of characters are female (previously, 32.0% of the original characters were female). In addition, we record 4676 unique relationships (judged by having one or more conversations) with known character genders. A chi-squared test to compare the expected distribution of gender pairs from our character set to the actual relationships shows that the characters are not intermingling independently of gender (p < 10\u207b\u2075), with only 374 of the expected 509 relationships between women, and 2225 interactions between men compared to the expected 2099.",
"cite_spans": [
{
"start": 173,
"end": 187,
"text": "(Lauzen, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "2"
},
{
"text": "Subdividing our data further, we find that certain film genres as represented in this dataset have disproportionate representation of certain gender pairs. Table 1 shows the significant differences within genres of actual vs. expected numbers of characters and relationships of each gender type. Though we hypothesized that the gender gap may have narrowed over time, we find the gender ratio fairly consistent across time in our corpus, as shown in Figure 1. Table 1: Chi-squared test results on number of characters of each gender and number of gender relationship pairs given gender proportions. The character gender test is done in comparison to the 33% female baseline expectation for that number of characters, whereas the gender-pair tests are with respect to the expected proportion of gender pairs were one to randomly draw two characters for each of the relationships observed. Only genres with more than 100 observed characters with assigned gender were included. Stars mark significance levels of p = 0.05*, 0.01**, 0.001***, and 0.0001****.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 1",
"ref_id": null
},
{
"start": 473,
"end": 481,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 484,
"end": 491,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Description",
"sec_num": "2"
},
{
"text": "3 Methods ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "2"
},
{
"text": "Our text processing uses the Natural Language Toolkit (NLTK) (Bird et al., 2009). We use a simple tokenizer in our analysis that treats any sequence of alphanumerics as a word for our classifiers, splitting on punctuation and whitespace characters. We elect not to stem or remove stopwords, as non-contentful variation in language is important for our analysis. Based on the theory that women will have more hedging (Lakoff, 1973), we hypothesized that strength of sentiment or signals of arousal or dominance might also signal gender differences in conversation. We used sentiment labels from VADER (Hutto and Gilbert, 2014) and a list of 13,915 English words with scores describing valence, arousal, and dominance (Warriner et al., 2013). We group these features as well as several nonlexical discourse features into several primary groups, described in Table 2. We also experimented with part-of-speech labels using the Stanford POS tagger (Toutanova et al., 2003), but found they do not significantly influence results.",
"cite_spans": [
{
"start": 61,
"end": 80,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 413,
"end": 427,
"text": "(Lakoff, 1973)",
"ref_id": "BIBREF14"
},
{
"start": 597,
"end": 621,
"text": "(Hutto and Gilbert, 2014",
"ref_id": "BIBREF10"
},
{
"start": 714,
"end": 737,
"text": "(Warriner et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 942,
"end": 966,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "3.1"
},
{
"text": "We surveyed several types of simple classifiers in our prediction tasks: Gaussian and Multinomial Naive Bayes, and Logistic Regression. These implementations came from the scikit-learn Python library (Pedregosa et al., 2011). Table 2: List of feature groups. \u2206 indicates the absolute, unsigned difference between the text for each speaker. We discarded LEX features that arose fewer than 5 times.",
"cite_spans": [
{
"start": 200,
"end": 223,
"text": "(Pedregosa et al., 2011",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "3.1"
},
{
"text": "In comparing the language of males and females, we want to ensure that confounding factors do not produce spurious results; the classification tasks should not yield better or worse results because of the structure of our dataset or the data we used to train and test. The first essential measure we take is to select equal numbers of males and females from each movie. Second, we further select only characters that have a non-trivial amount of speech in the film. When specifying which characters to select for single-speaker analysis, we use only those who had at least 3 conversations with other characters, 10 utterances, and 100 words spoken in total. This removes 45% of the characters from the original dataset. While the specific numbers are arbitrary, they were roughly selected after examining random character dialogs by hand. Third, we control for the language of a given movie or the style of its screenwriter(s) by using a leave-one-label-out split when running our classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Controlling Data",
"sec_num": "3.2"
},
{
"text": "Similarly for conversations, we control for each of the gender classes (male-male, female-male, and female-female) by including from each film the same number of conversations from each class. This results in a set of roughly 3500 conversations for consideration: a substantial subset of the original corpus, but one that represents a variety of dialogue lengths and is less affected by the gender variation within particular films, to avoid classifying film content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Controlling Data",
"sec_num": "3.2"
},
{
"text": "We first examine the language differences in male and female utterances, selecting an equal number k_i of random male and female characters from each movie i. We then develop language models based upon the unigram, bigram, and trigram frequencies across all utterances from selected male characters versus female characters. As our focus is on usage of common words, we use raw term frequency instead of boolean features or TF-IDF weighting. While this does not fully control for the amount of speech of a given gender, it does control for variation in gender ratios and conversation subjects within films and genres.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating individual gender features",
"sec_num": "4.1"
},
{
"text": "We analyze the interesting n-grams using the weighted log-odds ratio metric with an informative Dirichlet prior (Monroe et al., 2008), distinguishing the significant tokens based upon single-tailed z-scores. Notably, with a large vocabulary, it is expected that some terms will randomly have large z-scores. We therefore only highlight n-grams with z-scores of greater magnitude than what arose in 19 out of 20 tests of random reshufflings of the lines of dialogue between gender classes (equivalent to the 95% certainty level of what is significant). The important n-grams are displayed in Figure 2.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "(Monroe et al., 2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 591,
"end": 599,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating individual gender features",
"sec_num": "4.1"
},
{
"text": "The findings here conform to expectations, such as cursing as a male-favored practice (Cressman et al., 2009) and polite words like greetings and \"please\" as more favored by women (Holmes, 2013). Interesting as well is the predominance of references to women in men's speech and men in women's speech: \"she\" and \"her\" are strongly favored by male speakers, while \"he\" and \"him\" are strongly favored by female speakers (p < 0.00001). We also observe that in contrast to men's cursing, adverbial emphatics like \"so\" and \"really\" are favored by women, conforming to classic hypotheses about gendered language in the real world (Pennebaker et al., 2003; Lakoff, 1973). Figure 2: Tokens with significance plotted with respect to log-odds ratio. We ran 20 randomization trials and found that in those trials, the largest magnitude z-score we saw was 4.7. Blue labels at the top refer to female words above that significance magnitude, while orange labels at the bottom refer to words below that significance.",
"cite_spans": [
{
"start": 98,
"end": 121,
"text": "(Cressman et al., 2009)",
"ref_id": "BIBREF3"
},
{
"start": 192,
"end": 206,
"text": "(Holmes, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 638,
"end": 663,
"text": "(Pennebaker et al., 2003;",
"ref_id": "BIBREF20"
},
{
"start": 664,
"end": 677,
"text": "Lakoff, 1973)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 680,
"end": 688,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating individual gender features",
"sec_num": "4.1"
},
{
"text": "Given only the words a character has spoken in conversations over the course of the movie, can we accurately predict the character gender?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Speaker Gender",
"sec_num": "4.2"
},
{
"text": "As outlined in Controlling Data, we select characters equitably from each movie, each having spoken a significant amount during the movie. Using this method, we obtain 552 male and 552 female characters. We extract features from all the lines spoken by each of these characters (as outlined in Feature Engineering), and train and test various scikit-learn built-in classifiers (as in Classifiers) in 10-fold cross-validation. As surveyed here, using a Logistic Regression classifier with different features, we obtain 72.2% classification accuracy (per-feature accuracy outlined in Table 3). A multinomial Naive Bayes classifier, to which we applied the more appropriate leave-one-label-out cross-validation method to split training and test data, performs slightly better, at 73.6%. Table 3: Performance of single-speaker gender classification. Bolded outcomes are those not statistically significantly different from the best result (using a two-tailed z-test).",
"cite_spans": [],
"ref_spans": [
{
"start": 585,
"end": 592,
"text": "Table 3",
"ref_id": null
},
{
"start": 786,
"end": 793,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Predicting Speaker Gender",
"sec_num": "4.2"
},
{
"text": "While the previous section demonstrates systemic differences in language between male and female speakers, an additional factor to consider is the conversation participants of each of these dialogues. We can hypothesize that, in addition to having different lexical content between men and women, movies also demonstrate significant content differences between pairs of interacting genders, such that the conversation patterns of men and women talking to each other have different content than same-gendered conversations. We can examine this hypothesis by repeating the analysis performed on single characters throughout a film on individual conversations from films. We use the controlled dataset described in the Methods section, this time contrasting each class of gender pair: male-male, female-male, and female-female (MM, FM, and FF, respectively). We include the most significant words in each class in Table 4. As with the single-gender analysis, we see that men seem to speak about women with other men, and women about men with other women. We also note that several pronouns, including \"she\" and \"he\" from before, are actually considered statistically less probable in two-gendered conversations.",
"cite_spans": [
{
"start": 824,
"end": 854,
"text": "(MM, FM, and FF, respectively)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 912,
"end": 919,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating relationship text",
"sec_num": "4.3"
},
{
"text": "This is an interesting signal of men speaking differently around men than around women, which, in conjunction with the high log-odds ratio of \"feel\", \"you\", and \"you love\" favoring dual-gendered conversations, suggests that men and women are more likely to be talking about feelings and each other, while they are more likely to talk about experiences of other-gendered people in their lives with their same-gendered friends. While this finding does not fully support the notion that women and men are not friends in films, it does suggest that men and women in films are typically interacting in a way distinct from how men and women speak without consideration of context. It also contrasts with the typical understanding of sharing personal problems as a female practice (Tannen, 1991), as it seems that both men and women in films use words discussing feelings and people of the other gender.",
"cite_spans": [
{
"start": 766,
"end": 780,
"text": "(Tannen, 1991)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating relationship text",
"sec_num": "4.3"
},
{
"text": "In order to focus on the linguistic differences of the content of conversations between our gender pair classes instead of the success of per-character gender classifiers, we took as our additional classification task the problem of predicting the gender pair of the speakers in a conversation. This task is considerably more difficult, as conversations are often short and include multiple speakers. We again use leave-one-label-out training to avoid learning dialogue cues from movies. While we Table 4: The six top words and z-scores correlated positively and negatively with each gender class when comparing log-odds ratios with respect to the other two. While a z-score of magnitude 2.8 has a significance of p < 0.003, the size of the considered vocabulary makes it unsurprising that several words have scores of this magnitude randomly; however, in twenty trials of randomization of the text between classes, only one z-score emerged greater than magnitude 3.1. We therefore infer that z-scores higher than 3.1 or lower than -3.1 are unlikely to be the consequence of random variation between classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 512,
"end": 519,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Predicting gender pairs",
"sec_num": "4.4"
},
{
"text": "can again attain better accuracy with a multinomial Naive Bayes classifier on LEX features, for our objective of simply demonstrating that features provide indication of gender differences, we are satisfied to use logistic regression to incorporate all features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting gender pairs",
"sec_num": "4.4"
},
{
"text": "As Table 5 shows, the only features producing significant improvement over a random accuracy baseline of 33% are lexical, structural, and discourse features. While the fact that lexical content has distinguishing power is perhaps unsurprising given the preceding analysis, it is somewhat more surprising that simpler structural and discourse features also produce significant results.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Predicting gender pairs",
"sec_num": "4.4"
},
{
"text": "While there are no obvious significant structural differences, one can spot minor variation that seems to provide the slight improvement above random in our classification in Figure 3. We observe in Figure 3a that while utterance length is significantly higher for all-male than all-female conversations, two-gender conversations seem to behave more like all-female conversations on average. Figure 3b looks at speaker utterances in combination with their imbalance between speakers, the \"delta\" average utterance length. Our comparison shows a significant difference between men talking to men and men talking to women. As delta utterance length here is explicitly defined as average female utterance length minus average male utterance length, this demonstrates that women are speaking in shorter utterances than men in male-female conversations, in contrast to having longer utterances overall. Word length also is significantly shorter for women than men in single-gender conversations, but in this case, the two-gendered value appears to be just the interpolation of the two single-gender values, suggesting that word length is not decreased for male characters in two-gender conversation.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 179,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 196,
"end": 205,
"text": "Figure 3a",
"ref_id": "FIGREF3"
},
{
"start": 389,
"end": 404,
"text": "Figure 3b looks",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Predicting gender pairs",
"sec_num": "4.4"
},
{
"text": "We also can see some interesting discourse features in Figure 3c. While looking at the data confirms that the average type-to-token ratio does not differ between our three conversation classes, we find that the type-token ratio difference is significantly higher for conversations between two genders, which suggests that two-gender conversations may have an increased probability of demonstrating one character as less articulate than another. Looking into the data, this slightly but insignificantly favors women having a higher type-to-token ratio than men, suggesting they use more unique words in their speech than do men in conversation. Finally, we note that conversations involving women have significantly higher unigram similarity than those between men. This hints there may be some linguistic mirroring effect that women in film demonstrate more than men, which may relate to the hypothesis that women coordinate language more to build relationships (Danescu-Niculescu-Mizil et al., 2012b; Tannen, 1991).",
"cite_spans": [
{
"start": 945,
"end": 984,
"text": "(Danescu-Niculescu-Mizil et al., 2012b;",
"ref_id": "BIBREF5"
},
{
"start": 985,
"end": 998,
"text": "Tannen, 1991)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 55,
"end": 64,
"text": "Figure 3c",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Predicting gender pairs",
"sec_num": "4.4"
},
{
"text": "In addition to testing the prediction of genders in conversations and relationships, we attempted to use the same features to distinguish from a single conversation whether a relationship would be short (3 or fewer conversations) or long (more than 3 conversations). We tested on a dataset of conversations split evenly between gender pairs and between long and short relationships, using leave-one-label-out cross-validation to test conversations from one relationship at a time. With a multinomial Naive Bayes classifier, we are able to achieve 60 \u00b1 2% accuracy with a combination of n-gram features, gender labels, and structural and discourse features. Performing ablation with each feature set used, we find that results worsen by omitting either structural features (54 \u00b1 2%) or n-gram features (54 \u00b1 2%), but that omitting gender from the classification does not significantly impact the classification accuracy (60 \u00b1 2%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship prediction",
"sec_num": "4.5"
},
{
"text": "Some of this result is predictable from the limits of the data: controlling for the number of conversations in a relationship heavily limits the number of possible short female relationships. Our dataset has few labels for minor female roles, and thus short, explicitly female-female relationships are hard to find. In addition, though, analysis of the lexical features that predict this suggests that the difference is fairly subtle, more so than a gender divide might suggest: the significant positive indicators of a long relationship (beyond what would be randomly significant) are \"it,\" \"we,\" and \"we ll\", while the negative indicators are \"name,\" \"he,\" and \"mr,\" which suggests that the identification of a collective \"we\" might show a longer connection but very little else that obviously signals a relationship's length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relationship prediction",
"sec_num": "4.5"
},
{
"text": "There exists prior work analyzing the differences in language between male and female writing, by Argamon, Koppel, Fine, and Shimoni (Argamon et al., 2003). Herring and Paolillo at Indiana University have shown relations in the style and content of weblogs to the gender of the writer (Herring and Paolillo, 2006). The investigative strategy we use for comparing n-gram probabilities stems from work done by Monroe, Colaresi, and Quinn on distinguishing the contentful differences in language of conservatives and liberals on political subjects (Monroe et al., 2008). Recently, researchers used a simpler version of n-gram analysis to distinguish funded from not-funded Kickstarter campaigns based on linguistic cues (Mitra and Gilbert, 2014).",
"cite_spans": [
{
"start": 98,
"end": 155,
"text": "Argamon, Koppel, Fine, and Shimoni (Argamon et al., 2003)",
"ref_id": "BIBREF0"
},
{
"start": 325,
"end": 340,
"text": "Paolillo, 2006)",
"ref_id": "BIBREF8"
},
{
"start": 573,
"end": 594,
"text": "(Monroe et al., 2008)",
"ref_id": "BIBREF18"
},
{
"start": 746,
"end": 771,
"text": "(Mitra and Gilbert, 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Finding words that are stereotypically male or female can be done rather quickly and roughly, yet more sophisticated techniques provide more reliable and believable data. Isolating the right subset of the data with proper controls, and then extracting useful information from that subset, yields interesting and significant results. In our small dataset, we find that simple lexical features were by far the most useful for prediction, and that sentiment and structure prove less effective in the setting of our movie scripts corpus. We also isolate several simpler discourse features that suggest interesting differences between single-gender and two-gender conversations and gendered speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "We thank C. Danescu-Niculescu-Mizil, L. Lee, D. Mimno, J. Hessel, and the members of the NLP and Social Interaction course at Cornell for their support and ideas in developing this paper. We thank the workshop chairs and our anonymous reviewers for their thoughtful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Gender, genre, and writing style in formal written texts",
"authors": [
{
"first": "Shlomo",
"middle": [],
"last": "Argamon",
"suffix": ""
},
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Fine",
"suffix": ""
},
{
"first": "Anat Rachel",
"middle": [],
"last": "Shimoni",
"suffix": ""
}
],
"year": 2003,
"venue": "Text & Talk",
"volume": "23",
"issue": "3",
"pages": "321--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shlomo Argamon, Moshe Koppel, Jonathan Fine, and Anat Rachel Shimoni. 2003. Gender, genre, and writing style in formal written texts. Text & Talk, 23(3):321-346.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural language processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "O'Reilly Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python. O'Reilly Media. Available at: http://www.nltk.org/book/.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Toward gender-inclusive coreference resolution",
"authors": [
{
"first": "Yang",
"middle": [
"Trista"
],
"last": "Cao",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4568--4595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Trista Cao and Hal Daum\u00e9 III. 2020. Toward gender-inclusive coreference resolution. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4568-4595, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Swearing in the cinema: An analysis of profanity in US teen-oriented movies, 1980-2006",
"authors": [
{
"first": "Dale",
"middle": [
"L"
],
"last": "Cressman",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Callister",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Near",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Children and Media",
"volume": "3",
"issue": "2",
"pages": "117--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dale L Cressman, Mark Callister, Tom Robinson, and Chris Near. 2009. Swearing in the cinema: An analysis of profanity in US teen-oriented movies, 1980-2006. Journal of Children and Media, 3(2):117-135.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "You had me at hello: How phrasing affects memorability",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "892--901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Danescu-Niculescu-Mizil, Justin Cheng, Jon Kleinberg, and Lillian Lee. 2012a. You had me at hello: How phrasing affects memorability. In Proceedings of ACL, pages 892-901.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Echoes of power: Language effects and power differences in social interaction",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "699--708",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012b. Echoes of power: Language effects and power differences in social interaction. In Proceedings of the 21st international conference on World Wide Web, pages 699-708. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Whose club is it anyway?: The problematic of trans representation in mainstream films--Rayon and Dallas Buyers Club",
"authors": [
{
"first": "Akkadia",
"middle": [],
"last": "Ford",
"suffix": ""
}
],
"year": 2016,
"venue": "Screen Bodies",
"volume": "1",
"issue": "2",
"pages": "64--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akkadia Ford. 2016. Whose club is it anyway?: The problematic of trans representation in mainstream films--Rayon and Dallas Buyers Club. Screen Bodies, 1(2):64-86.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gender recognition or gender reductionism? The social implications of embedded gender recognition systems",
"authors": [
{
"first": "Foad",
"middle": [],
"last": "Hamidi",
"suffix": ""
},
{
"first": "Morgan",
"middle": [
"Klaus"
],
"last": "Scheuerman",
"suffix": ""
},
{
"first": "Stacy M",
"middle": [],
"last": "Branham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M Branham. 2018. Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1-13.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Gender and genre variation in weblogs",
"authors": [
{
"first": "Susan",
"middle": [
"C"
],
"last": "Herring",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Paolillo",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Sociolinguistics",
"volume": "10",
"issue": "4",
"pages": "439--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan C Herring and John C Paolillo. 2006. Gender and genre variation in weblogs. Journal of Sociolinguistics, 10(4):439-459.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Women, men and politeness",
"authors": [
{
"first": "Janet",
"middle": [],
"last": "Holmes",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janet Holmes. 2013. Women, men and politeness. Routledge.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "VADER: A parsimonious rule-based model for sentiment analysis of social media text",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Hutto",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "Eighth International AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CJ Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International AAAI Conference on Weblogs and Social Media.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Cast/acting credits guidelines",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cast/acting credits guidelines.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On the sequential organization of troubles-talk in ordinary conversation",
"authors": [
{
"first": "Gail",
"middle": [],
"last": "Jefferson",
"suffix": ""
}
],
"year": 1988,
"venue": "Social problems",
"volume": "35",
"issue": "",
"pages": "418--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gail Jefferson. 1988. On the sequential organization of troubles-talk in ordinary conversation. Social problems, 35(4):418-441.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The misgendering machines: Trans/HCI implications of automatic gender recognition",
"authors": [
{
"first": "Os",
"middle": [],
"last": "Keyes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "2",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Os Keyes. 2018. The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1-22.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Language and woman's place",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Lakoff",
"suffix": ""
}
],
"year": 1973,
"venue": "Language in society",
"volume": "2",
"issue": "",
"pages": "45--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Lakoff. 1973. Language and woman's place. Language in society, 2(01):45-79.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gender as a variable in naturallanguage processing: Ethical considerations",
"authors": [
{
"first": "Brian",
"middle": [
"N"
],
"last": "Larson",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian N Larson. 2017. Gender as a variable in natural-language processing: Ethical considerations.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "It's a man's (celluloid) world: On-screen representations of female characters in the top 100 films of 2014",
"authors": [
{
"first": "Martha",
"middle": [
"M"
],
"last": "Lauzen",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha M Lauzen. 2015. It's a man's (celluloid) world: On-screen representations of female characters in the top 100 films of 2014.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The language that gets people to give: Phrases that predict success on kickstarter",
"authors": [
{
"first": "Tanushree",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 17th ACM conference on computer supported cooperative work & social computing",
"volume": "",
"issue": "",
"pages": "49--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanushree Mitra and Eric Gilbert. 2014. The language that gets people to give: Phrases that predict success on kickstarter. In Proceedings of the 17th ACM conference on computer supported cooperative work & social computing, pages 49-61. ACM.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Fightin' words: Lexical feature selection and evaluation for identifying the content of political conflict",
"authors": [
{
"first": "Burt",
"middle": [
"L"
],
"last": "Monroe",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"P"
],
"last": "Colaresi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"M"
],
"last": "Quinn",
"suffix": ""
}
],
"year": 2008,
"venue": "Political Analysis",
"volume": "16",
"issue": "4",
"pages": "372--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burt L Monroe, Michael P Colaresi, and Kevin M Quinn. 2008. Fightin' words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis, 16(4):372-403.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Psychological aspects of natural language use: Our words, our selves",
"authors": [
{
"first": "James",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "Matthias",
"middle": [
"R"
],
"last": "Mehl",
"suffix": ""
},
{
"first": "Kate",
"middle": [
"G"
],
"last": "Niederhoffer",
"suffix": ""
}
],
"year": 2003,
"venue": "Annual review of psychology",
"volume": "54",
"issue": "",
"pages": "547--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James W Pennebaker, Matthias R Mehl, and Kate G Niederhoffer. 2003. Psychological aspects of natural language use: Our words, our selves. Annual review of psychology, 54(1):547-577.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The representation of trans women in film and television",
"authors": [
{
"first": "Nikki",
"middle": [],
"last": "Reitz",
"suffix": ""
}
],
"year": 2017,
"venue": "Cinesthesia",
"volume": "7",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikki Reitz. 2017. The representation of trans women in film and television. Cinesthesia, 7(1):2.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "You just don't understand: Women and men in conversation",
"authors": [
{
"first": "Deborah",
"middle": [],
"last": "Tannen",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah Tannen. 1991. You just don't understand: Women and men in conversation. Virago London.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Feature-rich partof-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the ACL on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the ACL on Human Language Technology-Volume 1, pages 173-180. ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Norms of valence, arousal, and dominance for 13,915 English lemmas",
"authors": [
{
"first": "Amy",
"middle": [
"Beth"
],
"last": "Warriner",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Kuperman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
}
],
"year": 2013,
"venue": "Behavior research methods",
"volume": "45",
"issue": "4",
"pages": "1191--1207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior research methods, 45(4):1191-1207.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Proportion of character gender representation in movies, bucketed by decade, shaded by standard error.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Structural and discourse features plotted with respect to each other, focusing on the region of means (circled in black). Orange and blue refer to male-male and female-female conversations, while pink refers to two-gender conversations. Standard errors for both axes are plotted in each figure but are sometimes too small to distinguish.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>Category</td><td/><td>Key</td><td>Features</td></tr><tr><td>Lexical</td><td/><td>LEX</td><td>unigrams, bigrams, tri-</td></tr><tr><td/><td/><td/><td>grams</td></tr><tr><td colspan=\"2\">Vader Sentiment</td><td colspan=\"2\">VADER VADER scores for positive,</td></tr><tr><td>Scores</td><td/><td/><td>negative, neutral, and com-</td></tr><tr><td/><td/><td/><td>posite value</td></tr><tr><td>Valence,</td><td/><td>V/A/D</td><td>average scores across</td></tr><tr><td>Arousal,</td><td>and</td><td/><td>scored words</td></tr><tr><td>Dominance</td><td/><td/></tr><tr><td>Structural</td><td/><td>STR</td><td>average tokens per line, av-</td></tr><tr><td/><td/><td/><td>erage token length, type to</td></tr><tr><td/><td/><td/><td>token ratio</td></tr><tr><td>Discourse</td><td/><td>DIS</td><td>\u2206 average tokens per line,</td></tr><tr><td/><td/><td/><td>\u2206 average token length, \u2206</td></tr><tr><td/><td/><td/><td>type to token ratio, unigram</td></tr><tr><td/><td/><td/><td>similarity</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>: Classifier results using logistic regression</td></tr><tr><td>on the features from Table 2. Lexical features are</td></tr><tr><td>sufficient to produce nonrandom classification, as</td></tr><tr><td>well as structural and discourse features. Bolded text</td></tr><tr><td>indicates a result better than random (p &lt; 0.05).</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
}
}
}
}