{
"paper_id": "Q18-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:10:26.090829Z"
},
"title": "Detecting Institutional Dialog Acts in Police Traffic Stops",
"authors": [
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops. Relying on the theory of institutional talk, we develop a labeling scheme for police speech during traffic stops, and a tagger to detect institutional dialog acts (Reasons, Searches, Offering Help) from transcribed text at the turn (78% F-score) and stop (89% F-score) level. We then develop speech recognition and segmentation algorithms to detect these acts at the stop level from raw camera audio (81% Fscore, with even higher accuracy for crucial acts like conveying the reason for the stop). We demonstrate that the dialog structures produced by our tagger could reveal whether officers follow law enforcement norms like introducing themselves, explaining the reason for the stop, and asking permission for searches. This work may therefore inform and aid efforts to ensure the procedural justice of policecommunity interactions.",
"pdf_parse": {
"paper_id": "Q18-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops. Relying on the theory of institutional talk, we develop a labeling scheme for police speech during traffic stops, and a tagger to detect institutional dialog acts (Reasons, Searches, Offering Help) from transcribed text at the turn (78% F-score) and stop (89% F-score) level. We then develop speech recognition and segmentation algorithms to detect these acts at the stop level from raw camera audio (81% Fscore, with even higher accuracy for crucial acts like conveying the reason for the stop). We demonstrate that the dialog structures produced by our tagger could reveal whether officers follow law enforcement norms like introducing themselves, explaining the reason for the stop, and asking permission for searches. This work may therefore inform and aid efforts to ensure the procedural justice of policecommunity interactions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Improving the relationship between police officers and the communities they serve is a critical societal goal. We propose to study this relationship by applying NLP techniques to conversations between officers and community members in traffic stops. Traffic stops are one of the most common forms of police contact with community members, with 10% of U.S. adults pulled over every year (Langton and Durose, 2013 ). Yet past research on what people ex-perience during these traffic stops has mainly been limited to self-reported behavior and post-hoc narratives (Lundman and Kaufman, 2003; Engel, 2005; Brunson, 2007; Epp et al., 2014) .",
"cite_spans": [
{
"start": 386,
"end": 411,
"text": "(Langton and Durose, 2013",
"ref_id": "BIBREF35"
},
{
"start": 561,
"end": 588,
"text": "(Lundman and Kaufman, 2003;",
"ref_id": "BIBREF38"
},
{
"start": 589,
"end": 601,
"text": "Engel, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 602,
"end": 616,
"text": "Brunson, 2007;",
"ref_id": "BIBREF6"
},
{
"start": 617,
"end": 634,
"text": "Epp et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rapid adoption of body-worn cameras by police departments in the U.S. (laws in 60% of states in the U.S. encourage the use of body cameras) and across the world has provided unprecedented insight into traffic stops. 1 While footage from these cameras is used as evidence in contentious cases, the unstructured nature and immense volume of video data means that most of this footage is untapped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work by Voigt et al. (2017) demonstrated that body-worn camera footage could be used not just as evidence in court, but as data. They developed algorithms to automatically detect the degree of respect that officers communicated to drivers in close to 1,000 routine traffic stops captured on camera. It was the first study to use machine learning techniques to extract insights from this footage.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "Voigt et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This footage can be further used to unearth the structure of police-community interactions and gain a more comprehensive picture of the traffic stop as an every day institutional practice. For instance, knowing which requests the officer makes, whether and when they introduce themselves or explain the reason for the stop is a novel way to measure procedural justice; a set of fairness principles recommended by the President's Task Force on 21st Century Policing, 2 and endorsed by police departments across the U.S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose automatically extracting dialog structure from body camera footage to contribute to our understanding of police-community interactions. We rely on the notion of institutional talk (Heritage, 2005) , which posits that dialog acts, topics, and narrative are heavily defined by the institutional context. Traffic stops are a kind of institutional talk; as are, for example, doctor-patient interactions, counseling conversations, and citizen calls for help from police. We introduce a model of institutional acts for traffic stop conversations. Since the officer holds a position of power within this institutional context, their dialog behavior has a greater influence in shaping the conversation (Coupland et al., 1991; Gnisci, 2005) ; hence, we focus on the institutional acts performed by the officer in this paper.",
"cite_spans": [
{
"start": 191,
"end": 207,
"text": "(Heritage, 2005)",
"ref_id": "BIBREF26"
},
{
"start": 705,
"end": 728,
"text": "(Coupland et al., 1991;",
"ref_id": "BIBREF11"
},
{
"start": 729,
"end": 742,
"text": "Gnisci, 2005)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions of our paper: 1) A typology of institutional dialog acts to model the structure of police-driver interactions during traffic stops. 2) An institutional act tagger that works from transcribed words (78% F-score) or from raw audio (60% Fscore). 3) A classifier that uses this dialog structure to detect acts at the stop level (e.g., \"Does this stop contain a Reason?\") (81% F-score from raw audio). 4) An analysis of salient dialog structure patterns in traffic stops; demonstrating its potential as a tool for police departments to assess and improve police community interactions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Computational work on human-human conversation has long focused on dialog structure, beginning with the influential work of Grosz showing the homology between dialog and task structure (Grosz, 1977) . Recent work has integrated speech act theory (Austin, 1975) and conversational analysis (Schegloff and Sacks, 1973; Sacks et al., 1974; Schegloff, 1979) into models of dialog acts for domains like meetings (Ang et al., 2005) , telephone calls (Stolcke et al., 2006) , emails (Cohen et al., 2004) , chats (Kim et al., 2010) , and Twitter (Ritter et al., 2010) .",
"cite_spans": [
{
"start": 185,
"end": 198,
"text": "(Grosz, 1977)",
"ref_id": "BIBREF25"
},
{
"start": 246,
"end": 260,
"text": "(Austin, 1975)",
"ref_id": "BIBREF3"
},
{
"start": 304,
"end": 316,
"text": "Sacks, 1973;",
"ref_id": "BIBREF50"
},
{
"start": 317,
"end": 336,
"text": "Sacks et al., 1974;",
"ref_id": "BIBREF47"
},
{
"start": 337,
"end": 353,
"text": "Schegloff, 1979)",
"ref_id": "BIBREF51"
},
{
"start": 407,
"end": 425,
"text": "(Ang et al., 2005)",
"ref_id": "BIBREF1"
},
{
"start": 444,
"end": 466,
"text": "(Stolcke et al., 2006)",
"ref_id": "BIBREF53"
},
{
"start": 476,
"end": 496,
"text": "(Cohen et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 505,
"end": 523,
"text": "(Kim et al., 2010)",
"ref_id": "BIBREF30"
},
{
"start": 538,
"end": 559,
"text": "(Ritter et al., 2010)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our models extend this work by drawing on the notion of institutional talk (Atkinson and Drew, 1979) , an application of conversational analysis to environments in which the goals of participants are institution-specific. Actions, their sequences, and interpretations during institutional talk depend not only on the speaker (as speech act theory suggests) or the dialog (as conversational analysts argue), but they are inherently tied to the institutional context.",
"cite_spans": [
{
"start": 75,
"end": 100,
"text": "(Atkinson and Drew, 1979)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Institutional talk has been used as a tool to understand the work of social institutions. For example, Whalen and Zimmerman (1987) studied dialog structure in transcripts of citizen calls for help. They observed that the \"regular, repetitive and reproducible features of calls for police, fire or paramedic services [...] arise from situated practices responsive to the sequential and institutional contexts of this type of call\". Such recurring patterns in language and conversation exist across different institutional contexts such as doctor-patient interactions, psychological counseling, sales calls, court room conversations, as well as traffic stops (Heritage, 2005) .",
"cite_spans": [
{
"start": 103,
"end": 130,
"text": "Whalen and Zimmerman (1987)",
"ref_id": "BIBREF58"
},
{
"start": 316,
"end": 321,
"text": "[...]",
"ref_id": null
},
{
"start": 657,
"end": 673,
"text": "(Heritage, 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Deviations from these sequential configurations are consequential. A police officer failing to explain the reason for the traffic stop can lead to aggravation in the driver (Giles et al., 2007) , and an officer's perceived communication skills (e.g. do they listen, take civilian views into account) predict civilian's attitudes towards the police (Giles et al., 2006) .",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "(Giles et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 348,
"end": 368,
"text": "(Giles et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "These findings demonstrate the importance of understanding the role of institutional context in shaping conversation structure. In doing so, our paper also draws on recent research on automatically extracting structure from human-human dialog. Drawing on Grosz's original insights, Bangalore et al. (2006) show how to extract a hierarchical task structure for catalog ordering dialogs with subtasks like opening, contact-information, order-item, relatedoffers, and summary. Prabhakaran et al. (2012) and Prabhakaran et al. (2014) employ dialog act analysis to study correlates of gender and power in work emails, while Althoff et al. (2016) studied structural aspects of successful counseling conversations, and Yang et al. (2013) and Chandrasekaran et al. (2017) investigated structures in online classroom conversations that predict success or need for intervention. Our work also draws on an important line of unsupervised work that models topical structure of conversations (Blei and Moreno, 2001; Eisenstein and Barzilay, 2008; Paul, 2012; Nguyen et al., 2012) .",
"cite_spans": [
{
"start": 282,
"end": 305,
"text": "Bangalore et al. (2006)",
"ref_id": "BIBREF4"
},
{
"start": 474,
"end": 499,
"text": "Prabhakaran et al. (2012)",
"ref_id": null
},
{
"start": 619,
"end": 640,
"text": "Althoff et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 712,
"end": 730,
"text": "Yang et al. (2013)",
"ref_id": "BIBREF61"
},
{
"start": 735,
"end": 763,
"text": "Chandrasekaran et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 978,
"end": 1001,
"text": "(Blei and Moreno, 2001;",
"ref_id": "BIBREF5"
},
{
"start": 1002,
"end": 1032,
"text": "Eisenstein and Barzilay, 2008;",
"ref_id": "BIBREF12"
},
{
"start": 1033,
"end": 1044,
"text": "Paul, 2012;",
"ref_id": "BIBREF42"
},
{
"start": 1045,
"end": 1065,
"text": "Nguyen et al., 2012)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our work is closely related to the active line of research in NLP on dialog act classification. Recently, recurrent neural network-based dialog act taggers, e.g., Khanpour et al. (2016) , Li and Wu (2016) and Liu et al. (2017) , have posted state-of-the-art performance on benchmark datasets such as the Switchboard corpus (Jurafsky et al., 1997) and MRDA (Ang et al., 2005) . Since these corpora come from significantly different domains (telephone conversations and meeting transcripts, respectively) than ours, and since we are interested specifically in the institutional acts (e.g., did the officer request documentation from the driver?) rather than the general dialog acts (did the officer issue a request?), these taggers do not directly serve our purpose. Furthermore, our data is an order of magnitude smaller (around 7K sentences) than these corpora; making it infeasible to train in-domain recurrent networks.",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "Khanpour et al. (2016)",
"ref_id": "BIBREF29"
},
{
"start": 188,
"end": 204,
"text": "Li and Wu (2016)",
"ref_id": "BIBREF36"
},
{
"start": 209,
"end": 226,
"text": "Liu et al. (2017)",
"ref_id": "BIBREF37"
},
{
"start": 323,
"end": 346,
"text": "(Jurafsky et al., 1997)",
"ref_id": "BIBREF28"
},
{
"start": 356,
"end": 374,
"text": "(Ang et al., 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Prior to neural network approaches, support vector machines and conditional random fields (Cohen et al., 2004; Kim et al., 2010; Kim et al., 2012; Omuya et al., 2013) were the state-of-the-art algorithms on this task. These approaches also incorporated contextual and structural information into the classifier. For instance, Kim et al. (2012) used lexical information from previous utterances in predicting the dialog act of a current utterance; and Omuya et al. (2013) uses features such as the relative position of an utterance w.r.t the whole dialog. We draw from this line of work; we also experiment with positional and contextual features in addition to lexical features. Furthermore, we use features that capture the institutional context of the conversation.",
"cite_spans": [
{
"start": 90,
"end": 110,
"text": "(Cohen et al., 2004;",
"ref_id": "BIBREF9"
},
{
"start": 111,
"end": 128,
"text": "Kim et al., 2010;",
"ref_id": "BIBREF30"
},
{
"start": 129,
"end": 146,
"text": "Kim et al., 2012;",
"ref_id": "BIBREF31"
},
{
"start": 147,
"end": 166,
"text": "Omuya et al., 2013)",
"ref_id": "BIBREF41"
},
{
"start": 326,
"end": 343,
"text": "Kim et al. (2012)",
"ref_id": "BIBREF31"
},
{
"start": 451,
"end": 470,
"text": "Omuya et al. (2013)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "We begin with a framework for analyzing the structure of interactions in this important but understudied domain of traffic stop conversations, developed by applying a data-oriented approach to body camera footage. Our goal is to create a framework that can be a tool for police departments, policy makers, and the general public to understand, assess and improve policing practices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Institutional Dialog Acts of Traffic Stops",
"sec_num": "3"
},
{
"text": "We use the Voigt et al. (2017) ",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "Voigt et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Traffic stops possess all three characteristics of institutional talk (Heritage, 2005) : i) participants' goals are tied to their institution-relevant identity (e.g. officer & driver); ii) there are special constraints on what is allowable within the interaction; iii) there are special inferences that are particular to the context. Table 1 presents an excerpt from a traffic stop conversation from our corpus: The officer greets the community member, gives the reason for the stop, asks about personal details, issues the sanction, and closes by encouraging safe driving. We are interested in such recurring sequences of institutionspecific dialog acts, or institutional acts, which combine aspects of dialog acts and those of topical segments, all conditioned by the institutional context.",
"cite_spans": [
{
"start": 70,
"end": 86,
"text": "(Heritage, 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Traffic Stops as Institutional Talk",
"sec_num": "3.2"
},
{
"text": "To develop the taxonomy of institutional dialog acts, we begin with a data-oriented exploration: identifying recurring sequences of topic segments using the (unsupervised) mixed membership Markov model (Paul, 2012 ). 3 Figure 1 shows the topic segments assigned by a 10-topic model on the traffic stop of Table 1 . The model identified different spans of con- versation; the officer gives the reason for the stop (orange), asks for documents (blue), collects driver information (purple), then in the end, there are spans of issuing a sanction (beige) and closing (yellow).",
"cite_spans": [
{
"start": 202,
"end": 213,
"text": "(Paul, 2012",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 305,
"end": 312,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Developing the Typology",
"sec_num": "3.3"
},
{
"text": "While these topical assignments helpfully suggest a high-level notion of the structure of these conversations, they do not capture the specific acts officers do. We next turned to the procedural justice literature, which highlights specific acts. For instance, questioning the driver's legitimacy for being somewhere (why are you here?) or driving a car (whose car is it?) are acts that trigger negative reactions in drivers (Epp et al., 2014) . On the other hand, officers introducing themselves and explaining the reasons for the stop are important procedural justice facets that communicate fairness and respect (Ramsey and Robinson, 2015). Informed by the procedural justice literature, the President's Task Force recommendations, and a review of the unsupervised topic segments, two of the authors manually analyzed twenty stop transcripts to identify institutional dialog acts.",
"cite_spans": [
{
"start": 425,
"end": 443,
"text": "(Epp et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Developing the Typology",
"sec_num": "3.3"
},
{
"text": "We focused on acts that tend to recur (e.g. citations), and those with procedural justice interest (e.g. reasons, introductions), teasing apart acts with similar goals but different illocutionary force (explicitly stating vs. implying the reason for the stop; or requesting to search the vehicle vs. stating that a search was being conducted). This process resulted in an initial coding scheme of twenty two institu-tional acts in nine categories. We also observe that the recurring acts by community members were often in response to officers' acts (e.g., responding to demographic questions), as their position of power gives them higher influence in shaping the conversation (Giles et al., 2007) . Hence, we focus on officer speech to capture our institutional act annotations.",
"cite_spans": [
{
"start": 678,
"end": 698,
"text": "(Giles et al., 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Developing the Typology",
"sec_num": "3.3"
},
{
"text": "From each stop transcript, we selected all officer turns (excluding those directed to the radio dispatcher), and annotated each sentence of each turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating Institutional Acts",
"sec_num": "3.4"
},
{
"text": "In the first round, three annotators annotated the same 10 stops using the taxonomy and manual developed above with an average pair-wise interannotator agreement of \u03ba=0.79. We discussed the sources of disagreement, ratified the annotations, and updated the annotation manual to clarify act descriptions. During this process, we also updated the annotation manual to include four additional institutional acts, resulting in a set of twenty five acts in eleven categories. Table 2 presents this final typology, along with actual examples from our data.",
"cite_spans": [],
"ref_spans": [
{
"start": 471,
"end": 478,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Annotating Institutional Acts",
"sec_num": "3.4"
},
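The pair-wise agreement figures above are standard Cohen's kappa; a minimal pure-Python sketch on toy annotations (hypothetical labels, not the paper's data):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two annotators' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2  # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

# toy example: two annotators labeling ten officer utterances
ann1 = ["GREETING", "REASON", "REASON", "ORDER", "SANCTION",
        "REASON", "GREETING", "ORDER", "SANCTION", "CLOSING"]
ann2 = ["GREETING", "REASON", "ORDER", "ORDER", "SANCTION",
        "REASON", "GREETING", "ORDER", "SANCTION", "CLOSING"]
print(round(cohen_kappa(ann1, ann2), 2))  # → 0.87
```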
{
"text": "We then performed two subsequent rounds of three-way parallel annotations obtaining average pair-wise \u03ba values of 0.84 and 0.88, respectively. Once we obtained high agreement, we conducted a fourth round where each annotator annotated a separate set of 30 stops. Stops were chosen at random from the entire corpus for each round; however, seven of the previously annotated stops were incorrectly included in the final round of annotations, resulting in a total of 113 annotated stops (7081 sentences, 4245 turns). Table 1 shows resulting labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 514,
"end": 521,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Annotating Institutional Acts",
"sec_num": "3.4"
},
{
"text": "We now investigate whether we can train a model that can automatically detect the institutional acts during the course of a traffic stop. In Sections 5-7, we present an institutional act tagger, and describe three increasingly difficult evaluation settings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Detect Institutional Acts",
"sec_num": "4"
},
{
"text": "1. Using manual transcripts: We train and test an institutional act tagger on the manual transcripts. This task is similar to dialog act tagging (e.g., (Stolcke et al., 2006) ), but it has the important distinction that it needs to captures dialog structure at the intersection of the general dialog acts (e.g., requests, responses) and the topical structure. Section 5 presents the experiments on building the institutional act tagger for this domain. 2. Using ASR: We develop an automatic speech recognizer that works in our domain, and uses the text it generates, instead of manual transcripts, to train and test the model. The downstream institutional act tagging framework stays the same. This setting is not fully automatic, as we still rely on the manually identified segments of audio where officers spoke. Section 6 first presents experiments on building the ASR system for this domain, and then presents results on using ASRgenerated text for institutional act tagging. 3. From raw audio: We build automatic means to detect the segments of officers' speech, apply the ASR on those segments, and then use the text thus produced to detect institutional acts, building a fully automatic tagger with no human intervention. Section 7 first describes the experiments on detecting the officers' speech automatically, and then presents results on institutional act tagging in this fully automatic setting.",
"cite_spans": [
{
"start": 152,
"end": 174,
"text": "(Stolcke et al., 2006)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Detect Institutional Acts",
"sec_num": "4"
},
{
"text": "For all our experiments, we merge labels from all sentences in each turn, making this a multi-label (instead of multi-class) classification task. 4 Only around 7% of the institutional act bearing utterances had multiple acts. Common co-occurrences were GREETING and REASON, and GREETING and OR-DERS, e.g., Hey, turn the car off. How you doing?",
"cite_spans": [
{
"start": 146,
"end": 147,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning to Detect Institutional Acts",
"sec_num": "4"
},
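The label-merging step described above can be sketched as follows (turn IDs, utterances, and labels are hypothetical):

```python
# Merge sentence-level institutional act labels into one multi-label set per
# officer turn, as described in the paper's setup.
turn_sentences = [
    ("turn-1", "Hey, turn the car off.",  {"ORDERS"}),
    ("turn-1", "How you doing?",          {"GREETING"}),
    ("turn-2", "You ran that stop sign.", {"REASON"}),
]

turns = {}
for turn_id, _sentence, acts in turn_sentences:
    turns.setdefault(turn_id, set()).update(acts)  # union of sentence labels

# turn-1 now carries both GREETING and ORDERS, illustrating the ~7% of
# utterances with multiple acts.
```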
{
"text": "We adopt a supervised machine learning approach to the task of institutional act tagging. We draw from prior work in the area of dialog act modeling, while also adding features that specifically capture the institutional context of traffic stop conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Institutional Act Tagging from Manual Transcripts",
"sec_num": "5"
},
{
"text": "We compared three supervised text classification methods: Support Vector Machine (SVM) (Cortes and Vapnik, 1995) and Extremely Randomized Trees (ERT) (Geurts et al., 2006) , 5 which are efficient and tend to work well with smaller datasets like ours, and Convolutional Neural Network (CNN) (Kim, 2014) , which captures variable length patterns without feature engineering. For SVM, we use the one-vs-all multi-label algorithm (ERT and CNN inherently deal with multi-label classification) and use the balanced mode to address the skewed label distribution (0.5% to 3.5% positive cases). In the balanced mode, positive and negative examples are balanced at training time. For CNN, we use two convolutional layers of filter sizes 3 and 4 and 20 filters with relu activation and max-pooling with pool size 2. This is followed by two dense layers, and a final layer with sigmoid activation and binary cross entropy loss to handle multi-label classification. While some prior work in dialog act tagging (e.g., (Kim et al., 2010; Kim et al., 2012) have shown that sequence tagging algorithms such as conditional random fields (CRF) have some advantage over text classification approaches such as SVMs, preliminary experiments using CRFs revealed this to not be the case in our corpus.",
"cite_spans": [
{
"start": 87,
"end": 112,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF10"
},
{
"start": 150,
"end": 171,
"text": "(Geurts et al., 2006)",
"ref_id": "BIBREF17"
},
{
"start": 290,
"end": 301,
"text": "(Kim, 2014)",
"ref_id": "BIBREF32"
},
{
"start": 1004,
"end": 1022,
"text": "(Kim et al., 2010;",
"ref_id": "BIBREF30"
},
{
"start": 1023,
"end": 1040,
"text": "Kim et al., 2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithms",
"sec_num": "5.1"
},
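The one-vs-all multi-label reduction used for the SVM can be sketched with a stand-in binary learner (a toy cue-word scorer, not an SVM; all example data and function names are hypothetical):

```python
def train_binary(pairs):
    """Toy binary learner: remember words that occur only in positive examples."""
    pos = set().union(*[set(x.split()) for x, y in pairs if y])
    neg = set().union(*[set(x.split()) for x, y in pairs if not y])
    cues = pos - neg
    return lambda utt: bool(cues & set(utt.lower().split()))

def train_one_vs_all(examples, acts, learner):
    """One balanced binary classifier per institutional act (one-vs-all reduction)."""
    models = {}
    for act in acts:
        pairs = [(x, act in ys) for x, ys in examples]  # this act vs. the rest
        models[act] = learner(pairs)
    return models

def predict(models, utt):
    """Multi-label prediction: every act whose classifier fires."""
    return {act for act, clf in models.items() if clf(utt)}

examples = [
    ("hello there how are you", {"GREETING"}),
    ("you were doing fifty in a thirty", {"REASON"}),
    ("step out of the vehicle", {"ORDERS"}),
]
models = train_one_vs_all(examples, {"GREETING", "REASON", "ORDERS"}, train_binary)
print(predict(models, "hello how are you"))  # → {'GREETING'}
```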
{
"text": "Lexical Features: We used unigrams and bigrams as indicator features for SVM and ERT. We initialize the input layer of CNN with word embeddings trained using our entire transcribed dataset. 6 Pattern features: We use indicator features for two types of patterns. 1) For each institutional act, we hand-crafted a list of linguistic patterns; e.g., the pattern feature for GREETING included how are you, hello, and good morning, among others. 2) We use a semi-automatically built dictionary of offenses (e.g., tail light) by querying the word embedding model trained on all transcripts with a seed list of offenses, resulting in a large list of offenses and variations of their usage (e.g., break light, rear lite) with high incidence in some acts (e.g., REASON, SANCTION). (-patterns&structure) 76.3 71.9 74.0 Table 3 : Micro-averaged precision (P), recall (R) and Fscore (F) for experiments using manual transcripts.",
"cite_spans": [
{
"start": 190,
"end": 191,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 772,
"end": 793,
"text": "(-patterns&structure)",
"ref_id": null
},
{
"start": 809,
"end": 816,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
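A minimal sketch of the hand-crafted pattern indicator features (the pattern lists here are illustrative stand-ins, not the paper's full lists):

```python
# Hypothetical pattern lists in the spirit of the hand-crafted features above.
PATTERNS = {
    "GREETING": ["how are you", "hello", "good morning"],
    "REASON":   ["the reason i stopped you", "you were speeding", "tail light"],
}

def pattern_features(utterance):
    """Indicator feature per act whose pattern list matches the utterance."""
    text = utterance.lower()
    return {f"PAT_{act}" for act, pats in PATTERNS.items()
            if any(p in text for p in pats)}
```

Usage: `pattern_features("Hello there, the reason I stopped you is your tail light.")` fires both the GREETING and REASON indicators.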
{
"text": "Structural features: 1) The number of words in the utterance, since some acts (e.g., GREETING) require fewer words than others (e.g., SANCTION). We binned this feature into four bins: <3, 4-10, 11-20, and >20.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "2) The position of the utterance within the conversation (e.g., SANCTION is likely to happen late, and GREETING early), binned to one or more of: first five, first quarter, first third, first half, last half, last third, last quarter, and last five.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
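The two structural features can be sketched directly from the bin definitions above (the exact boundary for three-word utterances, and the overlap of the position bins, are our reading of the description):

```python
def length_bin(n_words):
    """Bin utterance length into the four bins used as structural features."""
    if n_words <= 3:
        return "LENGTH_<=3"
    if n_words <= 10:
        return "LENGTH_4-10"
    if n_words <= 20:
        return "LENGTH_11-20"
    return "LENGTH_>20"

def position_bins(idx, n_utts):
    """Overlapping position bins for utterance idx (0-based) of n_utts total."""
    bins, frac = [], (idx + 1) / n_utts
    if idx < 5:
        bins.append("FIRST_FIVE")
    if frac <= 1 / 4:
        bins.append("FIRST_QUART")
    if frac <= 1 / 3:
        bins.append("FIRST_THIRD")
    if frac <= 1 / 2:
        bins.append("FIRST_HALF")
    if frac > 1 / 2:
        bins.append("LAST_HALF")
    if frac > 2 / 3:
        bins.append("LAST_THIRD")
    if frac > 3 / 4:
        bins.append("LAST_QUART")
    if idx >= n_utts - 5:
        bins.append("LAST_FIVE")
    return bins
```

Note the position bins overlap by design (an utterance in the first five is also in the first quarter, third, and half), so one utterance contributes several indicator features.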
{
"text": "Other features: We tried other features such as 1) ngrams from previous utterances, 2) ngrams from driver's responses, 3) dependency parse patterns, 4) word/sentence embeddings, and 5) topic assignments obtained from the mixed membership Markov model (Paul, 2012) discussed in Section 3.3. These features turned out not to be helpful for this task, and we do not include those results here.",
"cite_spans": [
{
"start": 251,
"end": 263,
"text": "(Paul, 2012)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Table 3 presents micro-averaged (i.e., weighted average of each class) precision, recall and F-measure obtained on 10-fold cross validation. 7 While ERT posted the highest precision of 80.9% at a low recall of 63.6%, SVM reported the highest recall of 76.2% without a huge dent in precision. Overall, we obtain the best micro-averaged F-score of 77.5% using SVM. CNN performed worse than both ERT and SVM. 8 We also performed an ablation study to see the relative importance of features in the SVM model. As expected, the ngram features contribute the most; removing them drastically lowered performance. Patterns and structural features had a smaller impact on performance.",
"cite_spans": [
{
"start": 406,
"end": 407,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5.3"
},
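The micro-averaged scores reported in Table 3 pool counts across all acts rather than averaging per-class scores; a minimal sketch of that computation over multi-label predictions (toy data, not the paper's):

```python
def micro_prf(gold, pred):
    """Micro-averaged precision/recall/F: pool true positives, false positives,
    and false negatives across all labels before taking the ratios."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# toy multi-label turns: gold vs. predicted institutional acts
gold = [{"GREETING"}, {"GREETING", "REASON"}]
pred = [{"GREETING"}, {"REASON"}]
p, r, f = micro_prf(gold, pred)
```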
{
"text": "We inspected the weights assigned to the features by a model trained on the entire dataset. The models created for each institutional act had at least one pattern or structure feature in the top twenty five features. Figure 2 shows the feature weights assigned to the model detecting GREETING. The model up-weighted utterances with greeting patterns (GREETINGS), first utterances (FIRST), and utterances in the first quarter (FIRSTQUART), while down-weighting longer utterances (LENGTH 11-20) and those that mention lenience (LENIENCE).",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 225,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5.3"
},
{
"text": "The institutional act tagger of Section 5 relies on manual transcriptions, making it not scalable to the thousands of traffic stops conducted every month. We now investigate using automatic speech recognition, while assuming manual segmentation, i.e., we know the time segments where an officer spoke to the driver; in the next section we explore the additional task of automatic officer turn detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Institutional Act Tagging using ASR",
"sec_num": "6"
},
{
"text": "Traffic stops have considerable noise (wind, traffic, horns), overlap, and difficult vocabulary (names, addresses, jargon), making it a challenging domain for off-the-shelf automatic speech recognizers (ASR). However, our 35 hours of transcribed speech is insufficient to train a domain-specific recognizer. We Train 603 + 2435 407,408 494 Dev 66 3,241 3.6 Test 113 4,248 4.6 Table 4 : Data used to build the ASR models.",
"cite_spans": [],
"ref_spans": [
{
"start": 311,
"end": 395,
"text": "Train 603 + 2435 407,408 494 Dev 66 3,241 3.6 Test 113 4,248 4.6 Table 4",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "6.1"
},
{
"text": "therefore employ two data augmentation techniques. First, we perturb our data by frame-shifting and filterbank adjustment following the procedure described in (Ko et al., 2015) . In frame-shifting, we change the starting point of each frame, making features generated from these frames slightly different from the original ones. For filterbank adjustment, we move the locations of the center frequencies of filterbank triangular frequency bins during feature extraction. This method increases our training data 5-fold to 180 hours. Second, we make use of the 300-hour Switchboard telephone speech dataset (Godfrey and Holliman, 1997) to create additional data. We first upsample Switchboard speech to the 16 KHz of our data, and then mix them with noise samples randomly picked from our data where speech is not identified, using a random speech-tonoise-ratio between 0 and 10. This method contributes another 300 hours of speech for training.",
"cite_spans": [
{
"start": 159,
"end": 176,
"text": "(Ko et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 605,
"end": 633,
"text": "(Godfrey and Holliman, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Recordings Utterances Hours",
"sec_num": null
},
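The noise-mixing step scales a noise sample so that the speech-to-noise ratio hits a randomly drawn target before adding it to the speech. A toy sketch of that scaling (synthetic signals here; the actual pipeline operates on 16 kHz waveforms):

```python
import math
import random

# Sketch of the noise-mixing augmentation: scale a noise sample so the
# speech-to-noise ratio equals a target drawn uniformly from 0-10 dB,
# then add it to the speech. Signals below are toy stand-ins.

def mix_at_snr(speech, noise, snr_db):
    p_speech = sum(x * x for x in speech) / len(speech)
    p_noise = sum(x * x for x in noise) / len(noise)
    # Choose a scale so that 10*log10(p_speech / p_noise_scaled) == snr_db.
    scale = math.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(speech, noise)]

speech = [math.sin(0.1 * t) for t in range(1000)]
noise = [random.uniform(-0.5, 0.5) for _ in range(1000)]
mixed = mix_at_snr(speech, noise, random.uniform(0.0, 10.0))
```

The frame-shifting and filterbank perturbations of Ko et al. (2015) happen at feature-extraction time instead and are not shown here.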
{
"text": "We implemented two acoustic models, a Bidirectional Long Short-Term Memory network (BLSTM) (Graves et al., 2013 ) and a Deep Neural Net Hidden Markov Model (DNN-HMM) tri-phone baseline. While LSTM based approaches generally work better, they are much slower to train, so we wanted to know if their word error improvements indeed translated to act tagger improvements. DNN-HMM system training follows the standard pipeline in the Kaldi toolkit (Povey et al., 2011; Vesel\u00fd et al., 2013) . Frame alignments generated from a traditional Gaussian mixture model based system are used as targets and 40-dimension fMLLR features (Gales, 1998) are used as inputs to the DNN to aid speaker adaptation. The network was trained using Restricted Boltzmann Machine (RBM) based pretraining (Salakhutdinov et al., 2007) and then discriminatively trained using stochastic gradient descent with cross-entropy as loss function. ( et al., 2013) describes more training details. We trained the BLSTM using the recipe proposed by Mohamed et al. (2015) . The BLSTM is used to model short segments of speech (with a sliding window of 40 frames), and predict frame-level HMM states at each time frame 9 . We use 6 hidden layers and 512 LSTM cells in each direction. Dropout (Srivastava et al., 2014) , peephole connections (Gers et al., 2002) and gradient clipping are adopted to stabilize training (Sak et al., 2014) . As in DNN-HMM training, fMLLR features and frame alignments are used as inputs and targets respectively.",
"cite_spans": [
{
"start": 91,
"end": 111,
"text": "(Graves et al., 2013",
"ref_id": "BIBREF24"
},
{
"start": 443,
"end": 463,
"text": "(Povey et al., 2011;",
"ref_id": "BIBREF43"
},
{
"start": 464,
"end": 484,
"text": "Vesel\u00fd et al., 2013)",
"ref_id": "BIBREF55"
},
{
"start": 621,
"end": 634,
"text": "(Gales, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 775,
"end": 803,
"text": "(Salakhutdinov et al., 2007)",
"ref_id": "BIBREF49"
},
{
"start": 909,
"end": 910,
"text": "(",
"ref_id": null
},
{
"start": 1008,
"end": 1029,
"text": "Mohamed et al. (2015)",
"ref_id": "BIBREF39"
},
{
"start": 1249,
"end": 1274,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF52"
},
{
"start": 1298,
"end": 1317,
"text": "(Gers et al., 2002)",
"ref_id": "BIBREF16"
},
{
"start": 1374,
"end": 1392,
"text": "(Sak et al., 2014)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Modeling",
"sec_num": "6.2"
},
{
"text": "For decoding, frame posteriors from the acoustic model are fed into a weighted finite state transducer with HMMs, context-dependent tri-phone models, a lexicon, 10 and a 3-gram language model with Kneser-Ney smoothing (Kneser and Ney, 1995) .",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "(Kneser and Ney, 1995)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Modeling",
"sec_num": "6.2"
},
{
"text": "To mitigate language model data scarcity, we use transcriptions from the Switchboard and Fisher (Cieri et al., 2004) corpora, adding about 3.12M and 21.1M words, respectively. Separate language models are trained on these datasets, and then interpolated with the traffic stop language model; interpolation weights were chosen by minimizing perplexity on a separate Dev set. Table 5 shows the perplexities of different language models on this Dev set. Table 4 shows statistics of the data used to build the ASR system. We kept aside the 113 institutional act annotated stops from Section 3 as test set. The remaining 669 stops were divided 9:1 into Train and Dev sets. The Train set also includes the 2435 recordings from the Switchboard corpora.",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "(Cieri et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 374,
"end": 381,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 451,
"end": 458,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Model Data Augmentation",
"sec_num": "6.3"
},
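Choosing interpolation weights by minimizing Dev-set perplexity can be sketched with a toy example. The models below are unigram stand-ins (the paper interpolates 3-gram models) and all vocabularies and probabilities are invented:

```python
import math

# Sketch of picking a language-model interpolation weight by grid search
# over Dev-set perplexity. Toy unigram models; the backoff floor 1e-9
# stands in for proper smoothing.

def perplexity(probs, weight, dev):
    """probs: (in_domain, out_domain) dicts of word -> probability."""
    in_dom, out_dom = probs
    log_sum = 0.0
    for w in dev:
        p = weight * in_dom.get(w, 1e-9) + (1 - weight) * out_dom.get(w, 1e-9)
        log_sum += math.log(p)
    return math.exp(-log_sum / len(dev))

def best_weight(probs, dev, grid=None):
    grid = grid or [i / 20 for i in range(1, 20)]
    return min(grid, key=lambda lam: perplexity(probs, lam, dev))

in_domain = {"license": 0.4, "registration": 0.4, "the": 0.2}
out_domain = {"the": 0.5, "uh": 0.3, "license": 0.2}
dev = ["license", "registration", "the", "license"]
lam = best_weight((in_domain, out_domain), dev)
```

Because the Dev text is dominated by in-domain words, the search favors a heavy in-domain weight, mirroring why traffic-stop data keeps most of the mass after interpolation with Switchboard and Fisher.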
{
"text": "Dev Test DNN 57.0 48.5 BLSTM 49.7 45.0 BLSTM (-data augmentation) 56.9 51.4 BLSTM (-LM interpolation) 50.2 45.7 Table 7 : Micro-averaged F-scores on institutional act prediction using different ASR sources. Table 6 shows word error rates under different settings. Overall, we obtain relatively high error rates, largely due to the noisy environment of the audio in this domain. BLSTM performs better than DNN-HMM, consistent with prior research (Mohamed et al., 2015; Sak et al., 2014) . 11 Interpolating Switchboard and Fisher language models provides a further boost of 0.7 percentage points.",
"cite_spans": [
{
"start": 445,
"end": 467,
"text": "(Mohamed et al., 2015;",
"ref_id": "BIBREF39"
},
{
"start": 468,
"end": 485,
"text": "Sak et al., 2014)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 7",
"ref_id": null
},
{
"start": 207,
"end": 214,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We now use text generated by ASR to train and test the institutional act tagger of Section 4. To increase recall, we also made use of N-best list output from the ASR systems, collecting ngram and pattern features from the top 10 candidate transcriptions. The L1 penalty in the SVM limits the impact of the resulting noisier ngrams on precision. Table 7 presents micro-averaged F-scores. BLSTM with 10Best obtained the best F-score of 65.3. While using 10Best lists only helped marginally for BLSTM, it helped the DNN enough to eliminate most of the gap in performance with BLSTMs. Our results suggest that downstream tasks with efficiency constraints could employ DNNs without a huge dent in performance by making use of NBest or lattice output. Table 8 : Micro-averaged F-scores on institutional act prediction from raw audio using different ASR sources.",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 7",
"ref_id": null
},
{
"start": 746,
"end": 753,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Institutional Act Tagging Experiments",
"sec_num": "6.5"
},
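Pooling features over the 10-best list can be sketched as a union of n-gram sets across hypotheses; features from lower-ranked hypotheses raise recall, and the L1-regularized classifier suppresses the noisier ones. The hypotheses below are invented examples, not actual ASR output:

```python
# Sketch: pooling n-gram features over an ASR N-best list, as done when
# collecting features from the top 10 candidate transcriptions.

def ngrams(tokens, n_max=2):
    return {" ".join(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)}

def nbest_features(hypotheses, n_max=2):
    feats = set()
    for hyp in hypotheses:
        feats |= ngrams(hyp.split(), n_max)
    return feats

# Hypothetical 3-best list for one officer turn.
nbest = [
    "the reason i stopped you",
    "the reason is stopped you",
    "they reason i stopped you",
]
feats = nbest_features(nbest)
```

Even when the 1-best hypothesis garbles a cue phrase, a lower-ranked hypothesis often preserves it, which is why the DNN recognizer closes much of its gap with the BLSTM under this strategy.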
{
"text": "We now turn to the task of detecting institutional acts directly from raw body camera audio. This requires detecting spans with speech activity and distinguishing them from noise-voice activity detection-and identifying segments spoken by the police officers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Institutional Act Tagging from Raw Audio",
"sec_num": "7"
},
{
"text": "Our goal is to find regions of the audio with a high probability of being officer speech. We could not build a standard supervised officer-versus-other classifier, because the stops contain large untranscribed regions of officer speech (we did not transcribe segments where the officer was, for example, talking to the dispatcher in the car). We therefore instead built a two-output classifier to discriminate between the officer and community member speech, and used a tuned threshold (0.55) on the posterior probability of officer as our voice activity detector, drawing on the intuitions of (Williams and Ellis, 1999; Verma et al., 2015) who found that posterior features on speech tasks also improved speech/nonspeech performance. Our model is a 3-layer fully connected neural network with 1024 neurons trained with cross entropy loss. 12 Figure 3 sketches the architecture. We run the classifier on each .5 second span; (recall=.97 and precision = .90 on the Dev set of Table 4 ), and then merge classifications to a single turn if adjacent spans are classified as officer speech, with a 500 ms lenience for pauses.",
"cite_spans": [
{
"start": 594,
"end": 620,
"text": "(Williams and Ellis, 1999;",
"ref_id": null
},
{
"start": 621,
"end": 640,
"text": "Verma et al., 2015)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [
{
"start": 843,
"end": 851,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 975,
"end": 982,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Finding Officer Speech Segments",
"sec_num": "7.1"
},
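The thresholding and merging step described above can be sketched directly: classify fixed 0.5 s windows, keep those whose officer posterior clears 0.55, and merge positives into turns while tolerating gaps up to 500 ms. The posterior values below are made up:

```python
# Sketch of the span-merging step for officer-speech detection.

def officer_turns(posteriors, span=0.5, threshold=0.55, lenience=0.5):
    """posteriors: officer probability for each consecutive 0.5 s span.
    Returns merged (start_sec, end_sec) officer turns."""
    merged = []
    for i, p in enumerate(posteriors):
        if p < threshold:
            continue
        start, end = i * span, (i + 1) * span
        if merged and start - merged[-1][1] <= lenience:
            merged[-1] = (merged[-1][0], end)   # extend the previous turn
        else:
            merged.append((start, end))
    return merged

posteriors = [0.9, 0.8, 0.2, 0.7, 0.1, 0.1, 0.1, 0.95]
turns = officer_turns(posteriors)
```

Here the low-posterior window at 1.0-1.5 s is bridged by the 500 ms lenience, yielding one turn from 0 to 2 s and a separate turn at 3.5-4 s.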
{
"text": "We now present experiments using the automatically identified officer speech segments. At training time, we use the ASR generated text using gold segments; at test time, we use the same ASR model to generate text for the predicted segments. Since the predicted segments do not exactly match gold segments, we use a fuzzy-matching approach for evaluation. If a gold segment contains an act and an overlapping predicted segment has the same act, we consider it a true positive. If a gold segment contains an act, but none of the overlapping predicted segments have that act, it is counted as a false negative. If an act is identified in one of the predicted segments, without any of the overlapping gold segments having it, then we consider it a false positive. Table 8 presents results using this evaluation scheme. Again, BLSTM using the 10Best strategy obtained the best F-score of 59.8%. Both BLSTM and DNN benefited significantly from using the 10Best likely predictions. As in the ASR experiments, the DNN substantially closes the gap in performance by using the 10Best strategy.",
"cite_spans": [],
"ref_spans": [
{
"start": 760,
"end": 767,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Institutional Act Tagging Experiments",
"sec_num": "7.2"
},
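The fuzzy-matching evaluation above reduces to overlap tests between gold and predicted segments. A minimal sketch, with invented segments represented as (start, end, {acts}) triples:

```python
# Sketch of the overlap-based (fuzzy-matching) evaluation: a gold act is
# a true positive if any time-overlapping predicted segment carries the
# same act; unmatched gold acts are false negatives, unmatched predicted
# acts are false positives.

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def fuzzy_counts(gold, pred):
    tp = fn = fp = 0
    for g in gold:
        for act in g[2]:
            if any(overlaps(g, p) and act in p[2] for p in pred):
                tp += 1
            else:
                fn += 1
    for p in pred:
        for act in p[2]:
            if not any(overlaps(p, g) and act in g[2] for g in gold):
                fp += 1
    return tp, fn, fp

gold = [(0.0, 3.0, {"GREETING"}), (5.0, 9.0, {"REASON"})]
pred = [(0.5, 2.5, {"GREETING"}), (5.5, 8.0, {"ORDER"})]
```

In this toy example the GREETING is recovered despite imperfect boundaries, while the missed REASON and the spurious ORDER are penalized, which is exactly the behavior the scheme is meant to capture.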
{
"text": "Our three previous sets of models focused on labeling each officer turn with one or more institutional acts. For many purposes, it suffices to ask a far simpler question: does an act occur somewhere in the traffic stop? From a procedural justice standpoint, for example, we want to know whether the officer explained the reason for the stop; we may not care about the turn in which the reason occurred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stop Level Act Detection",
"sec_num": "8"
},
{
"text": "We call this task stop-level act detection, in which each stop is labeled as a positive instance of an act if that particular act occurred in it in the gold labels. Our algorithm is simple: run our best turn-based act tagger, and if the tagger labels an institutional act anywhere in the conversation, tag the conversation as having that class. 13 We explore all three settings: manual segments and transcripts, manual segments with ASR, and automatic segments with ASR. We compare our results with a dialog-structureignorant lexical baseline: simply merge all text features (ngrams and patterns) from all the officer turns in a stop and use them to classify whether the stop did or didn't contain an act. Our goal here is to see whether dialog structure is useful for this task; if so, the tagger based on dialog turns should outperform the global text classifier. Table 10 shows that using the output of the turnbased classifier to do stop classification offers a huge advantage over the structure-ignorant baseline, reducing F-score error by 49% while using manual transcripts, and by 22% while applied to raw audio. Table 9 and Table 11 summarize the different experiments presented in Sections 4-8. Table 9 breaks down performance for each of the 11 acts, while Table 11 compares turn-level to stop-level results.",
"cite_spans": [
{
"start": 345,
"end": 347,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 866,
"end": 874,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 1120,
"end": 1140,
"text": "Table 9 and Table 11",
"ref_id": "TABREF1"
},
{
"start": 1204,
"end": 1211,
"text": "Table 9",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Stop Level Act Detection",
"sec_num": "8"
},
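The stop-level algorithm is a disjunction over turn-level tags: a stop contains an act if any turn is tagged with it. A short sketch (the per-turn tags are invented output of a hypothetical tagger):

```python
# Sketch of stop-level act detection: take the union of act tags over
# all officer turns in a stop.

def stop_level_acts(turn_tags):
    """turn_tags: list of per-turn act sets. Returns acts found anywhere."""
    acts = set()
    for tags in turn_tags:
        acts |= tags
    return acts

turn_tags = [{"GREETING"}, set(), {"REASON", "ORDER"}, set(), {"CLOSING"}]
stop_acts = stop_level_acts(turn_tags)
```

Because a single correctly tagged turn suffices, stop-level detection tolerates turn-level errors, which helps explain the jump from 78% F-score at the turn level to 89% at the stop level.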
{
"text": "Despite our relatively small training resources (113 stops with dialog act labels, ASR and segmentation training data from one month), performance at the stop level directly from raw audio is surprisingly high. For instance, detecting whether or not the community member was explained the reason they were stopped-an important question for pro- cedural justice-we obtained around 96% precision with an 84% recall from raw camera audio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stop Level Act Detection",
"sec_num": "8"
},
{
"text": "The institutional acts that happen during a traffic stop, when they occur, and in what order are all of importance to police departments. For instance, the President's Task Force on 21st Century Policing recommends (and some departments require) that officers identify themselves and state the reason for the stop as an important aspect of fairness. However, police departments currently have no way of easily measuring how consistently such policies are carried out during traffic stops. They also have no way to test the effectiveness of any training programs or policy updates that are meant to affect these conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversation Trajectories",
"sec_num": "9"
},
{
"text": "In this section, we demonstrate that our institutional act tagger provides an efficient and reliable tool for departments to detect and monitor conversational patterns during traffic stops. Specifically, we focus on conversational openings, a fundamental aspect of conversations (Schegloff and Sacks, 1973) that is also important for procedural justice (Whalen and Zimmerman, 1987; Ramsey and Robinson, 2015) . For instance, do officers start the conversations with a greeting? Are the drivers told the reason why they were stopped? Was the reason given before or after asking for their documentation?",
"cite_spans": [
{
"start": 279,
"end": 306,
"text": "(Schegloff and Sacks, 1973)",
"ref_id": "BIBREF50"
},
{
"start": 353,
"end": 381,
"text": "(Whalen and Zimmerman, 1987;",
"ref_id": "BIBREF58"
},
{
"start": 382,
"end": 408,
"text": "Ramsey and Robinson, 2015)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversation Trajectories",
"sec_num": "9"
},
{
"text": "We first apply our high performance (78% F-score at turn level; 89% at stop level) tagging model on manual transcripts. Figure 5 shows the percentage of stops made in which each of the eleven institutional acts was present. Around 17% of stops did not provide a reason at all. Only 69% of the stops started with a greeting, and an even smaller percentage of stops ended with a positive closing. While these high level statistics provide a window into these con-versations, our institutional event tagger allows us to gain deeper perspectives.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Conversation Trajectories",
"sec_num": "9"
},
{
"text": "Using the turn-level tags assigned by our system, we calculate the transition probabilities between dialog acts. Figure 4 shows a traffic stop 'narrative schema' or script, extracted from the high probability transitions. Variations from this prototypical script can be a useful tool for police departments to study how police community interactions differ across different squads, city locations, or driver characteristics like race. Figure 6 , for example, shows different conversational paths that officers take before explaining the reason for the stop. In over a quarter of the stops, either the reason is not given, or it is given after issuing orders or requesting documents. These violations of policing recommendations or requirements can impact the drivers' attitude and perception of the legitimacy of the institution. ",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 121,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 435,
"end": 443,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Conversation Trajectories",
"sec_num": "9"
},
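The transition probabilities behind the traffic-stop 'script' can be estimated by counting adjacent act pairs across tagged stops and normalizing by the first act's frequency. A sketch with invented tagged sequences:

```python
from collections import Counter

# Sketch of estimating act-to-act transition probabilities from tagged
# stops, the basis of the narrative schema in Figure 4. The sequences
# below are invented examples, not actual stop data.

def transition_probs(sequences):
    counts = Counter()
    totals = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

stops = [
    ["GREETING", "REASON", "DOCUMENTS", "CLOSING"],
    ["GREETING", "REASON", "ORDER", "CLOSING"],
    ["REASON", "DOCUMENTS", "CLOSING"],
]
probs = transition_probs(stops)
```

High-probability transitions form the prototypical script; low-probability paths (e.g., orders before the reason is given) are the deviations a department might want to audit.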
{
"text": "In this section, we outline some of the limitations of our work and discuss future directions of research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "First, our work is based on data from a single police department (the Oakland Police Department in the State of California) in the U.S. The schema we developed may need to be updated for it to be applicable to other police departments; especially those in other countries, where the laws, policies and culture around policing may be significantly different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "Due to the sensitive nature of the data, we will not be able to publicly release the raw annotations described in Section 3.4. However, we will release the labeling scheme for institutional acts in traffic stops, along with the annotation manual. We believe that it will serve as a starting point for future researchers working in this domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "Like any data-oriented approach, our machine learning models may have captured the idiosyncrasies of the particular department represented in our dataset. Since we are not aware of any other police departments' body-worn camera footage that is available for research, we have no way to guarantee that our models are directly applicable to other police departments' data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "Our institutional act tagger enables us to perform large scale social science analyses controlling for various confounds, which is infeasible to perform using hand-labeled data. However, although our models obtain high performance in detecting individual institutional acts, it may also capture biases that exist in the data (Hopkins and King, 2010) . Hence, our models should be corrected for biases before they may be used to estimate proportions in any category of stops.",
"cite_spans": [
{
"start": 325,
"end": 349,
"text": "(Hopkins and King, 2010)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "In this paper, we focus on officers' speech alone, since the conversational initiative with respect to the institutional acts lies mostly with the officer. However, drivers' speech may also need to be taken into account sometimes; e.g., if an officer says yes to a driver's question did you stop me for running the red light?, the officer has in fact given the reason for the stop even though their words alone don't convey that fact. Moreover, drivers' speech may also contribute to how the conversations are shaped. However, since the camera is further away from the driver than the officer, and since the environment is noisy, the audio quality of drivers' speech is poor, and further work is required to extract useful information from driver's speech. This is an important line of future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "The video information from the body-camera footage may potentially help in the diarization and segmentation tasks, and in analyzing the effects the institutional acts have on the driver. However, since many of the stops occur at night when the video is often dark, it is not straightforward to extract useful information from them. This is another direction of future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "10"
},
{
"text": "In this paper, we developed a typology of institutional dialog acts to model the structure of police officer interactions with drivers in traffic stops. It enables a fine-grained and contextualized analysis of dialog structure that generic dialog acts fail to provide. We built supervised taggers for detecting these institutional dialog acts from interactions captured on police body-worn cameras, achieving around 78% F-score at the turn level and 89% Fscore at the stop level. Our tagger detects institutional acts at the stop level directly from raw bodycamera audio with 81% F-score, with even higher accuracy on important acts like giving the reason for a stop. Finally, we use our institutional act tagger on one month's worth of stops to extract insights about the frequency and order in which these acts occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11"
},
{
"text": "The strains on police-community relations in the U.S. make it ever more important to develop insights into how conversations between police and community members are shaped. Until now, we have not had a reliable way to understand the dynamics of these stops. In this paper, we present a novel way to look at these conversations and gain actionable insights into their structure. Being able to automatically extract this information directly from raw body-worn camera footage holds immense potential not only for police departments, but also for policy makers and the general public alike to understand and improve this ubiquitous institutional practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11"
},
{
"text": "The core contribution of this paper is a technical one of detecting institutional acts in the domain of traffic stops, from text and from unstructured audio files extracted from raw body-worn camera footage. Current work aims to improve the performance of the segmentation and diarization components, with the hope of reducing some of the performance gap with our system run on gold transcripts. We also plan to extend the preliminary analyses we describe in Section 9, for instance, studying how the different conversational paths and the presence or absence of certain acts (such as greetings or reason) shapes the rest of the conversation, including how it changes the community member's language use. Finally, our model allows us to study whether police training has an effect on the kinds of conversations that police officers have with the communities they serve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11"
},
{
"text": "We trained the model on a subset of 541 stop transcripts from our data, exploring different numbers of topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We present turn-level (instead of sentence-level) predictions to facilitate comparisons with experiments presented in Section 6 & 7; sentence-level experiments were performed using manual transcripts and yielded slightly better numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ERT is a variant of the random forest algorithm, with the difference that the splits at each step are selected at random rather than using a preset criteria.6 In preliminary experiments, we found that SVMs using these word embeddings (or GloVe embeddings) performed worse than using ngram features directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "CNN: batch size of 10, dropout of 0.3, adam, 10 epochs. SVM: C=1, linear kernel. ERT: 100 estimators, max tree depth 75, # of features capped at 20% of all features. Parameter values obtained using grid-search within the training set for each fold.8 Since CNN performed much worse than SVM with lexical features alone (last row), presumably because of the small amount of data, we did not perform more CNN experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that this recipe is different from the end-to-end approach where LSTM model takes in the whole utterance and predict phone / word outputs directly(Graves and Jaitly, 2014) 10 CMU dictionary (CMUdict v0.7a) is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that our Test set, designed for measuring institutional act detection, consists of only police officers talking close to the camera; hence the word error rate can be lower than the Dev, which is designed to measure overall ASR performance and includes community member speech as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Patch of 210ms with a stride of 50ms. Audio was downsampled to 16kHz, and converted to 21-dimensional magnitude mel-filterbank representation covering frequencies from 0-8 kHz. FFT size was 512 with 10ms hop and 30ms frame size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the best system from each set of experiments: SVM model using ngrams, patterns, and structure features trained on manual transcripts or from the BLSTM ASR model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers as well as the action editor, Jordan Boyd-Graber, for helpful feedback on an earlier draft of this paper. This research was supported by a John D. and Catherine T. MacArthur Foundation award granted to J.L. Eberhardt and D. Jurafsky, as well as NSF grants IIS-1514268 and IIS-1159679. We also thank the City of Oakland and the Oakland Police Department for their support and cooperation in this project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Large-scale analysis of counseling conversations: An application of natural language processing to mental health",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Althoff",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "463--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computa- tional Linguistics, 4:463-476.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic dialog act segmentation and classification in multiparty meetings",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Ang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "1061--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Ang, Yang Liu, and Elizabeth Shriberg. 2005. Automatic dialog act segmentation and classification in multiparty meetings. In Proceedings of IEEE Inter- national Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 1061-1064. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Order in Court",
"authors": [
{
"first": "Maxwell",
"middle": [],
"last": "Atkinson",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Drew",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxwell Atkinson and Paul Drew. 1979. Order in Court. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "How To Do Things With Words",
"authors": [
{
"first": "Austin",
"middle": [],
"last": "John Langshaw",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Langshaw Austin. 1975. How To Do Things With Words. Oxford University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning the structure of task-driven human-human dialogs",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [
"Di"
],
"last": "Fabbrizio",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "201--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore, Giuseppe Di Fabbrizio, and Amanda Stent. 2006. Learning the structure of task-driven human-human dialogs. In Proceedings of the 21st In- ternational Conference on Computational Linguistics and 44th Annual Meeting of the Association for Com- putational Linguistics, pages 201-208. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Topic segmentation with an aspect hidden Markov model",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Pedro",
"middle": [
"J"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moreno",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 24th Annual International ACM SI-GIR conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "343--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and Pedro J. Moreno. 2001. Topic seg- mentation with an aspect hidden Markov model. In Proceedings of the 24th Annual International ACM SI- GIR conference on Research and Development in In- formation Retrieval, pages 343-348. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Police Don't Like Black people\": African-American Young Men's Accumulated Police Experiences",
"authors": [
{
"first": "K",
"middle": [],
"last": "Rod",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brunson",
"suffix": ""
}
],
"year": 2007,
"venue": "Criminology & Public Policy",
"volume": "6",
"issue": "1",
"pages": "71--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rod K. Brunson. 2007. \"Police Don't Like Black peo- ple\": African-American Young Men's Accumulated Police Experiences. Criminology & Public Policy, 6(1):71-101.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using discourse signals for robust instructor intervention prediction",
"authors": [
{
"first": "Muthu Kumar",
"middle": [],
"last": "Chandrasekaran",
"suffix": ""
},
{
"first": "Carrie",
"middle": [],
"last": "Epp",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muthu Kumar Chandrasekaran, Carrie Epp, Min-Yen Kan, and Diane Litman. 2017. Using discourse sig- nals for robust instructor intervention prediction. In Proceedings of the AAAI Conference on Artificial In- telligence.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Fisher corpus: A resource for the next generations of speech-to-text",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Cieri",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Cieri, David Miller, and Kevin Walker. 2004. The Fisher corpus: A resource for the next generations of speech-to-text. In Proceedings of the Fourth International Conference on Language Re- sources and Evaluation (LREC'04). European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to classify email into \"speech acts",
"authors": [
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Vitor",
"middle": [
"R"
],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "4",
"issue": "",
"pages": "309--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William W. Cohen, Vitor R. Carvalho, and Tom M. Mitchell. 2004. Learning to classify email into \"speech acts\". In Proceedings of the Conference on Empirical Methods in Natural Language Processing, volume 4, pages 309-316. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Supportvector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine learning",
"volume": "20",
"issue": "3",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine learning, 20(3):273-297.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Accommodation theory, communication, context and consequences",
"authors": [
{
"first": "Justine",
"middle": [],
"last": "Coupland",
"suffix": ""
},
{
"first": "Nikolas",
"middle": [],
"last": "Coupland",
"suffix": ""
},
{
"first": "Howard",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 1991,
"venue": "Contexts of Accommodation",
"volume": "",
"issue": "",
"pages": "1--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justine Coupland, Nikolas Coupland, and Howard Giles. 1991. Accommodation theory, communication, con- text and consequences. Contexts of Accommodation, pages 1-68.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bayesian unsupervised topic segmentation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 334-343. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Citizens' perceptions of distributive and procedural injustice during traffic stops with police",
"authors": [
{
"first": "Robin",
"middle": [
"S"
],
"last": "Engel",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Research in Crime and Delinquency",
"volume": "42",
"issue": "4",
"pages": "445--481",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin S. Engel. 2005. Citizens' perceptions of dis- tributive and procedural injustice during traffic stops with police. Journal of Research in Crime and Delin- quency, 42(4):445-481.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pulled Over: How Police Stops Define Race and Citizenship",
"authors": [
{
"first": "Charles",
"middle": [
"R"
],
"last": "Epp",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Maynard-Moody",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"P"
],
"last": "Haider-Markel",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles R. Epp, Steven Maynard-Moody, and Donald P. Haider-Markel. 2014. Pulled Over: How Police Stops Define Race and Citizenship. University of Chicago Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Maximum likelihood linear transformations for HMM-based speech recognition",
"authors": [
{
"first": "Mark",
"middle": [
"J",
"F"
],
"last": "Gales",
"suffix": ""
}
],
"year": 1998,
"venue": "Computer Speech & Language",
"volume": "12",
"issue": "2",
"pages": "75--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark J. F. Gales. 1998. Maximum likelihood linear transformations for HMM-based speech recognition. Computer Speech & Language, 12(2):75-98.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning precise timing with LSTM recurrent networks",
"authors": [
{
"first": "Felix",
"middle": [
"A"
],
"last": "Gers",
"suffix": ""
},
{
"first": "Nicol",
"middle": [
"N"
],
"last": "Schraudolph",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "115--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix A. Gers, Nicol N. Schraudolph, and J\u00fcrgen Schmid- huber. 2002. Learning precise timing with LSTM re- current networks. Journal of Machine Learning Re- search, 3(Aug):115-143.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Extremely randomized trees",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Geurts",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Ernst",
"suffix": ""
},
{
"first": "Louis",
"middle": [],
"last": "Wehenkel",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Learning",
"volume": "63",
"issue": "",
"pages": "3--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Geurts, Damien Ernst, and Louis Wehenkel. 2006. Extremely randomized trees. Machine Learning, 63(1):3-42.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Communication accommodation: Law enforcement and the public",
"authors": [
{
"first": "Howard",
"middle": [],
"last": "Giles",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Fortman",
"suffix": ""
},
{
"first": "Ren\u00e9",
"middle": [],
"last": "Dailey",
"suffix": ""
},
{
"first": "Valerie",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Hajek",
"suffix": ""
},
{
"first": "Michelle Chernikoff",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"O"
],
"last": "Rule",
"suffix": ""
}
],
"year": 2006,
"venue": "Applied Interpersonal Communication Matters: Family, Health, and Community Relations",
"volume": "5",
"issue": "",
"pages": "241--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Howard Giles, Jennifer Fortman, Ren\u00e9 Dailey, Valerie Barker, Christopher Hajek, Michelle Chernikoff An- derson, and Nicholas O. Rule. 2006. Communica- tion accommodation: Law enforcement and the public. Applied Interpersonal Communication Matters: Fam- ily, Health, and Community Relations, 5:241-269.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Accommodation and institutional talk: Communicative dimensions of police-civilian interactions",
"authors": [
{
"first": "Howard",
"middle": [],
"last": "Giles",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Hajek",
"suffix": ""
},
{
"first": "Valerie",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "Mei-Chen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yan",
"middle": [
"Bing"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Lee"
],
"last": "Hummert",
"suffix": ""
},
{
"first": "Michelle",
"middle": [
"C"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 2007,
"venue": "Language, Discourse and Social Psychology",
"volume": "",
"issue": "",
"pages": "131--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Howard Giles, Christopher Hajek, Valerie Barker, Mei- Chen Lin, Yan Bing Zhang, Mary Lee Hummert, and Michelle C. Anderson. 2007. Accommodation and institutional talk: Communicative dimensions of po- licecivilian interactions. In Language, Discourse and Social Psychology, pages 131-159. Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sequential strategies of accommodation: A new method in courtroom",
"authors": [
{
"first": "Augusto",
"middle": [],
"last": "Gnisci",
"suffix": ""
}
],
"year": 2005,
"venue": "British Journal of Social Psychology",
"volume": "44",
"issue": "4",
"pages": "621--643",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Augusto Gnisci. 2005. Sequential strategies of accom- modation: A new method in courtroom. British Jour- nal of Social Psychology, 44(4):621-643.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Switchboard-1 release 2",
"authors": [],
"year": null,
"venue": "Linguistic Data Consortium, Philadelphia",
"volume": "926",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Switchboard-1 release 2. Linguistic Data Consortium, Philadelphia, 926:927.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards end-toend speech recognition with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1764--1772",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and Navdeep Jaitly. 2014. Towards end-to- end speech recognition with recurrent neural networks. In International Conference on Machine Learning, pages 1764-1772.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Speech recognition with deep recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Abdel-rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6645--6649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 6645-6649. IEEE.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The representation and use of focus in dialogue understanding",
"authors": [
{
"first": "Barbara",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J. Grosz. 1977. The representation and use of fo- cus in dialogue understanding. Technical report, SRI International Menlo Park United States.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Conversation analysis and institutional talk",
"authors": [
{
"first": "John",
"middle": [],
"last": "Heritage",
"suffix": ""
}
],
"year": 2005,
"venue": "Handbook of Language and Social Interaction",
"volume": "",
"issue": "",
"pages": "103--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Heritage. 2005. Conversation analysis and institu- tional talk. Handbook of Language and Social Inter- action, pages 103-147.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A method of automated nonparametric content analysis for social science",
"authors": [
{
"first": "Daniel",
"middle": [
"J"
],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2010,
"venue": "American Journal of Political Science",
"volume": "54",
"issue": "1",
"pages": "229--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel J. Hopkins and Gary King. 2010. A method of automated nonparametric content analysis for so- cial science. American Journal of Political Science, 54(1):229-247.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Automatic detection of discourse structure for speech recognition and understanding",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 1997 IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Jurafsky, Rebecca Bates, Noah Coccaro, Rachel Martin, Marie Meteer, Klaus Ries, Elizabeth Shriberg, Andreas Stolcke, Paul Taylor, and Carol Van Ess- Dykema. 1997. Automatic detection of discourse structure for speech recognition and understanding. In Proceedings of the 1997 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 88-95. IEEE.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dialogue act classification in domainindependent conversations using a deep recurrent neural network",
"authors": [
{
"first": "Hamed",
"middle": [],
"last": "Khanpour",
"suffix": ""
},
{
"first": "Nishitha",
"middle": [],
"last": "Guntakandla",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2012--2021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamed Khanpour, Nishitha Guntakandla, and Rodney Nielsen. 2016. Dialogue act classification in domain- independent conversations using a deep recurrent neu- ral network. In Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, pages 2012-2021.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Classifying dialogue acts in one-on-one live chats",
"authors": [
{
"first": "Su Nam",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Cavedon",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "862--871",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Nam Kim, Lawrence Cavedon, and Timothy Bald- win. 2010. Classifying dialogue acts in one-on-one live chats. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 862-871. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Classifying dialogue acts in multi-party live chats",
"authors": [
{
"first": "Su Nam",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Cavedon",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation",
"volume": "",
"issue": "",
"pages": "463--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Nam Kim, Lawrence Cavedon, and Timothy Baldwin. 2012. Classifying dialogue acts in multi-party live chats. In Proceedings of the 26th Pacific Asia Con- ference on Language, Information, and Computation, pages 463-472.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improved backing-off for M-gram language modeling",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for M-gram language modeling. In Pro- ceedings of the International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181- 184. IEEE.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Audio augmentation for speech recognition",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Sixteenth Annual Conference of the International Speech Communication Association (INTERSPEECH)",
"volume": "",
"issue": "",
"pages": "3586--3589",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Ko, Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur. 2015. Audio augmentation for speech recognition. In Proceedings of Sixteenth Annual Con- ference of the International Speech Communication Association (INTERSPEECH), pages 3586-3589.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Police behavior during traffic and street stops",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Langton",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Durose",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Langton and Matthew R. Durose. 2013. Police behavior during traffic and street stops, 2011. US De- partment of Justice, Office of Justice Programs, Bu- reau of Justice Statistics Washington, DC.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Multi-level gated recurrent neural network for dialog act classification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunfang",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 26th International Conference on Computational Linguistics: Technical papers",
"volume": "",
"issue": "",
"pages": "1970--1979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Li and Yunfang Wu. 2016. Multi-level gated re- current neural network for dialog act classification. In Proceedings of the 26th International Conference on Computational Linguistics: Technical papers, pages 1970-1979.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Using context information for dialog act classification in DNN framework",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Lei",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2170--2178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Kun Han, Zhao Tan, and Yun Lei. 2017. Us- ing context information for dialog act classification in DNN framework. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 2170-2178.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Driving while black: Effects of race, ethnicity, and gender on citizen self-reports of traffic stops and police actions",
"authors": [
{
"first": "Richard",
"middle": [
"J"
],
"last": "Lundman",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Kaufman",
"suffix": ""
}
],
"year": 2003,
"venue": "Criminology",
"volume": "41",
"issue": "1",
"pages": "195--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard J. Lundman and Robert L. Kaufman. 2003. Driving while black: Effects of race, ethnicity, and gender on citizen self-reports of traffic stops and po- lice actions. Criminology, 41(1):195-220.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Deep bi-directional recurrent networks over spectral windows",
"authors": [
{
"first": "Abdel-Rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jasha",
"middle": [],
"last": "Droppo",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdel-rahman Mohamed, Frank Seide, Dong Yu, Jasha Droppo, Andreas Stoicke, Geoffrey Zweig, and Gerald Penn. 2015. Deep bi-directional recurrent networks over spectral windows. In Proceedings of 2015 IEEE Workshop on Automatic Speech Recognition and Un- derstanding (ASRU), pages 78-83. IEEE.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "SITS: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations",
"authors": [
{
"first": "Viet-An",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "78--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viet-An Nguyen, Jordan Boyd-Graber, and Philip Resnik. 2012. SITS: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 78-87. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Improving the quality of minority class identification in dialog act tagging",
"authors": [
{
"first": "Adinoyi",
"middle": [],
"last": "Omuya",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "802--807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adinoyi Omuya, Vinodkumar Prabhakaran, and Owen Rambow. 2013. Improving the quality of minority class identification in dialog act tagging. In Proceed- ings of the 2013 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, pages 802-807.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Mixed membership Markov models for unsupervised conversation modeling",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "94--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J. Paul. 2012. Mixed membership Markov mod- els for unsupervised conversation modeling. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 94-104. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motl\u00ed\u010dek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Silovsk\u00fd",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Vesel\u00fd",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Han- nemann, Petr Motl\u00ed\u010dek, Yanmin Qian, Petr Schwarz, Jan Silovsk\u00fd, Georg Stemmer, and Karel Vesel\u00fd. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society. Vinodkumar Prabhakaran, Owen Rambow, and Mona Diab. 2012. Predicting overt display of power in written dialogs. In Proceedings of the 2012 Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 518-522. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Gender and power: How gender and gender environment affect manifestations of power",
"authors": [
{
"first": "Emily",
"middle": [
"E"
],
"last": "Vinodkumar Prabhakaran",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Reid",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1965--1976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinodkumar Prabhakaran, Emily E. Reid, and Owen Rambow. 2014. Gender and power: How gender and gender environment affect manifestations of power. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1965-1976. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Final report of the President's task force on 21st century policing",
"authors": [
{
"first": "Charles",
"middle": [
"H"
],
"last": "Ramsey",
"suffix": ""
},
{
"first": "Laurie",
"middle": [
"O"
],
"last": "Robinson",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles H. Ramsey and Laurie O. Robinson. 2015. Fi- nal report of the President's task force on 21st century policing. Washington, DC: Office of Community Ori- ented Policing Services.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Unsupervised modeling of Twitter conversations",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "172--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsu- pervised modeling of Twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172-180. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A simplest systematics for the organization of turn-taking for conversation",
"authors": [
{
"first": "Harvey",
"middle": [],
"last": "Sacks",
"suffix": ""
},
{
"first": "Emanuel",
"middle": [
"A"
],
"last": "Schegloff",
"suffix": ""
},
{
"first": "Gail",
"middle": [],
"last": "Jefferson",
"suffix": ""
}
],
"year": 1974,
"venue": "Language",
"volume": "",
"issue": "",
"pages": "696--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harvey Sacks, Emanuel A. Schegloff, and Gail Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. Language, pages 696- 735.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Long short-term memory recurrent neural network architectures for large scale acoustic modeling",
"authors": [
{
"first": "Ha\u015fim",
"middle": [],
"last": "Sak",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Fran\u00e7oise",
"middle": [],
"last": "Beaufays",
"suffix": ""
}
],
"year": 2014,
"venue": "Fifteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ha\u015fim Sak, Andrew Senior, and Fran\u00e7oise Beaufays. 2014. Long short-term memory recurrent neural net- work architectures for large scale acoustic modeling. In Fifteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Restricted Boltzmann machines for collaborative filtering",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "791--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Salakhutdinov, Andriy Mnih, and Geoffrey Hin- ton. 2007. Restricted Boltzmann machines for collab- orative filtering. In Proceedings of the 24th Interna- tional Conference on Machine Learning, pages 791- 798. ACM.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Opening up closings",
"authors": [
{
"first": "Emanuel",
"middle": [
"A"
],
"last": "Schegloff",
"suffix": ""
},
{
"first": "Harvey",
"middle": [],
"last": "Sacks",
"suffix": ""
}
],
"year": 1973,
"venue": "Semiotica",
"volume": "8",
"issue": "4",
"pages": "289--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuel A. Schegloff and Harvey Sacks. 1973. Opening up closings. Semiotica, 8(4):289-327.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Identification and recognition in telephone conversation openings",
"authors": [
{
"first": "Emanuel",
"middle": [
"A"
],
"last": "Schegloff",
"suffix": ""
}
],
"year": 1979,
"venue": "Everyday Language: Studies in Ethnomethodology",
"volume": "",
"issue": "",
"pages": "23--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuel A. Schegloff. 1979. Identification and recog- nition in telephone conversation openings. Everyday- Language: Studies in Ethnomethodology, New York, Irvington, pages 23-78.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2006,
"venue": "Dialogue",
"volume": "26",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Tay- lor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2006. Dialogue act modeling for automatic tagging and recognition of conversational speech. Di- alogue, 26(3).",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Structural segmentation of Hindustani concert audio with posterior features",
"authors": [
{
"first": "Prateek",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "T",
"middle": [
"P"
],
"last": "Vinutha",
"suffix": ""
},
{
"first": "Parthe",
"middle": [],
"last": "Pandit",
"suffix": ""
},
{
"first": "Preeti",
"middle": [],
"last": "Rao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "136--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prateek Verma, T. P. Vinutha, Parthe Pandit, and Preeti Rao. 2015. Structural segmentation of Hindustani concert audio with posterior features. In Proceedings of the 2015 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 136-140. IEEE.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Semi-supervised training of deep neural networks",
"authors": [
{
"first": "Karel",
"middle": [],
"last": "Vesel\u00fd",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karel Vesel\u00fd, Mirko Hannemann, and Lukas Burget. 2013. Semi-supervised training of deep neural net- works. In Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Language from police body camera footage shows racial disparities in officer respect",
"authors": [
{
"first": "",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [
"L"
],
"last": "Eberhardt",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "114",
"issue": "25",
"pages": "6521--6526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, David Jurgens, Dan Jurafsky, and Jennifer L. Eberhardt. 2017. Language from police body cam- era footage shows racial disparities in officer respect. Proceedings of the National Academy of Sciences, 114(25):6521-6526.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Sequential and institutional contexts in calls for help",
"authors": [
{
"first": "Marilyn",
"middle": [
"R"
],
"last": "Whalen",
"suffix": ""
},
{
"first": "Don",
"middle": [
"H"
],
"last": "Zimmerman",
"suffix": ""
}
],
"year": 1987,
"venue": "Social Psychology Quarterly",
"volume": "",
"issue": "",
"pages": "172--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn R. Whalen and Don H. Zimmerman. 1987. Se- quential and institutional contexts in calls for help. So- cial Psychology Quarterly, pages 172-185.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Speech/music discrimination based on posterior probability features",
"authors": [],
"year": null,
"venue": "Proceedings of the Sixth European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Speech/music discrimination based on posterior prob- ability features. In Proceedings of the Sixth European Conference on Speech Communication and Technol- ogy.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Turn on, tune in, drop out: Anticipating student dropouts in massive open online courses",
"authors": [
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tanmay",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Adamson",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"P"
],
"last": "Rose",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 NIPS Data-Driven Education Workshop",
"volume": "10",
"issue": "",
"pages": "13--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diyi Yang, Tanmay Sinha, David Adamson, and Car- olyn P. Rose. 2013. Turn on, tune in, drop out: Anticipating student dropouts in massive open online courses. In Proceedings of the 2013 NIPS Data- Driven Education Workshop, volume 10, pages 13-20.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Topic assignments from Mixed Membership Markov Modeling (Paul, 2012) on a sample stop (turns go from top to bottom; x-axis shows probabilities assigned to each topic; at right are the top topic words). The model identifies the reason for the stop (orange), the driver's documents (blue), the driver's address and demographics (purple), the sanction (beige), and the closing (yellow)."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Top 25 most (by absolute value) weighted features in the GREETING model."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Detecting Officer Speech segments."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Prototypical conversation structure of traffic stops; transition probabilities based on 900 stops from Apr '14."
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Presence of institutional acts in the 900 stops of black or white drivers from the month of April 2014."
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Conversational Paths to Giving Reason."
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: Excerpt from a traffic stop conversation with institutional acts in [blue] (names/addresses redacted).</td></tr></table>",
"text": ""
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Typology of institutional acts during traffic stops. Column 1 shows the 11-way coarse-grained groupings, Column 2 shows the 25-way fine-grained institutional act labels used for annotations, and Column 3 shows the number of sentences labeled with each act."
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Language model perplexity on Dev set."
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>ASR Source</td><td>1Best</td><td>10Best</td></tr><tr><td>DNN</td><td>57.2</td><td>63.6</td></tr><tr><td>BLSTM</td><td>65.0</td><td>65.3</td></tr></table>",
"text": "Word error rate for different ASR models."
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Stop level institutional act presence detection results (for each label)."
},
"TABREF12": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: Summary: Micro-averaged F-scores across different text/segmentation sources.</td></tr></table>",
"text": ""
}
}
}
}