{ "paper_id": "J99-3003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:47:28.929008Z" }, "title": "Vector-based Natural Language Call Routing", "authors": [ { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "", "affiliation": { "laboratory": "Lucent Technologies Bell Laboratories", "institution": "", "location": {} }, "email": "" }, { "first": "Bob", "middle": [], "last": "Carpenter", "suffix": "", "affiliation": { "laboratory": "Lucent Technologies Bell Laboratories", "institution": "", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a domain-independent, automatically trained natural language call router for directing incoming calls in a call center. Our call router directs customer calls based on their response to an open-ended How may I direct your call? prompt. Routing behavior is trained from a corpus of transcribed and hand-routed calls and then carried out using vectorbased information retrieval techniques. Terms consist of n-gram sequences of morphologically reduced content words, while documents representing routing destinations consist of weighted term frequencies derived from calls to that destination in the training corpus. Based on the statistical discriminating power of the n-gram terms extracted from the caller's request, the caller is 1) routed to the appropriate destination, 2) transferred to a human operator, or 3) asked a disambiguation question. In the last case, the system dynamically generates queries tailored to the caller's request and the destinations with which it is consistent, based on our extension of the vector model. Evaluation of the call router performance over a financial services call center using both accurate transcriptions of calls and fairly noisy speech recognizer output demonstrated robustness in the face of speech recognition errors. More specifically, using accurate transcriptions of speech input, our system correctly routed 93.8% of the calls after redirecting 10.2% of all calls to a human operator. 
Using speech recognizer output with a 23% error rate reduced the number of correctly routed calls by 4%.", "pdf_parse": { "paper_id": "J99-3003", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a domain-independent, automatically trained natural language call router for directing incoming calls in a call center. Our call router directs customer calls based on their response to an open-ended How may I direct your call? prompt. Routing behavior is trained from a corpus of transcribed and hand-routed calls and then carried out using vectorbased information retrieval techniques. Terms consist of n-gram sequences of morphologically reduced content words, while documents representing routing destinations consist of weighted term frequencies derived from calls to that destination in the training corpus. Based on the statistical discriminating power of the n-gram terms extracted from the caller's request, the caller is 1) routed to the appropriate destination, 2) transferred to a human operator, or 3) asked a disambiguation question. In the last case, the system dynamically generates queries tailored to the caller's request and the destinations with which it is consistent, based on our extension of the vector model. Evaluation of the call router performance over a financial services call center using both accurate transcriptions of calls and fairly noisy speech recognizer output demonstrated robustness in the face of speech recognition errors. More specifically, using accurate transcriptions of speech input, our system correctly routed 93.8% of the calls after redirecting 10.2% of all calls to a human operator. 
Using speech recognizer output with a 23% error rate reduced the number of correctly routed calls by 4%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The call routing task is one of directing a customer's call to an appropriate destination within a call center or directly providing some simple information, such as current loan rates, on the basis of some kind of interaction with the customer. In current systems, such interaction is typically carried out via a touch-tone system with a rigid predetermined navigational menu. The primary disadvantages of navigating menus for users are the time it takes to listen to all the options and the difficulty of matching their goals to the given options. These problems are compounded by the necessity of descending a nested hierarchy of choices to zero in on a particular activity. Even requests with simple English phrasings such as I want the balance on my car loan may require users to navigate as many as four or five nested menus with four or five options each. We describe an alternative to touch-tone menus that allows users to interact with a call router in natural spoken English dialogues just as they would with a human operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In a typical dialogue between a caller and a human operator, the operator responds to a caller request by either routing the call to an appropriate destination, or querying the caller for further information to determine where the call should be routed. Thus, in developing an automatic call router, we select between these two options as well as a third option of sending the call to a human operator in situations where the router recognizes that to automatically handle the call is beyond its capabilities. 
The rest of this paper provides both a description and an evaluation of an automatic call router that consists of 1) a routing module driven by a novel application of vector-based information retrieval techniques, and 2) a disambiguation query generation module that utilizes the same vector representations as the routing module and dynamically generates queries tailored to the caller's request and the destinations with which it is consistent, based on our extension of the vector model. The overall call routing system has the following desirable characteristics: First, the training of the call router is domain independent and fully automatic, 1 allowing the system to be easily ported to new domains. Second, the disambiguation module dynamically generates queries based on caller requests and candidate destinations, allowing the system to tailor queries to specific circumstances. Third, the system is highly robust to speech recognition errors. Finally, the overall performance of the system is high, in particular when using noisy speech recognizer output. With transcription (perfect recognition), we redirect 10.2% of the calls to the operator, correctly routing 93.8% of the remainder either with or without disambiguation. With spoken input processed automatically with recognition performance at a 23% word error rate, the percentage of correctly routed calls drops by only 4%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Call routing is similar to text categorization in identifying which one of n topics (or in the case of call routing, destinations) most closely matches a caller's request. Call routing is distinguished from text categorization by requiring a single destination to be selected, but allowing a request to be refined in an interactive dialogue. 
The closest previous work to ours is Ittner, Lewis, and Ahn (1995) , in which noisy documents produced by optical character recognition are classified against multiple categories. We are further interested in carrying out the routing process using natural, conversational language.", "cite_spans": [ { "start": 376, "end": 408, "text": "is Ittner, Lewis, and Ahn (1995)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The only work on natural language call routing to date that we are aware of is that by Gorin and his colleagues (Gorin, Riccardi, and Wright 1997; Abella and Gorin 1997; Riccardi and Gorin 1998) , who designed an automated system to route calls to AT&T operators. They select salient phrase fragments from caller requests (in response to the system's prompt of How may I help you ?), such as made a long distance and the area code for, and sometimes including phrases that are not meaningful syntactic or semantic units, such as it on my credit. These salient phrase fragments, which are incorporated into their finite-state language model for their speech recognizer, are then used to compute likely destinations, which they refer to as call types. This is done by either computing a posteriori probabilities for all possible call types (Gorin 1996) or by passing the weighted fragments through a neural network classifier (Wright, Gorin, and Riccardi 1997) . Abella and Gorin (1997) utilized the Boolean formula minimization algorithm for combining the resulting set of call types based on a hand-coded hierarchy of call types. This algorithm provides the basis for determining whether or not the goal of the request can be uniquely identified, in order to select from a set of dialogue strategies for response generation. 
", "cite_spans": [ { "start": 112, "end": 146, "text": "(Gorin, Riccardi, and Wright 1997;", "ref_id": "BIBREF6" }, { "start": 147, "end": 169, "text": "Abella and Gorin 1997;", "ref_id": "BIBREF0" }, { "start": 170, "end": 194, "text": "Riccardi and Gorin 1998)", "ref_id": "BIBREF12" }, { "start": 838, "end": 850, "text": "(Gorin 1996)", "ref_id": "BIBREF5" }, { "start": 924, "end": 958, "text": "(Wright, Gorin, and Riccardi 1997)", "ref_id": "BIBREF18" }, { "start": 961, "end": 984, "text": "Abella and Gorin (1997)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "To examine human-human dialogue behavior, we analyzed a set of 4,497 transcribed telephone calls involving actual customers interacting with human call operators at a large call center. In the vast majority of these calls, the first customer utterance contained between 1 and 20 words, while the longest first utterance had 131 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3." }, { "text": "However, these utterances included only a few content words, 2 with almost all calls containing fewer than 10 content words in the initial user utterance. Figures 1(a) and 1(b) show histograms of call lengths based on total words and content words in the initial user utterance in each call, respectively. Figure 2 shows the distribution of calls to the top 23 destinations on a log scale in our corpus. 3 The perplexity of a probability distribution provides a measure of the difficulty of classification of samples drawn from that distribution. Using the estimate of call distribution based on Figure 2 , our task perplexity is 6.836. 4", "cite_spans": [], "ref_spans": [ { "start": 306, "end": 314, "text": "Figure 2", "ref_id": null }, { "start": 596, "end": 604, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3."
}, { "text": "We further analyzed our corpus of calls along two dimensions: the semantics of caller requests and the dialogue actions for operator responses. The analysis of the semantics of caller requests is intended to examine the ways in which users typically express their goal when prompted, and is used to focus on an appropriate subset of the classes of user utterances that the call router should handle automatically (as opposed to transferring to a human operator). The analysis of the dialogue actions for operator responses, on the other hand, is intended to determine the types of responses the call router should be able to provide in response to user utterances in order to help design the response generation component of the call router. The analysis of the corpus along both dimensions was performed by the first author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3." }, { "text": "In our corpus, all callers respond to an initial open-ended prompt of /ABC/ banking services call director; how may I direct your call? Their responses varied greatly in their 2 Content words are keywords automatically extracted from the training corpus that are considered relevant for routing purposes. For details on how the list of content words is selected, see Section 4.1.2. 3 These are destinations that received more than 10 calls in the corpus we analyzed. 4 Recall that the entropy of a distribution p is the expected value of the log probability, given by H(p) = -\u03a3_x p(x) log_2 p(x). The perplexity is given by 2^H(p) and can be thought of roughly as the number of equiprobable categories that would lead to the same classification difficulty. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantics of Caller Requests", "sec_num": "3.1" }, { "text": "Distribution of calls.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "degree of specificity.
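The perplexity figure cited in footnote 4 can be reproduced in a few lines. The call counts below are made up for illustration, not the corpus distribution behind the 6.836 figure:

```python
import math

def perplexity(counts):
    # Estimate p from raw per-destination call counts, then compute
    # H(p) = -sum_x p(x) * log2 p(x) and return the perplexity 2**H(p).
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return 2 ** entropy

# A uniform distribution over 7 destinations gives perplexity 7, i.e.,
# the task is as hard as choosing among 7 equiprobable categories.
print(perplexity([50] * 7))
```

A skewed distribution yields a perplexity well below the number of destinations, which is why 23 destinations can produce a task perplexity of only 6.836.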
We roughly classified the calls into the following three broad classes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Destination Name, in which the caller explicitly specifies the name of the department to which he wishes to be transferred. The requested destination can form an answer to the operator's prompt by itself, as in deposit services, or be part of a complete sentence, as in I would like to speak to someone in auto leasing please.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Activity, in which the caller provides a description of the activity he wishes to perform, and expects the operator to transfer his call to the appropriate department that handles the given activity. Such descriptions may be ambiguous or unambiguous, depending on the level of detail the caller provides, which in turn depends on the caller's understanding of the organization of the call center. Because all transactions related to savings accounts are handled by the deposit services department in the call center we studied, the request I want to talk to someone about savings accounts will be routed to Deposit Services. On the other hand, the similar request I want to talk to someone about car loans is ambiguous between Consumer Lending, which handles new car loans, and Loan Services, which handles existing car loans. Queries can also be ambiguous due to the caller's providing more than one activity, as in I need to get my checking account balance and then pay a car loan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Indirect Request, in which the caller describes his goal in a roundabout way, often including irrelevant information. This typically occurs with callers who are unfamiliar with the call center organization, or those who have difficulty concisely describing their goals. 
An example of an actual indirect request is ah I'm calling \"cuz ah a friend gave me this number and ah she told me ah with this number I can buy some cars or whatever but she didn't know how to explain it to me so I just called you you know to get that information. Table 1 shows the distribution of caller requests in our corpus with respect to these semantic types. Our analysis shows that in the vast majority of calls, the request was based on either destination name or activity. Since in our corpus there are only 23 distinct destinations, 5 and each destination only handles a fairly small number (dozens to hundreds) of activities, requests based on destination names and activities are expected to be more predictable and thus more suitable for handling by an automatic call router. However, our system does not directly classify calls in terms of specificity; this classification was only intended to provide a sense of the distribution of calls received.", "cite_spans": [], "ref_spans": [ { "start": 536, "end": 543, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "In addition to analyzing how the callers phrased their requests in response to the operator's initial prompt, we also analyzed how the operators responded to the callers' requests. 6 We found that in our corpus, the human operator either notifies the caller of a destination to which the call will be transferred, or queries the caller for further information, most frequently when the original request was ambiguous and, much less often, when the original request was not heard or understood. Table 2 shows the frequency with which each dialogue action was employed by human operators in our corpus. It shows that nearly 20% of all caller requests require further disambiguation. 
We further analyzed these calls that were not immediately routed and noted that 75% of them involve underspecified noun phrases, such as requesting car loans without specifying whether it is an existing car loan or a new car loan. The remaining 25% mostly involve underspecified verb phrases, such as asking to transfer funds without specifying the accounts to and from which the transfer will take place, or missing verb phrases, such as asking for direct deposit without specifying whether the caller wants to set up a direct deposit or change an existing direct deposit.", "cite_spans": [], "ref_spans": [ { "start": 494, "end": 501, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Dialogue Actions for Operator Responses", "sec_num": "3.2" }, { "text": "Based on our analysis of operator responses, we decided to first focus our router responses on notifying the caller of a selected destination in cases where the caller request is unambiguous, and on formulating a query for noun phrase disambiguation in the case of noun phrase underspecification in the caller request. For calls that 5 Although the call center had nearly 100 departments, in our corpus of 4,500 calls, only 23 departments received more than 10 calls. We chose to base our experiments on these 23 destinations. 6 In most calls, we analyzed the utterances given in the operator's second turn in the dialogue. However, in situations where the operator generates an acknowledgment, such as uh-huh, midway through the caller's request, we analyzed utterances in the next operator turn. do not satisfy either criterion, the call router should simply relay them to a human operator. 
7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dialogue Actions for Operator Responses", "sec_num": "3.2" }, { "text": "In addition to notifying the caller of a selected destination or querying the caller for further information, an automatic call router should be able to identify when it is unable to handle a call and route the call to a human operator for further processing. The process of determining whether to route a call, generate a disambiguation query, or redirect the call to an operator is carried out by two modules in our system, the routing module and the disambiguation module, as shown in Figure 3 . Given a caller request, the routing module selects a set of candidate destinations to which it believes the call can reasonably be routed. If there is exactly one such destination, the call is routed to that destination and the caller notified; if there is no appropriate destination, the call is sent to an operator; and if there are multiple candidate destinations, the disambiguation module is invoked. In the last case, the disambiguation module attempts to formulate a query that it believes will solicit relevant information from the caller to allow the revised request to be routed to a unique destination. If such a query is successfully formulated, it is posed to the caller, and the system makes another attempt at routing the revised request, which includes the original request and the caller's response to the follow-up question; otherwise, the call is sent to a human operator.", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 496, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Vector-based Call Routing", "sec_num": "4." }, { "text": "7 Note that the corpus analysis described in this section was conducted with the purpose of determining guidelines for system design in order to achieve reasonable coverage of phenomena in actual human-human dialogues. 
The call classification schemes presented in this section do not come into play in the actual training or testing of our system, nor do we discard any part of our training corpus as a result of this analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector-based Call Routing", "sec_num": "4." }, { "text": "Two-dimensional vector representation for the routing module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Our approach to call routing is novel in its application of vector-based information retrieval techniques to the routing process, and in its extension of the vector-based representation for dynamically generating disambiguation queries (Chu-Carroll and . The routing and disambiguation mechanisms are detailed in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Credit Card Servio", "sec_num": null }, { "text": "4.1.1 Vector Representation for the Routing Module. In vector-based information retrieval, the database contains a large collection of documents, each of which is represented as a vector in n-dimensional space. Given a query, a query vector is computed and compared to the existing document vectors, and those documents whose vectors are similar to the query vector are returned. We apply this technique to call routing by treating each destination as a document, and representing the destination as a vector in n-dimensional space. Given a caller request, an n-dimensional request vector is computed. The similarity between the request vector and each destination vector is then computed, and those destinations that are close to the request vector are selected as the candidate destinations.
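The core geometric operation can be sketched as follows. Cosine similarity and the 0.6 cutoff here are illustrative choices, not necessarily the exact measure and threshold used in the deployed router:

```python
import math

def cosine(u, v):
    # Cosine of the angle between two equal-length term-weight vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def candidate_destinations(request_vec, destination_vecs, threshold=0.6):
    # Keep every destination whose vector lies close enough to the
    # request vector; a single survivor means the call can be routed.
    return [dest for dest, vec in destination_vecs.items()
            if cosine(request_vec, vec) >= threshold]
```

With one candidate the call is routed, with none it is sent to an operator, and with several the disambiguation module is invoked.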
This vector representation for destinations and query is illustrated in a simplified two-dimensional space in Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 908, "end": 916, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "The Routing Module", "sec_num": "4.1" }, { "text": "In order to carry out call routing with the aforementioned vector representation, three issues must be addressed. First, we must determine the vector representation for each destination within the call center. Once computed, these destination vectors should remain constant as long as the organization of the call center remains unchanged. 8 Second, we must determine how a caller request will be mapped to the same vector space for comparison with the destination vectors. Finally, we must decide how the similarity between the request vector and each destination vector will be measured in order to select candidate destinations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Routing Module", "sec_num": "4.1" }, { "text": "The Training Process. The goal of the training phase of the call router is to determine the values of the destination vectors (and term vectors) that will subsequently be used in the routing process. Our training process, depicted in Figure 5 , requires a corpus of transcribed calls, each of which is routed to the appropriate destination. 9 These routed calls are processed by five domain-independent procedures to obtain the desired document (destination) and term vectors. Document Construction. Since our goal is to represent each destination as an n-dimensional vector, we must create one (virtual) document per destination. The document for a destination contains the raw text of the callers' contributions in all calls routed to that destination, since these are the utterances that provided vital information for routing purposes. 
For instance, the document for deposit services may contain utterances such as I want to check the balance in my checking account and I would like to stop payment on a check. In our experiments, the corpus contains 3,753 calls routed to 23 destinations. 10", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 242, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "4.1.2", "sec_num": null }, { "text": "8 One may consider allowing the call router to constantly update the destination vectors as new data are being collected while the system is deployed. We leave adding learning capabilities to the call router for future work. 9 The transcription process can be carried out by humans or by an automatic speech recognizer. In the experiments reported in this paper, we used human transcriptions. 10 These calls are a subset of the 4,500 calls used in our corpus analysis. We included calls of all semantic types, but excluded calls to destinations that were not represented by more than 10 calls, as well as ambiguous calls that were not resolved by the operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.1.2", "sec_num": null }, { "text": "For routing purposes, we are concerned with the semantics of the words present in a document, but not with the morphological forms of the words themselves. Thus we filter each (virtual) document, produced by the document construction process, through the morphological processor of the Bell Labs Text-to-Speech synthesizer (Sproat 1998) to extract the root form of each word in the corpus. This process reduces singulars, plurals, and gerunds to their root forms, reducing service, services, and servicing, for example, to the root service. Similarly, verb forms are reduced to their roots, so that going, went, and gone all become go.
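The effect of this reduction step can be illustrated with a toy root table; the paper relies on the Bell Labs morphological processor, so the ROOTS map below is purely a stand-in:

```python
# Toy stand-in for a morphological processor: a hand-built root map,
# not the Bell Labs Text-to-Speech component the paper actually uses.
ROOTS = {
    'services': 'service', 'servicing': 'service',
    'going': 'go', 'went': 'go', 'gone': 'go',
    'accounts': 'account',
}

def morph_filter(words):
    # Map each word to its root form, leaving unknown words unchanged.
    return [ROOTS.get(w, w) for w in words]

print(morph_filter(['servicing', 'went', 'accounts']))  # ['service', 'go', 'account']
```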
11", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Filtering and Stop Word Filtering.", "sec_num": null }, { "text": "Next, the root forms of caller utterances are filtered through two lists, the ignore list and the stop list, in order to build more accurate n-gram term models for subsequent processing. The ignore list consists of noise words, which are common in spontaneous speech and can be removed without altering the meaning of an utterance, such as um and uh. These words sometimes get in the way of proper n-gram extraction, as in I'd like to speak to someone about a car uh loan. When the noise word uh is filtered out of the utterance, we can then properly extract the bigram car+loan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Filtering and Stop Word Filtering.", "sec_num": null }, { "text": "The stop list enumerates words that are ubiquitous and therefore do not contribute to discriminating between destinations, such as the, be, for, and morning. We modified the standard stop list distributed with the SMART information retrieval system (Salton 1971) to include domain-specific terms and proper names that occurred in our training corpus. 12 Note that when a word on the ignore list is removed from an utterance, it allows words preceding and succeeding the removed word to form n-grams, such as car+loan in the example above. On the other hand, when a stop word is removed from an utterance, a placeholder is inserted into the utterance to prevent the words preceding and following the removed stop word from forming n-grams. For instance, after stop word filtering, the caller utterance I want to check on an account becomes (sw) (sw) (sw) check (sw) (sw) account, resulting in the two unigrams check and account. 
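The two filtering behaviors just described can be sketched as follows; the word lists are abbreviated for illustration:

```python
IGNORE = {'um', 'uh', 'ah'}             # noise words, removed outright
STOP = {'i', 'want', 'to', 'on', 'an'}  # tiny illustrative stop list

def filter_tokens(words):
    # Drop ignore-list words entirely; replace stop words with a (sw)
    # placeholder so they still block n-gram formation across the gap.
    out = []
    for w in words:
        if w in IGNORE:
            continue
        out.append('(sw)' if w in STOP else w)
    return out

def bigrams(tokens):
    # Extract bigrams, skipping any pair that touches a placeholder.
    return [a + '+' + b for a, b in zip(tokens, tokens[1:])
            if '(sw)' not in (a, b)]

# Removing the noise word uh lets car+loan form:
print(bigrams(filter_tokens(['car', 'uh', 'loan'])))  # ['car+loan']
# Stop-word placeholders leave only the unigrams check and account:
print(bigrams(filter_tokens(['i', 'want', 'to', 'check', 'on', 'an', 'account'])))  # []
```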
Without the placeholders, we would extract the bigram check+account, just as if the caller had used the term checking account in the utterance.", "cite_spans": [ { "start": 249, "end": 262, "text": "(Salton 1971)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Morphological Filtering and Stop Word Filtering.", "sec_num": null }, { "text": "In our experiments, the ignore list contains 25 words, which are variations of common transcriptions of speech disfluencies, such as ah, aah, and ahh. The stop list contains over 1,200 words, including function words, proper names, greetings, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Filtering and Stop Word Filtering.", "sec_num": null }, { "text": "Term Extraction. The output of the filtering processes is a set of documents, one for each destination, containing the root forms of the content words extracted from the raw texts originally in each document. In order to capture word co-occurrence, n-gram terms are extracted from the filtered texts. First, a list of n-gram terms and their counts is generated from all filtered texts. Thresholds are then applied to the n-gram counts to select as salient terms those n-gram terms that occurred sufficiently frequently. Next, these salient terms are used to reduce the filtered text for each document to a bag of salient terms, i.e., a collection of n-gram terms along with their respective counts. Note that when an n-gram term is extracted, all of the lower order k-grams, where 1 \u2264 k < n, are extracted as well. Table 1. Distribution of caller requests. Destination Name: 949 calls (21.1% of all calls); Activity: 3,271 calls (72.7%); Indirect Request: 277 calls (6.2%). Table 2. Call operator dialogue actions. Notification: 3,608 calls (80.2% of all calls); Query NP: 657 calls (14.6%); Others: 232 calls (5.2%).", "num": null }, "TABREF3": { "html": null, "text": "", "type_str": "table", "content": "
Term Selection
All n-gram terms & vectors -> Select Close Terms -> Select Relevant Terms -> Select Terms with Disambiguating Power
Difference vectors
", "num": null }, "TABREF4": { "html": null, "text": "Word accuracy vs. term accuracy on ASR output.", "type_str": "table", "content": "
Word Accuracy | Term Accuracy
Raw Rooted | Unigram Bigram Trigram
Precision 78.6% 79.8% | 93.7% 96.5% 98.5%
Recall 76.0% 77.2% | 88.4% 85.5% 83.6%
that are correct, while recall is the percentage of words/terms in the transcription that
are correctly returned by the recognizer, i.e., percentage of actual words/terms that
are found.
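As a simplified, set-based sketch of these two measures (real word accuracy is computed over an alignment of recognizer output with the transcription, so this is illustrative only):

```python
def precision_recall(recognized, reference):
    # Precision: fraction of recognized words/terms that are correct.
    # Recall: fraction of reference words/terms the recognizer found.
    rec, ref = set(recognized), set(reference)
    correct = len(rec & ref)
    return correct / len(rec), correct / len(ref)

# The recognizer got 2 of its 3 terms right and found 2 of 3 actual terms.
p, r = precision_recall(['check', 'account', 'lone'],
                        ['check', 'account', 'loan'])
print(p, r)
```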
", "num": null }, "TABREF7": { "html": null, "text": "Performance of disambiguation module on ambiguous calls.", "type_str": "table", "content": "
Routed As Unambiguous | Routed As Ambiguous
Correct Incorrect | Correct Incorrect Reject
401260342
", "num": null }, "TABREF9": { "html": null, "text": "Evaluation of our system and WGR97.", "type_str": "table", "content": "
| # of Destinations | On Transcription (Rejection Rate, Correct Rate) | On ASR Output (Rejection Rate, Correct Rate)
Our system | 23 | 0%, 94% | 3%, 92%
WGR97 | 14 | 10%, 84% | 12%, 78%
WGR97 | 14 | 40%, 94% | 40%, 83%
", "num": null } } } }