{
"paper_id": "P05-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:38:01.901450Z"
},
"title": "Digesting Virtual \"Geek\" Culture: The Summarization of Technical Internet Relay Chats",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California Information Sciences Institute",
"location": {
"addrLine": "4676 Admiralty Way Marina del Rey",
"postCode": "90292-6695",
"region": "CA"
}
},
"email": "liangz@isi.edu"
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California Information Sciences Institute",
"location": {
"addrLine": "4676 Admiralty Way Marina del Rey",
"postCode": "90292-6695",
"region": "CA"
}
},
"email": "hovy@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a summarization system for technical chats and emails on the Linux kernel. To reflect the complexity and sophistication of the discussions, they are clustered according to subtopic structure on the sub-message level, and immediate responding pairs are identified through machine learning methods. A resulting summary consists of one or more mini-summaries, each on a subtopic from the discussion.",
"pdf_parse": {
"paper_id": "P05-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a summarization system for technical chats and emails on the Linux kernel. To reflect the complexity and sophistication of the discussions, they are clustered according to subtopic structure on the sub-message level, and immediate responding pairs are identified through machine learning methods. A resulting summary consists of one or more mini-summaries, each on a subtopic from the discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The availability of many chat forums reflects the formation of globally dispersed virtual communities. From them we select the very active and growing movement of Open Source Software (OSS) development. Working together in a virtual community in non-collocated environments, OSS developers communicate and collaborate using a wide range of web-based tools including Internet Relay Chat (IRC), electronic mailing lists, and more (Elliott and Scacchi, 2004) . In contrast to conventional instant message chats, IRCs convey engaging and focused discussions on collaborative software development. Even though all OSS participants are technically savvy individually, summaries of IRC content are necessary within a virtual organization both as a resource and an organizational memory of activities (Ackerman and Halverson, 2000) . They are regularly produced manually by volunteers. These summaries can be used for analyzing the impact of virtual social interactions and virtual organizational culture on software/product development.",
"cite_spans": [
{
"start": 428,
"end": 455,
"text": "(Elliott and Scacchi, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 793,
"end": 823,
"text": "(Ackerman and Halverson, 2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The emergence of email thread discussions and chat logs as a major information source has prompted increased interest in thread summarization within the Natural Language Processing (NLP) community. One might assume a smooth transition from text-based summarization to email- and chat-based summarization. However, chat falls into the genre of correspondence, which requires dialogue and conversation analysis. This property makes summarization in this area even more difficult than traditional summarization. In particular, topic \"drift\" occurs more radically than in written genres, and interpersonal and pragmatic content appears more frequently. Questions about the content and overall organization of the summary must be addressed more thoroughly for chat and other dialogue summarization systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a new system that clusters sub-message segments from correspondences according to topic, identifies the sub-message segment containing the leading issue within the topic, finds immediate responses from other participants, and consequently produces a summary for the entire IRC. Other constructions are possible. One of the two baseline systems described in this paper uses the timeline and dialogue structure to select summary content, and is quite effective. We use the term chat loosely in this paper. The input IRCs for our system are a mixture of chats and emails that are indistinguishable in format in the downloaded corpus (Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections, we summarize previous work, describe the email/chat data, intramessage clustering and summary extraction process, and discuss the results and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are at least two ways of organizing dialogue summaries: by dialogue structure and by topic. Newman and Blitzer (2002) describe methods for summarizing archived newsgroup conversations by clustering messages into subtopic groups and extracting top-ranked sentences per subtopic group based on the intrinsic scores of position in the cluster and lexical centrality. Due to the technical nature of our working corpus, we had to handle intra-message topic shifts, in which the author of a message raises or responds to multiple issues in the same message. This requires that our clustering component be not message-based but submessage-based. Lam et al. (2002) employ an existing summarizer for single documents using preprocessed email messages and context information from previous emails in the thread. Rambow et al. (2004) show that sentence extraction techniques are applicable to summarizing email threads, but only with added email-specific features. Wan and McKeown (2004) introduce a system that creates overview summaries for ongoing decision-making email exchanges by first detecting the issue being discussed and then extracting the response to the issue. Both systems use a corpus that, on average, contains 190 words and 3.25 messages per thread, much shorter than the ones in our collection. Galley et al. (2004) describe a system that identifies agreement and disagreement occurring in human-to-human multi-party conversations. They utilize an important concept from conversational analysis, adjacent pairs (AP), which consists of initiating and responding utterances from different speakers. Identifying APs is also required by our research to find correspondences from different chat participants.",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "Newman and Blitzer (2002)",
"ref_id": "BIBREF10"
},
{
"start": 645,
"end": 662,
"text": "Lam et al. (2002)",
"ref_id": "BIBREF8"
},
{
"start": 808,
"end": 828,
"text": "Rambow et al. (2004)",
"ref_id": "BIBREF11"
},
{
"start": 960,
"end": 982,
"text": "Wan and McKeown (2004)",
"ref_id": "BIBREF14"
},
{
"start": 1309,
"end": 1329,
"text": "Galley et al. (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous and Related Work",
"sec_num": "2"
},
{
"text": "In automatic summarization of spoken dialogues, Zechner (2001) presents an approach to obtain extractive summaries for multi-party dialogues in unrestricted domains by addressing intrinsic issues specific to speech transcripts. Automatic question detection is also deemed important in this work. A decision-tree classifier was trained on question-triggering words to detect questions among speech acts (sentences). A search heuristic procedure then finds the corresponding answers. Ries (2001) shows how to use keyword repetition, speaker initiative and speaking style to achieve topical segmentation of spontaneous dialogues.",
"cite_spans": [
{
"start": 48,
"end": 62,
"text": "Zechner (2001)",
"ref_id": "BIBREF16"
},
{
"start": 482,
"end": 493,
"text": "Ries (2001)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous and Related Work",
"sec_num": "2"
},
{
"text": "GNUe, a meta-project of the GNU project, one of the most famous free/open source software projects, is the case study used in (Elliott and Scacchi, 2004) in support of the claim that, even in virtual organizations, there is still the need for successful conflict management in order to maintain order and stability. The GNUe IRC archive is uniquely suited for our experimental purpose because each IRC chat log has a companion summary digest written by project participants as part of their contribution to the community. This manual summary constitutes gold-standard data for evaluation.",
"cite_spans": [
{
"start": 126,
"end": 153,
"text": "(Elliott and Scacchi, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Technical Internet Relay Chats",
"sec_num": "3"
},
{
"text": "Kernel Traffic is a collection of summary digests of discussions on GNUe development. Each digest summarizes IRC logs and/or email messages (later referred to as chat logs) for a period of up to two weeks. A nice feature is that direct quotes and hyperlinks are part of the summary. Each digest is an extractive overview of facts, plus the author's dramatic and humorous interpretations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Traffic 2",
"sec_num": "3.1"
},
{
"text": "The complete Linux Kernel Archive (LKA) consists of two separate downloads. The Kernel Traffic (summary digests) are in XML format and were downloaded by crawling the Kernel Traffic site. The Linux Kernel Archives (individual IRC chat logs) are downloaded from the archive site. We matched the summaries with their respective chat logs based on subject line and publication dates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Download",
"sec_num": "3.2"
},
{
"text": "Upon initial examination of the chat logs, we found that many conventional assumptions about chats in general do not apply. For example, in most instant-message chats, each exchange usually consists of a small number of words in several sentences. Due to the technical nature of GNUe, half of the chat logs contain in-depth discussions with lengthy messages. One message might ask and answer several questions, discuss many topics in detail, and make further comments. This property, which we call subtopic structure, is an important difference from informal chat/interpersonal banter. Figure 1 shows the subtopic structure and relation of the first 4 messages from a chat log, produced manually. Each message is represented horizontally; the vertical arrows show where participants responded to each other. Visual inspection reveals that in this example there are three distinct clusters (a more complex cluster and two smaller satellite clusters) of discussions between participants at sub-message level.",
"cite_spans": [],
"ref_spans": [
{
"start": 586,
"end": 594,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Observation on Chat Logs",
"sec_num": "3.3"
},
{
"text": "To measure the goodness of system-produced summaries, gold standards are used as references. Human-written summaries usually make up the gold standards. The Kernel Traffic (summary digests) are written by Linux experts who actively contribute to the production and discussion of the open source projects. However, participant-produced digests cannot be used as reference summaries verbatim. Due to the complex structure of the dialogue, the summary itself exhibits some discourse structure, necessitating reader-guidance phrases such as \"for the \u2026 question,\" \"on the \u2026 subject,\" \"regarding \u2026,\" \"later in the same thread,\" etc., to direct and refocus the reader's attention. Therefore, further manual editing and partitioning is needed to transform a multi-topic digest into several smaller subtopic-based gold-standard reference summaries (see Section 6.1 for the transformation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Observation on Summary Digests",
"sec_num": "3.4"
},
{
"text": "To model the subtopic structure of each chat message, we apply clustering at the sub-message level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-grained Clustering",
"sec_num": "4"
},
{
"text": "First, we look at each message and assume that each participant responds to an ongoing discussion by stating his/her opinion on several topics or issues that have been discussed in the current chat log, but not necessarily in the order they were discussed. Thus, topic shifts can occur sequentially within a message. Messages are partitioned into multi-paragraph segments using TextTiling, which reportedly has an overall precision of 83% and recall of 78% (Hearst, 1994) .",
"cite_spans": [
{
"start": 457,
"end": 471,
"text": "(Hearst, 1994)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Message Segmentation",
"sec_num": "4.1"
},
{
"text": "After distinguishing a set of message segments, we cluster them. When choosing an appropriate clustering method, because the number of subtopics under discussion is unknown, we cannot make an assumption about the total number of resulting clusters. Thus, nonhierarchical partitioning methods cannot be used, and we must use a hierarchical method. These methods can be either agglomerative, which begin with an unclustered data set and perform N -1 pairwise joins, or divisive, which add all objects to a single cluster, and then perform N -1 divisions to create a hierarchy of smaller clusters, where N is the total number of items to be clustered (Frakes and Baeza-Yates, 1992 ).",
"cite_spans": [
{
"start": 648,
"end": 677,
"text": "(Frakes and Baeza-Yates, 1992",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering",
"sec_num": "4.2"
},
{
"text": "Hierarchical agglomerative clustering methods are commonly used and we employ Ward's method (Ward and Hook, 1963) , in which the text segment pair merged at each stage is the one that minimizes the increase in total within-cluster variance.",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "(Ward and Hook, 1963)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ward's Method",
"sec_num": null
},
{
"text": "Each cluster is represented by an L-dimensional vector (x_{i1}, x_{i2}, \u2026, x_{iL}), where each x_{ik} is the word's tf\u00b7idf score. If m_i is the number of objects in cluster i, then the squared Euclidean distance d_{ij}^2 between two segments i and j, and the increase in variance I_{ij} when two segments are joined, are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ward's Method",
"sec_num": null
},
{
"text": "d_{ij}^2 = \\sum_{k=1}^{L} (x_{ik} - x_{jk})^2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ward's Method",
"sec_num": null
},
{
"text": "I_{ij} = \\frac{m_i m_j}{m_i + m_j} d_{ij}^2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ward's Method",
"sec_num": null
},
{
"text": "The process of joining clusters continues until the combination of any two clusters would destabilize the entire array of currently existing clusters produced from previous stages. At each stage, the two clusters i and j are chosen whose combination would cause the minimum increase in variance I_{ij}, expressed as a percentage of the variance change from the last round. If this percentage reaches a preset threshold, it means that the nearest two clusters are much further from each other compared to the previous round; therefore, joining of the two represents a destabilizing change, and should not take place. Sub-message segments from resulting clusters are arranged according to the sequence the original messages were posted and the resulting subtopic structures are similar to the one shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 804,
"end": 812,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Number of Clusters",
"sec_num": null
},
{
"text": "Having obtained clusters of message segments focused on subtopics, we adopt the typical summarization paradigm to extract informative sentences and segments from each cluster to produce subtopic-based summaries. If a chat log has n clusters, then the corresponding summary will contain n mini-summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Extraction",
"sec_num": "5"
},
{
"text": "All message segments in a cluster are related to the central topic, but to various degrees. Some are answers to questions asked previously, plus further elaborative explanations; some make suggestions and give advice where they are requested, etc. From careful analysis of the LKA data, we can safely assume that for this type of conversational interaction, the goal of the participants is to seek help or advice and advance their current knowledge on various technical subjects. This kind of interaction can be modeled as one problem-initiating segment and one or more corresponding problem-solving segments. We envisage that identifying corresponding message segment pairs will produce adequate summaries. This analysis follows the structural organization of summaries from Kernel Traffic. Other types of discussions, at least in part, require different discourse/summary organization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Extraction",
"sec_num": "5"
},
{
"text": "These corresponding pairs are formally introduced below, and the methods we experimented with for identifying them are described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Extraction",
"sec_num": "5"
},
{
"text": "An important conversational analysis concept, adjacent pairs (AP), is applied in our system to identify initiating and responding correspondences from different participants in one chat log. Adjacent pairs are considered fundamental units of conversational organization (Schegloff and Sacks, 1973) . An adjacent pair is said to consist of two parts that are ordered, adjacent, and produced by different speakers (Galley et al., 2004) . In our email/chat (LKA) corpus a physically adjacent message, following the timeline, may not directly respond to its immediate predecessor. Discussion participants read the current live thread and decide what they would like to respond to, not necessarily in a serial fashion. With the added complication of subtopic structure (see Figure 1 ) the definition of adjacency is further violated. Due to its problematic nature, a relaxation on the adjacency requirement is used in extensive research in conversational analysis (Levinson, 1983) . This relaxed requirement is adopted in our research.",
"cite_spans": [
{
"start": 270,
"end": 297,
"text": "(Schegloff and Sacks, 1973)",
"ref_id": "BIBREF13"
},
{
"start": 412,
"end": 433,
"text": "(Galley et al., 2004)",
"ref_id": null
},
{
"start": 959,
"end": 975,
"text": "(Levinson, 1983)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 769,
"end": 777,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Adjacent Response Pairs",
"sec_num": "5.1"
},
{
"text": "Information produced by adjacent correspondences can be used to produce the subtopic-based summary of the chat log. As described in Section 4, each chat log is partitioned, at sub-message level, into several subtopic clusters. We take the message segment that appears first chronologically in the cluster as the topic-initiating segment in an adjacent pair. Given the initiating segment, we need to identify one or more segments from the same cluster that are the most direct and relevant responses. This process can be viewed equivalently as the informative sentence extraction process in conventional text-based summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjacent Response Pairs",
"sec_num": "5.1"
},
{
"text": "We manually tagged 100 chat logs for adjacent pairs. There are, on average, 11 messages per chat log and 3 segments per message (this is considerably larger than the threads used in previous research). Each chat log has been clustered into one or more bags of message segments. The message segment that appears earliest in time in a cluster was marked as the initiating segment. The annotators were provided with this segment and one other segment at a time, and were asked to decide whether the current message segment is a direct answer to the question asked, the suggestion that was requested, etc. in the initiating segment. There are 1521 adjacent response pairs; 1000 were used for training and 521 for testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AP Corpus and Baseline",
"sec_num": "5.2"
},
{
"text": "Our baseline system selects the message segment (from a different author) immediately following the initiating segment. It is quite effective, with an accuracy of 64.67%. This is reasonable because not all adjacent responses are interrupted by messages responding to different earlier initiating messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AP Corpus and Baseline",
"sec_num": "5.2"
},
{
"text": "In the following sections, we describe two machine learning methods that were used to identify the second element in an adjacent response pair and the features used for training. We view the problem as a binary classification problem, distinguishing less relevant responses from direct responses. Our approach is to assign a candidate message segment c an appropriate response class r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AP Corpus and Baseline",
"sec_num": "5.2"
},
{
"text": "Structural and durational features have been demonstrated to improve performance significantly in conversational text analysis tasks. Using them, Galley et al. (2004) report an 8% increase in speaker identification. Zechner (2001) reports excellent results (F > .94) for inter-turn sentence boundary detection when recording the length of pause between utterances. In our corpus, durational information is nonexistent because chats and emails were mixed and no exact time recordings besides dates were reported. So we rely solely on structural and lexical features.",
"cite_spans": [
{
"start": 146,
"end": 166,
"text": "Galley et al. (2004)",
"ref_id": null
},
{
"start": 216,
"end": 230,
"text": "Zechner (2001)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.3"
},
{
"text": "For structural features, we count the number of messages between the initiating message segment and the responding message segment. Lexical features are listed in Table 1 . The tech words are the words that are uncommon in conventional literature and unique to Linux discussions.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "5.3"
},
{
"text": "Maximum entropy has been proven to be an effective method in various natural language processing applications (Berger et al., 1996) . For training and testing, we used YASMET 3 . To estimate P(r | c) in the exponential form, we have:",
"cite_spans": [
{
"start": 110,
"end": 131,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy",
"sec_num": "5.4"
},
{
"text": "P_\\lambda(r | c) = \\frac{1}{Z_\\lambda(c)} \\exp\\left( \\sum_i \\lambda_{i,r} f_{i,r}(c, r) \\right)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy",
"sec_num": "5.4"
},
{
"text": "where Z_\\lambda(c) is a normalizing constant and the feature function for feature f_i and response class r is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy",
"sec_num": "5.4"
},
{
"text": "f_{i,r}(c, r') = \\begin{cases} 1, & \\text{if } f_i > 0 \\text{ and } r' = r \\\\ 0, & \\text{otherwise} \\end{cases}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy",
"sec_num": "5.4"
},
{
"text": "\\lambda_{i,r} is the feature-weight parameter for feature f_i and response class r. Then, to determine the best class r for the candidate message segment c, we have: r^* = \\arg\\max_r P(r | c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy",
"sec_num": "5.4"
},
{
"text": "Support vector machines (SVMs) have been shown to outperform other existing methods (na\u00efve Bayes, k-NN, and decision trees) in text categorization (Joachims, 1998) . Their advantages are robustness and the elimination of the need for feature selection and parameter tuning. SVMs find the hyperplane that separates the positive and negative training examples with maximum margin. Finding this hyperplane can be translated into an optimization problem of finding a set of coefficients \\alpha_i^* of the weight vector",
"cite_spans": [
{
"start": 147,
"end": 163,
"text": "(Joachims, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "5.5"
},
{
"text": "\\vec{w} for document d_i of class y_i \\in \\{+1, -1\\}: \\vec{w} = \\sum_i \\alpha_i^* y_i \\vec{d}_i, \\alpha_i > 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "5.5"
},
{
"text": "Testing data are classified depending on the side of the hyperplane they fall on. We used the LIBSVM 4 package for training and testing. Table 2 shows the accuracies achieved using machine learning models and feature sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "5.5"
},
{
"text": "After responding message segments are identified, we couple them with their respective initiating segment to form a mini-summary based on their subtopic. Each initializing segment has zero or more responding segments. We also observed zero response in human-written summaries where participants initiated some question or concern, but others failed to follow up on the discussion. The AP process is repeated for each cluster created previously. One or more subtopic-based minisummaries make up one final summary for each chat log. Figure 2 shows an example. For longer chat logs, the length of the final summary is arbitrarily averaged at 35% of the original.",
"cite_spans": [],
"ref_spans": [
{
"start": 531,
"end": 539,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summary Generation",
"sec_num": "5.7"
},
{
"text": "To evaluate the goodness of the system-produced summaries, a set of reference summaries is used for comparison. In this section, we describe the manual procedure used to produce the reference summaries, and the performances of our system and two baseline systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary Evaluation",
"sec_num": "6"
},
{
"text": "Kernel Traffic digests are participant-written summaries of the chat logs. Each digest mixes the summary writer's own narrative comments with direct quotes (citing the authors) from the chat log. As observed in Section 3.4, subtopics are intermingled in each digest. Authors use key phrases to link the contents of each subtopic throughout the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Summaries",
"sec_num": "6.1"
},
{
"text": "In Figure 3 , we show an example of such a digest. Discussion participants' names are in italics and subtopics are in bold. In this example, the conversation was started by Benjamin Reed with two questions: 1) asking for conventions for writing /proc drivers, and 2) asking about the status of sysctl. The summary writer indicated that Linus Torvalds replied to both questions and used the phrase \"for the \u2026 question, he added\u2026\" to highlight the answer to the second question. As the digest goes on, Marcin Dalecki only responded to the first question with his excited commentary.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Reference Summaries",
"sec_num": "6.1"
},
{
"text": "Since our system-produced summaries are subtopic-based and partitioned accordingly, if we use unprocessed Kernel Traffic as references, the comparison would be rather complicated and would increase the level of inconsistency in future assessments. We manually reorganized each summary digest into one or more mini-summaries by subtopic (see Figure 4 .) Examples (usually kernel stats) and programs are reduced to \"[example]\" and \"[program code].\" Quotes (originally in separate messages but merged by the summary writer) that contain multiple topics are segmented and the participant's name is inserted for each segment. We follow clues like \"to answer \u2026 question\" to pair up the main topics and their responses.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 349,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Reference Summaries",
"sec_num": "6.1"
},
{
"text": "We evaluated 10 chat logs. On average, each contains approximately 50 multi-paragraph tiles (partitioned by TextTile) and 5 subtopics (clustered by the method from Section 4). A simple baseline system takes the first sentence from each email in the order in which they were posted, based on the assumption that people tend to put important information in the beginning of texts (Position Hypothesis).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization Results",
"sec_num": "6.2"
},
{
"text": "A second baseline system was built based on constructing and analyzing the dialogue structure of each chat log. Participants often quote portions of previously posted messages in their responses. These quotes link most of the messages from a chat log. The message segment that immediately follows the quote is automatically paired with the quote itself and added to the summary and sorted according to the timeline. Segments that are not quoted in later messages are labeled as less relevant and discarded. A resulting baseline summary is an inter-connected structure of segments that quoted and responded to one another. Figure 5 is a shortened summary produced by this baseline for the ongoing example.",
"cite_spans": [],
"ref_spans": [
{
"start": 622,
"end": 630,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summarization Results",
"sec_num": "6.2"
},
{
"text": "The summary digests from Kernel Traffic mostly consist of direct snippets from original messages, thus making the reference summaries extractive even after rewriting. This makes it possible to conduct an automatic evaluation. A computerized procedure calculates the overlap between reference and system-produced summary units. Since each system-produced summary is a set of mini-summaries based on subtopics, we also compared the subtopics against those appearing in reference summaries (precision = 77.00%, recall = 74.33%, F = 0.7566). Table 3 shows the recall, precision, and F-measure from the evaluation. From manual analysis of the results, we notice that the original digest writers often leave large portions of the discussion out and focus on a few topics. We think this is because among the participants, some are Linux veterans and others are novice programmers. Digest writers recognize this difference and reflect it in their writings, whereas our system does not. The entry \"Topic-summ\" in the table shows system-produced summaries being compared only against the topics discussed in the reference summaries.",
"cite_spans": [],
"ref_spans": [
{
"start": 538,
"end": 545,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Summarization Results",
"sec_num": "6.2"
},
{
"text": "A recall of 30.79% from the simple baseline reassures us the Position Hypothesis still applies in conversational discussions. The second baseline performs extremely well on recall, 63.14%. It shows that quoted message segments, and thereby derived dialogue structure, are quite indicative of where the important information resides. Systems built on these properties are good summarization systems and hard-to-beat baselines. The system described in this paper (Summary) shows an Fmeasure of .5235, an improvement from .4629 of the smart baseline. It gains from a high precision because less relevant message segments are identified and excluded from the adjacent response pairs,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "[0|0] Benjamin Reed: \"I wrote an \u2026 driver \u2026 /proc \u2026\" [0|1] Benjamin Reed: \"\u2026 /proc/ guideline \u2026\" [0|2] Benjamin Reed: \"\u2026 syscyl \u2026\" [1|0] Linus Torvalds responds to [0|0, 0|1, 0|2]: \"the thing to do is \u2026\" \"sysctl is deprecated \u2026 \" Figure 5 . A short example from Baseline 2. Table 3 . Summary of results.",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 238,
"text": "Figure 5",
"ref_id": null
},
{
"start": 274,
"end": 281,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "leaving mostly topic-oriented segments in summaries. There is a slight improvement when assessing against only those subtopics appeared in the reference summaries (Topic-summ). This shows that we only identified clusters on their information content, not on their respective writers' experience and reliability of knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "In the original summary digests, interactions and reactions between participants are sometimes described. Digest writers insert terms like \"flamed\", \"surprised\", \"felt sorry\", \"excited\", etc. To analyze social and organizational culture in a virtual environment, we need not only information extracts (implemented so far) but also passages that reveal the personal aspect of the communications. We plan to incorporate opinion identification into the current system in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.3"
},
{
"text": "In this paper we have described a system that performs intra-message topic-based summarization by clustering message segments and classifying topicinitiating and responding pairs. Our approach is an initial step in developing a framework that can eventually reflect the human interactions in virtual environments. In future work, we need to prioritize information according to the perceived knowledgeability of each participant in the discussion, in addition to identifying informative content and recognizing dialogue structure. While the approach to the detection of initiating-responding pairs is quite effective, differentiating important and nonimportant topic clusters is still unresolved and must be explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "http://www.gnu.org 2 http://kt.hoser.ca/kernel-traffic/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reexaming organizational memory",
"authors": [
{
"first": "M",
"middle": [
"S"
],
"last": "Ackerman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Halverson",
"suffix": ""
}
],
"year": 2000,
"venue": "Communications of the ACM",
"volume": "43",
"issue": "1",
"pages": "59--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. S. Ackerman and C. Halverson. 2000. Reexaming organizational memory. Communications of the ACM, 43(1), 59-64.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Free software development: cooperation and conflict in a virtual organizational culture",
"authors": [
{
"first": "M",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Scacchi",
"suffix": ""
}
],
"year": 2004,
"venue": "Free/Open Source Software Development, IDEA publishing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Elliott and W. Scacchi. 2004. Free software devel- opment: cooperation and conflict in a virtual organi- zational culture. S. Koch (ed.), Free/Open Source Software Development, IDEA publishing, 2004.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Information retrieval: data structures & algorithms",
"authors": [
{
"first": "W",
"middle": [
"B"
],
"last": "Frakes",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. B. Frakes and R. Baeza-Yates. 1992. Information retrieval: data structures & algorithms. Prentice Hall.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Identifying agreement and disagreement in conversational speech: use of Bayesian networks to model pragmatic dependencies",
"authors": [
{
"first": "",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2004,
"venue": "the Proceedings of ACL-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shriberg. 2004. Identifying agreement and disagree- ment in conversational speech: use of Bayesian net- works to model pragmatic dependencies. In the Proceedings of ACL-04.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multi-paragraph segmentation of expository text",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1994,
"venue": "the Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Hearst. 1994. Multi-paragraph segmentation of expository text. In the Proceedings of ACL 1994.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Text categorization with support vector machines: Learning with many relevant features",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the ECML",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Joachims. 1998. Text categorization with support vector machines: Learning with many relevant fea- tures. In Proceedings of the ECML, pages 137-142.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Exploiting e-mail structure to improve summarization. Technical Paper at IBM Watson Research Center",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "20--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Lam and S. L. Rohall. 2002. Exploiting e-mail structure to improve summarization. Technical Paper at IBM Watson Research Center #20-02.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pragmatics",
"authors": [
{
"first": "S",
"middle": [],
"last": "Levinson",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Levinson. 1983. Pragmatics. Cambridge University Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Summarizing archived discussions: a beginning",
"authors": [
{
"first": "P",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Blitzer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of Intelligent User Interfaces",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Newman and J. Blitzer. 2002. Summarizing archived discussions: a beginning. In Proceedings of Intelli- gent User Interfaces.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Summarizing email threads",
"authors": [
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Shrestha",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Laurdisen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL 2004: Short Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Rambow, L. Shrestha, J. Chen and C. Laurdisen. 2004. Summarizing email threads. In Proceedings of HLT-NAACL 2004: Short Papers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Segmenting conversations by topic, initiative, and style",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR Workshop: Information Retrieval Techniques for Speech Applications",
"volume": "",
"issue": "",
"pages": "51--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Ries. 2001. Segmenting conversations by topic, ini- tiative, and style. In Proceedings of SIGIR Work- shop: Information Retrieval Techniques for Speech Applications 2001: 51-66.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Opening up closings",
"authors": [
{
"first": "E",
"middle": [
"A"
],
"last": "Schegloff",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sacks",
"suffix": ""
}
],
"year": 1973,
"venue": "Semiotica",
"volume": "",
"issue": "",
"pages": "289--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. A. Schegloff and H. Sacks. 1973. Opening up clos- ings. Semiotica, 7-4:289-327.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generating overview summaries of ongoing email thread discussions",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Wan and K. McKeown. 2004. Generating overview summaries of ongoing email thread discussions. In Proceedings of COLING 2004.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Application of an hierarchical grouping procedure to a problem of grouping profiles",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Ward",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Hook",
"suffix": ""
}
],
"year": 1963,
"venue": "Educational and Psychological Measurement",
"volume": "23",
"issue": "",
"pages": "69--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. H. Ward Jr. and M. E. Hook. 1963. Application of an hierarchical grouping procedure to a problem of grouping profiles. Educational and Psychological Measurement, 23, 69-81.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic generation of concise summaries of spoken dialogues in unrestricted domains",
"authors": [
{
"first": "K",
"middle": [],
"last": "Zechner",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Zechner. 2001. Automatic generation of concise summaries of spoken dialogues in unrestricted do- mains. In Proceedings of SIGIR 2001.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "An example of chat subtopic structure and relation between correspondences.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "An original Kernel Traffic digest.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "A reference summary reproduced from a summary digest.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"text": "http://www.fjoch.com/YASMET.html 4 http://www.csie.ntu.edu.tw/~cjlin/libsvm/",
"content": "<table><tr><td colspan=\"2\">\u2022 number of overlapping words \u2022 number of overlapping content words \u2022 ratio of overlapping words \u2022 ratio of overlapping content words \u2022 number of overlapping tech words</td><td/></tr><tr><td colspan=\"2\">Table 1. Lexical features.</td><td/></tr><tr><td>Feature sets</td><td>baseline MaxEnt</td><td>SVM</td></tr><tr><td/><td>64.67%</td><td/></tr><tr><td>Structural</td><td colspan=\"2\">61.22% 71.79%</td></tr><tr><td>Lexical</td><td colspan=\"2\">62.24% 72.22%</td></tr><tr><td>Structural + Lexical</td><td colspan=\"2\">72.61% 72.79%</td></tr><tr><td colspan=\"3\">Table 2. Accuracy on identifying APs.</td></tr></table>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"text": "Benjamin Reed: and finally, what's up with sysctl? ... Linus Torvalds replied: sysctl is deprecated. ...",
"content": "<table><tr><td>Mini 1:</td></tr><tr><td>Benjamin Reed wrote a wireless Ethernet driver that</td></tr><tr><td>used /proc as its interface. But he was a little uncom-</td></tr><tr><td>fortable \u2026 and asked if there were any conventions he</td></tr><tr><td>should follow.</td></tr><tr><td>Linus Torvalds replied with: the thing to do is to create</td></tr><tr><td>a \u2026[program code]. The /proc/drivers/ directory is</td></tr><tr><td>already there, so you'd basically do something like \u2026</td></tr><tr><td>[program code].</td></tr><tr><td>Marcin Dalecki flamed Linus: Are you just blind to the</td></tr><tr><td>never-ending format/ compatibility/ \u2026 problems the</td></tr><tr><td>whole idea behind /proc induces inherently?</td></tr><tr><td>\u2026[example]</td></tr><tr><td>Mini 2:</td></tr></table>"
}
}
}
}