| { |
| "paper_id": "K15-1024", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:08:38.540832Z" |
| }, |
| "title": "Recovering Traceability Links in Requirements Documents", |
| "authors": [ |
| { |
| "first": "Zeheng", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Southern Methodist University", |
| "location": { |
| "settlement": "Dallas", |
| "postCode": "75275-0122", |
| "region": "TX" |
| } |
| }, |
| "email": "zehengl@smu.edu" |
| }, |
| { |
| "first": "Mingrui", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Southern Methodist University", |
| "location": { |
| "settlement": "Dallas", |
| "postCode": "75275-0122", |
| "region": "TX" |
| } |
| }, |
| "email": "mingruic@smu.edu" |
| }, |
| { |
| "first": "Liguo", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Southern Methodist University", |
| "location": { |
| "settlement": "Dallas", |
| "postCode": "75275-0122", |
| "region": "TX" |
| } |
| }, |
| "email": "lghuang@smu.edu" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Texas at Dallas", |
| "location": { |
| "settlement": "Richardson", |
| "postCode": "75083-0688", |
| "region": "TX" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Software system development is guided by the evolution of requirements. In this paper, we address the task of requirements traceability, which is concerned with providing bi-directional traceability between various requirements, enabling users to find the origin of each requirement and track every change made to it. We propose a knowledge-rich approach to the task, where we extend a supervised baseline system with (1) additional training instances derived from human-provided annotator rationales; and (2) additional features derived from a hand-built ontology. Experiments demonstrate that our approach yields a relative error reduction of 11.1-19.7%.", |
| "pdf_parse": { |
| "paper_id": "K15-1024", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Software system development is guided by the evolution of requirements. In this paper, we address the task of requirements traceability, which is concerned with providing bi-directional traceability between various requirements, enabling users to find the origin of each requirement and track every change made to it. We propose a knowledge-rich approach to the task, where we extend a supervised baseline system with (1) additional training instances derived from human-provided annotator rationales; and (2) additional features derived from a hand-built ontology. Experiments demonstrate that our approach yields a relative error reduction of 11.1-19.7%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Software system development is guided by the evolution and refinement of requirements. Requirements specifications, which are mostly documented using natural language, are refined with additional design details and implementation information as the development life cycle progresses. A crucial task throughout the entire development life cycle is requirements traceability, which is concerned with linking pairs of requirements in which one is a refinement of the other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Specifically, one is given a set of high-level (coarse-grained) requirements and a set of low-level (fine-grained) requirements, and the goal of requirements traceability is to find for each high-level requirement all the low-level requirements that refine it. Note that the resulting mapping between high- and low-level requirements is many-to-many, because a low-level requirement can potentially refine more than one high-level requirement. As an example, consider the three high-level requirements and two low-level requirements shown in Figure 1 about the well-known Pine email system. In this example, three traceability links should be established: (1) HR01 is refined by UC01 (because UC01 specifies the shortcut key for saving an entry in the address book);", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 542, |
| "end": 550, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(2) HR02 is refined by UC01 (because UC01 specifies how to store contacts in the address book); and (3) HR03 is refined by UC02 (because both of them are concerned with the help system).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "From a text mining perspective, requirements traceability is a very challenging task. First, there could be abundant information irrelevant to the establishment of a link in one or both of the requirements. For instance, all the information under the Description section in UC01 is irrelevant to the establishment of the link between UC01 and HR02. Worse still, as the goal is to induce a many-to-many mapping, information irrelevant to the establishment of one link could be relevant to the establishment of another link involving the same requirement. For instance, while the Description section is irrelevant when linking UC01 and HR02, it is crucial for linking UC01 and HR01. Above all, a link can exist between a pair of requirements (HR01 and UC01) even if they do not possess any overlapping or semantically similar content words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Virtually all existing approaches to the requirements traceability task were developed in the software engineering (SE) research community. Related work on this task can be broadly divided into two categories. In manual approaches, requirements traceability links are recovered manually by developers. Automated approaches, on the other hand, have relied on information retrieval (IR) techniques, which recover links by computing the similarity between a given pair of requirements. Hence, such similarity-based approaches are unable to recover links between pairs of requirements that do not contain overlapping or semantically similar words or phrases. (Figure 1: Samples of high- and low-level requirements.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 656, |
| "end": 664, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In light of this weakness, we recast requirements traceability as a supervised binary classification task, where we classify each pair of high- and low-level requirements as positive (having a link) or negative (not having a link). In particular, we propose a knowledge-rich approach to the task, where we extend a supervised baseline employing only word pairs and LDA-induced topics as features (see Section 4) with two types of human-supplied knowledge. First, we employ annotator rationales. In the context of requirements traceability, rationales are human-annotated words or phrases in a pair of high- and low-level requirements that motivated a human annotator to establish a link between the two. In other words, rationales contain the information relevant to the establishment of a link. Therefore, using them could allow a learner to focus on the relevant portions of a requirement. Motivated by Zaidan et al. (2007) , we employ rationales to create additional training instances for the learner.", |
| "cite_spans": [ |
| { |
| "start": 905, |
| "end": 925, |
| "text": "Zaidan et al. (2007)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Second, we employ an ontology hand-built by a domain expert. A sample ontology built for the Pine domain is shown in Table 1 . As we can see, the ontology contains a verb clustering and a noun clustering: the verbs are clustered by the function they perform, whereas a noun cluster corresponds to a (domain-specific) semantic type. We employ the ontology to derive additional features.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 124, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There are at least two reasons why the ontologybased features might be useful for identifying traceability links. First, since only those verbs and nouns that (1) appear in the training data and (2) are deemed relevant by the domain expert for link identification are included in the ontology, it provides guidance to the learner as to which words/phrases in the requirements it should focus on in the learning process. 1 Second, the verb and noun clusters provide a robust generalization of the words/phrases in the requirements. For instance, a word pair that is relevant for link identification may still be ignored by the learner due to its infrequency of occurrence. The features computed based on these clusters, on the other hand, will be more robust to the infrequency problem and could therefore provide better generalizations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our contributions are three-fold. First, the knowledge-rich approach we propose for requirements traceability significantly outperforms a supervised baseline on two traceability datasets, Pine and WorldVistA. Second, we increase the NLP community's awareness of this under-studied, challenging, yet important problem in SE, which could lead to fruitful inter-disciplinary collaboration. Third, to facilitate future research on this problem, we make our annotated resources, including the datasets, the rationales, and the ontologies, [Table 1. Sample ontology for Pine. Message: mail, message, email, e-mail, PDL, subjects; Contact: contact, addresses, multiple addresses; Folder: folder, folder list, tree structure; Location: address book, address field, entry, address; Platform: windows, unix, window system, unix system; Module: help system, spelling check, Pico, shell; Protocol: MIME,]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 534, |
| "end": 892, |
| "text": "[Table 1. Sample ontology for Pine. Message: mail, message, email, e-mail, PDL, subjects; Contact: contact, addresses, multiple addresses; Folder: folder, folder list, tree structure; Location: address book, address field, entry, address; Platform: windows, unix, window system, unix system; Module: help system, spelling check, Pico, shell; Protocol: MIME,]", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Related work on traceability link prediction can be broadly divided into two categories, manual approaches and automatic approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Manual requirement tracing. Traditional manual requirements tracing is usually accomplished by system analysts with the help of requirement management tools, where analysts visually examine each pair of requirements documented in the requirement management tools to build the Requirement Traceability Matrix (RTM). Most existing requirement management tools (e.g., Rational DOORS 3 , Rational RequisitePro 4 , CASE 5 ) support traceability analysis. Manual tracing is often based on observing the potential relevance between a pair of requirements belonging to different categories or at different levels of detail. The manual process is human-intensive and error-prone given a large set of requirements.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Automated requirement tracing. Automated or semi-automated requirements traceability, on the other hand, generates traceability links automatically, and hence significantly increases efficiency. Pierce (1978) designed a tool that maintains a requirements database to aid automated requirements tracing. Jackson (1991) proposed a keyphrase-based approach for tracing a large number of requirements of a large Surface Ship Command System. More advanced approaches relying on information retrieval (IR) techniques, such as the tf-idf-based vector space model (Sundaram et al., 2005) , Latent Semantic Indexing (Lormans and Van Deursen, 2006; De Lucia et al., 2007; De Lucia et al., 2009) , probabilistic networks (Cleland-Huang et al., 2005) , and Latent Dirichlet Allocation (Port et al., 2011) , have been investigated, where traceability links were generated by calculating the textual similarity between requirements using similarity measures such as Dice, Jaccard, and Cosine coefficients (Dag et al., 2002) . All these methods were developed based on either matching keywords or identifying similar words across a pair of requirements, and, unlike ours, none of them has studied the feasibility of employing supervised learning for this task.", |
| "cite_spans": [ |
| { |
| "start": 556, |
| "end": 579, |
| "text": "(Sundaram et al., 2005)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 607, |
| "end": 638, |
| "text": "(Lormans and Van Deursen, 2006;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 639, |
| "end": 661, |
| "text": "De Lucia et al., 2007;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 662, |
| "end": 684, |
| "text": "De Lucia et al., 2009)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 710, |
| "end": 738, |
| "text": "(Cleland-Huang et al., 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 773, |
| "end": 792, |
| "text": "(Port et al., 2011)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 991, |
| "end": 1009, |
| "text": "(Dag et al., 2002)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For evaluation, we employ two publicly available datasets annotated with traceability links. The first dataset, annotated by Sultanov and Hayes (2010) , involves the Pine email system developed at the University of Washington. The second dataset, annotated by Cleland-Huang et al. (2010), involves WorldVistA, an electronic health information system developed by the U.S. Veterans Administration. Statistics on these datasets are shown in Table 2. For Pine, 2499 instances can be created by pairing the 49 high-level requirements with the 51 low-level use cases. For WorldVistA, 9193 instances can be created by pairing the 29 high-level requirements with the 317 low-level specifications. As expected, these datasets have skewed class distributions: only 10% (Pine) and 4.3% (WorldVistA) of the pairs are linked. While these datasets have been annotated with traceability links, they are not annotated with annotator rationales. Consequently, we employed a software engineer specializing in requirements traceability to perform rationale annotation. We asked him to annotate rationales for a pair of requirements only if he believed that there should be a traceability link between them. The reason is that in traceability prediction, the absence of a traceability link between two requirements is attributed to the lack of evidence that they should be linked, rather than the presence of evidence that they should not be linked. More specifically, we asked the annotator to identify as rationales all the words/phrases in a pair of requirements that motivated him to label the pair as positive. For instance, for the link between HR01 and UC01 in Figure 1 , he identified two rationales from HR01 (\"shortcut key\" and \"control and the shortcut key are pressed\") and one from UC01 (\"press ctrl+x\").", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 150, |
| "text": "Sultanov and Hayes (2010)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1649, |
| "end": 1657, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A traceability link prediction ontology is composed of a verb clustering and a noun clustering. We asked a software engineer with expertise in requirements traceability to hand-build the ontology for each of the two datasets. Using his domain expertise, the engineer first identified the noun categories and verb categories that are relevant for traceability prediction. Then, by inspecting the training data, he manually populated each noun/verb category with words and phrases collected from the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hand-Building the Ontologies", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As will be discussed in Section 8, we evaluate our approach using five-fold cross validation. Since the nouns/verbs in the ontology were collected only from the training data, the software engineer built five ontologies for each dataset, one for each fold experiment. Hence, nouns/verbs that appear in only the test data in a fold experiment are not included in that experiment's ontology. In other words, our test data are truly held-out w.r.t. the construction of the ontology. Tables 1 and 3 show the complete lists of noun and verb categories identified for Pine and WorldVistA, respectively, as well as sample nouns and verbs that populate each category. Note that the five ontologies employ the same set of noun and verb categories, ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 480, |
| "end": 494, |
| "text": "Tables 1 and 3", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hand-Building the Ontologies", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this section, we describe three baseline systems for traceability prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline Systems", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The Tf-idf baseline. We employ tf-idf as our first unsupervised baseline. Each document is represented as a vector of unigrams. The value of each vector entry is its associated word's tf-idf value. Cosine similarity is used to compute the similarity between two documents. Any pair of requirements whose similarity exceeds a given threshold is labeled as positive.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Baselines", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The LDA baseline. We employ LDA (Blei et al., 2003) as our second unsupervised baseline.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 51, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Baselines", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We train an LDA model on our data to produce n topics. For Pine, we set n to 10, 20, . . ., 50. For WorldVistA, because of its larger size, we set n to 50, 60, . . ., 100. We then represent each document as a vector of length n, with each entry set to the probability that the document belongs to the corresponding topic. Cosine similarity is used as the similarity measure. Any pair of requirements whose similarity exceeds a given threshold is labeled as positive.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Baselines", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Each instance corresponds to a high-level requirement and a low-level requirement. Hence, we create instances by pairing each high-level requirement with each low-level requirement. The class value of an instance is positive if the two requirements involved should be linked; otherwise, it is negative. Each instance is represented using two types of features: Word pairs. We create one binary feature for each word pair (w i , w j ) collected from the training instances, where w i and w j are words appearing in a high-level requirement and a low-level requirement respectively. Its value is 1 if w i and w j appear in the high-level and low-level pair under consideration, respectively. LDA-induced topic pairs. Motivated by previous work, we create features based on the topics induced by an LDA model for a requirement. Specifically, we first train an LDA model on our data to obtain n topics, where n is to be tuned jointly with C on the development (dev) set. 6 Then, we create one binary feature for each topic 6 As in the LDA baseline, for Pine we set n to 10, 20, . . ., 50, and for WorldVistA, we set n to 50, 60, . . ., 100. pair (t i , t j ), where t i and t j are the topics corresponding to a high-level requirement and a low-level requirement, respectively. Its value is 1 if t i and t j are the most probable topics of the high-level and low-level pair under consideration, respectively.", |
| "cite_spans": [ |
| { |
| "start": 967, |
| "end": 968, |
| "text": "6", |
| "ref_id": null |
| }, |
| { |
| "start": 1019, |
| "end": 1020, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Baseline", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We employ LIBSVM (Chang and Lin, 2011) to train a binary SVM classifier on the training set. We use a linear kernel, setting all learning parameters to their default values except for the C (regularization) parameter, which we tune jointly with n (the number of LDA-induced topics) to maximize F-score on the dev set. 7 Since we conduct five-fold cross validation, in all experiments that require a dev set, we use three folds for training, one fold for dev, and one fold for evaluation.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 38, |
| "text": "(Chang and Lin, 2011)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 318, |
| "end": 319, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Baseline", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In this section, we describe our first extension to the baseline: exploiting rationales to generate additional training instances for the SVM learner.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploiting Rationales", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The idea of using annotator rationales to improve binary text classification was proposed by Zaidan et al. (2007) . A rationale is a human-annotated text fragment that motivated an annotator to assign a particular label to a training document. In their work on classifying the sentiment expressed in movie reviews as positive or negative, Zaidan et al. generate additional training instances by removing rationales from documents. Since these pseudo-instances lack information that the annotators thought was important, an SVM learner should be less confident about the labels of these weaker instances, and should therefore place the hyperplane closer to them. A learner that successfully learns this difference in confidence assigns a higher importance to the pieces of text that are present only in the original instances. Thus the pseudo-instances help the learner both by indicating which parts of the documents are important and by increasing the number of training instances.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 113, |
| "text": "Zaidan et al. (2007)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Unlike in sentiment analysis, where rationales can be identified for both positive and negative training reviews, in traceability prediction, rationales can only be identified for the positive training instances (i.e., pairs with links). As noted before, the reason is that in traceability prediction, an instance is labeled as negative because of the absence of evidence that the two requirements involved should be linked, rather than the presence of evidence that they should not be linked.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Using these rationales, we can create positive pseudo-instances. Note, however, that we cannot employ Zaidan et al.'s method to create positive pseudo-instances. According to their method, we would (1) take a pair of linked requirements, (2) remove the rationales from both of them, (3) create a positive pseudo-instance from the remaining text fragments, and (4) add a constraint to the SVM learner forcing it to classify it less confidently than the original positive instance. Creating positive pseudo-instances in this way is problematic for our task. The reason is simple: as discussed previously, a negative instance in our task stems from the absence of evidence that the two requirements should be linked. In other words, after removing the rationales from a pair of linked requirements, the pseudo-instance created from the remaining text fragments should be labeled as negative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Given this observation, we create a positive pseudo-instance from each pair of linked requirements by removing any text fragments from the pair that are not part of a rationale. In other words, we use only the rationales to create positive pseudo-instances. This has the effect of amplifying the information present in the rationales.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "As mentioned above, while Zaidan et al.'s method cannot be used to create positive pseudo-instances, it can be used to create negative pseudo-instances. For each pair of linked requirements, we create three negative pseudo-instances. The first one is created by removing all and only the rationales from the high-level requirement in the pair. The second one is created by removing all and only the rationales from the low-level requirement in the pair. The third one is created by removing all the rationales from both requirements in the pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "To better understand our annotator rationale framework, let us define it more formally. Recall that in a standard soft-margin SVM, the goal is to find w and \u03be to minimize", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "(1/2)|w|^2 + C \u2211_i \u03be_i subject to \u2200i : c_i (w \u2022 x_i) \u2265 1 \u2212 \u03be_i, \u03be_i > 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "where x i is a training example; c i \u2208 {\u22121, 1} is the class label of x i ; \u03be i is a slack variable that allows x i to be misclassified if necessary; and C > 0 is the misclassification penalty (a.k.a. the regularization parameter).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "To enable this standard soft-margin SVM to also learn from the positive pseudo-instances, we add the following constraints:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2200i : w \u2022 v_i \u2265 \u00b5(1 \u2212 \u03be_i),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "where v_i is the positive pseudo-instance created from positive example x_i, \u03be_i \u2265 0 is the slack variable associated with v_i, and \u00b5 is the margin size (which controls how confident the classifier is in classifying the pseudo-instances).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Similarly, to learn from the negative pseudoinstances, we add the following constraints:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2200i, j : w \u2022 u_ij \u2264 \u00b5(1 \u2212 \u03be_ij),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "where u_ij is the jth negative pseudo-instance created from positive example x_i, \u03be_ij \u2265 0 is the slack variable associated with u_ij, and \u00b5 is the margin size.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We let the learner decide how confidently it wants to classify these additional training instances based on the dev data. Specifically, we tune this confidence parameter \u00b5 jointly with the C value to maximize F-score on the dev set. 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Application to Traceability Prediction", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Next, we describe our second extension to the baseline: exploiting ontology-based features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extension 2: Exploiting an Ontology", |
| "sec_num": "7" |
| }, |
| { |
| "text": "As mentioned before, we derive additional features for the SVM learner from the verb and noun clusters in the hand-built ontology. Specifically, we derive five types of features: Verb pairs. We create one binary feature for each verb pair (v i , v j ) collected from the training instances, where (1) v i and v j appear in a high-level requirement and a low-level requirement respectively, and (2) both verbs appear in the ontology. Its value is 1 if v i and v j appear in the high-level and low-level pair under consideration, respectively. Using these verb pairs as features may allow the learner to focus on verbs that are relevant to traceability prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology-Based Features", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Verb group pairs. For each verb pair feature described above, we create one binary feature by replacing each verb in the pair with its cluster id in the ontology. Its value is 1 if the two verb groups in the pair appear in the high-level and low-level pair under consideration, respectively. These features may enable the resulting classifier to provide robust generalizations in cases where the learner chooses to ignore certain useful verb pairs owing to their infrequency of occurrence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology-Based Features", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Noun pairs. We create one binary feature for each noun pair (n i , n j ) collected from the training instances, where (1) n i and n j appear in a highlevel requirement and a low-level requirement respectively, and (2) both nouns appear in the ontology. Its value is computed in the same manner as the verb pairs. These noun pairs may help the learner to focus on verbs that are relevant to traceability prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology-Based Features", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Noun group pairs. For each noun pair feature described above, we create one binary feature by replacing each noun in the pair with its cluster id in the ontology. Its value is computed in the same manner as the verb group pairs. These features may enable the classifier to provide robust generalizations in cases where the learner chooses to ignore certain useful noun pairs owing to their infrequency of occurrence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology-Based Features", |
| "sec_num": "7.1" |
| }, |
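The group-pair variants above amount to mapping each word in a pair to its ontology cluster id before forming the feature. A minimal sketch (the cluster assignments below are hypothetical, not the paper's actual ontology):

```python
# Hypothetical verb-to-cluster-id map derived from the ontology.
VERB_CLUSTER = {"delete": 0, "remove": 0, "create": 1, "make": 1}

def to_group_pair(pair, cluster_map):
    """Replace each word in a (high-level, low-level) word pair with its
    ontology cluster id, backing off the feature to the group level."""
    vi, vj = pair
    if vi in cluster_map and vj in cluster_map:
        return (cluster_map[vi], cluster_map[vj])
    return None  # at least one word is not covered by the ontology

print(to_group_pair(("delete", "remove"), VERB_CLUSTER))  # (0, 0)
print(to_group_pair(("create", "make"), VERB_CLUSTER))    # (1, 1)
```

Because ("delete", "remove") and ("remove", "delete") both map to the group pair (0, 0), the group-level feature fires even when a particular word pair was too infrequent for the learner to exploit.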
| { |
| "text": "Dependency pairs. In some cases, the noun/verb pairs may not provide sufficient information for traceability prediction. For example, the verb pair feature (delete, delete) is suggestive of a positive instance, but the instance may turn out to be negative if one requirement concerns deleting messages and the other concerns deleting folders. As another example, the noun pair feature (folder, folder) is suggestive of a positive instance, but the instance may turn out to be negative if one requirement concerns creating folders and the other concerns deleting folders.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology-Based Features", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "In other words, we need to develop features that encode the relationship between verbs and nouns. To do so, we first parse each requirement using the Stanford dependency parser (de Marneffe et al., 2006) , and collect each noun-verb pair (n i ,v j ) connected by a dependency relation. We then create binary features by pairing each related nounverb pair found in a high-level training requirement with each related noun-verb pair found in a low-level training requirement. The feature value is 1 if the two noun-verb pairs appear in the pair of requirements under consideration. To enable the learner to focus on learning from relevant verbs and nouns, only verbs and nouns that appear in the ontology are used to create these features.", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 203, |
| "text": "(de Marneffe et al., 2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology-Based Features", |
| "sec_num": "7.1" |
| }, |
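The dependency-pair features can be sketched as follows. We assume each requirement has already been parsed (the paper uses the Stanford dependency parser) into a set of (noun, verb) pairs connected by a dependency relation; the ontology word lists and example pairs below are hypothetical.

```python
# Hypothetical ontology word lists.
ONTOLOGY_NOUNS = {"message", "folder"}
ONTOLOGY_VERBS = {"delete", "create"}

def filtered(dep_pairs):
    # Keep only noun-verb pairs whose words both appear in the ontology.
    return {(n, v) for n, v in dep_pairs
            if n in ONTOLOGY_NOUNS and v in ONTOLOGY_VERBS}

def dependency_pair_feature(high_deps, low_deps, hi_pair, lo_pair):
    """Binary feature: 1 iff hi_pair occurs among the high-level requirement's
    dependency pairs and lo_pair among the low-level requirement's."""
    return int(hi_pair in filtered(high_deps) and lo_pair in filtered(low_deps))

high = {("message", "delete")}  # e.g., "the user deletes a message"
low = {("folder", "delete")}    # e.g., "delete the selected folder"
print(dependency_pair_feature(high, low, ("message", "delete"), ("message", "delete")))  # 0
print(dependency_pair_feature(high, low, ("message", "delete"), ("folder", "delete")))   # 1
```

Note how the first feature stays 0: both requirements contain "delete", but they act on different objects, which is exactly the distinction the plain verb-pair feature (delete, delete) cannot make.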
| { |
| "text": "An interesting question is: is it possible to learn an ontology rather than hand-building it? This question is of practical relevance, as hand-constructing the ontology is a time-consuming and error-prone process. Below we describe the steps we propose for ontology learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the Ontology", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Step 1: Verb/Noun selection. We select the nouns, noun phrases (NPs) and verbs in the training set to be clustered. Specifically, we select a verb/noun/NP if (1) it appears more than once in the training data; (2) it contains at least three characters (thus avoiding verbs such as be); and (3) it appears in the high-level but not the low-level requirements and vice versa.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the Ontology", |
| "sec_num": "7.2" |
| }, |
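The three selection criteria can be sketched as a simple filter. This is a minimal sketch under our reading of criterion (3) (a word must appear in one level but not the other); the frequency counts are hypothetical.

```python
from collections import Counter

def select_candidates(high_counts, low_counts):
    """Select words to be clustered per the three criteria in Step 1."""
    selected = []
    for w in set(high_counts) | set(low_counts):
        total = high_counts[w] + low_counts[w]
        if total <= 1:            # (1) must appear more than once
            continue
        if len(w) < 3:            # (2) at least three characters (skips "be")
            continue
        in_high, in_low = high_counts[w] > 0, low_counts[w] > 0
        if in_high != in_low:     # (3) one level but not the other
            selected.append(w)
    return sorted(selected)

high = Counter({"archive": 2, "be": 5, "folder": 3})
low = Counter({"folder": 2, "compress": 4})
print(select_candidates(high, low))  # ['archive', 'compress']
```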
| { |
| "text": "Step 2: Verb/Noun representation. We represent each noun/NP/verb as a feature vector. Each verb v is represented using the set of nouns/NPs collected in Step 1. The value of each feature is binary: 1 if the corresponding noun/NP occurs as the direct or indirect object of v in the training data (as determined by the Stanford dependency parser), and 0 otherwise. Similarly, each noun n is represented using the set of verbs collected in Step 1. The value of each feature is binary: 1 if n serves as the direct or indirect object of the corresponding verb in the training data, and 0 otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the Ontology", |
| "sec_num": "7.2" |
| }, |
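The verb representation in Step 2 can be sketched as follows; the symmetric noun representation swaps the roles of the two word sets. The observed object relations below are hypothetical stand-ins for what the dependency parser would extract from the training data.

```python
def verb_vector(verb, nouns, object_of):
    """Binary vector over the selected nouns/NPs: position i is 1 iff
    nouns[i] occurs as a direct or indirect object of `verb` in training."""
    return [1 if (verb, n) in object_of else 0 for n in nouns]

# Hypothetical (verb, object-noun) relations observed via dependency parsing.
nouns = ["folder", "message", "password"]
object_of = {("delete", "folder"), ("delete", "message"), ("reset", "password")}
print(verb_vector("delete", nouns, object_of))  # [1, 1, 0]
print(verb_vector("reset", nouns, object_of))   # [0, 0, 1]
```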
| { |
| "text": "Step 3: Clustering. To produce a verb clustering and a noun clustering, we cluster the verbs and the nouns/NPs separately using the single-link algorithm. Single-link is an agglomerative algorithm where each object to be clustered is initially in its own cluster. In each iteration, it merges the two most similar clusters and stops when the desired number of clusters is reached. Since we are using single-link clustering, the similarity between two clusters is the similarity between the two most similar objects in the two clusters. We compute the similarity between two objects by taking the dot product of their feature vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the Ontology", |
| "sec_num": "7.2" |
| }, |
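Single-link clustering with dot-product similarity can be sketched directly, without a clustering library. This is a toy sketch (the four feature vectors are hypothetical), and it uses the naive O(n^3)-ish merge loop rather than an optimized implementation.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def single_link(vectors, k):
    """Agglomerative single-link clustering: start with singletons, repeatedly
    merge the two most similar clusters until k clusters remain.  Under single
    link, cluster-to-cluster similarity is the max pairwise similarity."""
    clusters = [[i] for i in range(len(vectors))]
    while len(clusters) > k:
        best, best_sim = None, float("-inf")
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                sim = max(dot(vectors[i], vectors[j])
                          for i in clusters[a] for j in clusters[b])
                if sim > best_sim:
                    best, best_sim = (a, b), sim
        a, b = best
        clusters[a] += clusters.pop(b)  # b > a, so pop(b) is safe
    return clusters

vecs = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1]]
print(single_link(vecs, 2))  # [[0, 1, 3], [2]]
```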
| { |
| "text": "Since we do not know the number of clusters to be produced a priori, for Pine we produce three noun clusterings and three verb clusterings (with 10, 15, and 20 clusters each). For WorldVistA, given its larger size, we produce five noun clusterings and five verb clusterings (with 10, 20, 30, 40, and 50 clusters each) . We then select the combination of noun clustering, verb clustering, and C value that maximizes F-score on the dev set, and apply the resulting combination on the test set.", |
| "cite_spans": [ |
| { |
| "start": 274, |
| "end": 317, |
| "text": "(with 10, 20, 30, 40, and 50 clusters each)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the Ontology", |
| "sec_num": "7.2" |
| }, |
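The model selection step amounts to a grid search over (noun clustering, verb clustering, C). A minimal sketch; `evaluate_on_dev` is a hypothetical stand-in for training an SVM with the given configuration and scoring it on the dev set, and the toy scores below are invented.

```python
import itertools

def select_configuration(noun_ks, verb_ks, c_values, evaluate_on_dev):
    """Exhaustively try every (noun-k, verb-k, C) combination and keep
    the one with the highest dev-set F-score."""
    best_cfg, best_f = None, -1.0
    for cfg in itertools.product(noun_ks, verb_ks, c_values):
        f = evaluate_on_dev(*cfg)
        if f > best_f:
            best_cfg, best_f = cfg, f
    return best_cfg, best_f

# Toy stand-in: pretend 15 noun clusters, 10 verb clusters, C=100 works best.
scores = {(15, 10, 100): 0.62}
best = select_configuration([10, 15, 20], [10, 15, 20], [1, 10, 100, 1000, 10000],
                            lambda n, v, c: scores.get((n, v, c), 0.5))
print(best)  # ((15, 10, 100), 0.62)
```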
| { |
| "text": "To compare the usefulness of the hand-built and induced ontologies, in our evaluation we will perform separate experiments in which each ontology is used to derive the features from Section 7.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the Ontology", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "We employ as our evaluation measure F-score, which is the unweighted harmonic mean of recall and precision. Recall (R) is the percentage of links in the gold standard that are recovered by our system. Precision (P) is the percentage of links recovered by our system that are correct. We preprocess each document by removing stopwords and stemming the remaining words. All results are obtained via five-fold cross validation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation 8.1 Experimental Setup", |
| "sec_num": "8" |
| }, |
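The evaluation measures can be computed directly over the sets of recovered and gold links. A minimal sketch; the link identifiers below are hypothetical.

```python
def prf(recovered, gold):
    """Precision, recall, and F-score over traceability links,
    where each link is a (high-level id, low-level id) pair."""
    tp = len(recovered & gold)                       # correctly recovered links
    p = tp / len(recovered) if recovered else 0.0    # precision
    r = tp / len(gold) if gold else 0.0              # recall
    f = 2 * p * r / (p + r) if p + r else 0.0        # unweighted harmonic mean
    return p, r, f

gold = {("H1", "L1"), ("H1", "L2"), ("H2", "L3"), ("H2", "L4")}
recovered = {("H1", "L1"), ("H2", "L3"), ("H3", "L5")}
p, r, f = prf(recovered, gold)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.5 0.571
```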
| { |
| "text": "Results on Pine and WorldVistA are shown in Table 4(a) and Table 4 (b), respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 59, |
| "end": 66, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "The \"No pseudo\" column of Table 4 shows the results when the learner learns from only real training instances (i.e., no pseudo-instances). Specifically, rows 1 and 2 show the results of the two unsupervised baselines, tf-idf and LDA, respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 26, |
| "end": 33, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "No Pseudo-instances", |
| "sec_num": "8.2.1" |
| }, |
| { |
| "text": "Recall from Section 5.1 that in both baselines, we compute the cosine similarity between a pair of requirements, positing them as having a traceability link if and only if their similarity score exceeds a threshold that is tuned based on the test set. By doing so, we are essentially giving both unsupervised baselines an unfair advantage in the evaluation. As we can see from rows 1 and 2 of the table, tf-idf achieves F-scores of 54.5% on Pine and 46.5% on WorldVistA. LDA performs significantly worse than tf-idf, achieving F-scores of 34.2% on Pine and 15.1% on WorldVistA. 9 Row 3 shows the results of the supervised baseline described in Section 5.2. As we can see, this baseline achieves F-scores of 57.5% on Pine and 63.3% on WorldVistA, significantly outperforming the better unsupervised baseline (tf-idf) 9 All significance tests are paired t-tests (p < 0.05).", |
| "cite_spans": [ |
| { |
| "start": 578, |
| "end": 579, |
| "text": "9", |
| "ref_id": null |
| }, |
| { |
| "start": 816, |
| "end": 817, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Pseudo-instances", |
| "sec_num": "8.2.1" |
| }, |
| { |
| "text": "on both datasets. When this baseline is augmented with features derived from manual clusters (row 4), the resulting system achieves F-scores of 62.6% on Pine and 64.2% on WorldVistA, outperforming the supervised baseline by 5.1% and 0.9% in F-score on these datasets. These results represent significant improvements over the supervised baseline on both datasets, suggesting the usefulness of the features derived from manual clusters for traceability link prediction. When employing features derived from induced rather than manual clusters (row 5), the resulting system achieves Fscores of 61.7% on Pine and 64.6% on World-VistA, outperforming the supervised baseline by 4.2% and 1.3% in F-score on these datasets. These results also represent significant improvements over the supervised baseline on both datasets. In addition, the results obtained using manual clusters (row 4) and induced clusters (row 5) are statistically indistinguishable. This result suggests that the ontologies we induced can potentially be used in lieu of the manually constructed ontologies for traceability link prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Pseudo-instances", |
| "sec_num": "8.2.1" |
| }, |
| { |
| "text": "The \"Pseudo pos only\" column of Table 4 shows the results when each of the systems is trained with additional positive pseudo-instances.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 39, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Using Positive Pseudo-instances", |
| "sec_num": "8.2.2" |
| }, |
| { |
| "text": "Comparing the first two columns, we can see that employing positive pseudo-instances increases performance on Pine (F-scores rise by 0.7-1.1%) but decreases performance on WorldVistA (F-scores drop by 0.3-2.1%). Nevertheless, the corresponding F-scores in all but one case (Pine, induced) are statistically indistinguishable. These results seem to suggest that the addition of positive pseudo-instances is not useful for traceability link prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Positive Pseudo-instances", |
| "sec_num": "8.2.2" |
| }, |
| { |
| "text": "Note that the addition of features derived from manual/induced clusters to the supervised baseline no longer consistently improves its performance: while F-scores still rise significantly by 4.6-5.5% on Pine, they drop insignificantly by 0.1-0.5% on WorldVistA.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Positive Pseudo-instances", |
| "sec_num": "8.2.2" |
| }, |
| { |
| "text": "Pseudo-instances", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Positive and Negative", |
| "sec_num": "8.2.3" |
| }, |
| { |
| "text": "The \"Pseudo pos+neg\" column of Table 4 shows the results when each of the systems is trained with additional positive and negative pseudo-instances. Comparing these results with the corresponding \"Pseudo pos only\" results, we can see that additionally employing negative pseudo-instances consistently improves performance: F-scores rise by 0.8-4.1% on Pine and 3.0-4.9% on World-VistA. In particular, the improvements in F-score in three of the six cases (Pine/Baseline, World-VistA/manual, WorldVistA/induced) are statistically significant. These results suggest that the additional negative pseudo-instances provide useful information for traceability link prediction. In addition, the use of features derived from manual/induced clusters to the supervised baseline consistently improves its performance: F-scores rise significantly by 1.3-3.6% on Pine and significantly by 1.4-1.6% on WorldVistA.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 38, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Using Positive and Negative", |
| "sec_num": "8.2.3" |
| }, |
| { |
| "text": "Finally, the best results in our experiments are achieved when both positive and negative pseudoinstances are used in combination with manual/induced clusters: F-scores reach 63.6-65.9% on Pine and 67.4-67.6% on WorldVistA. These results translate to significant improvements in Fscore over the supervised baseline by 6.1-8.4% on Pine and 4.1-4.3% on WorldVistA, or relative error reductions of 14.3-19.7% on Pine and 11.1-11.7% on WorldVistA.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Positive and Negative", |
| "sec_num": "8.2.3" |
| }, |
| { |
| "text": "Recall that Zaidan et al. (2007) created pseudoinstances from the text fragments that remain after the rationales are removed. In Section 6.3, we argued that their method of creating positive pseudoinstances for our requirements traceability task is problematic. In this subsection, we empirically verify the correctness of this claim.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 32, |
| "text": "Zaidan et al. (2007)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pseudo-instances from Residuals", |
| "sec_num": "8.2.4" |
| }, |
| { |
| "text": "Specifically, the \"Pseudo residual\" column of Table 4 shows the results when each of the \"No pseudo\" systems is additionally trained on the pos-itive pseudo-instances created using Zaidan et al.'s method. Comparing these results with the corresponding \"Pseudo pos+neg\" results, we see that replacing our method of creating positive pseudoinstances with Zaidan et al.'s method causes the F-scores to drop significantly by 7.7-23.6% in all cases. In fact, comparing these results with the corresponding \"No pseudo\" results, we see that except for the baseline system, employing positive pseudo-instances created from Zaidan et al.'s method yields significantly worse results than not employing pseudo-instances at all. These results provide suggestive evidence for our claim.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 46, |
| "end": 53, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pseudo-instances from Residuals", |
| "sec_num": "8.2.4" |
| }, |
| { |
| "text": "We investigated a knowledge-rich approach to an important yet under-studied SE task that presents a lot of challenges to NLP researchers: traceability prediction. Experiments on two evaluation datasets showed that (1) in comparison to a supervised baseline, this method reduces relative error by 11.1-19.7%; and (2) results obtained using induced clusters were competitive with those obtained using manual clusters. To stimulate research on this task, we make our annotated resources publicly available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "9" |
| }, |
| { |
| "text": "Note that both the rationales and the words/phrases in the ontology could help the learner by allowing it to focus on relevant materials in a given pair of requirements. Nevertheless, they are not identical: rationales are words/phrases that are relevant to the establishment of a particular traceability link, whereas the words/phrases in the ontology are relevant to link establishment in general in the given domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See our website at http://lyle.smu.edu/ lghuang/research/Traceability/ for these annotated resources.3 http://www-03.ibm.com/software/ products/en/ratidoor 4 http://www.ibm.com/developerworks/ downloads/r/rrp 5 http://www.analysttool.com", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "C was selected from the set {1, 10, 100, 1000, 10000}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "C was selected from the set {1, 10, 100, 100, 10000}, and \u00b5 was selected from the set {0.2, 0.3, 1, 3, 5}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the three anonymous reviewers for their insightful comments on an earlier draft of the paper. This research was supported in part by the U.S. Department of Defense Systems Engineering Research Center (SERC) new project incubator fund RT-128 and the U.S. National Science Foundation (NSF Awards CNS-1126747 and IIS-1219142).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Latent Dirichlet Allocation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "993--1022", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research, 3:993-1022.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "LIB-SVM: a library for support vector machines", |
| "authors": [ |
| { |
| "first": "Chih-Chung", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chih-Jen", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACM Transactions on Intelligent Systems and Technology", |
| "volume": "2", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIB- SVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):27.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Utilizing supporting evidence to improve dynamic requirements traceability", |
| "authors": [ |
| { |
| "first": "Jane", |
| "middle": [], |
| "last": "Cleland-Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Raffaella", |
| "middle": [], |
| "last": "Settimi", |
| "suffix": "" |
| }, |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Duan", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuchang", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 13th IEEE International Conference on Requirements Engineering", |
| "volume": "", |
| "issue": "", |
| "pages": "135--144", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jane Cleland-Huang, Raffaella Settimi, Chuan Duan, and Xuchang Zou. 2005. Utilizing supporting evi- dence to improve dynamic requirements traceability. In Proceedings of the 13th IEEE International Con- ference on Requirements Engineering, pages 135- 144.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A machine learning approach for tracing regulatory codes to product specific requirements", |
| "authors": [ |
| { |
| "first": "Jane", |
| "middle": [], |
| "last": "Cleland-Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Czauderna", |
| "suffix": "" |
| }, |
| { |
| "first": "Marek", |
| "middle": [], |
| "last": "Gibiec", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Emenecker", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering", |
| "volume": "1", |
| "issue": "", |
| "pages": "155--164", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jane Cleland-Huang, Adam Czauderna, Marek Gibiec, and John Emenecker. 2010. A machine learn- ing approach for tracing regulatory codes to prod- uct specific requirements. In Proceedings of the 32nd ACM/IEEE International Conference on Soft- ware Engineering (Volume 1), pages 155-164.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A feasibility study of automated natural language requirements analysis in market-driven development", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Natt Dag", |
| "suffix": "" |
| }, |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Regnell", |
| "suffix": "" |
| }, |
| { |
| "first": "P\u00e4r", |
| "middle": [], |
| "last": "Carlshamre", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Andersson", |
| "suffix": "" |
| }, |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Karlsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Requirements Engineering", |
| "volume": "7", |
| "issue": "1", |
| "pages": "20--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan Natt Dag, Bj\u00f6rn Regnell, P\u00e4r Carlshamre, Michael Andersson, and Joachim Karlsson. 2002. A feasibility study of automated natural language re- quirements analysis in market-driven development. Requirements Engineering, 7(1):20-33.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Recovering traceability links in software artifact management systems using information retrieval methods", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "De Lucia", |
| "suffix": "" |
| }, |
| { |
| "first": "Fausto", |
| "middle": [], |
| "last": "Fasano", |
| "suffix": "" |
| }, |
| { |
| "first": "Rocco", |
| "middle": [], |
| "last": "Oliveto", |
| "suffix": "" |
| }, |
| { |
| "first": "Genoveffa", |
| "middle": [], |
| "last": "Tortora", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACM Transactions on Software Engineering and Methodology", |
| "volume": "16", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea De Lucia, Fausto Fasano, Rocco Oliveto, and Genoveffa Tortora. 2007. Recovering traceabil- ity links in software artifact management systems using information retrieval methods. ACM Trans- actions on Software Engineering and Methodology, 16(4):13.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Assessing IR-based traceability recovery tools through controlled experiments", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "De Lucia", |
| "suffix": "" |
| }, |
| { |
| "first": "Rocco", |
| "middle": [], |
| "last": "Oliveto", |
| "suffix": "" |
| }, |
| { |
| "first": "Genoveffa", |
| "middle": [], |
| "last": "Tortora", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Empirical Software Engineering", |
| "volume": "14", |
| "issue": "1", |
| "pages": "57--92", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea De Lucia, Rocco Oliveto, and Genoveffa Tor- tora. 2009. Assessing IR-based traceability recov- ery tools through controlled experiments. Empirical Software Engineering, 14(1):57-92.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Generating typed dependency parses from phrase structure parses", |
| "authors": [ |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Maccartney", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "449--454", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation, pages 449- 454.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A keyphrase based traceability scheme", |
| "authors": [ |
| { |
| "first": "Justin", |
| "middle": [ |
| "Jackson" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "IEEE Colloquium on Tools and Techniques for Maintaining Traceability During Design", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Justin Jackson. 1991. A keyphrase based traceability scheme. In IEEE Colloquium on Tools and Tech- niques for Maintaining Traceability During Design, pages 2/1-2/4. IET.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Can LSI help reconstructing requirements traceability in design and test?", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Lormans", |
| "suffix": "" |
| }, |
| { |
| "first": "Arie", |
| "middle": [], |
| "last": "Van Deursen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 10th European Conference on Software Maintenance and Reengineering", |
| "volume": "", |
| "issue": "", |
| "pages": "47--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Lormans and Arie Van Deursen. 2006. Can LSI help reconstructing requirements traceability in de- sign and test? In Proceedings of the 10th European Conference on Software Maintenance and Reengi- neering, pages 47-56.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A requirements tracing tool", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Robert", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pierce", |
| "suffix": "" |
| } |
| ], |
| "year": 1978, |
| "venue": "ACM SIGSOFT Software Engineering Notes", |
| "volume": "3", |
| "issue": "5", |
| "pages": "53--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert A Pierce. 1978. A requirements tracing tool. ACM SIGSOFT Software Engineering Notes, 3(5):53-60.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Text mining support for software requirements: Traceability assurance", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Port", |
| "suffix": "" |
| }, |
| { |
| "first": "Allen", |
| "middle": [], |
| "last": "Nikora", |
| "suffix": "" |
| }, |
| { |
| "first": "Jane", |
| "middle": [ |
| "Huffman" |
| ], |
| "last": "Hayes", |
| "suffix": "" |
| }, |
| { |
| "first": "Liguo", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 44th Hawaii International Conference on System Sciences", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Port, Allen Nikora, Jane Huffman Hayes, and LiGuo Huang. 2011. Text mining support for soft- ware requirements: Traceability assurance. In Pro- ceedings of the 44th Hawaii International Confer- ence on System Sciences, pages 1-11.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Application of swarm techniques to requirements engineering: Requirements tracing", |
| "authors": [ |
| { |
| "first": "Hakim", |
| "middle": [], |
| "last": "Sultanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Jane", |
| "middle": [ |
| "Huffman" |
| ], |
| "last": "Hayes", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Requirements Engineering Conference (RE), 2010 18th IEEE International", |
| "volume": "", |
| "issue": "", |
| "pages": "211--220", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hakim Sultanov and Jane Huffman Hayes. 2010. Ap- plication of swarm techniques to requirements en- gineering: Requirements tracing. In Requirements Engineering Conference (RE), 2010 18th IEEE In- ternational, pages 211-220.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Baselines in requirements tracing", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Senthil Karthikeyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jane", |
| "middle": [ |
| "Huffman" |
| ], |
| "last": "Sundaram", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Hayes", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Dekhtyar", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ACM SIGSOFT Software Engineering Notes", |
| "volume": "30", |
| "issue": "4", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Senthil Karthikeyan Sundaram, Jane Huffman Hayes, and Alexander Dekhtyar. 2005. Baselines in re- quirements tracing. ACM SIGSOFT Software En- gineering Notes, 30(4):1-6.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Using \"annotator rationales\" to improve machine learning for text categorization", |
| "authors": [ |
| { |
| "first": "Omar", |
| "middle": [], |
| "last": "Zaidan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Piatko", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "260--267", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using \"annotator rationales\" to improve ma- chine learning for text categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260-267.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "text": "Manual ontology for Pine.", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "Statistics on the datasets.", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "text": "Manual ontology for WorldVistA. differing only w.r.t. the nouns and verbs that populate each category. As we can see, for Pine, eight groups of nouns and ten groups of verbs are defined, and for WorldVistA, 31 groups of nouns and 14 groups of verbs are defined. Each noun category represents a domain-specific semantic class, and each verb category corresponds to a function performed by the action underlying a verb.", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "text": "Supervised baseline 50.4 67.0 57.5 51.2 67.3 58.2 53.9 73.8 62.3 31.6 68.6 43.2 4 + manual clusters 54.4 73.9 62.6 55.6 74.7 63.7 57.6 77.0 65.9 30.0 72.1 42.3 5 + induced clusters 53.6 72.8 61.7 54.8 73.6 62.8 55.2 75.0 63.6 30.0 73.5 42.6", |
| "content": "<table><tr><td/><td/><td colspan=\"2\">No pseudo</td><td/><td colspan=\"9\">Pseudo pos only Pseudo pos+neg Pseudo residual</td></tr><tr><td/><td>System</td><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td></tr><tr><td colspan=\"2\">1 Tf-idf baseline</td><td colspan=\"4\">73.6 43.3 54.5 -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">2 LDA baseline</td><td colspan=\"4\">30.4 39.2 34.2 -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"7\">3 (a) Pine</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"2\">No pseudo</td><td/><td colspan=\"9\">Pseudo pos only Pseudo pos+neg Pseudo residual</td></tr><tr><td/><td>System</td><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td></tr><tr><td colspan=\"2\">1 Tf-idf baseline</td><td colspan=\"4\">60.4 37.8 46.5 -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">2 LDA baseline</td><td colspan=\"4\">25.9 10.6 15.1 -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"14\">3 Supervised baseline 52.5 79.9 63.3 52.2 79.2 63.0 55.9 80.6 66.0 49.2 71.5 58.3</td></tr><tr><td>4</td><td>+ manual clusters</td><td colspan=\"12\">52.5 82.8 64.2 51.5 80.8 62.9 57.1 83.0 67.6 47.7 76.1 58.6</td></tr><tr><td>5</td><td colspan=\"13\">+ induced clusters 52.8 83.2 64.6 51.0 80.7 62.5 57.1 82.1 67.4 47.7 76.4 58.7</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">(b) WorldVistA</td><td/><td/><td/><td/><td/><td/></tr></table>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "text": "Results of supervised systems on the Pine and WorldVistA datasets.", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |