{
"File Number": "1019",
"Title": "Split-NER: Named Entity Recognition via Two Question-Answering-based Classifications",
| "3 A1. Did you describe the limitations of your work?": "Limitations Section\nA2. Did you discuss any potential risks of your work? Not applicable. Left blank.", |
| "abstractText": "In this work, we address the NER problem by splitting it into two logical sub-tasks: (1) Span Detection which simply extracts mention spans of entities, irrespective of entity type; (2) Span Classification which classifies the spans into their entity types. Further, we formulate both sub-tasks as question-answering (QA) problems and produce two leaner models which can be optimized separately for each sub-task. Experiments with four crossdomain datasets demonstrate that this two-step approach is both effective and time efficient. Our system, SplitNER outperforms baselines on OntoNotes5.0, WNUT17 and a cybersecurity dataset and gives on-par performance on BioNLP13CG. In all cases, it achieves a significant reduction in training time compared to its QA baseline counterpart. The effectiveness of our system stems from fine-tuning the BERT model twice, separately for span detection and classification. The source code can be found at github.com/c3sr/split-ner.", |
| "1 Introduction": "Named entity recognition (NER) is a foundational task for a variety of applications like question answering and machine translation (Li et al., 2020a). Traditionally, NER has been seen as a sequence labeling task where a model is trained to classify each token of a sequence to a predefined class (Carreras et al., 2002, 2003; Chiu and Nichols, 2016; Lample et al., 2016; Ma and Hovy, 2016; Devlin et al., 2019; Wan et al., 2022).\nRecently, there has been a new trend of formulating NER as span prediction problem (Stratos, 2017; Li et al., 2020b; Jiang et al., 2020; Ouchi et al., 2020; Fu et al., 2021), where a model is trained to jointly perform span boundary detection and multiclass classification over the spans. Another trend is to formulate NER as a question answering (QA) task (Li et al., 2020b), where the model is given a sentence and a query corresponding to each entity\ntype. The model is trained to understand the query and extracts mentions of the entity type as answers. While these new frameworks have shown improved results, both approaches suffer from a high computational cost: span-based NER systems consider all possible spans (i.e., n2 (quadratic) spans for a sentence with n tokens) and the QA-based system multiplies each input sequence by the number of entity types resulting in N×T input sequences for N sentences and T entity types.\nIn this work, we borrow the effectiveness of span-based and QA-based techniques and make it more efficient by breaking (splitting up) the NER task into a two-step pipeline of classification tasks. In essence, our overall approach comes under the span-based NER paradigm, and each sub-task is formulated as a QA task inspired by the higher accuracy offered by the QA framework. The first step, Span Detection performs token-level classification to extract mention spans from text, irrespective of entity type and the second step, Span Classification classifies the extracted spans into their corresponding entity type, thus completing the NER task. Unlike other span-based NER techniques which are quadratic in terms of sequence length, our Span Detection process is linear. Compared to other QA-based techniques which query for all entity types in each sentence, our Span Classification queries each sentence only once for each entity mention in the sentence. This makes it highly efficient for datasets with large number of entity types like OntoNotes5.0.", |
| "2 Method": "Figure 1 illustrates how our two-step SplitNER system works. Span Detection Model is entityagnostic and identifies all mention spans irrespective of entity type. The extracted spans are passed to Span Classification Model which reanalyses them in the sentence structure and classifies them into an entity type. Both models use BERT-\n416\nbase as their underlying architecture and are designed as QA tasks. Hence, moving forward, we may sometimes explicitly call our system as SplitNER(QA-QA) to distinguish it from other variants we experiment with.", |
| "2.1 Span Detection": "Given a sentence S as a n-length sequence of tokens, S = ⟨w1, w2 . . . wn⟩, the goal is to output a list of spans ⟨s, e⟩, where s, e ∈ [1, n] are start and end indices of a mention. We formulate this as a QA task classifying each token using BIOE scheme1. Since the goal is to detect spans irrespective of their entity type, we use a generic question, “Extract important entity spans from the following text”, prefixed with input sentence (see Figure 1)2.\nA well-known problem in pipeline systems is error propagation. Inaccurate mention boundaries will lead to incorrect entity type classification. We observed that such boundary detection errors happen mostly for domain-specific terms which occur rarely and do not have a good semantic representation in the underlying BERT model. However, these domain specific terms often share patterns at character-level (e.g., chemical formulas). Thus we add character sequences and intrinsic orthographic patterns as additional features along with the BERT embeddings. The character and pattern features are shown to produce better word representations (Carreras et al., 2002; Limsopatham and Collier, 2016; Boukkouri et al., 2020; Lange et al., 2021).\nCharacter Sequence Feature To learn characterlevel representation of each token, we use five onedimensional CNNs with kernel sizes from 1 to 5, each having 16 filters and 50 input channels. Each\n1All experiments in this paper use BIOE scheme but the approach is generalizable to other schemes like BIOES.\n2Other similar question texts / no question text also gives similar results as shown in ablation study in Appendix A.\ntoken output from WordPiece Tokenizer is fed to the five CNN models simultaneously, which produce a 50-dimensional embedding for each character. These are max-pooled and the outputs from the CNNs are concatenated and passed through a linear layer with ReLU activation to get a 768- dimensional character-level representation of the token. Figure 2a shows the process.\nOrthographic Pattern Feature To capture the intrinsic orthographic patterns (or word shapes) of entity mentions at the sub-word level, we map all uppercase tokens to a single character, U, all lowercase tokens to L, all digit tokens to D. If a token contains a mix of uppercase, lowercase and digits, we map each lowercase character to l, uppercase to u and digit to d. Special characters are retained and BERT’s special tokens, “[CLS]” and “[SEP]”, are mapped to C and S respectively.\nWe use 3 CNNs with the same setup as character sequence with kernel sizes of 1 to 3. Note that a contextual learning layer is needed to capture patterns in mentions spanning multiple tokens. Thus, we pass the pattern-level embeddings for all tokens to a bidirectional LSTM with 256 hidden dimensions as shown in Figure 2b. Finally, the character and pattern features are concatenated with the BERT output for the token and fed to a final classifier layer as shown in Figure 3.", |
| "2.2 Span Classification": "Given a sentence S = ⟨w1, w2 . . . wn⟩ and a span ⟨s, e⟩, this step determines the entity type for the span. Existing QA-based NER methods take the target entity type as the question (e.g., “Where is person?) and return the corresponding mentions in the sentence. On the contrary, our model takes a mention as the question (e.g., “What is Emily?) and outputs its entity type.\nDuring training, we create a training sample for\neach labeled entity mention in a sentence. During inference, the model gets the mention spans from Span Detection Model as its input. An input sample is created by appending the mention span text as “What is [mention]?\" to the input sentence (see top diagrams in Figure 1 for example). This is fed to a BERT model and the pooled sequence embedding is fed to a fully connected layer and converted into a probability distribution over the entity types.", |
| "3 Experimental Results": "We demonstrate the effectiveness of our method in terms of performance and latency.", |
| "3.1 Datasets": "Table 1 shows our datasets, including three public benchmark datasets, BioNLP13CG (Pyysalo et al., 2015), OntoNotes5.0 (Weischedel et al., 2013), and WNUT17 (Derczynski et al., 2017), and a private dataset3 (CTIReports) from the cybersecurity domain which contains news articles and technical reports related to malware and security threats. These datasets cover not only the traditional whole-word entities like PERSON but also\n3The dataset curation procedure, entity types and their distribution is described in detail in Appendix C.\nentity types with non-word mentions (e.g., chemical formulas) and very long mentions (e.g., URLs).", |
| "3.2 Experimental Setup": "We implement our baselines and our proposed system, SplitNER in pytorch using transformers (Wolf et al., 2019). All models are trained on Nvidia Tesla V100 GPUs and use BERT-base architecture. We use pretrained RoBERTa-base (Liu et al., 2019) backbone for all experiments with OntoNotes5.0 corpus following Ye et al. (2022); Zhu and Li (2022) and use SciBERT-scivocab-uncased (Beltagy et al., 2019) for BioNLP13CG since this dataset has chemical formulas and scientific entities4. For WNUT175 and CTIReports, we use BERT-base-uncased (Devlin et al., 2019). Note that our model is a general two-step NER framework which has the performance benefits of QA-based and span-based approaches with efficiency. It can work with any BERT-based pretrained backbones.\nThe training data is randomly shuffled, and a batch size of 16 is used with post-padding. The maximum sequence length is set to 512 for\n4We also experimented with BioBERT(Lee et al., 2020) (dmis-lab/biobert-base-cased-v1.1) which gives similar trends. But SciBERT outperforms in our experiments.\n5BERTweet(Nguyen and Vu, 2020) model can also be used for WNUT17. We expect it to give same trends with even better performance figures.\nOntoNotes5.06 and to 256 for all other datasets. For model optimization, we use cross entropy loss for span detection and dice loss(Li et al., 2020c) for span classification. All other training parameters are set to defaults in transformers.", |
| "3.3 Performance Evaluation": "We compare our method SplitNER(QA-QA) with the following baselines and variants. (1) Single(SeqTag): The standard single-model sequence tagging NER setup which classifies each token using BIOE scheme. (2) Single(QA): The standard single-model QA-based setup which prefixes input sentences with a question describing the target entity type (e.g., Where is the person mentioned in the text?); (3) SplitNER(SeqTag-QA): A variant of our model which uses sequence tagging for span detection with our QA-based Span Classification Model; (4) SplitNER(QANoCharPattern-QA): This model is the same as our method but without the additional character and pattern features. All other baselines use character and pattern features for fair comparison. We trained all models with 5 random seeds and report the mean mention-level Micro-F1 score in Table 2. As can be seen, SplitNER(QA-QA) outperforms all baselines on three cross-domain datasets and gives comparable results on BioNLP13CG. We present further ablation studies on individual components of our system in Appendix A and a qualitative study in Appendix B.\n6Sentences in OntoNotes5.0 are found to be longer and with maximum sequence length set to 256, lots of sentences get truncated. Hence we select a larger limit of 512.", |
| "3.4 Latency Evaluation": "We compare the latency of our method, SplitNER(QA-QA) and the two single-model NER methods. Table 3 shows the training and inference times. Training time is measured for one epoch and averaged over 10 runs. For a fair comparison, we report the training latency for our system as the sum of span detection and classification even though they can be trained in parallel.\nThe results show that, compared to Single(QA), our method is 5 to 25 times faster for training and about 5 times faster for inference, and it is especially beneficial for large datasets with many entity types. Compared to Single(SeqTag), our method is slightly slower but achieves much better F1 scores (Table 2). These results validate SplitNER(QA-QA)’s effectiveness in achieving the balance between performance and time efficiency.", |
| "4 Related Work": "In recent years, deep learning has been increasingly applied for NER (Torfi et al., 2020; Li et al., 2020a), a popular architecture being CNN-LSTMCRF (Ma and Hovy, 2016; Xu et al., 2021) and BERT (Devlin et al., 2019). Li et al. (2020b,c) propose a QA-based setup for NER using one model for both span detection and classification. Li et al. (2020b); Jiang et al. (2020); Ouchi et al. (2020); Fu et al. (2021); Zhu and Li (2022) perform NER as a span prediction task. However, they enumerate all possible spans in a sentence leading to quadratic complexity w.r.t. sentence length. Our model does a token-level classification and hence is linear.\nXu et al. (2021) propose a Syn-LSTM setup leveraging dependency tree structure with pre-\ntrained BERT embeddings for NER. Yan et al. (2021) propose a generative framework leveraging BART (Lewis et al., 2020) for NER. Yu et al. (2020) propose a biaffine model utilizing pretrained BERT and FastText (Bojanowski et al., 2017) embeddings along with character-level CNN setup over a Bi-LSTM architecture. All of these models report good performance on OntoNotes5.0, however, using BERT-large architecture. Nguyen and Vu (2020) propose the BERTweet model by training BERT on a corpus of English tweets and report good performance on WNUT17. Wang et al. (2021) leverage external knowledge and a cooperative learning setup. On BioNLP13CG, Crichton et al. (2017) report 78.90 F1 in a multi-task learning setup and Neumann et al. (2019) report 77.60 using the SciSpacy system. SplitNER(QA-QA) outperforms both of these by a large margin.", |
| "5 Conclusion": "Using the QA-framework for both span detection and span classification, we show that this division of labor is not only effective but also significantly efficient through experiments on multiple crossdomain datasets. Through this work, we open up the possibility of breaking down other complex NLP tasks into smaller sub-tasks and fine-tuning large pretrained language models for each task.\nLimitations\nOur proposed approach requires to train two independent classification models. While the models can be trained in parallel, this requires larger GPU memory. For the experiments, we trained two BERT-base models, which have around 220M trainable parameters when trained in parallel. This requires almost twice the GPU memory compared to a single BERT-base NER model, having around 110M trainable parameters.\nOwing to a pipeline-based structure, the overall performance of our system is upper bounded by the performance of Span Detection Model which has lots of potential for improvement. On dev set, we find that around 30% of errors for OntoNotes5.0 and BioNLP13CG, and around 22% errors on WNUT17 are just due to minor boundary detection issues. Their entity types are being detected correctly. We henceforth encourage the research community to design architectures or new training objectives to detect mention boundaries more effectively. Currently, in our Span Detection Model,\nall entity mentions are grouped into a single class. As a potential future work, we expect to get even better performance by a hierarchical extension of our setup. At the top level, we can detect mentions belonging to some crude categories and gradually break them down into more fine-grained categories.", |
| "A Performance Ablations": "Here, we study the individual components of our system, SplitNER(QA-QA) in detail. First, we investigate the effectiveness of the additional character and pattern features for span detection. As we can see from Table 4, the character and pattern features improve the NER performance for all datasets.\nWe also study the effect of the character and pattern features separately. Table 5 shows this ablation study on the BioNLP13CG dataset. As we can see, adding the character feature or the pattern feature\nalone makes a small change in the performance. Interestingly, the character feature helps with recall, while the pattern features improves precision, and, thus, adding them together improves both precision and recall. However, adding part-of-speech (POS) in addition to the character and pattern features shows little impact on the performance.\nNext, we compare dice loss and cross-entropy loss for their effectiveness in handling the class imbalance issue in span classification. As shown in Table 6, dice loss works better for imbalanced data confirming the results found in Li et al. (2020c).\nFinally, we experimented with different question sentences in Span Detection Model to check if BERT is giving any importance to the query part. As shown in Table 7, different queries do have a minor impact but as expected, the model mostly learns not to focus on the query part as can be seen by the comparable results with <empty> query.\nA.1 Discussions From the results of the experiments described in Section 3 together with the ablation studies, we make the following observations:\n• As shown in Table 2, SplitNER(QA-QA) outperforms both the sequence tagging and QA-based baselines on three crossdomain datasets and performs on-par on BioNLP13CG.\n• The division of labor allows each model to be optimized for its own sub-task. Adding character and pattern features improves the accuracy of Span Detection Model (Table 4). However, adding these same features in Span Classification Model was found to deteriorate the performance. Similarly, dice loss improves the performance for Span Classification Model (Table 6), but no such impact was observed for Span Detection Model.\n• Span detection using the QA setting is slightly more effective than the sequence tagging setup as done in SplitNER(SeqTag-QA) (Table 2).\n• Our model has more representative power than the baseline approaches, because it leverages two BERT models, each working on their own sub-tasks.\n• It also leverages the QA framework much more efficiently than the standard singlemodel QA system (Table 3). The margin of improvement is more pronounced when the data size and number of entity types increase.\nSpan Detection Features\nBioNLP13CG CTIReports OntoNotes5.0 WNUT17\nP R F1 P R F1 P R F1 P R F1\n+CharPattern 91.43 90.70 91.06 80.59 77.21 78.86 92.17 92.83 92.50 73.38 44.25 55.21 -CharPattern 90.31 91.03 90.67 79.65 77.77 78.70 91.96 92.79 92.37 72.63 44.06 54.85", |
| "B Qualitative Analysis": "Table 8 shows some sample predictions by our method, SplitNER(QA-QA) and compares them with our single-model NER baseline, Single(QA). From the results, we observe that:\n• SplitNER(QA-QA) is better in detecting emerging entities and out-of-vocabulary (OOV) terms (e.g., movie titles and softwares). This can be attributed to Span Detection Model being stronger in generalizing and sharing entity extraction rules across multiple entity types.\n• Single(QA) gets confused when entities have special symbols within them (e.g., hyphens and commas). Our character and orthographic pattern features help handle such cases well.\n• Single(QA) model develops a bias towards more common entity types (e.g., PERSON) and misclassifies rare entity mentions when they occur in a similar context. SplitNER(QA-QA) handles such cases well thanks to the dedicated Span Classification Model using dice loss.\nC CTIReports Dataset\nThe CTIReports dataset is curated from a collection of 967 documents which include cybersecurity news articles and white papers published online by reputable companies and domain knowledge experts. These documents usually provide deep analysis on a certain malware, a hacking group or a newly discovered vulnerability (like a bug in software that can be exploited). The documents were published between 2016 and 2018. We split the dataset into the train, development, and test sets as shown in Table 9.\nA team of cybersecurity domain experts labeled the dataset for the following 8 entity types. These\nCategory Example Sentence\nGeneral Detection CVS selling their own version of ... CVS selling their own version of ...\nEmerging Entities Rogue One create a plot hole in Return of the Jedi Rogue One create a plot hole in Return of the Jedi\ntypes were selected based on the STIX (Structured Threat Information Expression) schema which is used to exchange cyber threat intelligence. For more detailed information about the 8 types, please refer the STIX documentation7.\n• CAMPAIGN: Names of cyber campaigns that describe a set of malicious activities or attacks over a period of time.\n• COURSE OF ACTION: Tools or actions to take in response to cyber attacks.\n• EXPLOIT TARGET: Vulnerabilities that are targeted for exploitation.\n• IDENTITY: Individuals, groups or organizations.\n• INDICATOR: Objects that are used to detect suspicious or malicious cyber activity such as domain name, IP address and file names.\n• MALWARE: Names of malicious codes used in cyber crimes.\n• RESOURCE: Tools that are used in cyber attacks.\n• THREAT ACTOR: Individuals or groups that commit cyber crimes.\nTable 10 and Table 11 show the statistics of the entity types in the corpus and some sample mentions of these types respectively.\n7https://stixproject.github.io/releases/1.2\nACL 2023 Responsible NLP Checklist", |
| "3 A3. Do the abstract and introduction summarize the paper’s main claims?": "Abstract and Section 1\n7 A4. Have you used AI writing assistants when working on this paper? Left blank.\nB 7 Did you use or create scientific artifacts? Left blank.\nB1. Did you cite the creators of artifacts you used? No response.\nB2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response.\nB3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response.\nB4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response.\nB5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response.\nB6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response.\nC 3 Did you run computational experiments? Section 3", |
| "3 C1. Did you report the number of parameters in the models used, the total computational budget": "(e.g., GPU hours), and computing infrastructure used? Section 2, 3\nThe Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.", |
| "3 C2. Did you discuss the experimental setup, including hyperparameter search and best-found": "hyperparameter values? Section 2, 3", |
| "3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary": "statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3", |
| "3 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did": "you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 2, 3\nD 7 Did you use human annotators (e.g., crowdworkers) or research with human participants? Left blank.\nD1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.\nD2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants’ demographic (e.g., country of residence)? No response.\nD3. Did you discuss whether and how consent was obtained from people whose data you’re using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.\nD4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.\nD5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response." |
}