{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:58.811550Z"
},
"title": "Fancy Man Launches Zippo at WNUT 2020 Shared Task-1: A Bert Case Model for Wet Lab Entity Extraction",
"authors": [
{
"first": "Haoding",
"middle": [],
"last": "Meng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xi'an Jiaotong University",
"location": {}
},
"email": "menghd@stu.xjtu.edu.cn"
},
{
"first": "Qingcheng",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Manchester",
"location": {}
},
"email": "qingcheng.zeng@student.manchester.ac.uk"
},
{
"first": "Xiaoyang",
"middle": [],
"last": "Fang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhexin",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhejiang University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic or semi-automatic conversion of protocols that specify the steps of a lab procedure into a machine-readable format greatly benefits biological research. Processing these noisy, dense, and domain-specific lab protocols has drawn increasing interest with the development of deep learning. This paper presents our team's work on WNUT 2020 shared task-1, wet lab entity extraction: we studied several models, including a BiLSTM CRF model and a Bert case model, that can be used to complete wet lab entity extraction. We mainly discuss the performance differences of the Bert case model under different conditions, such as transformers versions and case sensitivity, which may not have received enough attention before.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic or semi-automatic conversion of protocols that specify the steps of a lab procedure into a machine-readable format greatly benefits biological research. Processing these noisy, dense, and domain-specific lab protocols has drawn increasing interest with the development of deep learning. This paper presents our team's work on WNUT 2020 shared task-1, wet lab entity extraction: we studied several models, including a BiLSTM CRF model and a Bert case model, that can be used to complete wet lab entity extraction. We mainly discuss the performance differences of the Bert case model under different conditions, such as transformers versions and case sensitivity, which may not have received enough attention before.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of named entity recognition (NER) was first put forward in 1991, after which it gradually became an essential part of natural language processing (NLP). Methods for NER are generally classified into four kinds: rule-based approaches, unsupervised learning approaches, feature-based supervised learning approaches, and deep-learning-based approaches. Earlier methods were mainly rule-based and performed well on small datasets, like the LaSIE-II system provided by Humphreys et al. With the rapid development of deep learning since 2013, methods like BiLSTM CRF have been a hot spot in recent years, and even now most deep learning methods for NER are based on this framework. Lately, new research based on the concept of \"pre-training\" has attracted more and more attention, for example Bert, which stands for bidirectional encoder representations from transformers (Devlin et al., 2018). It both pushed the GLUE score to 80.5%, a 7.7% point absolute improvement, and created a new paradigm of natural language processing: using a model pre-trained on a large corpus to complete downstream tasks through fine-tuning. (Figure 1 shows an example protocol from the wet lab corpus (Kulkarni et al., 2018): isolation of temperate phages by plaque agar overlay.)",
"cite_spans": [
{
"start": 476,
"end": 492,
"text": "Humphreys et al.",
"ref_id": null
},
{
"start": 894,
"end": 915,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1815,
"end": 1838,
"text": "(Kulkarni et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For WNUT 2020 shared task-1 (Tabassum et al., 2020), participants were asked to develop a system that automatically identifies entities in the provided dataset of lab instructions. The dataset is drawn from wet lab protocols, which usually refer to the experiment instructions in biology or chemistry experiments, involving substances like chemicals, proteins, drugs, and other materials. Figure 1 shows one representative example of a wet lab protocol. In this shared task, the data was divided into three parts: training data with 370 protocols, development data with 122 protocols, and test data with 123 protocols. The data was given in CoNLL format. In sum, this is the small dataset annotated in (Kulkarni et al., 2018) with BRAT (Stenetorp et al., 2012). It can be visualized via http://bit.ly/WNUT2020platform. Figure 2 shows a visualization of protocol 3 in our training dataset, and there are 18 kinds of entities, as shown in table 1. To facilitate narration and comparison, we merged all the provided test files into one during training and validation, and separated and saved the prediction results for the corresponding files using a dictionary when submitting.",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "(Tabassum et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 687,
"end": 710,
"text": "(Kulkarni et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 721,
"end": 745,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 807,
"end": 816,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Task",
"sec_num": "2"
},
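The CoNLL-format reading and per-file bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the tab separator, tag names, and sample tokens are assumptions.

```python
# Minimal sketch of reading a CoNLL-format protocol file: one
# "token<TAB>tag" pair per line, with blank lines separating
# sentences. Separator and tag scheme are assumed, not taken
# from the shared-task distribution.
def read_conll(lines):
    sentences, current = [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line:               # blank line ends a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, tag = line.split("\t")
        current.append((token, tag))
    if current:                    # flush a trailing sentence
        sentences.append(current)
    return sentences

sample = [
    "Add\tB-Action",
    "1.0\tB-Amount",
    "mL\tI-Amount",
    "",
    "Mix\tB-Action",
    "well\tB-Modifier",
]
parsed = read_conll(sample)
```

Keyed by file name, the same structure supports the dictionary trick mentioned in the text: merge all files for training, then split predictions back out per file when submitting.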
{
"text": "The provided baseline model is a linear conditional random field (CRF) tagger, one of the traditional machine learning approaches to the named entity recognition task (Finkel et al., 2005). This tagger performs NER with feature engineering, taking word features, context features, and gazetteer features into consideration.",
"cite_spans": [
{
"start": 179,
"end": 200,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.1"
},
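The word, context, and gazetteer features mentioned above can be illustrated with a small feature-extraction function. This is a sketch of the general idea only: the feature names and the tiny gazetteer are invented, not taken from the task baseline.

```python
# Sketch of hand-crafted features for a linear CRF tagger.
# The gazetteer below is a hypothetical reagent list.
GAZETTEER = {"agar", "phage", "ethanol"}

def token_features(tokens, i):
    word = tokens[i]
    return {
        # word features
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "word.isdigit": word.isdigit(),
        # context features: neighbouring tokens
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<s>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
        # gazetteer feature: membership in a domain word list
        "in.gazetteer": word.lower() in GAZETTEER,
    }

feats = token_features(["Melt", "soft", "agar"], 2)
```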
{
"text": "BiLSTM CRF is a deep-learning-oriented tagger for the NER task (Huang et al., 2015). The long short-term memory (LSTM) unit (Hochreiter and Schmidhuber, 1997) is a kind of recurrent neural network (RNN) specifically designed to process sequential information. Here, LSTM units are adopted to collect information from the context. Additionally, they are bidirectional, so they take information from both sides into consideration.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 133,
"end": 166,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM CRF",
"sec_num": "3.2"
},
{
"text": "One more CRF layer is added to this model because it helps standardize the output. For example, a sequence like \"B-Action I-Mention\" is never possible in the real world, but a pure BiLSTM model may produce this kind of error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM CRF",
"sec_num": "3.2"
},
{
"text": "The basic architecture of BiLSTM CRF is shown in figure 3. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM CRF",
"sec_num": "3.2"
},
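The structural constraint that the CRF layer enforces can be made concrete with a small BIO validity check. This is only a sketch of the idea, not the CRF itself, and the tag names are examples rather than the full task inventory.

```python
# Sketch of the BIO constraint a CRF layer learns: an I- tag is
# only valid immediately after a B- or I- tag of the same entity
# type. A plain BiLSTM scores tags independently and has no such
# structural check.
def is_valid_bio(tags):
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            entity = tag[2:]
            if prev not in ("B-" + entity, "I-" + entity):
                return False
        prev = tag
    return True

ok = is_valid_bio(["B-Action", "I-Action", "O"])
# the exact error pattern mentioned in the text:
bad = is_valid_bio(["B-Action", "I-Mention"])
```

A CRF layer achieves the same effect softly, by learning very low transition scores for such impossible tag pairs.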
{
"text": "Bidirectional encoder representations from transformers (Bert) was first proposed by Google AI researchers in 2018 (Devlin et al., 2018). It set quite a few new records in the NLP field, and the concept of \"pre-training\" has been popular since then. In this shared task, we also adopted a Bert pre-trained model for the NER task and compared the results with BiLSTM CRF to explore the performance of different techniques.",
"cite_spans": [
{
"start": 115,
"end": 136,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bert",
"sec_num": "3.3"
},
{
"text": "Our Bert case model adopted the BertForTokenClassification class in transformers (Wolf et al., 2019) and added one more fine-tuning layer to complete the task. The architecture is shown in figure 4.",
"cite_spans": [
{
"start": 71,
"end": 90,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bert",
"sec_num": "3.3"
},
{
"text": "In this shared task, we compared a relatively traditional deep learning method, BiLSTM CRF, with pre-training-based Bert models, including the Bert base cased model and the Bert base uncased model. The former needs to train static word vectors from the dataset, while the latter is equivalent to using dynamic word vectors, and we can directly conduct fine-tuning experiments on NER by connecting BertForTokenClassification after the pre-trained model. It can be seen from table 2 that the performance of BiLSTM CRF without careful training of word vectors is not as good as the provided baseline model, Linear CRF. Therefore, our follow-up experiments focus on training Bert in different situations and carrying out exploration and discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Linear CRF: precision 0.7549, recall 0.7332, F1 0.7439; BiLSTM CRF: precision 0.7208, recall 0.6605, F1 0.7101",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "precision recall F1",
"sec_num": null
},
{
"text": "For the sake of simplicity, we will use wd and \u03b7 to represent weight decay and learning rate respectively, and cased/uncased with/without to abbreviate the corresponding model with or without lowercase processing in subsequent trials. In addition, v means importing the required classes such as BertModel from pytorch-transformers, and V means importing from transformers; classes are imported from the latter by default. In general, the f1-score weighs the precision and recall of the model together, so model performance is often evaluated using the f1-score (micro avg). The numbers underlined in the charts represent possible anomalies, while the numbers highlighted in bold represent the best results. The model does not converge when trained for only one epoch, but it may face overfitting and CUDA out-of-memory problems with more than 4 epochs, since the RTX 2060 Super and RTX 2080 have only 8 GB of memory. After several trials, 3 was selected as the default epoch number. To alleviate overfitting, the weight decay technique (or L2 regularization) is usually adopted, and wd is empirically set between 0.001 and 0.01. Unless otherwise stated, the default weight decay value in this paper is 0.005.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stipulate",
"sec_num": "4.1"
},
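Since every comparison below uses the micro-averaged f1-score, a minimal sketch of how that score is computed may help. The per-type counts here are invented for illustration; only the aggregation scheme is the point.

```python
# Micro-averaged F1 pools true positives, false positives and
# false negatives across all entity types before computing
# precision and recall (unlike macro averaging, which averages
# per-type scores).
def micro_f1(per_type):
    tp = sum(c["tp"] for c in per_type.values())
    fp = sum(c["fp"] for c in per_type.values())
    fn = sum(c["fn"] for c in per_type.values())
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts for two of the task's 18 entity types.
counts = {
    "Action":  {"tp": 50, "fp": 10, "fn": 5},
    "Reagent": {"tp": 30, "fp": 10, "fn": 15},
}
score = micro_f1(counts)
```

Pooling the counts means frequent types such as Action dominate the score, which is why micro avg is the usual headline number for NER.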
{
"text": "Named entity recognition is one of the downstream tasks of Bert. Since Bert has been pre-trained on a large-scale corpus, the recommended learning rates are generally small, such as 2e\u22125, 3e\u22125, 5e\u22125 (Devlin et al., 2018). But this needs to be considered together with the specific application scenario. Using the default Bert models, cased without and uncased with, we trained on both the 2060s and the 2080 with different learning rates.",
"cite_spans": [
{
"start": 193,
"end": 214,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rate",
"sec_num": "4.2.1"
},
{
"text": "As shown in table 3, none of the recommended learning rates performed well in this task; we thought that the dataset provided this time is not common in daily life, so we ought to increase the learning rate appropriately. At the same time, we noticed that the uncased model is better than the cased model on the 2060s, but the opposite holds on the 2080. Although the best model was obtained by training on the 2080, training on the 2060s is more stable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Rate",
"sec_num": "4.2.1"
},
{
"text": "Generally speaking, the uncased model is better than the cased model; however, the cased model performs better when there are obvious case differences, as in tasks such as named entity recognition. We also noted that we could train an uncased model after processing the text into lowercase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Sensitivity and Version",
"sec_num": "4.2.2"
},
{
"text": "During testing, we also found that importing classes from different versions leads to differing results. Table 3 (Bert case default model performances at differing learning rates) lists f1-scores for learning rates 2e-5, 3e-5, 5e-5, 8e-5, 9e-5, 1e-4, 2e-4, 3e-4, 4e-4, 5e-4: 2060s cased without 0.7776 0.7848 0.7909 0.7949 0.7961 0.7965 0.7980 0.7963 0.7956 0.7911; 2060s uncased with 0.7797 0.7900 0.7974 0.7987 0.7993 0.7993 0.7972 0.7968 0.7970 0.7928; 2080 cased without 0.7775 0.7844 0.7907 0.7951 0.7973 0.7994 0.8008 0.7946 0.7917 0.7876; 2080 uncased with 0.7775 0.7881 0.7948 0.7974 0.7956 0.7962 0.7988 0.7948 0.7974 0.7903. To better explore the influence of case sensitivity and version, we further trained three possible casing methods, cased without, cased with, and uncased with (uncased without should perform the worst because it cannot actually distinguish case information, and its experimental results are indeed exactly the same as with lowercase processing, so we omit this combination), combined with the two versions of the imported classes. We recorded the performance of each model and took the top two to obtain table 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 609,
"end": 616,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Sensitivity and Version",
"sec_num": "4.2.2"
},
{
"text": "Except for a few models with a slight performance decrease, in most cases the model can be further improved by importing the required classes from pytorch-transformers. Moreover, we noticed that uncased with is the best model on both the 2060s and the 2080 when the previous version of the classes is used (uncased with V); however, when only the latest version of transformers is used, the cased model works best. The relationship between lowercase processing and final performance is not obvious from our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Sensitivity and Version",
"sec_num": "4.2.2"
},
{
"text": "Theoretically, words with different cases can represent the same named entity, so lowercase processing increases the number of training samples per type while reducing the number of types. We therefore suggest: when training with the updated transformers, use cased without; when training with the previous version, consider using uncased with. Moreover, cased with is also an option worth considering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Sensitivity and Version",
"sec_num": "4.2.2"
},
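The claim above, that lowercasing merges surface forms into fewer types with more samples each, is easy to check on a toy token list. The tokens here are invented for illustration.

```python
from collections import Counter

# Toy token list with case variants of the same words.
tokens = ["Add", "add", "ADD", "Mix", "mix", "PCR"]

cased = Counter(tokens)                       # 6 distinct types
uncased = Counter(t.lower() for t in tokens)  # case variants merged

# Lowercasing collapses "Add"/"add"/"ADD" into a single type
# that now carries 3 training samples.
```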
{
"text": "We used 0.005 as our default weight decay value above, which was based on several simple attempts. Here, we selected the best four models on the 2060s and the 2080 respectively, adjusted the weight decay with and without the previous version of the classes, and sorted the results into figure 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight Decay",
"sec_num": "4.2.3"
},
{
"text": "In theory, as weight decay increases, model performance should first rise to a maximum and then fall. At the beginning, the model performs poorly on the test set due to overfitting; as the penalty term grows, performance improves and the best value is reached; if the penalty term continues to increase, the model tends toward an overly simple one, so performance drops. However, what we actually observed in evaluation is that performance first increases and then decreases as weight decay grows, and then increases again to a maximum before decreasing as the theory predicts. It is worth noting that the f1-score of cased without at a weight decay of 0.009 is the same as at 0.005, which is difficult to explain with the classical theory (more specifically, the recall of the former is higher, but the precision of the latter is higher). Although most models achieve their best performance at 0.005, the final model selected in this experiment is uncased with V on the 2080 with a weight decay of 0.007. A comparison of specific metrics between this Bert case model and the baseline is shown in table 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight Decay",
"sec_num": "4.2.3"
},
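The weight decay discussed here is standard L2 regularization; a sketch of the regularized objective and the resulting gradient step, with \(\lambda\) the wd value (e.g. the default 0.005) and \(\eta\) the learning rate:

```latex
% L2-regularized loss and the corresponding SGD update:
L_{\mathrm{reg}}(w) = L(w) + \frac{\lambda}{2}\,\lVert w \rVert_2^2,
\qquad
w \leftarrow w - \eta \left( \nabla L(w) + \lambda w \right)
```

A larger \(\lambda\) pulls the weights harder toward zero, which is why performance is expected to rise while overfitting is suppressed and then fall once the model becomes too simple.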
{
"text": "The Transformer-based Bert case model achieves better results than Linear CRF with traditional feature engineering on most entity categories. It can be seen that pre-training on a large corpus combined with task-specific fine-tuning is a very effective and practical modeling paradigm. However, looking at the output of the Bert case model, which lacks CRF standardization, we find many unreasonable annotations in the test set, and on some metrics the model does not perform as well as the baseline, especially in the predictions for Temperature and Measure. All in all, Bert can serve as a relatively good benchmark, but there is still much room for improvement. We planned to replace it with other pre-trained models such as RoBERTa, ALBERT, and XLNet, and to connect it with a CRF to observe the effect, but this was not achieved due to limited time and capacity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight Decay",
"sec_num": "4.2.3"
},
{
"text": "The classification results of each group participating in this task can be accessed from results. Since the final test model was submitted earlier, it could not be trained with the methods proposed in this paper, and it could only be trained on the RTX 2060 Super, so the final model is uncased with, with a lower f1-score. In addition to the influencing factors shown above, we also noted that the operating system (Ubuntu or Windows), whether the X service is enabled, and other tasks performed during training may also change the performance of the model under otherwise identical conditions. However, due to this complexity, we have not yet obtained results, and we hope further research can be carried out in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test",
"sec_num": "4.3"
},
{
"text": "This article introduces our experimental research on WNUT 2020 shared task-1. By trying BiLSTM CRF, we learned that methods based on static word vectors need to be trained on a specific dataset, so their transferability is relatively low. We mainly focused on fine-tuning experiments with Bert under different conditions, including learning rate, GPU, transformers version, case sensitivity, and weight decay, and discussed them so as to understand the possible influencing factors in actual model training. It is quite necessary to unify and clarify experimental conditions when evaluating related models in the future, because even the class imported matters. We noted that several recent papers presented NER studies using convolutional neural networks (CNN) (Li and Guo, 2018; Zhai et al., 2018). This may imply that more work combining pre-trained models and CNNs will be a new direction for NER studies. Additionally, although more and more attention is focused on deep learning nowadays, methods based on traditional machine learning still receive attention and continue to develop, thanks to their interpretability and robustness in specific domain tasks.",
"cite_spans": [
{
"start": 833,
"end": 851,
"text": "(Li and Guo, 2018;",
"ref_id": "BIBREF6"
},
{
"start": 852,
"end": 870,
"text": "Zhai et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming",
"middle": [
"Wei"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {
"DOI": [
"10.3115/1219840.1219885"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meet- ing on Association for Computational Linguistics, ACL '05, page 363-370, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bidirectional lstm-crf models for sequence tagging. arXiv: Computation and Language",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv: Computation and Language.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "University of Sheffield: Description of the LaSIE-II system as used for MUC-7. Association for Computational Linguistics",
"authors": [
{
"first": "K",
"middle": [],
"last": "Humphreys",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Azzam",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Huyck",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Humphreys, R. Gaizauskas, S. Azzam, C. Huyck, and Y. Wilks. 1995. University of Sheffield: Descrip- tion of the LaSIE-II system as used for MUC-7. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An annotated corpus for machine reading of instructions in wet lab protocols",
"authors": [
{
"first": "Chaitanya",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Raghu",
"middle": [],
"last": "Machiraju",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "2",
"issue": "",
"pages": "97--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaitanya Kulkarni, Wei Xu, Alan Ritter, and Raghu Machiraju. 2018. An annotated corpus for machine reading of instructions in wet lab protocols. 2:97- 106.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Biomedical named entity recognition with cnn-blstm-crf",
"authors": [
{
"first": "S",
"middle": [
"L"
],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [
"K"
],
"last": "Guo",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of chinese information processing",
"volume": "32",
"issue": "",
"pages": "116--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SL Li and YK Guo. 2018. Biomedical named entity recognition with cnn-blstm-crf [j]. Journal of chi- nese information processing, 32(1):116-122.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extracting company names from text",
"authors": [
{
"first": "F",
"middle": [],
"last": "Lisa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rau",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings The Seventh IEEE Conference on Artificial Intelligence Application",
"volume": "",
"issue": "",
"pages": "29--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa F Rau. 1991. Extracting company names from text. In Proceedings The Seventh IEEE Conference on Artificial Intelligence Application, pages 29-30. IEEE Computer Society.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "brat: a web-based tool for nlp-assisted text annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topic",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topic, Tomoko Ohta, Sophia Ananiadou, and Junichi Tsu- jii. 2012. brat: a web-based tool for nlp-assisted text annotation. pages 102-107.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "WNUT-2020 Task 1: Extracting Entities and Relations from Wet Lab Protocols",
"authors": [
{
"first": "Jeniya",
"middle": [],
"last": "Tabassum",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeniya Tabassum, Wei Xu, and Alan Ritter. 2020. WNUT-2020 Task 1: Extracting Entities and Rela- tions from Wet Lab Protocols. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, and Morgan Funtowicz. 2019. Huggingface's transformers: State-of-the-art natural language processing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Comparing cnn and lstm character-level embeddings in bilstm-crf models for chemical and disease named entity recognition",
"authors": [
{
"first": "Zenan",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Dat Quoc",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.08450"
]
},
"num": null,
"urls": [],
"raw_text": "Zenan Zhai, Dat Quoc Nguyen, and Karin Verspoor. 2018. Comparing cnn and lstm character-level embeddings in bilstm-crf models for chemical and disease named entity recognition. arXiv preprint arXiv:1808.08450.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "An example wet lab protocol",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "A visualization of the BRAT style annotation",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "The architecture of a BiLSTM CRF model(Huang et al., 2015)",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Bert fine-tuning for token classification(Devlin et al., 2018)",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "transformers 2080: 0.8008 0.7994 0.7988 0.7974 0.7979 0.7956; transformers 2060s: 0.7980 0.7965 0.7993 0.7993 0.7995 0.7960; pytorch-transformers 2080: 0.8004 0.7997 0.8019 0.7992 0.7975 0.7975; pytorch-transformers 2060s: 0.7990 0.7982 0.8006 0.7994 0.7993 0.7997; avg: +0.03% +0.12% +0.28% +0.11% -0.03% +0.35%",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Results of BiLSTM CRF and baseline",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>: The performance change under different ver-</td></tr><tr><td>sions and case sensitivity</td></tr></table>",
"text": "",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Comparison of classification results betweenBert case and Linear CRF",
"num": null
}
}
}
}