{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:39:08.559139Z"
},
"title": "Annotating the Pandemic: Named Entity Recognition and Normalisation in COVID-19 Literature",
"authors": [
{
"first": "Nico",
"middle": [],
"last": "Colic",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": "colic@ifi.uzh.ch"
},
{
"first": "Lenz",
"middle": [],
"last": "Furrer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": "furrer@cl.uzh.ch"
},
{
"first": "Fabio",
"middle": [],
"last": "Rinaldi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The COVID-19 pandemic has been accompanied by such an explosive increase in media coverage and scientific publications that researchers find it difficult to keep up. We are presenting a publicly available pipeline to perform named entity recognition and normalisation in parallel to help find relevant publications and to aid in downstream NLP tasks such as text summarisation. In our approach, we are using a dictionary-based system for its high recall in conjunction with two models based on BioBERT for their accuracy. Their outputs are combined according to different strategies depending on the entity type. In addition, we are using a manually crafted dictionary to increase performance for new concepts related to COVID-19. We have previously evaluated our work on the CRAFT corpus, and make the output of our pipeline available on two visualisation platforms.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The COVID-19 pandemic has been accompanied by such an explosive increase in media coverage and scientific publications that researchers find it difficult to keep up. We are presenting a publicly available pipeline to perform named entity recognition and normalisation in parallel to help find relevant publications and to aid in downstream NLP tasks such as text summarisation. In our approach, we are using a dictionary-based system for its high recall in conjunction with two models based on BioBERT for their accuracy. Their outputs are combined according to different strategies depending on the entity type. In addition, we are using a manually crafted dictionary to increase performance for new concepts related to COVID-19. We have previously evaluated our work on the CRAFT corpus, and make the output of our pipeline available on two visualisation platforms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The body of scientific literature is growing at an unprecedented rate, and this is particularly evident in the response of the biomedical research community to the 2020 COVID-19 pandemic. Several platforms have been established to track publications related to COVID-19, most prominently the COVID-19 Open Research Dataset (CORD-19) 1 , a collaboration of the US Government and multiple other organisations, the LitCovid dataset, maintained by the NIH, which indexes papers published on PubMed related to the pandemic (Chen et al., 2020) , or Novel Coronavirus Research Compendium (NCRC) 2 , which contains 800 publications selected manually for their originality and quality.",
"cite_spans": [
{
"start": 518,
"end": 537,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this publication, we are processing the articles of the LitCovid dataset, which at the time of writing contains almost 50 000 publications related to the 2020 COVID-19 pandemic only, showing growth at a steady rate since its beginning. The flurry of news and public discussions about the pandemic, which includes a substantial amount of fake news, has been termed \"infodemic\". However, the term could also be applied to the rapid growth of reports and publications pertaining to the disease (see Figure 1). Interestingly, this growth pattern seems to resemble that of the spread of the disease in Western countries (with a delay of one to two months).",
"cite_spans": [],
"ref_spans": [
{
"start": 499,
"end": 507,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While the growth is not exponential, as it has occasionally been reported, it is still far beyond what virologists and medical scientists can manually process. This is an exacerbation of a general problem in biomedical research, where researchers cannot keep up with the growth of literature that pertains to their research, and need to resort to named entity recognition (NER), named entity normalisation (NEN) and text summarisation technologies to identify relevant publications (Lu, 2011).",
"cite_spans": [
{
"start": 482,
"end": 492,
"text": "(Lu, 2011)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In NER, entities of interest are identified as text spans in free text; in NEN, these spans are then mapped to unique IDs in a controlled vocabulary. On the one hand, the two steps constitute a fundamental part of other downstream text processing tasks; on the other, they are an end in themselves, allowing publications to be indexed by the entities they contain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In previous research, we have shown that we can obtain better results by performing NER and NEN in parallel rather than sequentially, avoiding the propagation of errors between the steps. We are building on this previous research and adding a further processing step to find terms specific to COVID-19.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In March 2020, the US White House collaborated with the National Library of Medicine, the Allen Institute for Artificial Intelligence and other private companies to create the CORD-19 corpus (Wang et al., 2020a), and with it a set of 18 challenges, such as \"What do we know about COVID-19 risk factors?\", for data scientists to participate in, hosted on Kaggle 3 .",
"cite_spans": [
{
"start": 191,
"end": 211,
"text": "(Wang et al., 2020a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The response of the text mining community to the pandemic and such shared tasks has been enormous, producing a wide array of webservices, machine learning models and databases, usually adapting existing frameworks to suit the pandemic. Wang et al. (2020c), for example, are retraining SciSpacy on the CORD-19 corpus to improve its NER performance.",
"cite_spans": [
{
"start": 236,
"end": 255,
"text": "Wang et al. (2020c)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Some research has already been directed at downstream tasks, using a simple dictionary-based NER method as a base to perform entity relation extraction (Rao et al., 2020; Wang et al., 2020b) , to create a knowledge base (Khan et al., 2020) or for summarisation systems (Gutierrez et al., 2020; Kieuvongngam et al., 2020) .",
"cite_spans": [
{
"start": 152,
"end": 170,
"text": "(Rao et al., 2020;",
"ref_id": null
},
{
"start": 171,
"end": 190,
"text": "Wang et al., 2020b)",
"ref_id": "BIBREF28"
},
{
"start": 269,
"end": 293,
"text": "(Gutierrez et al., 2020;",
"ref_id": null
},
{
"start": 294,
"end": 320,
"text": "Kieuvongngam et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The problem of NER and NEN in the biomedical domain, generally, has traditionally been approached with pipelines, using rules or dictionaries (Campos et al., 2013; D'Souza and Ng, 2015). More recently, however, machine learning using various architectures such as LSTMs or CRFs has become more popular (Leaman et al., 2013; Habibi et al., 2017).",
"cite_spans": [
{
"start": 142,
"end": 163,
"text": "(Campos et al., 2013;",
"ref_id": "BIBREF0"
},
{
"start": 164,
"end": 185,
"text": "D'Souza and Ng, 2015)",
"ref_id": "BIBREF5"
},
{
"start": 302,
"end": 323,
"text": "(Leaman et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 324,
"end": 344,
"text": "Habibi et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In our approach, we build on our previous efforts where we use a parallel architecture to perform NER and NEN simultaneously (Furrer et al., 2019a, 2020). Traditionally, NER and NEN are performed sequentially: spans of entity mentions are identified first, and then mapped to the corresponding entries in a controlled vocabulary. This approach has the drawback that errors made in the first step are irrecoverably propagated to the second stage.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Furrer et al., 2019a",
"ref_id": "BIBREF6"
},
{
"start": 146,
"end": 153,
"text": ", 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pipeline",
"sec_num": "3"
},
{
"text": "In our approach, however, we perform those two steps simultaneously, and were able to show that it outperforms the traditional approach (Furrer et al., 2019a). We are using BioBERT, a pre-trained language model, which we further trained on the CRAFT corpus, a collection of nearly 100 full-text biomedical articles manually annotated for 10 different entity types. We have evaluated our approach using the CRAFT corpus, and obtained F1-scores between 0.74 and 0.92, depending on the entity type.",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "(Furrer et al., 2019a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pipeline",
"sec_num": "3"
},
{
"text": "To improve our results on COVID-19 literature, we are adding a post-annotation step that uses a manually crafted dictionary specific to COVID-19.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pipeline",
"sec_num": "3"
},
{
"text": "The dataset is annotated for entities from the 10 ontologies used in the CRAFT corpus, such as Chemical Entities of Biological Interest (CHEBI) or the NCBI Taxonomy. Additionally, we employ a manually curated, COVID-19-specific terminology 4 containing over 250 terms, derived from the COVoc 5 vocabulary developed by members of the Swiss Institute of Bioinformatics. We are using these ontologies because we were able to test our performance on the CRAFT corpus, and because they provide extensive coverage of the biomedical domain (Cohen et al., 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabularies",
"sec_num": "3.1"
},
{
"text": "OGER is a dictionary-based look-up tool using an efficient fuzzy matching algorithm (Furrer et al., 2019b). It relies on a dictionary mapping relevant entities to their IDs, so its performance depends on the quality and extent of that dictionary, which manually or automatically curated ontologies such as CHEBI provide. It requires no training, and can detect entities that an example-based system would miss if they are absent from the training data, provided they are present in the dictionary.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Furrer et al., 2019b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OGER",
"sec_num": "3.2"
},
{
"text": "BERT is a multi-layer transformer trained on the English Wikipedia and BookCorpus (Devlin et al., 2018). While it is trained to predict randomly masked-out words and whether a sentence follows another, the resulting language model can be fine-tuned for different tasks, such as NER (Hakala and Pyysalo, 2019) and NEN, or adapted for different domains through further training. BioBERT is the result of training BERT on PubMed articles, making it useful for biomedical applications (Lee et al., 2020; Sun and Yang, 2019).",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 282,
"end": 308,
"text": "(Hakala and Pyysalo, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 481,
"end": 499,
"text": "(Lee et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 500,
"end": 519,
"text": "Sun and Yang, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BioBERT",
"sec_num": "3.3"
},
{
"text": "We have used BioBERT and trained it further on the CRAFT corpus to build a span prediction model and an ID prediction model. The span predictor produces IOBES labels, and is used in conjunction with OGER to provide ID labels. The ID predictor conceptualises NEN as a sequence tagging problem: it works like a classical NER model, but with the output tagset extended to cover all possible concept labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BioBERT",
"sec_num": "3.3"
},
{
"text": "The ID predictor thus predicts spans and IDs directly, making the use of other models theoretically superfluous. However, it suffers from the fact that it cannot predict concepts not seen during training, and that it does not perform well for tokens that occur both in general-domain language and in biomedical entities (such as I in hexokinase I). By additionally using the span prediction model in conjunction with OGER, we alleviate these shortcomings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BioBERT",
"sec_num": "3.3"
},
{
"text": "For conflicting or overlapping annotations between the BioBERT span and ID classifiers as well as OGER, we were able to show in our previous work that the optimal merging strategy depends on the entity type in question (Furrer et al., 2020). In this step, we take these findings into account when deciding which system's output to prioritise for the final output. If a span prediction is given preference, the ID label produced by OGER (as described in Section 3.3) is used. In a last step, we run OGER again to produce an additional layer of annotations for terms specific to COVID-19, using the COVoc vocabulary. In this way, we hope to maintain the accuracy of our models for the established vocabularies, while allowing rapid changes to the set of entities specific to the pandemic without having to retrain the BioBERT modules.",
"cite_spans": [
{
"start": 219,
"end": 240,
"text": "(Furrer et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Harmonising, annotating for COVID-19, merging",
"sec_num": "3.4"
},
{
"text": "The outputs are then merged for all entity types, and converted to various formats.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Harmonising, annotating for COVID-19, merging",
"sec_num": "3.4"
},
{
"text": "So far, with our pipeline we have processed over 33 000 abstracts from PubMed and 7883 full-text articles from PMC, with a total of over 400 000 and 900 000 annotations, respectively (see Table 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "With our pipeline, we are able to continuously process new articles that are added to the LitCovid dataset, and distribute our annotations in the following ways: the OGER annotations can be obtained through an API 6 . The code to run the pipeline 7 , its outputs 8 , as well as the CRAFT-trained BioBERT models 9 , are publicly available, and with some effort could be modified using OGER's format conversion to process other datasets such as CORD-19.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "PubAnnotation is an online repository for annotations on PubMed articles (Kim et al., 2015, 2019), which also features the annotation visualisation engine TextAE (see Figure 3). Europe PMC is a repository of publications akin to PubMed, but also allows display of annotations (Consortium, 2015). We uploaded our annotations to both services.",
"cite_spans": [
{
"start": 73,
"end": 90,
"text": "(Kim et al., 2015",
"ref_id": "BIBREF16"
},
{
"start": 90,
"end": 97,
"text": ", 2019)",
"ref_id": "BIBREF17"
},
{
"start": 276,
"end": 294,
"text": "(Consortium, 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 167,
"end": 176,
"text": "Figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Online Repositories",
"sec_num": "4.1"
},
{
"text": "On our own infrastructure 10 , we host an instance of BRAT, which visualises annotations in a similar fashion to PubAnnotation (Stenetorp et al., 2012).",
"cite_spans": [
{
"start": 127,
"end": 151,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BRAT",
"sec_num": "4.2"
},
{
"text": "To further facilitate downstream tasks, we provide our annotations in the most frequently used annotation formats 11 : .txt, CoNLL .tsv and BioC .json.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Downloads",
"sec_num": "4.3"
},
{
"text": "Given the recency of the pandemic, there is currently a lack of resources that allow evaluation of work on the COVID-19 literature. Without a gold standard we cannot offer a true evaluation. We hope to be able to test the efficacy of our own work in the future when such resources become available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Tools that automatically process literature related to COVID-19 generally fall into two broad categories: systems that follow some sort of text summarisation approach, and NER+NEN systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Much attention has been directed at the previously mentioned Kaggle challenge, for which over 1500 solutions have been submitted, ranging from statistical data exploration to a full clustering of the literature. One of the top submissions 12 , for example, attempts to identify risk factors of COVID-19 by applying unsupervised topic modelling algorithms. Such approaches are very common among the submissions, but suffer from a high number of false positives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Similarly, platforms that allow browsing corpora of COVID-19 papers such as COVIDScholar 13 and the BERT-driven COVID-19 Research Explorer 14 rely on word embeddings and other unsupervised algorithms to find matching publications or even passages in publications. For the latter, the authors attempt to go beyond traditional document retrieval, and employ an automatically generated corpus to fuel their question answering learning (Ma et al., 2020). However, such approaches lack the precision typical NER+NEN-driven approaches offer, and do not perform particularly well at matching entity synonyms due to their representation as high-recall word vectors rather than precisely matched entities.",
"cite_spans": [
{
"start": 432,
"end": 449,
"text": "(Ma et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "For example, both applications yield different results for the queries Angiotensin converting enzyme 2 and ACE2, even though the terms are equivalent (and link to the same entry in the Protein Ontology).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Repositories that perform controlled-vocabulary NEN such as KnetMiner, for example, avoid this error (Hassani-Pak et al., 2020). Services exploring the scientific literature thus still fall into one of the two camps, failing to simultaneously exploit both the high-precision benefits that NER+NEN offers and the variety of applications that text summarisation approaches afford.",
"cite_spans": [
{
"start": 101,
"end": 127,
"text": "(Hassani-Pak et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "2 ncrc.jhsph.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "3 bit.ly/384VgBQ In this vein, it has been suggested to approach NER and NEN simultaneously (ter Horst et al., 2017; Lou et al., 2017), which is similar to the approach that we follow. The authors of the LitCovid dataset, which we process in the present work, also perform NER and NEN on the dataset using PubTator (Wei et al., 2019). In their work, they annotate for 6 entity types (genes, diseases, chemicals, mutations, species and cells) and use a different architecture for each type. For example, they use a linear classifier for annotating diseases (Leaman and Lu, 2016), and a BERT-based transformer for finding chemicals. This differs fundamentally from our approach, where we employ the same architecture for all entity types. Furthermore, apart from the NCBI Taxonomy, we are using different controlled vocabularies for entity normalisation for all types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "4 bit.ly/3jJxhgJ 5 github.com/EBISPOT/covoc/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "6 bit.ly/2Vrbekw 7 github.com/Aequivinius/covid 8 bit.ly/3eMylOq 9 doi.org/10.5281/zenodo.3822363 10 bit.ly/3eITn0o 11 bit.ly/386BbuN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "12 bit.ly/2VkN6QP 13 covidscholar.org/ 14 bit.ly/3fWNOLG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A modular framework for biomedical concept recognition",
"authors": [
{
"first": "David",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Matos",
"suffix": ""
},
{
"first": "Jos\u00e9 Lu\u00eds",
"middle": [],
"last": "Oliveira",
"suffix": ""
}
],
"year": 2013,
"venue": "BMC bioinformatics",
"volume": "14",
"issue": "1",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Campos, S\u00e9rgio Matos, and Jos\u00e9 Lu\u00eds Oliveira. 2013. A modular framework for biomedical concept recognition. BMC bioinformatics, 14(1):1-21.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Keep up with the latest coronavirus research",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Allot",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2020,
"venue": "Nature",
"volume": "579",
"issue": "7798",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/d41586-020-00694-1"
]
},
"num": null,
"urls": [],
"raw_text": "Q. Chen, A. Allot, and Z. Lu. 2020. Keep up with the latest coronavirus research. Nature, 579(7798):193.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Colorado Richly Annotated Full Text (CRAFT) corpus: Multi-model annotation in the biomedical domain",
"authors": [
{
"first": "K",
"middle": [
"Bretonnel"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Kar\u00ebn",
"middle": [],
"last": "Fort",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Funk",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bada",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"E"
],
"last": "Hunter",
"suffix": ""
}
],
"year": 2017,
"venue": "Handbook of Linguistic Annotation",
"volume": "",
"issue": "",
"pages": "1379--1394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K Bretonnel Cohen, Karin Verspoor, Kar\u00ebn Fort, Christopher Funk, Michael Bada, Martha Palmer, and Lawrence E Hunter. 2017. The Colorado Richly Annotated Full Text (CRAFT) corpus: Multi-model annotation in the biomedical domain. In Hand- book of Linguistic Annotation, pages 1379-1394. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Europe PMC: a full-text literature database for the life sciences and platform for innovation",
"authors": [
{
"first": "",
"middle": [],
"last": "Europe PMC Consortium",
"suffix": ""
}
],
"year": 2015,
"venue": "Nucleic acids research",
"volume": "43",
"issue": "D1",
"pages": "1042--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Europe PMC Consortium. 2015. Europe pmc: a full- text literature database for the life sciences and platform for innovation. Nucleic acids research, 43(D1):D1042-D1048.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sieve-based entity linking for the biomedical domain",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "D'Souza",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "297--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer D'Souza and Vincent Ng. 2015. Sieve-based entity linking for the biomedical domain. In Pro- ceedings of the 53rd Annual Meeting of the Associa- tion for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 297-302.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "UZH@CRAFT-ST: a sequence-labeling approach to concept recognition",
"authors": [
{
"first": "Lenz",
"middle": [],
"last": "Furrer",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Cornelius",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Rinaldi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
"volume": "",
"issue": "",
"pages": "185--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenz Furrer, Joseph Cornelius, and Fabio Rinaldi. 2019a. UZH@CRAFT-ST: a sequence-labeling ap- proach to concept recognition. In Proceedings of The 5th Workshop on BioNLP Open Shared Tasks, pages 185-195.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parallel sequence tagging for concept recognition",
"authors": [
{
"first": "Lenz",
"middle": [],
"last": "Furrer",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Cornelius",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Rinaldi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.07424"
]
},
"num": null,
"urls": [],
"raw_text": "Lenz Furrer, Joseph Cornelius, and Fabio Rinaldi. 2020. Parallel sequence tagging for concept recognition. arXiv preprint arXiv:2003.07424.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "OGER++: hybrid multi-type entity recognition",
"authors": [
{
"first": "Lenz",
"middle": [],
"last": "Furrer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Jancso",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Colic",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Rinaldi",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Cheminformatics",
"volume": "11",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenz Furrer, Anna Jancso, Nicola Colic, and Fabio Ri- naldi. 2019b. OGER++: hybrid multi-type entity recognition. Journal of Cheminformatics, 11(1):7.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep learning with word embeddings improves biomedical named entity recognition",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Habibi",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "David",
"middle": [
"Luis"
],
"last": "Wiegandt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Leser",
"suffix": ""
}
],
"year": 2017,
"venue": "Bioinformatics",
"volume": "33",
"issue": "14",
"pages": "i37--i48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryam Habibi, Leon Weber, Mariana Neves, David Luis Wiegandt, and Ulf Leser. 2017. Deep learning with word embeddings improves biomed- ical named entity recognition. Bioinformatics, 33(14):i37-i48.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Biomedical named entity recognition with multilingual BERT",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Hakala",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
"volume": "",
"issue": "",
"pages": "56--61",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5709"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Hakala and Sampo Pyysalo. 2019. Biomedical named entity recognition with multilingual BERT. In Proceedings of The 5th Workshop on BioNLP Open Shared Tasks, pages 56-61, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "KnetMiner: a comprehensive approach for supporting evidence-based gene discovery and complex trait analysis across species",
"authors": [
{
"first": "Keywan",
"middle": [],
"last": "Hassani-Pak",
"suffix": ""
},
{
"first": "Ajit",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Brandizi",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Hearnshaw",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Amberkar",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Phillips",
"suffix": ""
},
{
"first": "John",
"middle": [
"H"
],
"last": "Doonan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Rawlings",
"suffix": ""
}
],
"year": 2020,
"venue": "bioRxiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keywan Hassani-Pak, Ajit Singh, Marco Brandizi, Joseph Hearnshaw, Sandeep Amberkar, Andrew L Phillips, John H Doonan, and Chris Rawlings. 2020. KnetMiner: a comprehensive approach for support- ing evidence-based gene discovery and complex trait analysis across species. bioRxiv.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Joint entity recognition and linking in technical domains using undirected probabilistic graphical models",
"authors": [
{
"first": "Hendrik",
"middle": [],
"last": "ter Horst",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hartung",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Language, Data and Knowledge",
"volume": "",
"issue": "",
"pages": "166--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendrik ter Horst, Matthias Hartung, and Philipp Cimi- ano. 2017. Joint entity recognition and linking in technical domains using undirected probabilistic graphical models. In International Conference on Language, Data and Knowledge, pages 166-180. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "COVID-19Base: A knowledgebase to explore biomedical entities related to COVID-19",
"authors": [
{
"first": "Junaed",
"middle": [
"Younus"
],
"last": "Khan",
"suffix": ""
},
{
"first": "Md",
"middle": [
"Tawkat",
"Islam"
],
"last": "Khondaker",
"suffix": ""
},
{
"first": "Iram",
"middle": [
"Tazim"
],
"last": "Hoque",
"suffix": ""
},
{
"first": "Hamada",
"middle": [],
"last": "Al-Absi",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Saifur"
],
"last": "Rahman",
"suffix": ""
},
{
"first": "Tanvir",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "M",
"middle": [
"Sohel"
],
"last": "Rahman",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.05954"
]
},
"num": null,
"urls": [],
"raw_text": "Junaed Younus Khan, Md Khondaker, Tawkat Is- lam, Iram Tazim Hoque, Hamada Al-Absi, Moham- mad Saifur Rahman, Tanvir Alam, and M Sohel Rah- man. 2020. Covid-19base: A knowledgebase to ex- plore biomedical entities related to covid-19. arXiv preprint arXiv:2005.05954.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic text summarization of COVID-19 medical research articles using BERT and GPT-2",
"authors": [
{
"first": "Virapat",
"middle": [],
"last": "Kieuvongngam",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.01997"
]
},
"num": null,
"urls": [],
"raw_text": "Virapat Kieuvongngam, Bowen Tan, and Yiming Niu. 2020. Automatic text summarization of covid-19 medical research articles using bert and gpt-2. arXiv preprint arXiv:2006.01997.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "PubAnnotation-query: a search tool for corpora with multi-layers of annotation",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"Bretonnel"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Jung-Jae",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2015,
"venue": "BMC Proceedings",
"volume": "9",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Kevin Bretonnel Cohen, and Jung-jae Kim. 2015. Pubannotation-query: a search tool for corpora with multi-layers of annotation. In BMC Proceedings, volume 9, pages 1-3. BioMed Central.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Open agile text mining for bioinformatics: the PubAnnotation ecosystem",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Toyofumi",
"middle": [],
"last": "Fujiwara",
"suffix": ""
},
{
"first": "Shujiro",
"middle": [],
"last": "Okuda",
"suffix": ""
},
{
"first": "Tiffany",
"middle": [
"J"
],
"last": "Callahan",
"suffix": ""
},
{
"first": "K",
"middle": [
"Bretonnel"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "35",
"issue": "21",
"pages": "4372--4380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Yue Wang, Toyofumi Fujiwara, Shu- jiro Okuda, Tiffany J Callahan, and K Bretonnel Co- hen. 2019. Open agile text mining for bioinformat- ics: the PubAnnotation ecosystem. Bioinformatics, 35(21):4372-4380.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "DNorm: disease name normalization with pairwise learning to rank",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "Bioinformatics",
"volume": "29",
"issue": "22",
"pages": "2909--2917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman, Rezarta Islamaj Dogan, and Zhiy- ong Lu. 2013. Dnorm: disease name normaliza- tion with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "TaggerOne: joint named entity recognition and normalization with semi-Markov models",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "18",
"pages": "2839--2846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman and Zhiyong Lu. 2016. Tag- gerOne: joint named entity recognition and normal- ization with semi-Markov models. Bioinformatics, 32(18):2839-2846.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A transition-based joint model for disease named entity recognition and normalization",
"authors": [
{
"first": "Yinxia",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shufeng",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2017,
"venue": "Bioinformatics",
"volume": "33",
"issue": "15",
"pages": "2363--2371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinxia Lou, Yue Zhang, Tao Qian, Fei Li, Shufeng Xiong, and Donghong Ji. 2017. A transition-based joint model for disease named entity recognition and normalization. Bioinformatics, 33(15):2363-2371.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "PubMed and beyond: a survey of web tools for searching biomedical literature",
"authors": [
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2011,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyong Lu. 2011. Pubmed and beyond: a survey of web tools for searching biomedical literature. Database, 2011.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Zero-shot neural retrieval via domain-targeted synthetic query generation",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Korotkov",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14503"
]
},
"num": null,
"urls": [],
"raw_text": "Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2020. Zero-shot neural re- trieval via domain-targeted synthetic query genera- tion. arXiv preprint arXiv:2004.14503.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Text and network-mining for COVID-19 intervention studies",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "VG",
"middle": [],
"last": "Saipradeep",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Sujatha",
"middle": [],
"last": "Kotte",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Sivadasan",
"suffix": ""
},
{
"first": "Rajgopal",
"middle": [],
"last": "Srinivasan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Rao, VG Saipradeep, Thomas Joseph, Sujatha Kotte, Naveen Sivadasan, and Rajgopal Srinivasan. 2020. Text and network-mining for covid-19 inter- vention studies.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "BRAT: a web-based tool for NLP-assisted text annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. BRAT: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstra- tions at the 13th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 102-107.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Transfer learning in biomedical named entity recognition: An evaluation of BERT in the PharmaCoNER task",
"authors": [
{
"first": "Cong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
"volume": "",
"issue": "",
"pages": "100--104",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5715"
]
},
"num": null,
"urls": [],
"raw_text": "Cong Sun and Zhihao Yang. 2019. Transfer learning in biomedical named entity recognition: An evaluation of BERT in the PharmaCoNER task. In Proceedings of The 5th Workshop on BioNLP Open Shared Tasks, pages 100-104, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Automatic textual evidence mining in COVID-19 literature",
"authors": [
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Weili",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Aabhas",
"middle": [],
"last": "Chauhan",
"suffix": ""
},
{
"first": "Yingjun",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.12563"
]
},
"num": null,
"urls": [],
"raw_text": "Xuan Wang, Weili Liu, Aabhas Chauhan, Yingjun Guan, and Jiawei Han. 2020b. Automatic textual ev- idence mining in covid-19 literature. arXiv preprint arXiv:2004.12563.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Comprehensive named entity recognition on CORD-19 with distant or weak supervision",
"authors": [
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiangchen",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yingjun",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Bangzheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.12218"
]
},
"num": null,
"urls": [],
"raw_text": "Xuan Wang, Xiangchen Song, Yingjun Guan, Bangzheng Li, and Jiawei Han. 2020c. Com- prehensive named entity recognition on cord-19 with distant or weak supervision. arXiv preprint arXiv:2003.12218.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "PubTator Central: automated concept annotation for biomedical full text articles",
"authors": [
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Allot",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Nucleic acids research",
"volume": "47",
"issue": "W1",
"pages": "W587--W593",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Hsuan Wei, Alexis Allot, Robert Leaman, and Zhiyong Lu. 2019. Pubtator central: automated con- cept annotation for biomedical full text articles. Nu- cleic acids research, 47(W1):W587-W593.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Publications per day included in LitCovid"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Overall structure of the pipeline (NCBITaxon)."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Annotations visualised by PubAnnotation's TextAE."
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>\u2022 Our own webservice using BRAT</td></tr><tr><td>\u2022 Freely downloadable files</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Annotations per vocabulary for PubMed and PMC"
}
}
}
}