| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:09:13.493982Z" |
| }, |
| "title": "BERTChem-DDI : Improved Drug-Drug Interaction Prediction from text using Chemical Structure Information", |
| "authors": [ |
| { |
| "first": "Ishani", |
| "middle": [], |
| "last": "Mondal", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Microsoft Research Lab Lavelle Road", |
| "institution": "", |
| "location": { |
| "settlement": "Bengaluru", |
| "country": "India" |
| } |
| }, |
| "email": "ishani340@gmail.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Traditional biomedical version of embeddings obtained from pre-trained language models have recently shown state-of-the-art results for relation extraction (RE) tasks in the medical domain. In this paper, we explore how to incorporate domain knowledge, available in the form of molecular structure of drugs, for predicting Drug-Drug Interaction from textual corpus. We propose a method, BERTChem-DDI, to efficiently combine drug embeddings obtained from the rich chemical structure of drugs (encoded in SMILES) along with off-the-shelf domain-specific BioBERT embedding-based RE architecture. Experiments conducted on the DDIExtraction 2013 corpus clearly indicate that this strategy improves other strong baselines architectures by 3.4% macro F1-score.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Traditional biomedical version of embeddings obtained from pre-trained language models have recently shown state-of-the-art results for relation extraction (RE) tasks in the medical domain. In this paper, we explore how to incorporate domain knowledge, available in the form of molecular structure of drugs, for predicting Drug-Drug Interaction from textual corpus. We propose a method, BERTChem-DDI, to efficiently combine drug embeddings obtained from the rich chemical structure of drugs (encoded in SMILES) along with off-the-shelf domain-specific BioBERT embedding-based RE architecture. Experiments conducted on the DDIExtraction 2013 corpus clearly indicate that this strategy improves other strong baselines architectures by 3.4% macro F1-score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Concurrent administration of two or more drugs to a patient to cure an ailment might lead to positive or negative reaction (side-effect). These kinds of interactions are termed as Drug-Drug Interactions (DDIs). Predicting drug-drug interactions (DDI) is a complex task as it requires to understand the mechanism of action of two interacting drugs. A large number of efforts by the researchers have been witnessed in terms of automatic extraction of DDIs from the textual corpus (Sahu and Anand, 2018), (Liu et al., 2016) , (Sun et al., 2019) , (Li and Ji, 2019) and predicting unknown DDI from the Knowledge Graph (Purkayastha et al., 2019) , (Karim et al., 2019) . Automatic extraction of DDI from texts aids in maintaining the databases with high coverage and help the medical experts in their diagnosis and novel experiments.", |
| "cite_spans": [ |
| { |
| "start": 502, |
| "end": 520, |
| "text": "(Liu et al., 2016)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 523, |
| "end": 541, |
| "text": "(Sun et al., 2019)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 544, |
| "end": 561, |
| "text": "(Li and Ji, 2019)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 614, |
| "end": 640, |
| "text": "(Purkayastha et al., 2019)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 643, |
| "end": 663, |
| "text": "(Karim et al., 2019)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In parallel to the progress of DDI extraction from the textual corpus, some efforts have been observed recently where the researchers came up with various strategies of augmenting chemical structure information of the drugs (Asada et al., 2017) and textual description of the drugs (Zhu et al., 2020a) to improve Drug-Drug Interaction prediction performance from corpus and Knowledge Graphs.", |
| "cite_spans": [ |
| { |
| "start": 224, |
| "end": 244, |
| "text": "(Asada et al., 2017)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 282, |
| "end": 301, |
| "text": "(Zhu et al., 2020a)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The DDI Prediction from the textual corpus has been framed by the earlier researchers as relation classification problem. Earlier methods (Sahu and Anand, 2018) , (Liu et al., 2016) , (Sun et al., 2019) , (Li and Ji, 2019) for relation classification are based on CNN or RNN based Neural Networks.", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 160, |
| "text": "(Sahu and Anand, 2018)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 163, |
| "end": 181, |
| "text": "(Liu et al., 2016)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 184, |
| "end": 202, |
| "text": "(Sun et al., 2019)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 205, |
| "end": 222, |
| "text": "(Li and Ji, 2019)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recently, with the massive success of the pretrained language models (Devlin et al., 2019) , (Yang et al., 2019) in many NLP classification / sequence labeling tasks, we formulate the problem of DDI classification as a relation classification task by leveraging both the entities and sentence-level information. We propose a model that leverages both domain-specific contextual embeddings (Bio-BERT) from the target entities and also external Chemical Structure information of the target entities (drugs). In the recent years, representation learning has played a pivotal role in solving various machine learning tasks. In addition to information of drug entities from the text, we make use of the rich hidden representation obtained from the molecule generation using Variational Auto-Encoder (G\u00f3mez-Bombarelli et al., 2018) representation of the drugs to learn the chemical structure representation. During unsupervised learning of chemical structure information of the drugs using Variational AutoEncoder (Kingma and Welling, 2014) , we make use of the canonical SMILES representation (Simplified Molecular Input Line Entry System) obtained from the DrugBank (Wishart et al., 2008) . We illustrate the overview of the proposed method in Figure 1 . Experiments conducted on the DDIExtraction 2013 corpus (Herrero-Zazo et al., 2013) reveals that this method outperforms the existing baseline models and is in line with the new direction of research of fusing various infor- mation to boost DDI classification performance.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 90, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 93, |
| "end": 112, |
| "text": "(Yang et al., 2019)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1008, |
| "end": 1034, |
| "text": "(Kingma and Welling, 2014)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1162, |
| "end": 1184, |
| "text": "(Wishart et al., 2008)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1306, |
| "end": 1333, |
| "text": "(Herrero-Zazo et al., 2013)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1240, |
| "end": 1248, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In a nutshell, the major contributions of this work are summarized as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We propose a method that jointly leverages textual and external Knowledge information to classify relation type between the drug pairs mentioned in the text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We show the molecular information from the SMILES encoding using Variational AutoEncoder helps in extracting DDIs from texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Our method achieves new state-of-the-art performance on DDI Extraction 2013 corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Given a sentence s with target drug entities d 1 and d 2 , the task is to classify the type of relation (y) the drugs hold between them, y \u2208 (y 1 , ...., y N ), where N denotes the number of relation types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our model for extracting DDIs from texts is based on the pre-trained BERT-based relation classification model by (Wu and He, 2019) . Given a sentence s with drugs d 1 and d 2 , let the final hidden state output from BERT module is H. Let the vectors H i to H j are the final hidden state vectors from BERT for entity d 1 , and H k to H m are the final hidden state vectors from BERT for entity d 2 . An average operation is applied to obtain the vector representation for each of the drug entities. An activation operation tanh is applied followed by a fully connected layer to each of the two vectors, and the output for d 1 and d 2 are H 1 and H 2 respectively.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 130, |
| "text": "(Wu and He, 2019)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text-based Relation Classification", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "H 1 = W 1 [tanh( 1 (j \u2212 i + 1) j t=i H t ] + b 1 (1) H 2 = W 2 [tanh( 1 (m \u2212 k + 1) m t=k H t ] + b 2 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text-based Relation Classification", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We make W 1 and W 2 , b 1 and b 2 share the same parameters. In other words, we set W 1 = W 2 and keep b 1 = b 2 . For the final hidden state vector of the first token ('[CLS]'), we also add an activation operation and a fully connected layer, which is formally expressed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text-based Relation Classification", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "H 0 = W 0 (tanh(H 0 )) + b 0 (3) Matrices W 0 , W 1 , W 2 have the same dimensions, i.e. W 0 \u2208 R d * d ,W 1 \u2208 R d * d , W 2 \u2208 R d * d , where d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text-based Relation Classification", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "is the hidden state size from BERT. We concatenate H 0 , H 1 and H 2 and then add a fully connected layer and a softmax layer, which can be expressed as :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text-based Relation Classification", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "h = W 3 [concat(H 0 , H 1 , H 2 )] + b 3 (4) y t = sof tmax(h ) (5) W 3 \u2208 R N * 3d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text-based Relation Classification", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": ", and y t is the softmax probability output over N . In Equations (1), (2), (3), (4) the bias vectors are b 0 , b 1 , b 2 , b 3 . We use cross entropy as the loss function. We denote this text-based architecture as BERT-DDI.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text-based Relation Classification", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For the purpose of constructing an encoder from which a continuous latent representation is obtained, molecular representation of drugs has been used as both input and output. G\u00f3mez-Bombarelli et al. (G\u00f3mez-Bombarelli et al., 2018) converted the discrete SMILES representations of the drug molecules into a continuous multi-dimensional representation using the unsupervised deep learning algorithm Variational Auto-Encoder(VAE) (Kingma and Welling, 2014) . This representation has also been leveraged by (Purkayastha et al., 2019) . The input", |
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 231, |
| "text": "(G\u00f3mez-Bombarelli et al., 2018)", |
| "ref_id": null |
| }, |
| { |
| "start": 428, |
| "end": 454, |
| "text": "(Kingma and Welling, 2014)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 504, |
| "end": 530, |
| "text": "(Purkayastha et al., 2019)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chemical Structure Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "x = (x 1 ,x 2 ,....,x n ) to VAE is represented by x i \u2208 X where X = C, =, (, ), O, F , 1, 2, \u2022 \u2022 \u2022 9 in the SMILES representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chemical Structure Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Each x i is a X-dimensional one-hot vector. We denote this VAE architecture used in our experiments as ChemVAE and is explained as follows: As an encoder it uses three 1D convolutional layers, followed by a single fully-connected layer. The decoder uses three layers of GRU networks. The objective of this work is to maximize the probability distribution of generation of SMILES representation of drug molecules with the help of latent representation as presented in equation below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chemical Structure Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "P (X SM ILES ) = P (X SM ILES |z)P (z)dz (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chemical Structure Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In equation 6, X SM ILES denotes the drug molecules, z represents the latent SMILES representation, P (X SM ILES ) denotes the probability distribution of drug molecules. The ChemVAE model takes SMILES representation of the drugs as input and encodes the drugs into continuous latent representation (z). The decoder then samples a string from the probability distribution over characters in the input SMILES representation. Finally, the hidden representation for each of the drug entities is treated as its chemical structure representation from ChemVAE. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chemical Structure Representation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "From the sentence s containing two target drug entities d 1 and d 2 , we obtain the chemical structure representation of two drugs c 1 and c 2 respectively using ChemVAE. We concatenate these two embeddings c 1 and c 2 and pass those through a fully connected layer as represented as follows: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERTChem-DDI", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "chm = W [concat(c1, c2)] + b", |
| "eq_num": "(" |
| } |
| ], |
| "section": "BERTChem-DDI", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "o = W 3 [concat(H 0 , H 1 , H 2 , chm)] + b 3 (8) y t = sof tmax(o )", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "BERTChem-DDI", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Finally the training optimization is achieved using the cross-entropy loss (L t ) :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERTChem-DDI", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L t = t y t log y t", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "BERTChem-DDI", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "3 Experimental Setup", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERTChem-DDI", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In this section, we explain the dataset and experiments of using ChemVAE and BERTChem-DDI.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERTChem-DDI", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We have followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task (Herrero-Zazo et al., 2013) for the evaluation. This data set comprises of documents annotated with drug mentions and five types of interactions: Mechanism, Effect, Advice, Int and Other. The task is a multi-class classification to classify each of the drug pairs in the sentences into one of the types and we evaluate using Precision (P), Recall (R) and F1-score (F1) for each relation type. During pre-processing, we obtain the DRUG mentions in the corpus and map those into unique DrugBank identifiers. This mention normalization has been performed based on the longest overlap of drug mentions in the DrugBank. This mention normalization has been done for obtaining the corresponding SMILES representation to encode molecular structure information. The dataset statistics of the total drugs and the normalized drugs are enumerated in table 1. We initialize the nonnormalized drug representations using pre-trained word2vec trained on PubMED 1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset and pre-processing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We make use of the pre-trained contextual embeddings such as bert-base-cased, scibert-scivocabuncased (Beltagy et al., 2019) and domainspecific biobert v1.0 pubmed pmc and biobert v1.0 pubmed as the initialization of the transformer encoder in BERTChem-DDI. We uniformly keep the maximum sequence length as 300, batch size 16, initial learning rate for ADAM optimizer as 2e-5, drop out 0.1 for all the embedding ablations and trained for 5 epochs. During unsupervised training of ChemVAE with drugs from ZINC (Irwin and Shoichet, 2005) , the input SMILES representation has been trimmed to 120. The hidden dimension of ChemVAE encoder is 200 and for the decoder it is 500. Finally, a 292-dimensional representation of the drugs has been ultimately used for initialization of the BERTChem-DDI model's chemical structure representations of the drugs.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 124, |
| "text": "(Beltagy et al., 2019)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 509, |
| "end": 535, |
| "text": "(Irwin and Shoichet, 2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Details", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this section, we provide a detailed analysis of the various results and findings that we have observed during experimentation. We have demonstrated (Asada et al., 2018) 81 71 73 45 72 80 71 74 54 72 (Sun et al., 2019) 80 73 78 58 75 (Vivian et al., 2017) 85 76 77 57 77 (Zhu et al., 2020b) strong empirical results based on the proposed approach for both text and chemical structure. We further want to understand the specific contributions by the chemical structure component besides the pre-trained BERT and its other domain-specific variants. For this purpose, we refer to our experimental configurations in meaningful ways while enumerating the results.", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 171, |
| "text": "(Asada et al., 2018)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 202, |
| "end": 220, |
| "text": "(Sun et al., 2019)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 236, |
| "end": 257, |
| "text": "(Vivian et al., 2017)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 273, |
| "end": 292, |
| "text": "(Zhu et al., 2020b)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Ablation of Embeddings on BERT-DDI: During ablation analysis, we observe that the incorporation of domain-specific information in biobert v.1 pubmed boosts up the predictive performance in terms of macro-F1 score (across all relation types) by 2.3% compared to bert-base-cased. Moreover, the scibert-vocab-cased embedddings due to the scientific details obtained during fine-tuning achieves reasonable boost in performance. biobert v.1 pubmed based BERT-DDI is thus the bestperforming text-based relation classification model. The results are enumerated in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 557, |
| "end": 564, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Advantage of Chemical Structure embeddings on BERTChem-DDI: During empirical analysis of the BERTChem-DDI model, we observe how much performance gain can be achieved by augmenting the chemical structure information. From the results enumerated in terms of macro F1-score on all the relation types in table 3, we observe that the best-performing BERT-DDI model achieves a performance boost of 1.6% after adding chemical structure information in BERTChem-DDI. Probing deeper, we observe that the relation types Mechanism (3.2%) and Advice (2.11%) achieve significant performance improvement over BERT-DDI.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Comparison with the existing baselines: We compare our best-performing model with some of the best-performing existing baselines. Our method achieves the state-of-the-art performance based on the results in Table 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 207, |
| "end": 214, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this paper, we develop an approach for DDI relation classification based on pre-trained language model and chemical structure representation of drugs. Experiments on the benchmark DDI dataset proves the efficacy of our method. Possible directions of further research might be to explore Knowledge Graph based drug representation combined with textual description and other relation specific embeddings obtained from various ontologies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work is an extension of the thesis work by the author during her course at the Indian Institute of Technology, Kharagpur. Besides, the author would also like to thank the anonymous reviewers for their insightful comments and feedback on the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Extracting drug-drug interactions with attention CNNs", |
| "authors": [ |
| { |
| "first": "Masaki", |
| "middle": [], |
| "last": "Asada", |
| "suffix": "" |
| }, |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Miwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "9--18", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W17-2302" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Masaki Asada, Makoto Miwa, and Yutaka Sasaki. 2017. Extracting drug-drug interactions with atten- tion CNNs. In BioNLP 2017, pages 9-18, Van- couver, Canada,. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Enhancing drug-drug interaction extraction from texts by molecular structure information", |
| "authors": [ |
| { |
| "first": "Masaki", |
| "middle": [], |
| "last": "Asada", |
| "suffix": "" |
| }, |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Miwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Masaki Asada, Makoto Miwa, and Yutaka Sasaki. 2018. Enhancing drug-drug interaction extraction from texts by molecular structure information.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Scibert: A pretrained language model for scientific text", |
| "authors": [ |
| { |
| "first": "Iz", |
| "middle": [], |
| "last": "Beltagy", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Lo", |
| "suffix": "" |
| }, |
| { |
| "first": "Arman", |
| "middle": [], |
| "last": "Cohan", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "EMNLP/IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In EMNLP/IJCNLP.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automatic chemical design using a data-driven continuous representation of molecules", |
| "authors": [ |
| { |
| "first": "Adams", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Al\u00e1n", |
| "middle": [], |
| "last": "Aspuru-Guzik", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ACS Central Science", |
| "volume": "4", |
| "issue": "2", |
| "pages": "268--276", |
| "other_ids": { |
| "DOI": [ |
| "10.1021/acscentsci.7b00572" |
| ], |
| "PMID": [ |
| "29532027" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adams, and Al\u00e1n Aspuru-Guzik. 2018. Automatic chemical design using a data-driven continuous rep- resentation of molecules. ACS Central Science, 4(2):268-276. PMID: 29532027.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The ddi corpus: An annotated corpus with pharmacological substances and drug-drug interactions", |
| "authors": [ |
| { |
| "first": "Mar\u00eda", |
| "middle": [], |
| "last": "Herrero-Zazo", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabel", |
| "middle": [], |
| "last": "Segura-Bedmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Paloma", |
| "middle": [], |
| "last": "Mart\u00ednez", |
| "suffix": "" |
| }, |
| { |
| "first": "Thierry", |
| "middle": [], |
| "last": "Declerck", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Journal of Biomedical Informatics", |
| "volume": "46", |
| "issue": "5", |
| "pages": "914--920", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.jbi.2013.07.011" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mar\u00eda Herrero-Zazo, Isabel Segura-Bedmar, Paloma Mart\u00ednez, and Thierry Declerck. 2013. The ddi corpus: An annotated corpus with pharmacological substances and drug-drug interactions. Journal of Biomedical Informatics, 46(5):914 -920.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Zinc a free database of commercially available compounds for virtual screening", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Irwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Shoichet", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Journal of chemical information and modeling", |
| "volume": "45", |
| "issue": "", |
| "pages": "177--82", |
| "other_ids": { |
| "DOI": [ |
| "10.1021/ci049714+" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Irwin and Brian Shoichet. 2005. Zinc a free database of commercially available compounds for virtual screening. Journal of chemical information and modeling, 45:177-82.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Drug-drug interaction prediction based on knowledge graph embeddings and convolutionallstm network", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Md", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Karim", |
| "suffix": "" |
| }, |
| { |
| "first": "Joao", |
| "middle": [], |
| "last": "Cochez", |
| "suffix": "" |
| }, |
| { |
| "first": "Mamtaz", |
| "middle": [], |
| "last": "Bosco Jares", |
| "suffix": "" |
| }, |
| { |
| "first": "Oya", |
| "middle": [], |
| "last": "Uddin", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Beyan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Decker", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB '19", |
| "volume": "", |
| "issue": "", |
| "pages": "113--123", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3307339.3342161" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Md. Rezaul Karim, Michael Cochez, Joao Bosco Jares, Mamtaz Uddin, Oya Beyan, and Stefan Decker. 2019. Drug-drug interaction prediction based on knowledge graph embeddings and convolutional- lstm network. In Proceedings of the 10th ACM In- ternational Conference on Bioinformatics, Compu- tational Biology and Health Informatics, BCB '19, page 113-123, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Autoencoding variational bayes", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Welling", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik Kingma and Max Welling. 2014. Auto- encoding variational bayes.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining", |
| "authors": [ |
| { |
| "first": "Jinhyuk", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Wonjin", |
| "middle": [], |
| "last": "Yoon", |
| "suffix": "" |
| }, |
| { |
| "first": "Sungdong", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Donghyeon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Sunkyu", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Chan", |
| "middle": [], |
| "last": "Ho So", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaewoo", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Bioinformatics", |
| "volume": "36", |
| "issue": "4", |
| "pages": "1234--1240", |
| "other_ids": { |
| "DOI": [ |
| "10.1093/bioinformatics/btz682" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Syntax-aware multi-task graph convolutional networks for biomedical relation extraction", |
| "authors": [ |
| { |
| "first": "Diya", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)", |
| "volume": "", |
| "issue": "", |
| "pages": "28--33", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-6204" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diya Li and Heng Ji. 2019. Syntax-aware multi-task graph convolutional networks for biomedical rela- tion extraction. In Proceedings of the Tenth Inter- national Workshop on Health Text Mining and Infor- mation Analysis (LOUHI 2019), pages 28-33, Hong Kong. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Drug-drug interaction extraction via convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Shengyu", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Buzhou", |
| "middle": [], |
| "last": "Tang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qingcai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaolong", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Computational and Mathematical Methods in Medicine", |
| "volume": "2016", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": { |
| "DOI": [ |
| "10.1155/2016/6918381" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shengyu Liu, Buzhou Tang, Qingcai Chen, and Xiao- long Wang. 2016. Drug-drug interaction extraction via convolutional neural networks. Computational and Mathematical Methods in Medicine, 2016:1-8.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Drug-drug interactions prediction based on drug embedding and graph auto-encoder", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Purkayastha", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Mondal", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sarkar", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "K" |
| ], |
| "last": "Pillai", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE)", |
| "volume": "", |
| "issue": "", |
| "pages": "547--552", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Purkayastha, I. Mondal, S. Sarkar, P. Goyal, and J. K. Pillai. 2019. Drug-drug interactions prediction based on drug embedding and graph auto-encoder. In 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), pages 547-552.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Drugdrug interaction extraction from biomedical texts using long short-term memory network", |
| "authors": [ |
| { |
| "first": "Kumar", |
| "middle": [], |
| "last": "Sunil", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Sahu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Anand", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Journal of Biomedical Informatics", |
| "volume": "86", |
| "issue": "", |
| "pages": "15--24", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.jbi.2018.08.005" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sunil Kumar Sahu and Ashish Anand. 2018. Drug- drug interaction extraction from biomedical texts us- ing long short-term memory network. Journal of Biomedical Informatics, 86:15 -24.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Drugdrug interaction extraction via recurrent hybrid convolutional neural networks with an improved focal loss", |
| "authors": [ |
| { |
| "first": "Xia", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Ke", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Long", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Sutcliffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Feijuan", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Sushing", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Entropy", |
| "volume": "21", |
| "issue": "1", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.3390/e21010037" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xia Sun, Ke Dong, Long Ma, Richard Sutcliffe, Fei- juan He, Sushing Chen, and Jun Feng. 2019. Drug- drug interaction extraction via recurrent hybrid con- volutional neural networks with an improved focal loss. Entropy, 21(1):37.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "An attention-based effective neural model for drug-drug interactions extraction", |
| "authors": [ |
| { |
| "first": "Vivian", |
| "middle": [], |
| "last": "Vivian", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongfei", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ling", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhehuan", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhang", |
| "middle": [], |
| "last": "Li Zhengguang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhihao", |
| "middle": [], |
| "last": "Yijia", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "BMC Bioinformatics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1186/s12859-017-1855-x" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vivian Vivian, Hongfei Lin, Ling Luo, Zhehuan Zhao, li Zhengguang, Zhang Yijia, Zhihao Yang, and Jian Wang. 2017. An attention-based effective neural model for drug-drug interactions extraction. BMC Bioinformatics, 18.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Drugbank: a knowledgebase for drugs, drug actions and drug targets", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Wishart", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Knox", |
| "suffix": "" |
| }, |
| { |
| "first": "An", |
| "middle": [ |
| "Chi" |
| ], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Dean", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Savita", |
| "middle": [], |
| "last": "Shrivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Tzur", |
| "suffix": "" |
| }, |
| { |
| "first": "Bijaya", |
| "middle": [], |
| "last": "Gautam", |
| "suffix": "" |
| }, |
| { |
| "first": "Murtaza", |
| "middle": [], |
| "last": "Hassanali", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Nucleic acids research", |
| "volume": "36", |
| "issue": "", |
| "pages": "901--907", |
| "other_ids": { |
| "DOI": [ |
| "10.1093/nar/gkm958" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Wishart, Craig Knox, An Chi Guo, Dean Cheng, Savita Shrivastava, Dan Tzur, Bijaya Gautam, and Murtaza Hassanali. 2008. Drugbank: a knowledge- base for drugs, drug actions and drug targets. nucleic acids res 36:d901-d906. Nucleic acids research, 36:D901-6.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Enriching pretrained language model with entity information for relation classification", |
| "authors": [ |
| { |
| "first": "Shanchan", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yifan", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shanchan Wu and Yifan He. 2019. Enriching pre- trained language model with entity information for relation classification.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Xlnet: Generalized autoregressive pretraining for language understanding", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zihang", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Quoc", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "NeurIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Yang, Zihang Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Drug-drug interaction extraction via hierarchical RNNs on sequence and shortest dependency paths", |
| "authors": [ |
| { |
| "first": "Yijia", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongfei", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhihao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Dumontier", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Bioinformatics", |
| "volume": "34", |
| "issue": "5", |
| "pages": "828--835", |
| "other_ids": { |
| "DOI": [ |
| "10.1093/bioinformatics/btx659" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yijia Zhang, Wei Zheng, Hongfei Lin, Jian Wang, Zhi- hao Yang, and Michel Dumontier. 2017. Drug-drug interaction extraction via hierarchical RNNs on se- quence and shortest dependency paths. Bioinformat- ics, 34(5):828-835.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Extracting drug-drug interactions from texts with biobert and multiple entityaware attentions", |
| "authors": [ |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lishuang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongbin", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Anqiao", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Xueyang", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Journal of biomedical informatics", |
| "volume": "106", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.jbi.2020.103451" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yu Zhu, Lishuang Li, Hongbin Lu, Anqiao Zhou, and Xueyang Qin. 2020a. Extracting drug-drug inter- actions from texts with biobert and multiple entity- aware attentions. Journal of biomedical informatics, 106:103451.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Extracting drug-drug interactions from texts with biobert and multiple entityaware attentions", |
| "authors": [ |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lishuang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongbin", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Anqiao", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Xueyang", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Journal of Biomedical Informatics", |
| "volume": "106", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.jbi.2020.103451" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yu Zhu, Lishuang Li, Hongbin Lu, Anqiao Zhou, and Xueyang Qin. 2020b. Extracting drug-drug inter- actions from texts with biobert and multiple entity- aware attentions. Journal of Biomedical Informatics, 106:103451.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Schematic Representation of BERTChem-DDI with the input sentence \"Glepafloxain is a competitive inhibitor of the metabolism of Theophylline\" tagged with two drug entities Glepafloxacin and Theophylline.", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF1": { |
| "text": "7) W and b are the parameters of the fully-connected layer of the chemical structure representation of d 1 and d 2 . The final layer of BERTChem-DDI model contains the concatenation of all the previous textbased outputs (see Section 2.1) and chemical structure representation as expressed in the equations:", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF1": { |
| "text": "Statistics of the DDI Extraction corpus 2013.", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "text": "Ablation of the contextual embeddings.", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Models</td><td>Embeddings</td><td>Macro F1</td></tr><tr><td>BERT-DDI</td><td>biobert v1.0 pubmed pmc</td><td>0.818</td></tr><tr><td colspan=\"2\">BERTChem-DDI biobert v1.0 pubmed pmc</td><td>0.829</td></tr><tr><td>BERT-DDI</td><td>biobert v1.1 pubmed</td><td>0.822</td></tr><tr><td>BERTChem-DDI</td><td>biobert v1.1 pubmed</td><td>0.838</td></tr></table>" |
| }, |
| "TABREF4": { |
| "text": "", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>: Probing deeper into the influence of chemical</td></tr><tr><td>structure information into the BERT-based models for</td></tr><tr><td>DDI Relation Classification.</td></tr></table>" |
| }, |
| "TABREF7": { |
| "text": "Comparison of F1 scores for all the relation types using existing baselines on test set.", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Adv indicates</td></tr></table>" |
| } |
| } |
| } |
| } |