[ { "content": "A Multi-Modal Contrastive Diffusion Model for Therapeutic Peptide Generation\nYongkang Wang1*, Xuan Liu1*, Feng Huang1, Zhankun Xiong1, Wen Zhang1,2 3†\n1College of Informatics, Huazhong Agricultural University, Wuhan 430070, China\n2Hubei Key Laboratory of Agricultural Bioinformatics, Huazhong Agricultural University, Wuhan 430070, China\n3Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan 430070,China\n{wyky481, lx666, fhuang233, xiongzk }@webmail.hzau.edu.cn, zhangwen@mail.hzau.edu.cn\nAbstract\nTherapeutic peptides represent a unique class of pharmaceu-\ntical agents crucial for the treatment of human diseases. Re-\ncently, deep generative models have exhibited remarkable\npotential for generating therapeutic peptides, but they only\nutilize sequence or structure information alone, which hin-\nders the performance in generation. In this study, we pro-\npose a Multi-Modal Contrastive Diffusion model (MMCD),\nfusing both sequence and structure modalities in a diffusion\nframework to co-generate novel peptide sequences and struc-\ntures. Specifically, MMCD constructs the sequence-modal\nand structure-modal diffusion models, respectively, and de-\nvises a multi-modal contrastive learning strategy with inter-\ncontrastive and intra-contrastive in each diffusion timestep,\naiming to capture the consistency between two modalities\nand boost model performance. The inter-contrastive aligns se-\nquences and structures of peptides by maximizing the agree-\nment of their embeddings, while the intra-contrastive differ-\nentiates therapeutic and non-therapeutic peptides by max-\nimizing the disagreement of their sequence/structure em-\nbeddings simultaneously. The extensive experiments demon-\nstrate that MMCD performs better than other state-of-the-\nart deep generative methods in generating therapeutic pep-\ntides across various metrics, including antimicrobial/anti-\ncancer score, diversity, and peptide-docking.\nIntroduction\nTherapeutic peptides, such as antimicrobial and anticancer\npeptides, are a unique class of pharmaceutical agents that\ncomprise short chains of amino acids, exhibiting significant\npotential in treating complex human diseases (Jakubczyk\net al. 2020). Traditionally, therapeutic peptides are discov-\nered through a comprehensive screening of sequence spaces\nusing phage/yeast display technologies (Muttenthaler et al.\n2021) or computational tools trained for scoring desired\nproperties (Lee et al. 2017; Lee, Wong, and Ferguson 2018).\nHowever, the combinatorial space of possible peptides is\nvast and only a small solution satisfies therapeutic require-\nments; thus, such screening methods based on brute force\ncan be time-consuming and costly.\n*These authors contributed equally.\n†Corresponding authors.\nCopyright © 2024, Association for the Advancement of Artificial\nIntelligence (www.aaai.org). All rights reserved.In recent years, deep generative models (DGMs) have\ndemonstrated success in generating images (Liu and Chilton\n2022), texts (Iqbal and Qureshi 2022), proteins (Wu et al.\n2021), and also gained popularity in peptides. DGMs ex-\nplored a more expansive chemical space that affords the\ncreation of structurally novel peptides, by training neu-\nral networks to approximate the underlying distribution of\nobserved or known ones (Wan, Kontogiorgos, and Fuente\n2022). For example, autoregression-based methods depicted\npeptide sequences as sentences composed of residue tokens,\nso that the problem can be solved by predicting residue ar-\nrangement via recurrent neural networks (RNN) (M ¨uller,\nHiss, and Schneider 2018; Capecchi et al. 2021). Variational\nautoencoder (V AE)-based methods generated new peptide\nsequences by sampling from the latent space learned through\nan encoder-decoder architecture, with or without therapeu-\ntic properties as conditional constraints (Ghorbani et al.\n2022; Szymczak et al. 2023b). Generative adversarial net-\nwork (GAN)-based methods trained the generator and dis-\ncriminator using known data, which compete against each\nother to generate new peptides (Tucs et al. 2020; Oort et al.\n2021; Lin, Lin, and Lane 2022). Nowadays, diffusion mod-\nels (Yang et al. 2023) are prevalent in the generation of pro-\ntein sequences and structures, owing to their superior capa-\nbility in fitting distributions compared to prior techniques\n(Shi et al. 2023; Wu et al. 2022). Likewise, these advanced\ndiffusion models can be extended to peptide generation and\nare expected to deliver favorable outcomes.\nDespite the commendable progress of efforts above, they\nfocused on generating either sequences (i.e., residue ar-\nrangements) or structures (i.e., spatial coordinates of back-\nbone atoms), ignoring that models fusing information from\nboth modalities may outperform their uni-modal counter-\nparts (Huang et al. 2021). However, how to effectively in-\ntegrate the multi-modal information and capture their con-\nsistency in peptide generation is a major challenge. Addi-\ntionally, compared with generation tasks for images, texts,\nand proteins that involve millions of labeled samples, public\ndatasets for therapeutic peptides typically contain only thou-\nsands of sequence or structure profiles, induced by the high\ncost of in vitro screening. This limited amount of available\ndata may result in overfitting (Webster et al. 2019), which\nconfines generated outcomes within a restricted distribution,\nconsequently compromising the model’s generalization abil-arXiv:2312.15665v2 [q-bio.QM] 4 Jan 2024ity. How to fully leverage existing peptide data, such as ther-\napeutic and non-therapeutic peptides, to enhance the gener-\nation performance could be regarded as another challenge.\nTo address these challenges, we propose a Multi-Modal\nContrastive Diffusion model for therapeutic peptide genera-\ntion, named MMCD . Specifically, we build a multi-modal\nframework that integrates sequence-modal and structure-\nmodal diffusion models for co-generating residue arrange-\nments and backbone coordinates of peptides. To ensure con-\nsistency between the two modalities during the generation\nprocess, we bring in an inter-modal contrastive learning\n(Inter-CL) strategy. Inter-CL aligns sequences and struc-\ntures, by maximizing the agreement between their embed-\ndings derived from the same peptides at each diffusion\ntimestep. Meanwhile, to avoid the issue of inferior per-\nformance caused by limited therapeutic peptide data, we\nincorporate substantial known non-therapeutic peptides as\ndata augmentations to devise an intra-modal CL (Intra-\nCL). Intra-CL differentiates therapeutic and non-therapeutic\npeptides by maximizing the disagreement of their se-\nquence/structure embeddings at each diffusion timestep,\ndriving the model to precisely fit the distribution of thera-\npeutic peptides. Overall, the main contributions of this work\nare described as follows:\n• We propose a multi-modal diffusion model that inte-\ngrates both sequence and structure information to co-\ngenerate residue arrangements and backbone coordinates\nof therapeutic peptides, whereas previous works focused\nonly on a single modality.\n• We design the inter-intra CL strategy at each diffusion\ntimestep, which aims to maximize the agreement be-\ntween sequence and structure embeddings for aligning\nmulti-modal information, and maximize the disagree-\nment between therapeutic and non-therapeutic peptides\nfor boosting model generalization.\n• Extensive experiments conducted on peptide datasets\ndemonstrate that MMCD surpasses the current state-of-\nthe-art baselines in generating therapeutic peptides, par-\nticularly in terms of antimicrobial/anticancer score, di-\nversity, and pathogen-docking.\nRelated works\nDiffusion Model for Protein Generation\nDiffusion models (Song and Ermon 2019; Trippe et al. 2023)\ndevote to learning the noise that adequately destroys the\nsource data and iteratively remove noise from the prior dis-\ntribution to generate new samples, which have emerged as\ncutting-edge methods for numerous generation tasks, es-\npecially in proteins (Wu et al. 2022; Cao et al. 2023).\nFor example, Liu et al. (2023) proposed a textual condi-\ntionally guided diffusion model for sequence generation.\nHoogeboom et al. (2022) introduced ProtDiff with an E(3)\nequivariant graph neural network to learn a diverse distri-\nbution over backbone coordinates of structures. Luo et al.\n(2022) considered both the position and orientation of anti-\nbody residues, achieving an equivariant diffusion model for\nsequence-structure co-generation. Despite their success, thefusion of both sequence and structure modalities in diffu-\nsion models has not been comprehensively investigated, and\ntheir potential for peptide generation remains unexplored.\nTo fill this gap, we implement a peptide-oriented diffu-\nsion model capable of sequence-structure co-generation and\nmulti-modal data fusion.\nContrastive Learning\nBeing popular in self-supervised learning, contrastive learn-\ning (CL) allows models to learn the knowledge behind data\nwithout explicit labels (Xia et al. 2022; Zhu et al. 2023). It\naims to bring an anchor (i.e., data sample) closer to a posi-\ntive/similar instance and away from many negative/dissimi-\nlar instances, by optimizing their mutual information in the\nembedding space. Strategies to yield the positive and neg-\native pairs often dominate the model performance (Zhang\net al. 2022). For example, Yuan et al. (2021) proposed a\nmulti-modal CL to align text and image data, which encour-\nages the agreement of corresponding text-image pairs (posi-\ntive) to be greater than those of all non-corresponding pairs\n(negative). Wu, Luu, and Dong (2022) designed a CL frame-\nwork that makes full use of semantic relations among text\nsamples via efficient positive and negative sampling strate-\ngies, to mitigate data sparsity for short text modeling. Zhang\net al. (2023b) augmented the protein structures using differ-\nent conformers, and maximized the agreement/disagreement\nbetween the learned embeddings of same/different proteins,\naiming to learn more discriminative representations. How-\never, these CL strategies have yet to be extended to peptide-\nrelated studies. Therefore, we devise the novel CL strategy\nin peptide generation, which serves as an auxiliary objective\nto enforce sequence-structure alignment and boost model\nperformance.\nMethodology\nIn this section, we formulate the peptide co-generation prob-\nlem for sequence and structure. Subsequently, we elabo-\nrately enumerate the components of our method MMCD,\nincluding the diffusion model for peptide generation and the\nmulti-modal contrastive learning strategy. The overview of\nMMCD is illustrated in Figure 1.\nProblem Formulation\nA peptide with Nresidues (amino acids) can be repre-\nsented as a sequence-structure tuple, denoted as X=\n(S, C).S= [si]N\ni=1stands for the sequence with si∈\n{ACDEFGHIKLMNPQRSTV WY }as the type of\nthei-th residue, and C= [ci]N\ni=1stands for the structure\nwithci∈R3∗4as Cartesian coordinates of the i-th residue\n(involving four backbone atoms N-C α-C-O). Our goal is to\nmodel the joint distribution of Xbased on the known pep-\ntide data, so that sequences (i.e., residue types) and struc-\ntures (i.e., residue coordinates) of new peptides can be co-\ngenerated by sampling the distribution.\nDiffusion Model for Peptide Generation\nThe diffusion model defines the Markov chains of processes,\nin which latent variables are encoded by a forward diffu-\nsion process and decoded by a reverse generative processembedding embeddingInter -contrastive learning...\n...\nIntra-contrastive learning\nTherapeutic Non-therapeuticNoise\nprior Peptide\npositive pairs \nnega tive pairs MLP\nMLP EGNNTransformer\nST ...St-1StS0 ...q(St\n | St-1)\np(St-1\n | St\n )\nCT ... Ct-1CtC0 ...\nq(Ct\n | Ct-1)p(Ct-1\n | Ct\n )\n...QVQ RWQ D\nSDDAMWV...\n...*******\n******* ...\n...\n...\nSt\nSt\nCt... ...\n... ...Intra-contrastive learningSequences\nStructures\nMarginal \ndistribution\nGaussian \nnoiseMulti -Modal CL\nMulti -Modal CLDenoisingAdd noise\nDenoising\nAdd noise\nCt\nFigure 1: Overview of the MMCD. MMCD consists of a diffusion model for the peptide sequence-structure co-generation and\nmulti-modal contrastive learning (CL). The diffusion model involves a forward process ( q(·|·)) for adding noise and a reverse\nprocess ( p(·|·)) for denoising at each timestep t. The reverse process utilizes a transformer encoder (or EGNN) to extract\nembeddings from sequences S(or structures C), and a sequence (or structure)-based MLP to map embeddings to the marginal\ndistribution (or Gaussian) noise. The multi-modal CL includes an Inter-CL and an Intra-CL, which aims to align sequence and\nstructure embeddings, and differentiate therapeutic and non-therapeutic peptide embeddings.\n(Sohl-Dickstein et al. 2015). Let X0= (S0, C0)denotes\nthe ground-truth peptide and Xt= (St, Ct)fort= 1, ..., T\nto be the latent variable at timestep t. The peptide gener-\nation can be modeled as an evolving thermodynamic sys-\ntem, where the forward process q(Xt|Xt−1)gradually in-\njects small noise to the data X0until reaching a random\nnoise distribution at timestep T, and the reverse process\npθ(Xt−1|Xt)with learnable parameters θlearns to denoise\nthe latent variable Xttowards the data distribution (Luo\net al. 2022).\nDiffusion for Peptide Sequence. Following Anand and\nAchim (2022), we treat residue types as categorical data and\napply discrete diffusion to sequences, where each residue\ntype is characterized using one-hot encoding with 20 types.\nFor the forward process, we add noise to residue types using\nthe transition matrices with the marginal distribution (Austin\net al. 2021; Vignac et al. 2023) (see details in Appendix A).\nFor the reverse process, the diffusion trajectory is parame-\nterized by the probability q(St−1|St, S0)and a network\nˆpθis defined to predict the probability of S0(Austin et al.2021), that is:\npθ\u0000\nSt−1|St\u0001\n=Y\n1≤i≤Nq(st−1\ni|St,ˆS0)·ˆpθ(ˆS0|St)(1)\nwhere st\nidenotes the one-hot feature for the i-th residue in\nthe sequence Sat timestep t, and ˆS0is the predicted proba-\nbility of S0. In this work, we design the ˆpθas follows:\nˆpθ\u0010\nˆS0|St\u0011\n=Y\n1≤i≤NSoftmax\u0000\nˆs0\ni| Fs\u0000\nht\ni\u0001\u0001\n(2)\nwhere ht\niis the input feature of residue iwith the diffusion\nnoise at time t(the initialization of ht\niis provided in Ap-\npendix A). Fsis a hybrid neural network to predict the noise\nof residue types from the marginal distribution, and then the\nnoise would be removed to compute the probability of ˆs0\ni.\nSoftmax is applied over all residue types. Here, we imple-\nmentFswith a transformer encoder and an MLP. The for-\nmer learns contextual embeddings of residues from the se-\nquence, while the latter maps these embeddings to the noises\nof residue types. The learned sequence embedding (defined\nasS) involves downstream contrastive learning strategies.Diffusion for Peptide Structure. As the coordinates of\natoms are continuous variables in the 3D space, the forward\nprocess can be defined by adding Gaussian noise to atom\ncoordinates (Ho, Jain, and Abbeel 2020) (see details in Ap-\npendix A). Following Trippe et al. (2023), the reverse pro-\ncess can be defined as:\npθ(ct−1\ni|Ct) =N(ct−1\ni|µθ(Ct, t), βtI) (3)\nµθ\u0000\nCt, t\u0001\n=1√\nαt\u0012\nct\ni−βt\n√\n1−αtϵθ\u0000\nCt, t\u0001\u0013\n(4)\nwhere cirefers to coordinates of the i-th residue in the\nstructure C;βis the noise rate, formally αt= 1−βt,\nαt=Qt\nτ=1(1−βτ); the network ϵθis used to gradually\nrecover the structural data by predicting the Gaussian noise.\nIn this work, we design the ϵθas follows:\nϵθ(Ct, t) =Fc\u0000\nrt\ni, ht\ni\u0001\n(5)\nwhere rirepresents the coordinates of residue i,hiis the\nresidue feature, and Fcis a hybrid neural network for pre-\ndicting Gaussian noises at timestep t. Similar to sequence\ndiffusion, we implement Fcwith an equivariant graph neu-\nral network (EGNN) (Satorras, Hoogeboom, and Welling\n2021) and an MLP. The former learns spatial embeddings\nof residues from the structure (formalized as a 3D graph),\nwhile the latter maps these embeddings to Gaussian noises.\nThe learned structure embedding (defined as C) also involves\ndownstream contrastive learning strategies.\nDiffusion Objective. Following previous work (Anand\nand Achim 2022), we decompose the objective of the pep-\ntide diffusion process into sequence loss and structure loss.\nFor the sequence loss Lt\nS, we aim to minimize the cross-\nentropy (CE) loss between the actual and predicted residue\ntypes at timestep t:\nLt\nS=1\nNX\n1≤i≤NCE\u0000\ns0\ni,ˆpθ(ˆs0\ni|St)\u0001\n(6)\nFor the structure loss Lt\nC, the objective is to calculate the\nmean squared error (MSE) between the predicted noise ϵθ\nand standard Gaussian noise ϵat timestep t:\nLt\nC=1\nNX\n1≤i≤N\r\rϵi−ϵθ(Ct, t)\r\r2(7)\nMulti-Modal Contrastive Learning Strategy\nWhen multiple modal data (e.g., sequence and structure) co-\nexist, it becomes imperative to capture their consistency to\nreduce the heterogeneous differences between modalities,\nallowing them to be better fused in generation tasks. Mutual\ninformation (MI) is a straightforward solution to measure\nthe non-linear dependency (consistency) between variables\n(Liu et al. 2023); thus, maximizing MI between modalities\ncan force them to align and share more crucial information.\nAlong this line, we bring in contrastive learning (CL) to\nalign sequences and structures by maximizing their MI in\nthe embedding space. Specifically, we devise CL strategies\nfor each diffusion timestep t, as follows:Inter-CL. For a peptide, we define its sequence as the an-\nchor, its structure as the positive instance, and the structures\nof other peptides in a mini-batch as the negative instances.\nThen, we maximize the MI of positive pair (anchor and posi-\ntive instance) while minimizing the MI of negative pairs (an-\nchor and negative instances), based on embeddings learned\nfrom the networks ˆpθandϵθ. Further, we establish a ’dual’\ncontrast where the structure acts as an anchor and sequences\nare instances. The objective is to minimize the following\nInfoNCE-based (Chen et al. 2020) loss function:\nLt\ninter=−1\n2\"\nlogE\u0000\nSt\ni,Ct\ni\u0001\nPM\nj=1E\u0000\nSt\ni,Ct\nj\u0001+ logE\u0000\nCt\ni,St\ni\u0001\nPM\nj=1E\u0000\nCt\ni,St\nj\u0001#\n(8)\nwhere Si/Ciis the sequence/structure embeddings of i-th\npeptide in the mini-batch, E(·,·)is the cosine similarity\nfunction with the temperature coefficient to measure the MI\nscore between two variables, Mis the size of a mini-batch.\nIn addition, the used diffusion model can only remem-\nber confined generation patterns if therapeutic peptide data\nfor training is limited, which may lead to inferior general-\nization towards novel peptides. To alleviate this issue, we\nintroduce contrastive learning to boost the generative capac-\nity of networks ˆpθandϵθby enriching the supervised sig-\nnals. However, it is unwise to construct positive instances by\nperforming data augmentations on therapeutic peptides, as\neven minor perturbations may lead to significant functional\nchanges (Yadav, Kumar, and Singh 2022). Hence, our fo-\ncus lies on employing effective strategies for selecting neg-\native instances. In this regard, we collect non-therapeutic\npeptides from public databases to treat them as negative in-\nstances, and maximize the disagreement between embed-\ndings of therapeutic and non-therapeutic peptides. In detail,\nwe devise an Intra-CL strategy for each diffusion timestep t,\nas follows:\nIntra-CL. In a mini-batch, we define the sequence of a\ntherapeutic peptide ias the anchor, and the sequence of an-\nother therapeutic peptide jas the positive instance, while the\nsequences of non-therapeutic peptides kare regarded as neg-\native instances. Similar to Inter-CL, we then maximize/mini-\nmize the MI of positive/negative pairs. And we also establish\na structure-oriented contrast by using structures of therapeu-\ntic and non-therapeutic peptides to construct the anchor, pos-\nitive, and negative instances. The objective is to minimize\nthe following loss function (Zheng et al. 2021):\nLt\nintra=−1\nMMX\nj=1,j̸=i1yi=yj \nlogE\u0000\nSt\ni,St\nj\u0001\nPM\nk=11yi̸=ykE(St\ni,St\nk)\n+ logE\u0000\nCt\ni,Ct\nj\u0001\nPM\nk=11yi̸=ykE(Ct\ni,Ct\nk)!\n(9)\nwhere yirepresents the class of peptide i(i.e., therapeutic or\nnon-therapeutic). 1yi=yjand1yi̸=ykstand for the indicator\nfunctions, where the output is 1ifyi=yj(peptides iandj\nbelong to the same class) or yi̸=yk(the types of peptides\niandkare different); otherwise the output is 0. The indica-\ntor function filters therapeutic and non-therapeutic peptides\nfrom the data for creating positive and negative pairs.MethodsAMP ACP\nSimilarity ↓ Instability ↓ Antimicrobial ↑ Similarity ↓ Instability ↓ Anticancer ↑\nLSTM-RNN 39.6164 45.0862 0.8550 36.9302 47.0669 0.7336\nAMPGAN∗38.3080 51.5236 0.8617 - - -\nHydrAMP∗31.0662 59.6340 0.8145 - - -\nWAE-PSO∗- - - 41.2524 42.5061 0.7443\nDiffAB 28.9849 43.3607 0.8024 31.4220 36.0610 0.6669\nSimDiff 25.5385 41.1629 0.8560 28.8245 33.0405 0.7222\nMMCD 24.4107 39.9649 0.8810 27.4685 31.7381 0.7604\n’*’ represents that the method relies on domain-specific biological knowledge. ’-’ represents that the method is un-\nsuitable for the current task. For example, AMPGAN and HydrAMP are only designed for the AMP generation.\nTable 1: Results for the sequence generation\nMethodsAMP ACP\nRamachandran ↑ RMSD ↓ Docking ↑ Ramachandran ↑ RMSD ↓\nAPPTEST 69.6576 2.7918 1362 67.9826 2.8055\nFoldingDiff 72.4681 2.5118 1574 72.0531 2.6033\nProtDiff 71.3078 2.5544 1533 69.7589 2.4960\nDiffAB 72.9647 2.3844 1608 71.3225 2.5513\nSimDiff 76.1378 2.1004 1682 76.6164 2.4118\nMMCD 80.4661 1.8278 1728 78.2157 2.0847\nTable 2: Results for the structure generation.\nThe reason behind the design of Intra-CL is intuitive.\nFirst, the non-therapeutic class naturally implies opposite in-\nformation against the therapeutic class, and hence it makes\nthe model more discriminative. Second, the fashion to max-\nimize the disagreement between classes (1) can induce bi-\nases in the embedding distribution of therapeutic peptides,\nidentifying more potential generation space, and (2) can ex-\nplicitly reinforce embedding-class correspondences during\ndiffusion, maintaining high generation fidelity (Zhu et al.\n2022). Further analysis is detailed in the ablation study.\nModel Training\nThe ultimate objective function is the sum of the diffusion\nprocess for sequence and structure generation, along with\nthe CL tasks for Intra-CL and Inter-CL:\nLtotal=Et∼Uniform(1...T)\u0002\nα\u0000\nLt\nS+Lt\nC\u0001\n+ (1−α)\u0000\nLt\nintra+Lt\ninter\u0001\u0003\n(10)\nwhere αrepresents a hyperparameter to balance the contri-\nbutions of different tasks. The Uniform(1...T) shows the uni-\nform distribution for the diffusion timesteps. The implemen-\ntation details of MMCD and the sampling process of peptide\ngeneration can be found in Appendix A.\nExperiments\nExperimental Setups\nDatasets. Following previous studies (Thi Phan et al.\n2022; Zhang et al. 2023a), we collected therapeutic pep-\ntide data from public databases, containing two biologi-\ncal types, i.e., antimicrobial peptides (AMP) and anticancer\npeptides (ACP). Among these collected peptides, a portion\nof them only have 1D sequence information, without 3Dstructure information. Then, we applied Rosetta-based com-\nputational tools (Chaudhury, Lyskov, and Gray 2010) to pre-\ndict the missing structures based on their sequences. Finally,\nwe compiled two datasets, one containing 20,129 antimi-\ncrobial peptides and the other containing 4,381 anticancer\npeptides. In addition, we paired an equal number of labeled\nnon-therapeutic peptides (collected from public databases)\nwith each of the two datasets, exclusively for the contrastive\nlearning task.\nBaselines. We compared our method with the follow-\ning advanced methods for peptide generation at sequence\nand structure levels. For the sequence generation, the\nautoregression-based method LSTM-RNN (M ¨uller, Hiss,\nand Schneider 2018), the GAN-based method AMPGAN\n(Oort et al. 2021), and the V AE-based methods including\nWAE-PSO (Yang et al. 2022) and HydrAMP (Szymczak\net al. 2023a) are listed as baselines. For the structure gener-\nation, we took APPTEST (Timmons and Hewage 2021) as a\nbaseline, which combines the neural network and simulated\nannealing algorithm for structure prediction. Moreover, we\nextended diffusion-based methods for protein generation to\npeptides. The diffusion-based methods for structure genera-\ntion (e.g., FoldingDiff (Wu et al. 2022) and ProtDiff (Trippe\net al. 2023)) and the sequence-structure co-design (e.g., Dif-\nfAB(Luo et al. 2022) and SimDiff(Zhang et al. 2023b)), are\nconsidered for the comparison separately in the sequence\nand structure generation.\nEvaluation protocol. Here, we required each model (ours\nand baselines) to generate 1,000 new peptides, and then\nevaluated the quality of generated peptides with the follow-\ning metrics. For the sequence, similarity score is used toMMCD (w/o Inter -CL) on AMP\n MMCD on AMP\nMMCD (w/o Intra -CL) on AMP and non -AMP MMCD on AMP and non -AMP(a)\n(b)\n(a) (b)\naverage\naverageFigure 2: (a) The sample ratio under different sequence lengths in the AMP dataset, where the red line is the average ratio. (b)\nThe similarity and RMSD scores of MMCD and baselines across different sequence lengths.\nquantify how closely the generated sequences match exist-\ning ones, with a lower score indicating higher novelty; insta-\nbility score (M ¨uller et al. 2017) indicates the degree of pep-\ntide instability; antimicrobial /anticancer score evaluates\nthe probability of peptides having therapeutic properties.\nFor the structure, Ramachandran score (Hollingsworth and\nKarplus 2010) accesses the reliability of peptide structures;\nRMSD score measures the structural similarity between\ngenerated and existing peptides, with a lower score indi-\ncating higher authenticity; docking score (Fl ´orez-Castillo\net al. 2020) evaluates the binding degree of antimicro-\nbial peptides to bacterial membrane proteins (PDB ID:\n6MI7). We only reported the average metrics over all gen-\nerated peptides for each method in the experimental re-\nsults. Detailed information about the datasets, baselines,\nmetrics, and implementations can be found in Appendix\nB. Our code, data and appendix are available on GitHub\n(https://github.com/wyky481l/MMCD)\nExperimental Results\nPerformance comparison. In the results of sequence gen-\neration under two datasets (as shown in Table 1), MMCD ex-\nhibited lower similarity and instability scores than all base-\nlines, suggesting its good generalization ability in generating\ndiverse and stable peptides. Meanwhile, MMCD surpassed\nall baselines with higher antimicrobial and anticancer scores\nacross AMP and ACP datasets, highlighting its strong po-\ntential for generating therapeutic peptides. Beyond that, we\nnoticed that diffusion-based baselines (e.g., SimDiff, Dif-\nfAB) exhibit higher stability and diversity but lower ther-\napeutic scores compared to baselines that incorporate bio-\nlogical knowledge (e.g., AMPGAN, HydrAMP, WAE-PSO,\ndetails in Appendix B). By contrast, MMCD introduced bio-\nlogical knowledge into the diffusion model by designing the\ncontrastive learning of therapeutic and non-therapeutic pep-\ntides, thereby delivering optimality across various metrics.\nFor the results of structure generation (as shown in Ta-\nble 2), MMCD also outperformed all the baselines and ex-\nceeded the best baselines (DiffAB and SimDiff) by 23.3 %\nand 12.9 %in RMSD scores, 10.2 %and 5.6 %in Ramachan-\ndran scores, and 7.4 %and 2.7 %in docking scores for AMP\ndataset. The higher Ramachandran score and lower RMSD\nscore of MMCD underlined the reliability of our generated\npeptide structures. Especially in peptide docking, we foundthat MMCD shows the best docking score compared with\nbaselines, which indicates great binding interactions with\nthe target protein. Overall, MMCD is superior to all base-\nlines in both sequence and structure generation of peptides,\nand its impressive generative ability holds great promise to\nyield high-quality therapeutic peptides.\nPerformance on different sequence lengths. In our\ndataset, sequence lengths of different peptides exhibited sub-\nstantial variation, with the number of residues ranging from\n5 to 50 (Figure 2-a). We required models to generate 20\nnew peptides (sequences or structures) at each sequence\nlength. Note that two methods, AMPGAN and HydrAMP,\nwere excluded from the comparison because they cannot\ngenerate peptides with fixed lengths. From the generated re-\nsults on the AMP dataset (Figure 2-b), MMCD exceeded\nthe baselines in terms of similarity and RMSD scores at\neach sequence length. With the increasing sequence lengths,\nthere is a general trend of increased similarity and RMSD\nscores across all methods. One possible reason for this trend\nis that designing longer peptides becomes more complex,\ngiven the more prominent search space involved. Addition-\nally, the scarcity of long-length peptides poses challenges in\naccurately estimating the similarity between generated and\nknown peptides. In summary, these observations supported\nthat MMCD excels at generating diverse peptides across dif-\nferent lengths, especially shorter ones.\nAblation study\nTo investigate the necessity of each module in MMCD, we\nconducted several comparisons between MMCD with its\nvariants: (1) MMCD (w/o Inter-CL) that removes the Inter-\nCL task, (2) MMCD (w/o Intra-CL) that removes the Intra-\nCL task, and (3) MMCD (w/o Inter-CL & Intra-CL) that re-\nmoves both Inter-CL and Intra-CL tasks. The comparisons\nwere operated on both AMP and ACP datasets, and the re-\nsults are shown in Table 3 and Appendix Table 1. When the\nInter-CL was removed (w/o Inter-CL), we observed a de-\ncline in all metrics for peptide sequence and structure gen-\neration, implying the importance of aligning two modalities\nvia CL. The variant (w/o Intra-CL) results signified that us-\ning the CL to differentiate therapeutic and non-therapeutic\npeptides contributes to the generation. As expected, the per-\nformance of MMCD dropped significantly after removingMethodsAMP ACP\nSimilarity ↓Instability ↓Antimicrobial ↑Similarity ↓Instability ↓Anticancer ↑\nMMCD (w/o InterCL & IntraCL) 27.4794 42.5359 0.8013 31.2820 34.6888 0.6996\nMMCD (w/o IntraCL) 26.6889 41.2631 0.8584 28.9782 33.0268 0.7513\nMMCD (w/o InterCL) 24.9079 41.7646 0.8494 28.0143 33.9816 0.7352\nMMCD 24.4107 39.9649 0.8810 27.4685 31.7381 0.7604\nTable 3: Ablation study on the sequence-level generation task.\nMMCD (w/o InterCL) on AMP\n MMCD on AMP\nMMCD (w/o IntraCL) on AMP and non -AMP MMCD on AMP and non -AMP(a)\n(b)\nFigure 3: (a) The t-SNE for structure and sequence em-\nbeddings of therapeutic peptides (AMP data) obtained from\nMMCD (w/o Inter-CL) and MMCD. (b) The t-SNE for em-\nbeddings (including structures and sequences) of therapeutic\n(AMP) and non-therapeutic (non-AMP) peptides obtained\nfrom MMCD (w/o Intra-CL) and MMCD.\nboth Inter-CL and Intra-CL (w/o Inter-CL & Intra-CL).\nTo better understand the strengths of Inter-CL and Intra-\nCL, we performed the t-SNE (Van der Maaten and Hin-\nton 2008) visualization using the learned embeddings of\npeptides on the AMP dataset. As illustrated in Figure 3-\na, Inter-CL effectively promoted the alignment of sequence\nand structure embeddings, facilitating the shared crucial in-\nformation (dashed circle) to be captured during diffusion.\nThe t-SNE of Intra-CL (Figure 3-b) also revealed that it bet-\nter distinguished therapeutic peptides from non-therapeutic\nones in the embedding distribution. And the resulting dis-\ntribution bias may identify more potential generation space,\nthus leading to higher quality and diversity of therapeutic\npeptides generated by MMCD. Overall, MMCD with all the\nmodules fulfilled superior performance, and removing any\nmodules will diminish its generation power.\nPeptide-docking analysis\nTo test the validity of generated peptide structures, we con-\nducted a molecular-docking simulation. Here, a peptide was\nrandomly selected from the AMP dataset as the reference,and the methods (Figure 4) were employed to generate cor-\nresponding structures based on the sequence of the reference\npeptide (see details in Appendix C). The lipopolysaccharide\non the outer membrane of bacteria (Li, Orlando, and Liao\n2019) was selected as the target protein for molecular dock-\ning. Then, we extracted the residues within a 5 ˚A proxim-\nity between peptides (i.e., the reference and generated struc-\ntures) and the active pocket of target protein in docking com-\nplexes, to visualize their binding interactions (Miller et al.\n2021). Of these docking results, all methods yielded a new\nstructure capable of binding to the target protein, and our\nmethod exhibited the highest docking scores and displayed\nbinding residues most similar to the reference structure. This\nprominent result underscored the reliability and therapeutic\npotential of our method for peptide generation.\nReference\nDockingMMCD SimDiff\nDiffABDocking score = 1754 RMSD = 1.76Docking score = 1726 RMSD = 2.04Docking score = 1690 \nRMSD = 2.32Docking score = 1597 \nFoldingDiff\nRMSD = 2.45Docking score = 1582\nProtDiff\nRMSD = 2.51Docking score = 1551 \nFigure 4: Docking analysis (interactive visualization be-\ntween target protein and peptides) of the reference and gen-\nerated structures by MMCD and baselines. Thick lines rep-\nresent the residues of peptides, and the thin lines show the\nbinding residues for protein-peptide complexes.\nConclusion\nIn this work, we propose a multi-modal contrastive dif-\nfusion model for the co-generation of peptide sequences\nand structures, named MMCD. MMCD is dedicated to\nleveraging a multi-modal contrastive learning strategy to\ncapture consensus-related and difference-related informa-\ntion behind the sequences/structures and therapeutic/non-\ntherapeutic peptides, enhancing the diffusion model to gen-\nerate high-quality therapeutic peptides. The experimental\nresults unequivocally demonstrate the capability of our\nmethod in co-generating peptide sequence and structure,\nsurpassing state-of-the-art baseline methods with advanta-\ngeous performance.Acknowledgments\nThis work was supported by the National Natural Sci-\nence Foundation of China (62372204, 62072206, 61772381,\n62102158); Huazhong Agricultural University Scien-\ntific & Technological Self-innovation Foundation; Fun-\ndamental Research Funds for the Central Universities\n(2662021JC008, 2662022JC004). The funders have no role\nin study design, data collection, data analysis, data interpre-\ntation, or writing of the manuscript.\nReferences\nAnand, N.; and Achim, T. 2022. Protein Structure and\nSequence Generation with Equivariant Denoising Diffusion\nProbabilistic Models. arxiv:2205.15019.\nAustin, J.; Johnson, D. D.; Ho, J.; Tarlow, D.; and van den\nBerg, R. 2021. Structured Denoising Diffusion Models in\nDiscrete State-Spaces. In Advances in Neural Information\nProcessing Systems , volume 34, 17981–17993. Curran As-\nsociates, Inc.\nCao, H.; Tan, C.; Gao, Z.; Xu, Y .; Chen, G.; Heng, P.-A.; and\nLi, S. Z. 2023. A Survey on Generative Diffusion Model.\narxiv:2209.02646.\nCapecchi, A.; Cai, X.; Personne, H.; K ¨ohler, T.; van Delden,\nC.; and Reymond, J.-L. 2021. Machine Learning Designs\nNon-Hemolytic Antimicrobial Peptides. Chem Sci , 12(26):\n9221–9232.\nChaudhury, S.; Lyskov, S.; and Gray, J. J. 2010. PyRosetta:\nA Script-Based Interface for Implementing Molecular Mod-\neling Algorithms Using Rosetta. Bioinformatics , 26(5):\n689–691.\nChen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020.\nA simple framework for contrastive learning of visual repre-\nsentations. In International conference on machine learning ,\n1597–1607. PMLR.\nFl´orez-Castillo, J. M.; Rond ´on-Villareal, P.; Ropero-Vega,\nJ. L.; Mendoza-Espinel, S. Y .; Moreno-Am ´ezquita, J. A.;\nM´endez-Jaimes, K. D.; Farf ´an-Garc ´ıa, A. E.; G ´omez-\nRangel, S. Y .; and G ´omez-Duarte, O. G. 2020. Ib-M6 An-\ntimicrobial Peptide: Antibacterial Activity against Clinical\nIsolates of Escherichia Coli and Molecular Docking. Antibi-\notics , 9(2): 79.\nGhorbani, M.; Prasad, S.; Brooks, B. R.; and Klauda, J. B.\n2022. Deep Attention Based Variational Autoencoder for\nAntimicrobial Peptide Discovery.\nHo, J.; Jain, A.; and Abbeel, P. 2020. Denoising Diffusion\nProbabilistic Models. In Advances in Neural Information\nProcessing Systems , volume 33, 6840–6851. Curran Asso-\nciates, Inc.\nHollingsworth, S. A.; and Karplus, P. A. 2010. A Fresh Look\nat the Ramachandran Plot and the Occurrence of Standard\nStructures in Proteins. 1(3-4): 271–283.\nHoogeboom, E.; Satorras, V . G.; Vignac, C.; and Welling,\nM. 2022. Equivariant Diffusion for Molecule Generation in\n3D. In Proceedings of the 39th International Conference on\nMachine Learning , 8867–8887. PMLR.Huang, Y .; Du, C.; Xue, Z.; Chen, X.; Zhao, H.; and Huang,\nL. 2021. What makes multi-modal learning better than sin-\ngle (provably). Advances in Neural Information Processing\nSystems , 34: 10944–10956.\nIqbal, T.; and Qureshi, S. 2022. The Survey: Text Generation\nModels in Deep Learning. J King Saud Univ-com , 34(6, Part\nA): 2515–2528.\nJakubczyk, A.; Kara ´s, M.; Rybczy ´nska-Tkaczyk, K.;\nZieli ´nska, E.; and Zieli ´nski, D. 2020. Current Trends of\nBioactive Peptides—New Sources and Therapeutic Effect.\nFoods , 9(7): 846.\nLee, E. Y .; Lee, M. W.; Fulan, B. M.; Ferguson, A. L.; and\nWong, G. C. L. 2017. What Can Machine Learning Do\nfor Antimicrobial Peptides, and What Can Antimicrobial\nPeptides Do for Machine Learning? Interface Focus , 7(6):\n20160153.\nLee, E. Y .; Wong, G. C. L.; and Ferguson, A. L.\n2018. Machine Learning-Enabled Discovery and Design of\nMembrane-Active Peptides. Bioorgan Med Chem , 26(10):\n2708–2718.\nLi, Y .; Orlando, B. J.; and Liao, M. 2019. Structural Basis of\nLipopolysaccharide Extraction by the LptB2FGC Complex.\nNature , 567(7749): 486–490.\nLin, E.; Lin, C.-H.; and Lane, H.-Y . 2022. De novo peptide\nand protein design using generative adversarial networks:\nan update. Journal of Chemical Information and Modeling ,\n62(4): 761–774.\nLiu, S.; Zhu, Y .; Lu, J.; Xu, Z.; Nie, W.; Gitter, A.; Xiao, C.;\nTang, J.; Guo, H.; and Anandkumar, A. 2023. A Text-guided\nProtein Design Framework. arxiv:2302.04611.\nLiu, V .; and Chilton, L. B. 2022. Design Guidelines for\nPrompt Engineering Text-to-Image Generative Models. In\nProceedings of the 2022 CHI Conference on Human Fac-\ntors in Computing Systems , CHI ’22, 1–23. New York, NY ,\nUSA: Association for Computing Machinery. ISBN 978-1-\n4503-9157-3.\nLuo, S.; Su, Y .; Peng, X.; Wang, S.; Peng, J.; and Ma, J.\n2022. Antigen-Specific Antibody Design and Optimization\nwith Diffusion-Based Generative Models for Protein Struc-\ntures.\nMiller, E. B.; Murphy, R. B.; Sindhikara, D.; Borrelli, K. W.;\nGrisewood, M. J.; Ranalli, F.; Dixon, S. L.; Jerome, S.;\nBoyles, N. A.; Day, T.; Ghanakota, P.; Mondal, S.; Rafi,\nS. B.; Troast, D. M.; Abel, R.; and Friesner, R. A. 2021.\nReliable and Accurate Solution to the Induced Fit Docking\nProblem for Protein–Ligand Binding. J Chem Theory Com-\nput, 17(4): 2630–2639.\nM¨uller, A. T.; Gabernet, G.; Hiss, J. A.; and Schneider, G.\n2017. modlAMP: Python for Antimicrobial Peptides. Bioin-\nformatics , 33(17): 2753–2755.\nM¨uller, A. T.; Hiss, J. A.; and Schneider, G. 2018. Recurrent\nNeural Network Model for Constructive Peptide Design. J\nChem Inf Model , 58(2): 472–479.\nMuttenthaler, M.; King, G. F.; Adams, D. J.; and Alewood,\nP. F. 2021. Trends in peptide drug discovery. Nature reviews\nDrug discovery , 20(4): 309–325.Oort, C. M. V .; Ferrell, J. B.; Remington, J. M.; Wshah, S.;\nand Li, J. 2021. AMPGAN v2: Machine Learning Guided\nDesign of Antimicrobial Peptides.\nSatorras, V . G.; Hoogeboom, E.; and Welling, M. 2021. E(n)\nEquivariant Graph Neural Networks. In Proceedings of the\n38th International Conference on Machine Learning , 9323–\n9332. PMLR.\nShi, C.; Wang, C.; Lu, J.; Zhong, B.; and Tang, J. 2023.\nProtein Sequence and Structure Co-Design with Equivariant\nTranslation. arxiv:2210.08761.\nSohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; and\nGanguli, S. 2015. Deep Unsupervised Learning Using\nNonequilibrium Thermodynamics. In Proceedings of the\n32nd International Conference on Machine Learning , 2256–\n2265. PMLR.\nSong, Y .; and Ermon, S. 2019. Generative Modeling by Es-\ntimating Gradients of the Data Distribution. In Advances in\nNeural Information Processing Systems , volume 32. Curran\nAssociates, Inc.\nSzymczak, P.; Mo ˙zejko, M.; Grzegorzek, T.; Jurczak, R.;\nBauer, M.; Neubauer, D.; Sikora, K.; Michalski, M.; Sroka,\nJ.; Setny, P.; Kamysz, W.; and Szczurek, E. 2023a. Discov-\nering Highly Potent Antimicrobial Peptides with Deep Gen-\nerative Model HydrAMP. Nat Commun , 14(1): 1453.\nSzymczak, P.; Mo ˙zejko, M.; Grzegorzek, T.; Jurczak, R.;\nBauer, M.; Neubauer, D.; Sikora, K.; Michalski, M.; Sroka,\nJ.; Setny, P.; et al. 2023b. Discovering highly potent an-\ntimicrobial peptides with deep generative model HydrAMP.\nNature Communications , 14(1): 1453.\nThi Phan, L.; Woo Park, H.; Pitti, T.; Madhavan, T.; Jeon,\nY .-J.; and Manavalan, B. 2022. MLACP 2.0: An Updated\nMachine Learning Tool for Anticancer Peptide Prediction.\nComput Struct Biotec , 20: 4473–4480.\nTimmons, P. B.; and Hewage, C. M. 2021. APPTEST Is\na Novel Protocol for the Automatic Prediction of Peptide\nTertiary Structures. Brief Bioinform , 22(6): bbab308.\nTrippe, B. L.; Yim, J.; Tischer, D.; Baker, D.; Broderick,\nT.; Barzilay, R.; and Jaakkola, T. 2023. Diffusion Proba-\nbilistic Modeling of Protein Backbones in 3D for the Motif-\nScaffolding Problem. arxiv:2206.04119.\nTucs, A.; Tran, D. P.; Yumoto, A.; Ito, Y .; Uzawa, T.; and\nTsuda, K. 2020. Generating Ampicillin-Level Antimicrobial\nPeptides with Activity-Aware Generative Adversarial Net-\nworks. ACS Omega , 5(36): 22847–22851.\nVan der Maaten, L.; and Hinton, G. 2008. Visualizing data\nusing t-SNE. Journal of machine learning research , 9(11).\nVignac, C.; Krawczuk, I.; Siraudin, A.; Wang, B.; Cevher,\nV .; and Frossard, P. 2023. DiGress: Discrete Denoising Dif-\nfusion for Graph Generation. arxiv:2209.14734.\nWan, F.; Kontogiorgos, H. D.; and Fuente, d. l. N. C. 2022.\nDeep Generative Models for Peptide Design. Digital Dis-\ncovery , 1(3): 195–208.\nWebster, R.; Rabin, J.; Simon, L.; and Jurie, F. 2019. Detect-\ning overfitting of deep generative networks via latent recov-\nery. In Proceedings of the IEEE/CVF Conference on Com-\nputer Vision and Pattern Recognition , 11273–11282.Wu, K. E.; Yang, K. K.; van den Berg, R.; Zou, J. Y .; Lu,\nA. X.; and Amini, A. P. 2022. Protein Structure Generation\nvia Folding Diffusion. arxiv:2209.15611.\nWu, X.; Luu, A. T.; and Dong, X. 2022. Mitigating Data\nSparsity for Short Text Topic Modeling by Topic-Semantic\nContrastive Learning. In Proceedings of the 2022 Confer-\nence on Empirical Methods in Natural Language Process-\ning, 2748–2760.\nWu, Z.; Johnston, K. E.; Arnold, F. H.; and Yang, K. K.\n2021. Protein sequence design with deep generative mod-\nels.Current Opinion in Chemical Biology , 65: 18–27.\nXia, C.; Feng, S.-H.; Xia, Y .; Pan, X.; and Shen, H.-B. 2022.\nFast protein structure comparison through effective repre-\nsentation learning with contrastive graph neural networks.\nPLoS computational biology , 18(3): e1009986.\nYadav, N. S.; Kumar, P.; and Singh, I. 2022. Structural and\nfunctional analysis of protein. In Bioinformatics , 189–206.\nElsevier.\nYang, L.; Yang, G.; Bing, Z.; Tian, Y .; Huang, L.; Niu, Y .;\nand Yang, L. 2022. Accelerating the Discovery of Anti-\ncancer Peptides Targeting Lung and Breast Cancers with the\nWasserstein Autoencoder Model and PSO Algorithm. Brief\nBioinform , 23(5): bbac320.\nYang, L.; Zhang, Z.; Song, Y .; Hong, S.; Xu, R.; Zhao, Y .;\nZhang, W.; Cui, B.; and Yang, M.-H. 2023. Diffusion Mod-\nels: A Comprehensive Survey of Methods and Applications.\narxiv:2209.00796.\nYuan, X.; Lin, Z.; Kuen, J.; Zhang, J.; Wang, Y .; Maire, M.;\nKale, A.; and Faieta, B. 2021. Multimodal contrastive train-\ning for visual representation learning. In Proceedings of\nthe IEEE/CVF Conference on Computer Vision and Pattern\nRecognition , 6995–7004.\nZhang, H.; Saravanan, K. M.; Wei, Y .; Jiao, Y .; Yang,\nY .; Pan, Y .; Wu, X.; and Zhang, J. Z. H. 2023a. Deep\nLearning-Based Bioactive Therapeutic Peptide Generation\nand Screening. J Chem Inf Model , 63(3): 835–845.\nZhang, Z.; Xu, M.; Lozano, A.; Chenthamarakshan, V .; Das,\nP.; and Tang, J. 2023b. Pre-Training Protein Encoder via\nSiamese Sequence-Structure Diffusion Trajectory Predic-\ntion. arxiv:2301.12068.\nZhang, Z.; Zhao, Y .; Chen, M.; and He, X. 2022. Label An-\nchored Contrastive Learning for Language Understanding.\nInProceedings of the 2022 Conference of the North Ameri-\ncan Chapter of the Association for Computational Linguis-\ntics: Human Language Technologies , 1437–1449.\nZheng, M.; Wang, F.; You, S.; Qian, C.; Zhang, C.; Wang,\nX.; and Xu, C. 2021. Weakly Supervised Contrastive Learn-\ning. In Proceedings of the IEEE/CVF International Confer-\nence on Computer Vision , 10042–10051.\nZhu, Y .; Wu, Y .; Olszewski, K.; Ren, J.; Tulyakov, S.;\nand Yan, Y . 2022. Discrete contrastive diffusion for\ncross-modal and conditional generation. arXiv preprint\narXiv:2206.07771 .\nZhu, Y .; Wu, Y .; Olszewski, K.; Ren, J.; Tulyakov, S.; and\nYan, Y . 2023. Discrete Contrastive Diffusion for Cross-\nModal Music and Image Generation. arxiv:2206.07771." }, { "content": "Inception-v4, Inception-ResNet and\nthe Impact of Residual Connections on Learning\nChristian Szegedy\nGoogle Inc.\n1600 Amphitheatre Pkwy, Mountain View, CA\nszegedy@google.comSergey Ioffe\nsioffe@google.comVincent Vanhoucke\nvanhoucke@google.com\nAlex Alemi\nalemi@google.com\nAbstract\nVery deep convolutional networks have been central to\nthe largest advances in image recognition performance in\nrecent years. One example is the Inception architecture that\nhas been shown to achieve very good performance at rel-\natively low computational cost. Recently, the introduction\nof residual connections in conjunction with a more tradi-\ntional architecture has yielded state-of-the-art performance\nin the 2015 ILSVRC challenge; its performance was similar\nto the latest generation Inception-v3 network. This raises\nthe question of whether there are any benefit in combining\nthe Inception architecture with residual connections. Here\nwe give clear empirical evidence that training with residual\nconnections accelerates the training of Inception networks\nsignificantly. There is also some evidence of residual Incep-\ntion networks outperforming similarly expensive Inception\nnetworks without residual connections by a thin margin. We\nalso present several new streamlined architectures for both\nresidual and non-residual Inception networks. These varia-\ntions improve the single-frame recognition performance on\nthe ILSVRC 2012 classification task significantly. We fur-\nther demonstrate how proper activation scaling stabilizes\nthe training of very wide residual Inception networks. With\nan ensemble of three residual and one Inception-v4, we\nachieve 3.08% top-5 error on the test set of the ImageNet\nclassification (CLS) challenge.\n1. Introduction\nSince the 2012 ImageNet competition [11] winning en-\ntry by Krizhevsky et al [8], their network “AlexNet” has\nbeen successfully applied to a larger variety of computer\nvision tasks, for example to object-detection [4], segmen-\ntation [10], human pose estimation [17], video classifica-tion [7], object tracking [18], and superresolution [3]. These\nexamples are but a few of all the applications to which deep\nconvolutional networks have been very successfully applied\never since.\nIn this work we study the combination of the two most\nrecent ideas: Residual connections introduced by He et al.\nin [5] and the latest revised version of the Inception archi-\ntecture [15]. In [5], it is argued that residual connections are\nof inherent importance for training very deep architectures.\nSince Inception networks tend to be very deep, it is natu-\nral to replace the filter concatenation stage of the Inception\narchitecture with residual connections. This would allow\nInception to reap all the benefits of the residual approach\nwhile retaining its computational efficiency.\nBesides a straightforward integration, we have also stud-\nied whether Inception itself can be made more efficient by\nmaking it deeper and wider. For that purpose, we designed\na new version named Inception-v4 which has a more uni-\nform simplified architecture and more inception modules\nthan Inception-v3. Historically, Inception-v3 had inherited\na lot of the baggage of the earlier incarnations. The techni-\ncal constraints chiefly came from the need for partitioning\nthe model for distributed training using DistBelief [2]. Now,\nafter migrating our training setup to TensorFlow [1] these\nconstraints have been lifted, which allowed us to simplify\nthe architecture significantly. The details of that simplified\narchitecture are described in Section 3.\nIn this report, we will compare the two pure Inception\nvariants, Inception-v3 and v4, with similarly expensive hy-\nbrid Inception-ResNet versions. Admittedly, those mod-\nels were picked in a somewhat ad hoc manner with the\nmain constraint being that the parameters and computa-\ntional complexity of the models should be somewhat similar\nto the cost of the non-residual models. In fact we have tested\nbigger and wider Inception-ResNet variants and they per-\nformed very similarly on the ImageNet classification chal-\n1arXiv:1602.07261v2 [cs.CV] 23 Aug 2016lenge [11] dataset.\nThe last experiment reported here is an evaluation of an\nensemble of all the best performing models presented here.\nAs it was apparent that both Inception-v4 and Inception-\nResNet-v2 performed similarly well, exceeding state-of-\nthe art single frame performance on the ImageNet valida-\ntion dataset, we wanted to see how a combination of those\npushes the state of the art on this well studied dataset. Sur-\nprisingly, we found that gains on the single-frame perfor-\nmance do not translate into similarly large gains on ensem-\nbled performance. Nonetheless, it still allows us to report\n3.1% top-5 error on the validation set with four models en-\nsembled setting a new state of the art, to our best knowl-\nedge.\nIn the last section, we study some of the classification\nfailures and conclude that the ensemble still has not reached\nthe label noise of the annotations on this dataset and there\nis still room for improvement for the predictions.\n2. Related Work\nConvolutional networks have become popular in large\nscale image recognition tasks after Krizhevsky et al. [8].\nSome of the next important milestones were Network-in-\nnetwork [9] by Lin et al., VGGNet [12] by Simonyan et al.\nand GoogLeNet (Inception-v1) [14] by Szegedy et al.\nResidual connection were introduced by He et al. in [5]\nin which they give convincing theoretical and practical ev-\nidence for the advantages of utilizing additive merging of\nsignals both for image recognition, and especially for object\ndetection. The authors argue that residual connections are\ninherently necessary for training very deep convolutional\nmodels. Our findings do not seem to support this view, at\nleast for image recognition. However it might require more\nmeasurement points with deeper architectures to understand\nthe true extent of beneficial aspects offered by residual con-\nnections. In the experimental section we demonstrate that\nit is not very difficult to train competitive very deep net-\nworks without utilizing residual connections. However the\nuse of residual connections seems to improve the training\nspeed greatly, which is alone a great argument for their use.\nThe Inception deep convolutional architecture was intro-\nduced in [14] and was called GoogLeNet or Inception-v1 in\nour exposition. Later the Inception architecture was refined\nin various ways, first by the introduction of batch normaliza-\ntion [6] (Inception-v2) by Ioffe et al. Later the architecture\nwas improved by additional factorization ideas in the third\niteration [15] which will be referred to as Inception-v3 in\nthis report.\nConv +Relu activation \nRelu activation Conv Figure 1. Residual connections as introduced in He et al. [5].\nConv +Relu activation \nRelu activation 1x1 Conv \nFigure 2. Optimized version of ResNet connections by [5] to shield\ncomputation.\n3. Architectural Choices\n3.1. Pure Inception blocks\nOur older Inception models used to be trained in a par-\ntitioned manner, where each replica was partitioned into a\nmultiple sub-networks in order to be able to fit the whole\nmodel in memory. However, the Inception architecture is\nhighly tunable, meaning that there are a lot of possible\nchanges to the number of filters in the various layers that\ndo not affect the quality of the fully trained network. In\norder to optimize the training speed, we used to tune the\nlayer sizes carefully in order to balance the computation be-\ntween the various model sub-networks. In contrast, with the\nintroduction of TensorFlow our most recent models can be\ntrained without partitioning the replicas. This is enabled in\npart by recent optimizations of memory used by backprop-\nagation, achieved by carefully considering what tensors are\nneeded for gradient computation and structuring the compu-tation to reduce the number of such tensors. Historically, we\nhave been relatively conservative about changing the archi-\ntectural choices and restricted our experiments to varying\nisolated network components while keeping the rest of the\nnetwork stable. Not simplifying earlier choices resulted in\nnetworks that looked more complicated that they needed to\nbe. In our newer experiments, for Inception-v4 we decided\nto shed this unnecessary baggage and made uniform choices\nfor the Inception blocks for each grid size. Plase refer to\nFigure 9 for the large scale structure of the Inception-v4 net-\nwork and Figures 3, 4, 5, 6, 7 and 8 for the detailed struc-\nture of its components. All the convolutions not marked\nwith “V” in the figures are same-padded meaning that their\noutput grid matches the size of their input. Convolutions\nmarked with “V” are valid padded, meaning that input patch\nof each unit is fully contained in the previous layer and the\ngrid size of the output activation map is reduced accord-\ningly.\n3.2. Residual Inception Blocks\nFor the residual versions of the Inception networks, we\nuse cheaper Inception blocks than the original Inception.\nEach Inception block is followed by filter-expansion layer\n(1\u00021convolution without activation) which is used for\nscaling up the dimensionality of the filter bank before the\naddition to match the depth of the input. This is needed to\ncompensate for the dimensionality reduction induced by the\nInception block.\nWe tried several versions of the residual version of In-\nception. Only two of them are detailed here. The first\none “Inception-ResNet-v1” roughly the computational cost\nof Inception-v3, while “Inception-ResNet-v2” matches the\nraw cost of the newly introduced Inception-v4 network. See\nFigure 15 for the large scale structure of both varianets.\n(However, the step time of Inception-v4 proved to be signif-\nicantly slower in practice, probably due to the larger number\nof layers.)\nAnother small technical difference between our resid-\nual and non-residual Inception variants is that in the case\nof Inception-ResNet, we used batch-normalization only on\ntop of the traditional layers, but not on top of the summa-\ntions. It is reasonable to expect that a thorough use of batch-\nnormalization should be advantageous, but we wanted to\nkeep each model replica trainable on a single GPU. It turned\nout that the memory footprint of layers with large activa-\ntion size was consuming disproportionate amount of GPU-\nmemory. By omitting the batch-normalization on top of\nthose layers, we were able to increase the overall number\nof Inception blocks substantially. We hope that with bet-\nter utilization of computing resources, making this trade-off\nwill become unecessary.\n3x3 Conv \n(32 stride 2 V )\nInput \n(299x299x3) 3x3 Conv \n(32 V) 3x3 Conv \n(64)3x3 MaxPool \n(stride 2 V) 3x3 Conv \n(96 stride 2 V) Filter concat 1x1 Conv \n(64)3x3 Conv \n(96 V) \n1x1 Conv \n(64)7x1 Conv \n(64)1x7 Conv \n(64)Filter concat \n3x3 Conv \n(96 V) MaxPool \n(stride=2 V) 3x3 Conv \n(192 V) Filter concat \n299x299x3 149x149x32 147x147x32 147x147x64 73x73x160 71x71x192 35x35x384 Figure 3. The schema for stem of the pure Inception-v4 and\nInception-ResNet-v2 networks. This is the input part of those net-\nworks. Cf. Figures 9 and 151x1 Conv \n(96)1x1 Conv \n(64)1x1 Conv \n(64)3x3 Conv \n(96)3x3 Conv \n(96)3x3 Conv \n(96)Filter concat \nFilter concat Avg Pooling 1x1 Conv \n(96)Figure 4. The schema for 35\u000235grid modules of the pure\nInception-v4 network. This is the Inception-A block of Figure 9.\n1x1 Conv \n(384) \n1x1 Conv \n(192) \n1x1 Conv \n(192) 1x7 Conv \n(224) \n1x7 Conv \n(192) 7x1 Conv \n(224) Filter concat \nFilter concat Avg Pooling 1x1 Conv \n(128) 1x7 Conv \n(256) 1x7 Conv \n(224) 7x1 Conv \n(256) \nFigure 5. The schema for 17\u000217grid modules of the pure\nInception-v4 network. This is the Inception-B block of Figure 9.\n1x1 Conv \n(256) \n1x1 Conv \n(384) \n1x1 Conv \n(384) 3x1 Conv \n(256) \n1x3 Conv \n(448) 3x1 Conv \n(512) Filter concat \nFilter concat Avg Pooling 1x1 Conv \n(256) \n1x3 Conv \n(256) 1x3 Conv \n(256) 3x1 Conv \n(256) \nFigure 6. The schema for 8\u00028grid modules of the pure Inception-\nv4 network. This is the Inception-C block of Figure 9.\n1x1 Conv \n(k)3x3 Conv \n(n stride 2 V) 3x3 Conv \n(l)3x3 Conv \n(m stride 2 V) Filter concat \nFilter concat 3x3 MaxPool \n(stride 2 V) Figure 7. The schema for 35\u000235to17\u000217reduction module.\nDifferent variants of this blocks (with various number of filters)\nare used in Figure 9, and 15 in each of the new Inception(-v4, -\nResNet-v1, -ResNet-v2) variants presented in this paper. The k,l,\nm,nnumbers represent filter bank sizes which can be looked up\nin Table 1.\n1x1 Conv \n(256) 1x1 Conv \n(192) 1x7 Conv \n(256) 3x3 Conv \n(320 stride 2 V) Filter concat \nFilter concat 3x3 MaxPool \n(stride 2 V) 3x3 Conv \n(192 stride 2 V) \n7x1 Conv \n(320) \nFigure 8. The schema for 17\u000217to8\u00028grid-reduction mod-\nule. This is the reduction module used by the pure Inception-v4\nnetwork in Figure 9.Stem \nInput (299x299x3) 299x299x3 4 x Inception-A \nOutput: 35x35x384 Output: 35x35x384 Reduction-A Output: 17x17x1024 7 x Inception-B 3 x Inception-C \nReduction-B Avarage Pooling Dropout (keep 0.8) \nOutput: 17x17x1024 Output: 8x8x1536 Output: 8x8x1536 Output: 1536 Softmax \nOutput: 1536 Output: 1000 Figure 9. The overall schema of the Inception-v4 network. For the\ndetailed modules, please refer to Figures 3, 4, 5, 6, 7 and 8 for the\ndetailed structure of the various components.\n1x1 Conv \n(32)\n1x1 Conv \n(32)1x1 Conv \n(32)3x3 Conv \n(32)3x3 Conv \n(32)3x3 Conv \n(32)1x1 Conv \n(256 Linear) +Relu activation \nRelu activation \nFigure 10. The schema for 35\u000235grid (Inception-ResNet-A)\nmodule of Inception-ResNet-v1 network.\n1x1 Conv \n(128) \n1x1 Conv \n(128) 1x7 Conv \n(128) 7x1 Conv \n(128) 1x1 Conv \n(896 Linear) +Relu activation \nRelu activation Figure 11. The schema for 17\u000217grid (Inception-ResNet-B)\nmodule of Inception-ResNet-v1 network.\n1x1 Conv \n(256) 3x3 Conv \n(256 stride 2 V) Filter concat \nPrevious \nLayer 3x3 MaxPool \n(stride 2 V) 3x3 Conv \n(384 stride 2 V) \n3x3 Conv \n(256) \n1x1 Conv \n(256) 3x3 Conv \n(256 stride 2 V) \n1x1 Conv \n(256) \nFigure 12. “Reduction-B” 17\u000217to8\u00028grid-reduction module.\nThis module used by the smaller Inception-ResNet-v1 network in\nFigure 15.1x1 Conv \n(192) \n1x1 Conv \n(192) 1x3 Conv \n(192) 3x1 Conv \n(192) 1x1 Conv \n(1792 Linear) +Relu activation \nRelu activation Figure 13. The schema for 8\u00028grid (Inception-ResNet-C) module\nof Inception-ResNet-v1 network.\n3x3 Conv \n(32 stride 2 V )\nInput \n(299x299x3) 3x3 Conv \n(32 V) 3x3 Conv \n(64)3x3 MaxPool \n(stride 2 V) 1x1 Conv \n(80)\n299x299x3 149x149x32 147x147x32 147x147x64 73x73x64 73x73x80 3x3 Conv \n(192 V) 71x71x192 3x3 Conv \n(256 stride 2 V) 35x35x256 \nFigure 14. The stem of the Inception-ResNet-v1 network.Stem \nInput (299x299x3) 299x299x3 5 x Inception-resnet-A \nOutput: 35x35x256 Output: 35x35x256 Reduction-A Output: 17x17x896 10 x\nInception-resnet-B 5 x Inception-resnet-C \nReduction-B Average Pooling Dropout (keep 0.8) \nOutput: 17x17x896 Output: 8x8x1792 Output: 8x8x1792 Output: 1792 Softmax \nOutput: 1792 Output: 1000 Figure 15. Schema for Inception-ResNet-v1 and Inception-\nResNet-v2 networks. This schema applies to both networks but\nthe underlying components differ. Inception-ResNet-v1 uses the\nblocks as described in Figures 14, 10, 7, 11, 12 and 13. Inception-\nResNet-v2 uses the blocks as described in Figures 3, 16, 7,17, 18\nand 19. The output sizes in the diagram refer to the activation\nvector tensor shapes of Inception-ResNet-v1.1x1 Conv \n(32)\n1x1 Conv \n(32)1x1 Conv \n(32)3x3 Conv \n(32)3x3 Conv \n(48)3x3 Conv \n(64)1x1 Conv \n(384 Linear) +Relu activation \nRelu activation Figure 16. The schema for 35\u000235grid (Inception-ResNet-A)\nmodule of the Inception-ResNet-v2 network.\n1x1 Conv \n(192) \n1x1 Conv \n(128) 1x7 Conv \n(160) 7x1 Conv \n(192) 1x1 Conv \n(1154 Linear) +Relu activation \nRelu activation \nFigure 17. The schema for 17\u000217grid (Inception-ResNet-B)\nmodule of the Inception-ResNet-v2 network.\n1x1 Conv \n(256) 3x3 Conv \n(320 stride 2 V) Filter concat \nPrevious \nLayer 3x3 MaxPool \n(stride 2 V) 3x3 Conv \n(384 stride 2 V) \n3x3 Conv \n(288) \n1x1 Conv \n(256) 3x3 Conv \n(288 stride 2 V) \n1x1 Conv \n(256) Figure 18. The schema for 17\u000217to8\u00028grid-reduction mod-\nule. Reduction-B module used by the wider Inception-ResNet-v1\nnetwork in Figure 15.\n1x1 Conv \n(192) \n1x1 Conv \n(192) 1x3 Conv \n(224) 3x1 Conv \n(256) 1x1 Conv \n(2048 Linear) +Relu activation \nRelu activation \nFigure 19. The schema for 8\u00028grid (Inception-ResNet-C) module\nof the Inception-ResNet-v2 network.\nNetwork k l m n\nInception-v4 192 224 256 384\nInception-ResNet-v1 192 192 256 384\nInception-ResNet-v2 256 256 384 384\nTable 1. The number of filters of the Reduction-A module for the\nthree Inception variants presented in this paper. The four numbers\nin the colums of the paper parametrize the four convolutions of\nFigure 7Activation \nScaling +Relu activation \nRelu activation Inception Figure 20. The general schema for scaling combined Inception-\nresnet moduels. We expect that the same idea is useful in the gen-\neral resnet case, where instead of the Inception block an arbitrary\nsubnetwork is used. The scaling block just scales the last linear\nactivations by a suitable constant, typically around 0.1.\n3.3. Scaling of the Residuals\nAlso we found that if the number of filters exceeded\n1000, the residual variants started to exhibit instabilities and\nthe network has just “died” early in the training, meaning\nthat the last layer before the average pooling started to pro-\nduce only zeros after a few tens of thousands of iterations.\nThis could not be prevented, neither by lowering the learn-\ning rate, nor by adding an extra batch-normalization to this\nlayer.\nWe found that scaling down the residuals before adding\nthem to the previous layer activation seemed to stabilize the\ntraining. In general we picked some scaling factors between\n0.1 and 0.3 to scale the residuals before their being added to\nthe accumulated layer activations (cf. Figure 20).\nA similar instability was observed by He et al. in [5] in\nthe case of very deep residual networks and they suggested a\ntwo-phase training where the first “warm-up” phase is done\nwith very low learning rate, followed by a second phase\nwith high learning rata. We found that if the number of\nfilters is very high, then even a very low (0.00001) learning\nrate is not sufficient to cope with the instabilities and the\ntraining with high learning rate had a chance to destroy its\neffects. We found it much more reliable to just scale the\nresiduals.\nEven where the scaling was not strictly necessary, it\nnever seemed to harm the final accuracy, but it helped to\nstabilize the training.\n4. Training Methodology\nWe have trained our networks with stochastic gradient\nutilizing the TensorFlow [1] distributed machine learning\nsystem using 20replicas running each on a NVidia Kepler\nGPU. Our earlier experiments used momentum [13] with a\ndecay of 0:9, while our best models were achieved using\n20 40 60 80 100 120 140 160 180 200\nEpoch151617181920212223242526272829Error (top-1) %\ninception-v3\ninception-resnet-v1Figure 21. Top-1 error evolution during training of pure Inception-\nv3 vs a residual network of similar computational cost. The eval-\nuation is measured on a single crop on the non-blacklist images of\nthe ILSVRC-2012 validation set. The residual model was train-\ning much faster, but reached slightly worse final accuracy than the\ntraditional Inception-v3.\nRMSProp [16] with decay of 0:9and\u000f= 1:0. We used a\nlearning rate of 0:045, decayed every two epochs using an\nexponential rate of 0:94. Model evaluations are performed\nusing a running average of the parameters computed over\ntime.\n5. Experimental Results\nFirst we observe the top-1 and top-5 validation-error evo-\nlution of the four variants during training. After the exper-\niment was conducted, we have found that our continuous\nevaluation was conducted on a subset of the validation set\nwhich omitted about 1700 blacklisted entities due to poor\nbounding boxes. It turned out that the omission should\nhave been only performed for the CLSLOC benchmark, but\nyields somewhat incomparable (more optimistic) numbers\nwhen compared to other reports including some earlier re-\nports by our team. The difference is about 0.3% for top- 1\nerror and about 0.15% for the top- 5error. However, since\nthe differences are consistent, we think the comparison be-\ntween the curves is a fair one.\nOn the other hand, we have rerun our multi-crop and en-\nsemble results on the complete validation set consisting of\n50000 images. Also the final ensemble result was also per-\nformed on the test set and sent to the ILSVRC test server\nfor validation to verify that our tuning did not result in an\nover-fitting. We would like to stress that this final validation\nwas done only once and we have submitted our results only\ntwice in the last year: once for the BN-Inception paper and\nlater during the ILSVR-2015 CLSLOC competition, so we\nbelieve that the test set numbers constitute a true estimate\nof the generalization capabilities of our model.\nFinally, we present some comparisons, between various\nversions of Inception and Inception-ResNet. The models\nInception-v3 and Inception-v4 are deep convolutional net-20 40 60 80 100 120 140 160 180 200\nEpoch3.03.54.04.55.05.56.06.57.07.58.08.59.09.5Error (top-5) %\ninception-v3\ninception-resnet-v1Figure 22. Top-5 error evolution during training of pure Inception-\nv3 vs a residual Inception of similar computational cost. The eval-\nuation is measured on a single crop on the non-blacklist images of\nthe ILSVRC-2012 validation set. The residual version has trained\nmuch faster and reached slightly better final recall on the valida-\ntion set.\n20 40 60 80 100 120 140 160\nEpoch1516171819202122232425262728293031323334Error (top-1) %\ninception-v4\ninception-resnet-v2\nFigure 23. Top-1 error evolution during training of pure Inception-\nv3 vs a residual Inception of similar computational cost. The eval-\nuation is measured on a single crop on the non-blacklist images of\nthe ILSVRC-2012 validation set. The residual version was train-\ning much faster and reached slightly better final accuracy than the\ntraditional Inception-v4.\nNetwork Top-1 Error Top-5 Error\nBN-Inception [6] 25.2% 7.8%\nInception-v3 [15] 21.2% 5.6%\nInception-ResNet-v1 21.3% 5.5%\nInception-v4 20.0% 5.0%\nInception-ResNet-v2 19.9% 4.9%\nTable 2. Single crop - single model experimental results. Reported\non the non-blacklisted subset of the validation set of ILSVRC\n2012.\nworks not utilizing residual connections while Inception-\nResNet-v1 and Inception-ResNet-v2 are Inception style net-\nworks that utilize residual connections instead of filter con-\ncatenation.\nTable 2 shows the single-model, single crop top-1 and\ntop-5 error of the various architectures on the validation set.\n20 40 60 80 100 120 140 160\nEpoch3456789Error (top-5) %\ninception-v4\ninception-resnet-v2Figure 24. Top-5 error evolution during training of pure Inception-\nv4 vs a residual Inception of similar computational cost. The eval-\nuation is measured on a single crop on the non-blacklist images\nof the ILSVRC-2012 validation set. The residual version trained\nfaster and reached slightly better final recall on the validation set.\n20 40 60 80 100 120 140 160\nEpoch2.53.03.54.04.55.05.56.06.57.07.58.08.59.09.5Error (top-5) %\ninception-v4\ninception-resnet-v2\ninception-v3\ninception-resnet-v1\nFigure 25. Top-5 error evolution of all four models (single model,\nsingle crop). Showing the improvement due to larger model size.\nAlthough the residual version converges faster, the final accuracy\nseems to mainly depend on the model size.\n20 40 60 80 100 120 140 160\nEpoch181920212223242526272829Error (top-1) %\ninception-v4\ninception-resnet-v2\ninception-v3\ninception-resnet-v1\nFigure 26. Top-1 error evolution of all four models (single model,\nsingle crop). This paints a similar picture as the top-5 evaluation.\nTable 3 shows the performance of the various models\nwith a small number of crops: 10 crops for ResNet as was\nreported in [5]), for the Inception variants, we have used the\n12 crops evaluation as as described in [14].Network Crops Top-1 Error Top-5 Error\nResNet-151 [5] 10 21.4% 5.7%\nInception-v3 [15] 12 19.8% 4.6%\nInception-ResNet-v1 12 19.8% 4.6%\nInception-v4 12 18.7% 4.2%\nInception-ResNet-v2 12 18.7% 4.1%\nTable 3. 10/12 crops evaluations - single model experimental re-\nsults. Reported on the all 50000 images of the validation set of\nILSVRC 2012.\nNetwork Crops Top-1 Error Top-5 Error\nResNet-151 [5] dense 19.4% 4.5%\nInception-v3 [15] 144 18.9% 4.3%\nInception-ResNet-v1 144 18.8% 4.3%\nInception-v4 144 17.7% 3.8%\nInception-ResNet-v2 144 17.8% 3.7%\nTable 4. 144 crops evaluations - single model experimental results.\nReported on the all 50000 images of the validation set of ILSVRC\n2012.\nNetwork Models Top-1 Error Top-5 Error\nResNet-151 [5] 6 – 3.6%\nInception-v3 [15] 4 17.3% 3.6%\nInception-v4 +\n3\u0002Inception-ResNet-v24 16.5% 3.1%\nTable 5. Ensemble results with 144 crops/dense evaluation. Re-\nported on the all 50000 images of the validation set of ILSVRC\n2012. For Inception-v4(+Residual), the ensemble consists of one\npure Inception-v4 and three Inception-ResNet-v2 models and were\nevaluated both on the validation and on the test-set. The test-set\nperformance was 3:08% top-5 error verifying that we don’t over-\nfit on the validation set.\nTable 4 shows the single model performance of the var-\nious models using. For residual network the dense evalua-\ntion result is reported from [5]. For the inception networks,\nthe 144 crops strategy was used as described in [14].\nTable 5 compares ensemble results. For the pure resid-\nual network the 6 models dense evaluation result is reported\nfrom [5]. For the inception networks 4 models were ensem-\nbled using the 144 crops strategy as described in [14].\n6. Conclusions\nWe have presented three new network architectures in\ndetail:\n\u000fInception-ResNet-v1: a hybrid Inception version that\nhas a similar computational cost to Inception-v3\nfrom [15].\n\u000fInception-ResNet-v2: a costlier hybrid Inception ver-\nsion with significantly improved recognition perfor-\nmance.\u000fInception-v4: a pure Inception variant without residual\nconnections with roughly the same recognition perfor-\nmance as Inception-ResNet-v2.\nWe studied how the introduction of residual connections\nleads to dramatically improved training speed for the Incep-\ntion architecture. Also our latest models (with and without\nresidual connections) outperform all our previous networks,\njust by virtue of the increased model size.\nReferences\n[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen,\nC. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghe-\nmawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y . Jia,\nR. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Man ´e,\nR. Monga, S. Moore, D. Murray, C. Olah, M. Schuster,\nJ. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker,\nV . Vanhoucke, V . Vasudevan, F. Vi ´egas, O. Vinyals, P. War-\nden, M. Wattenberg, M. Wicke, Y . Yu, and X. Zheng. Tensor-\nFlow: Large-scale machine learning on heterogeneous sys-\ntems, 2015. Software available from tensorflow.org.\n[2] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao,\nA. Senior, P. Tucker, K. Yang, Q. V . Le, et al. Large scale dis-\ntributed deep networks. In Advances in Neural Information\nProcessing Systems , pages 1223–1231, 2012.\n[3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep\nconvolutional network for image super-resolution. In Com-\nputer Vision–ECCV 2014 , pages 184–199. Springer, 2014.\n[4] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea-\nture hierarchies for accurate object detection and semantic\nsegmentation. In Proceedings of the IEEE Conference on\nComputer Vision and Pattern Recognition (CVPR) , 2014.\n[5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn-\ning for image recognition. arXiv preprint arXiv:1512.03385 ,\n2015.\n[6] S. Ioffe and C. Szegedy. Batch normalization: Accelerating\ndeep network training by reducing internal covariate shift. In\nProceedings of The 32nd International Conference on Ma-\nchine Learning , pages 448–456, 2015.\n[7] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar,\nand L. Fei-Fei. Large-scale video classification with con-\nvolutional neural networks. In Computer Vision and Pat-\ntern Recognition (CVPR), 2014 IEEE Conference on , pages\n1725–1732. IEEE, 2014.\n[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet\nclassification with deep convolutional neural networks. In\nAdvances in neural information processing systems , pages\n1097–1105, 2012.\n[9] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv\npreprint arXiv:1312.4400 , 2013.\n[10] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional\nnetworks for semantic segmentation. In Proceedings of the\nIEEE Conference on Computer Vision and Pattern Recogni-\ntion, pages 3431–3440, 2015.\n[11] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh,\nS. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein,et al. Imagenet large scale visual recognition challenge.\n2014.\n[12] K. Simonyan and A. Zisserman. Very deep convolutional\nnetworks for large-scale image recognition. arXiv preprint\narXiv:1409.1556 , 2014.\n[13] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the\nimportance of initialization and momentum in deep learning.\nInProceedings of the 30th International Conference on Ma-\nchine Learning (ICML-13) , volume 28, pages 1139–1147.\nJMLR Workshop and Conference Proceedings, May 2013.\n[14] C. Szegedy, W. Liu, Y . Jia, P. Sermanet, S. Reed,\nD. Anguelov, D. Erhan, V . Vanhoucke, and A. Rabinovich.\nGoing deeper with convolutions. In Proceedings of the IEEE\nConference on Computer Vision and Pattern Recognition ,\npages 1–9, 2015.\n[15] C. Szegedy, V . Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna.\nRethinking the inception architecture for computer vision.\narXiv preprint arXiv:1512.00567 , 2015.\n[16] T. Tieleman and G. Hinton. Divide the gradient by a run-\nning average of its recent magnitude. COURSERA: Neural\nNetworks for Machine Learning, 4, 2012. Accessed: 2015-\n11-05.\n[17] A. Toshev and C. Szegedy. Deeppose: Human pose estima-\ntion via deep neural networks. In Computer Vision and Pat-\ntern Recognition (CVPR), 2014 IEEE Conference on , pages\n1653–1660. IEEE, 2014.\n[18] N. Wang and D.-Y . Yeung. Learning a deep compact image\nrepresentation for visual tracking. In Advances in Neural\nInformation Processing Systems , pages 809–817, 2013." }, { "content": "Neural Embeddings for kNN Search in Biological Sequence\nZhihao Chang1, Linzhu Yu2, Yanchao Xu2, Wentao Hu3\n1The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China\n2College of Computer Science and Technology, Zhejiang University, Hangzhou, China\n3Zhejiang Police College, Hangzhou, China\n{changzhihao, linzhu, xuyanchao, wthu}@zju.edu.cn\nAbstract\nBiological sequence nearest neighbor search plays a fun-\ndamental role in bioinformatics. To alleviate the pain of\nquadratic complexity for conventional distance computa-\ntion, neural distance embeddings, which project sequences\ninto geometric space, have been recognized as a promising\nparadigm. To maintain the distance order between sequences,\nthese models all deploy triplet loss and use intuitive methods\nto select a subset of triplets for training from a vast selection\nspace. However, we observed that such training often enables\nmodels to distinguish only a fraction of distance orders, leav-\ning others unrecognized. Moreover, naively selecting more\ntriplets for training under the state-of-the-art network not only\nadds costs but also hampers model performance.\nIn this paper, we introduce Bio-kNN: a kNN search frame-\nwork for biological sequences. It includes a systematic triplet\nselection method and a multi-head network, enhancing the\ndiscernment of all distance orders without increasing training\nexpenses. Initially, we propose a clustering-based approach\nto partition all triplets into several clusters with similar prop-\nerties, and then select triplets from these clusters using an\ninnovative strategy. Meanwhile, we noticed that simultane-\nously training different types of triplets in the same network\ncannot achieve the expected performance, thus we propose\na multi-head network to tackle this. Our network employs\na convolutional neural network (CNN) to extract local fea-\ntures shared by all clusters, and then learns a multi-layer per-\nception (MLP) head for each cluster separately. Besides, we\ntreat CNN as a special head, thereby integrating crucial lo-\ncal features which are neglected in previous models into our\nmodel for similarity recognition. Extensive experiments show\nthat our Bio-kNN significantly outperforms the state-of-the-\nart methods on two large-scale datasets without increasing the\ntraining cost.\nIntroduction\nBiological sequence nearest neighbor search plays a fun-\ndamental role in bioinformatics research and serves as the\ncornerstone for numerous tasks, including gene predic-\ntion (Chothia and Lesk 1986), homology analysis (Sander\nand Schneider 1991), sequence clustering (Steinegger and\nS¨oding 2018; Li and Godzik 2021), etc. Traditional methods\nfor measuring global or local similarity between sequences\nCopyright © 2024, Association for the Advancement of Artificial\nIntelligence (www.aaai.org). All rights reserved.rely on alignment based on dynamic programming. In this\npaper, we focus on the global similarity between sequences,\nevaluated by the widely used Needleman-Wunsch (NW) al-\ngorithm (Needleman and Wunsch 1970). While the NW al-\ngorithm is proficient in calculating sequence similarity with\nprecision, its inherent quadratic complexity poses signifi-\ncant challenges for rapid analysis, particularly when dealing\nwith large-scale datasets comprising sequences that extend\nto hundreds or even thousands of amino acids or nucleotides.\nIn recent years, embedding-based approaches have\nemerged as a promising paradigm for expediting sequence\nsimilarity analysis. These approaches involve projecting se-\nquences into a geometric embedding space through an em-\nbedding function, such that the distance between sequences\ncan be approximated by the distance in the embedding\nspace, which offers a computationally efficient alternative.\nThese approaches can be broadly divided into two categories\nbased on the core idea of the embedding function: rule-based\nand neural network-based. Rule-based approaches (Sims\net al. 2009; Gao and Qi 2007; Ulitsky et al. 2006; Haubold\net al. 2009; Leimeister and Morgenstern 2014) often rely\non some predefined encoding rules. Several studies (Corso\net al. 2021; Chen et al. 2022) have indicated that, in multiple\ntasks, these approaches exhibit inferior performance com-\npared to neural network-based ones. Given this context, we\nwill not delve into rule-based approaches, and instead con-\ncentrate on exploring neural network-based approaches.\nExisting research on neural network-based meth-\nods (Zheng et al. 2019; Chen et al. 2022; Zhang, Yuan, and\nIndyk 2019; Dai et al. 2020; Corso et al. 2021) primarily\nfocused on various components such as encoding models\nand loss functions. These components are tailored to the\ntask for which the learned embeddings are used. Notably,\ncertain approaches (Zhang, Yuan, and Indyk 2019; Dai et al.\n2020) focus on the learning objective aimed at preserving\ndistance orders within the embedding space to facilitate\nkNN searches. To achieve this goal, these approaches\nemploy triplet loss (Weinberger and Saul 2009; Hermans,\nBeyer, and Leibe 2017) and use intuitive methods to select\ntriplets in the form (Sacr, Spos, Sneg)for training, in which\nSacris the anchor sequence, Sposis the positive sequence\nthat has smaller distance to Sacrthan the negative sequence\nSneg. However, we found that the models trained by these\nmethods exhibit proficiency in distance order recognition\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n38A1\n2\n3\n4\n5Triplet-Selection: GRU\nEmbedding\nEmbedding6\nA1\n2\n3\n4\n56A12\n3\n4 56\nA12\n3\n4 56\nAnchor\nPositiveNegative\nNot SelectedPull\nPushTrain-Pair\nWrong\n-Rank-PairTriplet-Selection: CNNED\nThe white numbers in the point indicate the order of distance from the anchorFigure 1: The triplet selection methods used by\nGRU (Zhang, Yuan, and Indyk 2019) and CNNED (Dai\net al. 2020). For GRU, the Top-N closest to the anchor\nis positive and the others are negative; for CNNED, two\nsequences are randomly selected from the Top-K closest to\nthe anchor, the closer is positive, and the farther is negative.\nIn this example we set N equal to 2 and K equal to 4.\nfor only a limited subset of triplets, rather than the entire\nset. As illustrated in Figure 1, while certain order relations\nmay be accurately identified after encoding, overlooked\nrelations during training can substantially compromise the\nresults. Such complications stem from that each sequence\nlacks a definitive category label, rendering existing tech-\nniques ineffective in this context. It might be hypothesized\nthat increasing the number of triplets for training could\nameliorate this issue. However, our assessments within a\nstate-of-the-art network indicate that such problem has not\nbeen alleviated while suffering additional training expanses.\nIn this paper, we introduce Bio-kNN, a biological se-\nquence kNN search framework. Bio-kNN aims to notably\nimprove the recognition accuracy of distance order dis-\ntributed throughout the whole space without augmenting\ntraining expenses. The core idea of Bio-kNN is to par-\ntition all triplets into several clusters based on certain\nproperties and learn a feature extraction network for each\ncluster. Specifically, Bio-kNN features two main modules:\n(1)Triplet selection method. A notable limitation of pre-\nvious models is that only a subset of the triplets is consid-\nered during training. In this module, we consider all possi-\nble combinations of triplets. We partition the selection space\ninto small cells and merge cells with similar distance distri-\nbutions into several clusters. We then employ an innovative\nstrategy to select training triplets from these clusters with-out external samples. (2)Multi-head network. We noticed\nthat merely adding more triplets in the SOTA network does\nnot improve the performance, we thus propose a multi-head\nnetwork to address it. Our network uses CNN as the back-\nbone to extract local features, and learns a multi-layer per-\nception head for each cluster to extract global features. Fur-\nthermore, we integrate previously overlooked local features\nderived from the CNN, which are crucial in discernment.\nTo summarize, we made four contributions in this paper.\n1. We consider the entire selection space instead of subsets,\nand propose a clustering-based triplet selection method.\n2. We notice that the performance of SOTA network de-\ngrades when simultaneously training different types of\ntriplets. A multi-head network is designed to alleviate it.\n3. We treat CNN as a special head and integrate crucial local\nfeatures into our model for sequence similarity.\n4. We conduct extensive experiments on two large-scale\ndatasets, and the results show that our method signifi-\ncantly outperforms the state-of-the-art methods.\nRelated Work\nRule-Based Approaches. Numerous rule-based approaches\nhave been proposed over the past few decades, which can\nbe broadly classified into two categories. The first cate-\ngory typically utilizes word frequency statistics with a pre-\ndefined length (Kariin and Burge 1995) or the information\ncontent of word frequency distribution (Sims et al. 2009;\nGao and Qi 2007) as features to characterize sequence sim-\nilarity. On the other hand, the second category of meth-\nods is based on the concept of sub-strings (Ulitsky et al.\n2006; Haubold et al. 2009; Leimeister and Morgenstern\n2014). However, it should be noted that all these approaches\nare data-independent, and their distance measures rely on\nheuristic rules. Several studies have shown that these ap-\nproaches exhibit weaker performance compared to neural\nnetwork-based approaches across various tasks.\nNeural Network-Based Approaches. Notable efforts in\nneural networks have been made to approximate distances\nfor biological sequences in recent years. SENSE (Zheng\net al. 2019) is the first attempt to employ neural networks\nfor comparison-free sequence analysis by utilizing a con-\nvolutional neural network. However, SENSE is restricted to\nhandling sequences of the same length. To address it, As-\nMac (Chen et al. 2022) was proposed, which employs an ap-\nproximate string matching algorithm to extract relevant fea-\ntures through neural network. Regrettably, the performance\nof this approach degrades when dealing with protein se-\nquences, primarily due to the massive search space involved.\nA research domain closely aligned with our work fo-\ncuses on edit distance embedding. The distinction lies in\nthe NW algorithm’s requirement to normalize the edit dis-\ntance by a dynamically varying length, thereby amplifying\nthe complexity of discerning similarities. CGK (Ostrovsky\nand Rabani 2007) embeds the edit distance into the ham-\nming space with a distortion of 2O(√logllog log l), however,\nthis algorithm is excessively intricate for practical applica-\ntion. Zhang et al. (Zhang, Yuan, and Indyk 2019) propose a\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n39Figure 2: Motivating example. For the convenience of observation, the bottom subfigures are the results after comparing with\nthe model trained by randomly selecting triplets, i.e., for each Sacr, two sequences are randomly selected from the training set,\nthe closer to SacrisSpos, and the farther is Sneg.\ntwo-layer GRU structure to encode sequences, dividing the\ntraining process into three stages and utilizing three differ-\nent loss functions. Nonetheless, the embedding dimension\ngenerated by this method is relatively high, resulting in sub-\nstantial memory consumption. CNNED (Dai et al. 2020) dis-\ncovers that an untrained random CNN performs comparable\nto GRU models, leading to the belief that the CNN is more\nsuitable for the edit distance embedding than RNN-based\nmodels. NeuroSEED (Corso et al. 2021) explores the poten-\ntial of employing global and local transformers to encode\nbiological sequences, and experimental results also affirm\nthat convolutional models surpass feedforward and recurrent\nmodels for biological sequence edit distance tasks. Further-\nmore, NeuroSEED proposes that the hyperbolic space can\nbetter capture the data dependencies among biological se-\nquences from the perspective of embedded geometry.\nMotivating Example\nIn this section, we use an example to reveal the limitations of\nexisting methods. We first model the entire selection space\nas an upper triangular area. Then we visualize the distribu-\ntion of training triplets and the performance of the trained\nmodel, thus we can easily observe the relationship between\nthem. Example details are as follows.\nExample Setting\nWe first randomly select 3000 sequences from UniProtKB1\nand use 1500 of them as the training set, while the remain-\ning1500 as the test set. Then, we employ the state-of-the-\nart pipeline proposed by CNNED (Dai et al. 2020) as the\ncommon training framework, and replace the triplet selec-\ntion method with five other methods respectively during\ntraining, including two methods adopted by previous mod-\nels: the methods used by CNNED (Dai et al. 2020) and\nGRU (Zhang, Yuan, and Indyk 2019), and three methods de-\nsigned for comparison: Method-3, Method-4, and Method-5.\nIn Figure 2, we plot the distribution of triplets selected\nby these five methods on the training set (top subfigures)\nand the distance order recognition results on the test set\n(bottom subfigures) respectively. The horizontal and verti-\ncal coordinates (i, j)of each subfigure in Figure 2 are all\n1https://www.uniprot.org/determined by the triplet (Sacr, Spos, Sneg). For each Sacr,\nwe first sort other sequences according to the distance be-\ntween them and Sacrfrom small to large to form a list, and\nthe indices iandjofSposandSnegin the list are used as\nthe abscissa and ordinate, respectively. The difference be-\ntween the top and the bottom subfigures is the triplets used\nfor visualization: (1) We plot top subfigures according to the\ntriplets obtained in the training set by the five triplet se-\nlection methods. The depth of the color indicates the fre-\nquency of the corresponding triplet is selected. (2) For the\nbottom subfigures, the triplets are all triplets combinations\nin the test set, and these subfigures are used to visualize the\nresults of distance order recognition in the test set. We iter-\nated all triplet combinations in the test set to check whether\nthe distance between SacrandSposis smaller than the dis-\ntance between SacrandSnegafter encoding by model f,\ni.e.,diste(f(Sacr), f(Spos))< dist e(f(Sacr), f(Sneg)),\nthe more frequency of the match, the more vivid the color.\nPhenomenon\nFrom Figure 2, we can observe three following phenomena,\nincluding one expected and two inconsistent with the expec-\ntation but interesting:\n1.Expected. Figure 2(a)-(d) illustrate that sequence dis-\ntance order recognition in the test set is highly correlated\nwith the training triplets. This phenomenon is expected,\nas the more triplets the model learns for a region in the\ntraining set, the better it helps distinguish the order of\nthat region in the test set. However, we can clearly ob-\nserve that the model trained by these methods can only\nrecognize the order of a small part of the whole area.\nThis observation shows that the model is very limited in\nidentifying crucial regions that lie beyond its training re-\ngion(e.g., let the model in Figure 2(a) recognize the or-\nder of the region determined by Method-3). Such limita-\ntion greatly affects the effectiveness of the model.\n2.UnExpected. Inspired by phenomenon 1, an intuitive\nidea is to select more training regions. We thus trained\nMethod-5, which simultaneously trains the regions se-\nlected by Method-1, Method-3, and Method-4. However,\nthe recognition results are not consistent with our expec-\ntation, as shown in the Figure 2(e), although certain re-\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n40gions have been trained, the corresponding regions in the\ntest set do not have better distance order recognition.\n3.UnExpected. These figures also illustrate that the model\nhas a radiation effect on regions outside the training re-\ngion, i.e. even if some regions are not selected, the model\nis also better able to recognize the order of this region.\nFurthermore, the radiation region produced by the train-\ning region in different positions varies greatly.\nMethod\nTo address the issues arising in phenomenon 2, we propose\nBio-kNN, which includes a triplet selection method and a\nmulti-head network. Its framework is shown in Figure 3.\nTriplet Selection Method\nPartition Selection Space. As shown in Figure 3(b), we par-\ntition the entire selection space formed by the training set\ninto small cells. Specifically, we use the same setting as that\nin the motivating example to model the entire selection space\nas an upper triangular area, and the length of two legs of the\ntriangle is the number of sequences in the training set. In this\nsetting, for each Sacr, each point in the triangle represents a\ntriplet, where the abscissa represents the index of the Spos,\nand the ordinate represents the index of the Sneg. Then,\nwe divide the horizontal and vertical axes into Bgroups\nrespectively based on an equal interval δ, where the hori-\nzontal axis is divided into [[x0, x1),[x1, x2), ...,[xB−1, xB)]\nand the vertical is [[y0, y1),[y1, y2), ...,[yB−1, yB)]. Thus\nthe upper triangular area is divided intoPB\ni=1ismall cells,\nwhere most of the cells are grids and few are triangles. Then,\nthe coordinates of each cell can be described as (Xi, Yj),\nwhere Ximeans [xi, xi+1), and Yjmeans [yj, yj+1).\nDistribution Statistics in Cell. For each cell after parti-\ntioning, we use the interval of the coordinates to count the\nhorizontal and vertical distributions. This step is inspired by\nphenomenon 3 in the motivating example, which shows that\nsome properties between adjacent regions may be similar.\nIn this step, we try to use the intuitive distance distribu-\ntion as this property. It is worth noting that the possibility\nof other properties is not ruled out, which can be studied in\nthe future. Next, we use an example to illustrate the details\nof our approach. Suppose there is a cell with coordinates\n(X500,600, Y700,800), we use each sequence in the training\nset as Sacrin turn. For each Sacr, we sort other sequences in\nthe training set according to the NW distance between them\nandSacrfrom small to large to form a list l. We then count\nthe horizontal distance distribution between all sequences\nin the list l[500 : 600] andSacrforX500,600, while count\nl[700 : 800] forY700,800. In this way, the coordinates of each\ncell can be further described as (Xi, Yj), where Ximeans\ncount ([xi, xi+1)), and Yjmeans count ([yj, yj+1)). Subse-\nquent cell coordinates will use this definition by default.\nDistance Measurement between Cells. How to measure\nthe distance between cells with distributions as coordinates\nbecomes a new problem. Currently, there are many functions\nto measure the distance between two distributions, such as\nKullback–Leibler divergence (Kullback and Leibler 1951),\nJensen-Shannon divergence (Fuglede and Topsøe 2004),\n\u0001\u0002\u0003(a) Multi-Head Network (Training)\n(b) Triplet Selection MethodDistance\nMeasurementClusteringGrid\n&\nCounting\u0001\u0002\u0004\u0001\u0002\u0005One-hot…\n… …\n…\nIndex of \u0006\u0002\u0007\b\nTriplet\nSelection\nFor \n\u0006\t\u0004\u0006\t\u0005\u0006\t\u0003\nIndex of\u0006\n\u000b\u0001CNNFigure 3: The Framework of Bio-kNN.\nEarth mover’s distance (EMD) (Rubner, Tomasi, and Guibas\n2000), etc. However, we noticed that when two distributions\ndo not overlap, the KL divergence is meaningless, and the\nJS divergence is a constant, so neither of these functions is\nsuitable for measuring the distance between cells in our ap-\nplication scenario. Considering that EMD as a metric sat-\nisfies non-negativity, symmetry, and triangle inequality, we\ndefine the distance between two cells on the basis of EMD.\nSpecifically, given any two cells pandq, their coordinates\nare(Xpi, Ypj)and(Xqi, Yqj)respectively, then we define\nthe distance dcell(p, q)between pandqis:\ndcell(p, q) =EMD (Xpi, Xqi) +EMD (Ypj, Yqj)(1)\nWe prove that dcell(p, q)between cells is still a metric.\nTheorem 1 The distance dcellcomputed by Equation 1 is a\nmetric. Given three any cells p,q, and r, we have:\n(1) Non-negativity. Ifp!=q, then dcell(p, q)>0.\n(2) Symmetry. dcell(p, q)=dcell(p, q).\n(3) Triangle inequality. dcell(p, r)≤dcell(p, q)+dcell(q, r)\nProof 1 According to the non-negativity and symmetry of\nthe EMD, it can be easily obtained that dcellalso satisfies\nthe non-negativity and symmetry, so we will only prove the\ntriangle inequality of dcell.\ndcell(p, r) =EMD (Xpi, Xri) +EMD (Ypj, Yrj)\n≤(EMD (Xpi, Xqi) +EMD (Xqi, Xri))\n+ (EMD (Ypj, Yqj) +EMD (Yqj, Yrj))\n= (EMD (Xpi, Xqi) +EMD (Ypj, Yqj)\n+ (EMD (Xqi, Xri) +EMD (Yqj, Yrj))\n=dcell(p, q) +dcell(q, r)\nCell Clustering. Our last step is to merge those cells that\nhave a similar distance distribution. We achieve this using\nunsupervised clustering, which is naturally suited to distin-\nguishing similar items such that distributions vary widely\nacross clusters, while the distribution of cells within a single\ncluster is very close. In this paper, we do not propose a new\nclustering algorithm, but directly deploy existing cluster-\ning algorithms. In following, we evaluated the performance\nof commonly used clustering algorithms such as k-means\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n41(Forgy 1965), agglomerative(Murtagh and Contreras 2012),\nand spectral clustering(von Luxburg 2007). Subsequent ex-\nperiments will show more detailed results.\nSelection Strategy. Suppose there are mtraining se-\nquences and nclusters are obtained based on the above\nmethod. An intuitive selection strategy is that for each Sacr,\nWe randomly select one point from each of the nclusters at\neach epoch, and the abscissa of these npoints is the index\nof the Spos, the ordinate is the index of the Sneg. However,\nthis strategy needs to select m∗ntriplets for training at each\nepoch. Clearly, the training cost of this strategy increases lin-\nearly with mandn, and it will bring burden to the expansion\nof the dataset when nhave a large value.\nWe employ a novel selection strategy that can achieve\ngood performance without adding more cost. Specifically,\nbefore each epoch of training, we first randomly shuffle all\nanchor sequences. Then for each batch, we divide all anchor\nsequences in the current batch evenly into nlists, and assign\nthenclusters to the nlists as candidate clusters respectively.\nThe strategy at this time is that for each Sacr, we only ran-\ndomly select a point from its corresponding candidate clus-\nters instead of all clusters, and the number of training triplets\nfor each epoch is also changed from m∗nton.\nMulti-Head Network\nNetwork Structure. In recent years, several works (Dai\net al. 2020; Corso et al. 2021) have shown that convolutional\nmodels outperform feedforward and recurrent models for se-\nquence embedding, so our learning model utilizes the CNN\nsubmodule in CNNED (Dai et al. 2020) as a general back-\nbone. Subsequently, multiple multi-layer perceptron (MLP)\nheads are deployed in parallel following the convolutional\nlayers, thereby facilitating the fusion of local features from\ndifferent perspectives to extract global features. In this struc-\nture, the number of heads is the same as the number of can-\ndidate clusters k. Each head has exactly the same structure\nand is trained in parallel without communicating with each\nother. The core idea of our multi-head model is that we hope\nto learn one head for each candidate cluster, thus avoiding\npotential contradictions between candidate selection clusters\nduring training. It is imperative to highlight that our model\nexhibits an obvious distinction between the training and in-\nference phases, we introduce them separately below.\nTraining Phase. During the training phase as shown in\nFigure 3(a), we first use the selection method introduced\nin the previous section to select a triplet (Si\nacr, Si\npos, Si\nneg)\nfor each anchor sequence in a batch. Then, the one-hot em-\nbedding representations (Xi\nacr,Xi\npos,Xi\nneg)of all these\ntriplets are simultaneously fed into the CNN, which is en-\ncoded as (yi\nacr,yi\npos,yi\nneg). After CNN encoding, the flow\nof these triplets starts to fork, and triplets selected from dif-\nferent clusters are fed to different MLP heads. Specifically,\nThe embedding function of our multi-head network during\nthe training phase can be expressed as follows:\nyi\nacr,yi\npos,yi\nneg=CNN (Xi\nacr,Xi\npos,Xi\nneg)\nzi\nacr,zi\npos,zi\nneg=MLPi(yi\nacr,yi\npos,yi\nneg)\nseq……\n…CNNFigure 4: Multi-head Network (Inference).\nafter all triplets are encoded by the model, the final loss is:\nloss=kX\ni=1Loss(zi\nacr,zi\npos,zi\nneg) (2)\nwhere krepresents both the number of candidate selection\nclusters and the number of heads, and Loss is the combina-\ntion of triplet loss and MSE loss.\nInference Phase. As depicted in Figure 4, we feed all se-\nquences into the trained neutral network one by one during\nthe inference phase. For each sequence, we use the one-hot\nrepresentation Xof this sequence and encode it through\nCNN, then feed the feature youtput by CNN to all the\nMLP heads simultaneously. The outputs [z1, ...,zk]of these\nheads are then all cascaded together. In addition, we treat\nCNN as a special head and concatenate the feature youtput\nby CNN to the end. We will explain the reason for cascad-\ning CNN features below. The embedding function during the\ninference phase can be expressed as follows:\ny=CNN (X) (3)\nzi=MLPi(y) (4)\nthe representation of the sequence in embedding space is:\nEmbedding = [z1, ...,zk,y] (5)\nCNN Serves as a Special Head. The core idea of our net-\nwork is to train distinct MLP heads for each candidate clus-\nter. Each of these heads aims to learn unique weights for the\nlocal features extracted by the CNN, essentially learning the\nmost discriminative features that can distinguish different\nsequences within each cluster. However, fine-grained details\ncan easily be ignored during learning. To alleviate the po-\ntential impact of these fine-grained feature losses, we intro-\nduce a compensation measure using CNN as a special head\nin the inference stage. Specifically, we concatenate local fea-\ntures with the final embedding, which is similar to the effect\nof fully connected layers with identity matrix and frozen\nweights. This approach effectively counteracts the adverse\nconsequences of fine-grained features being ignored.\nEmbedding Geometry. There are many studies using\nvarious functions to calculate the distance between two em-\nbedding vectors, including Euclidean distance (Dai et al.\n2020), Jaccard distance (Zheng et al. 2019), Hyperbolic dis-\ntance (Corso et al. 2021), etc. However, for the multi-head\nnetwork we designed, the final embedding of the sequence\nis the concatenation of vectors output by multiple heads. In\norder to make the features of each head play a bigger role\nin the distance calculation, we use a new metric instead of\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n42directly using the Euclidean distance to calculate the dis-\ntance between vectors. Specifically, we first calculate the Eu-\nclidean distance between the vectors output by a single head,\nand then sum the Euclidean distances of multiple heads as\nthe final distance. Suppose there are two embedding vectors\nx= [x1, ...,xk,xcnn]andy= [y1, ...,yk,ycnn], then\nthe distance between them can be described as:\ndiste(x,y) =Euc(xcnn,ycnn) +kX\ni=1Euc(xi,yi)(6)\nExperiments\nExperimental Settings\nDatasets. We evaluate our neural embeddings through the\nutilization of two extensively recognized datasets(Dai et al.\n2020; Zhang, Yuan, and Indyk 2019), i.e., the Uniprot and\nUniref. These datasets exhibit varying sizes and sequence\nlengths, and their properties are shown in the Table 1. Con-\nsistent with existing works, we partition each dataset into\ndistinct subsets, namely the training set, query set, and base\nset. Both the training set and the query set are composed of\n1,000 sequences, and the other items belong to the base set.\nDataset Uniprot Uniref\nAlphabet Size 25 24\n# Items 474741 395869\nAvg-Length 376.47 442.84\nMin-Length 2 201\nMax-Length 4998 4998\nTable 1: Dataset Statistics\nMetrics. We follow existing works (Zhang, Yuan, and In-\ndyk 2019; Dai et al. 2020) and use the task of nearest neigh-\nbor search to evaluate the effectiveness of our model, i.e.\nwhether the distance order in the embedding space still pre-\nserves. Specifically, we use: (1) Top-k hitting ratio (HR@k ).\nThis metric is used to detect the overlap percentage of the\ntop-k results and the ground truth. (2) Top-1 Recall. This\none evaluates the performance of finding the most similar\nsequence to the query sequence by different methods.\nBaselines. We adopt previous network-based approaches\nas baselines, including GRU (Zhang, Yuan, and Indyk 2019),\nCNNED (Dai et al. 2020), NeuroSEED (Corso et al. 2021),\nAsMac (Chen et al. 2022), where NeuroSEED can be fur-\nther divided into Global (Global T.) and Local Transformer\n(Local T.). Since SENSE (Zheng et al. 2019) cannot be used\nfor unequal-length datasets, and its performance has been\nproven to be weaker than AsMac, we will not use it as a\nbaseline. To demonstrate the effectiveness of the selection\nmethod and multi-head network, we use Bio-kNN-Base to\ndenote the method without cascading CNN features, and re-\nfer to the complete method as Bio-kNN.\nImplementation Details. We use the EMBOSS1to com-\npute the NW distance between sequences. In our implemen-\n1https://www.ebi.ac.uk/Tools/emboss/tation, we set the split interval δ= 100 and experimen-\ntally tested the effect of various clustering algorithms and\nthe number of clusters. Besides, we directly used the CNN\nsubmodule in CNNED. Code and datasets are available at\nhttps://github.com/Proudc/Bio-KNN.\nExperimental results\nClustering-Based Triplet Selection. Tables 2 and 3 show\nthe performance of Bio-kNN-Base under various cluster-\ning algorithms and the number of clusters, including k-\nmeans, agglomerative (HAC), spectral clustering, and non-\nclustering. These results show that:(1) With a fixed output\ndimension (128), the performance of Bio-kNN-Base con-\nsistently surpasses the non-clustering counterpart in various\nalgorithms and the number of clusters, reaffirming the in-\ndispensability of segmenting the selection space. (2) HAC\nshows superior performance within certain configurations in\ncontrast to the other two methods. This may be attributed to\nthe ability of HAC to handle outlier cells more efficiently\nrelative to other techniques, which also prompted us to use\nthe HAC by default in subsequent experiments.\n#Clusters*(D/h)\nMethod HR@1 HR@10 HR@50\n1∗128 None 48.30 35.48 24.21\n2∗64 K-Means 48.60 36\n.51 25. 19\n2∗64 HAC 48.60 36. 51 25. 19\n2∗64 Spectral 48.60 36. 51 25. 19\n4∗32 K-Means 49.90 38. 58 26. 98\n4 * 32 HAC 50.50 39.13 27.28\n4∗32 Spectral 49.00 36. 60 25. 23\n8∗16 K-Means 49.70 37. 90 26. 00\n8∗16 HAC 48.80 37. 52 25. 70\n8∗16 Spectral 48.30 36. 23 24. 86\nNote: D/h indicates the output dimension of each head.\nTable 2: Uniprot: various clustering methods and # clusters\n# Clusters*(D/h)\nMethod HR@1 HR@10 HR@50\n1∗128 None 28.30 24.39 15.60\n2∗64 K-Means 31.10 26\n.91 17. 54\n2 * 64 HAC 33.90 29.88 19.58\n2∗64 Spectral 29.30 25. 88 16. 84\n4∗32 K-Means 30.70 25. 83 16. 63\n4∗32 HAC 32.40 26. 92 17. 42\n4∗32 Spectral 30.00 25. 93 16. 91\n8∗16 K-Means 31.70 26. 80 17. 41\n8∗16 HAC 32.20 26. 57 17. 34\n8∗16 Spectral 31.20 25. 67 16. 90\nTable 3: Uniref: various clustering methods and # clusters\nEmbedding Effectiveness. Table 4 presents an overview\nof the performance exhibited by different methods concern-\ning the top-k similarity search task. As shown, on both\ndatasets, our method Bio-kNN significantly outperforms all\nmethods on all metrics. Using the Uniprot dataset as an ex-\nample, Bio-kNN yields a remarkable enhancement across\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n43Unipr ot Uniref\nModel HR@1 HR@5\nHR@10 HR@50 HR@1 HR@5\nHR@10 HR@50\nAsMac 47.07 32\n.60 24. 25 9. 93 20.57 11\n.93 8. 08 2. 68\nGRU 40.83 40\n.05 34. 53 23. 16 30.73 26\n.53 22. 73 13. 62\nCNNED 47.70 40\n.43 34. 58 23. 37 35.13 32\n.51 28. 55 18. 72\nGlobal T. 48.76 39\n.97 34. 16 22. 29 27.80 22\n.38 18. 67 10. 47\nLocal T. 49.10 40\n.11 34. 27 22. 43 27.07 21\n.23 17. 94 10. 20\nBio-kNN 54.00 48.31\n42.69 30.28 37.60 36.18\n32.51 21.13\nGap With SOTA +4.90 +7.88\n+8.11 +6 .91 +2.47 +3.67\n+3.96 +2 .41\nTable 4: Embedding Results (repeat three times and report average results)\n100101102103104\n# Items [k]405060708090100 T op-1 Recall\n(a) Uniprot100101102103104\n# Items [k]20406080100 T op-1 Recall\n(b) UnirefAsMac\nGRUGlocal T.\nLocal T.CNNED Bio-kNN\nFigure 5: Top-1 Recall curves for multiple methods.\nmetrics, ranging from 4.9% to 8.11% when compared to\nthe state-of-the-art counterparts. Notably, a substantial ma-\njority of metrics experience an augmentation of over 6%.\nThis non-negligible improvement is impressive given the\nfact that, unlike previous methods that only focus on partial\nsubsets of triplets, Bio-kNN essentially partitions the entire\nselection space and learns individual heads for each distinct\nsubspace. Besides, Bio-kNN incorporates the fine-grained\nlocal features extracted by CNN, which further improves\nits ability to distinguish similarities between sequences. We\nplot the curves of Top-1 recall for various methods on dif-\nferent datasets in Figure 5. We observe that our model also\nachieves significant performance gains on the task of finding\nthe most similar sequence compared to other methods.\nAblation Studies. Our Bio-kNN comprises three mod-\nules: clustering-based triplet selection, a multi-head net-\nwork, and CNN features. We conduct the following exper-\niments to validate the contributions of these modules: (1)\nConsidering that the necessity of segmenting the space has\nbeen verified in Table 2 and 3, we exclusively explore spe-\ncific segmentation methods. We thus independently evaluate\nthe segmentation outcomes on both sides of Figure 6. (2) Re-\nplacing the multi-head (M) network with a single-head (S)\nnetwork. (3) Omitting the features extracted by CNN.\nThe results in Table 5 demonstrate that neglecting any\nof the three modules leads to a reduction in performance.\nThe reason is that we take into account the distance distri-\nbution among cells when segmenting the selection space.\n0 200 400 600 800 1000\nIndex of Spos02004006008001000 Index of Sneg\n(a) Uniprot: HAC-Based0 200 400 600 800 1000\nIndex of Spos02004006008001000 Index of Sneg\n(b) Uniprot: Average-Based\n0 200 400 600 800 1000\nIndex of Spos02004006008001000 Index of Sneg\n(c) Uniref: HAC-Based0 200 400 600 800 1000\nIndex of Spos02004006008001000 Index of Sneg\n(d) Uniref: Average-BasedFigure 6: Segmentation Results of HAC(H) and Average(A)\nDatasets Method HR@1\nHR@10 HR@50\nUniprotH+\nS + CNN 53.20 41. 02 28. 56\nA+\nM + CNN 52.03 40. 31 27. 93\nH+\nM 50.40 39. 01 27. 26\nH+\nM + CNN 54.00 42.69 30.28\nUnirefH+\nS + CNN 35.63 30. 31 19. 75\nA+\nM + CNN 35.43 30. 19 19. 60\nH+\nM 33.67 28. 95 18. 86\nH+\nM + CNN 37.60 32.51 21.13\nTable 5: Ablation Studies Results\nSeparate heads are assembled for clusters with large dif-\nferences in distribution, making training more targeted. The\nfine-grained features extracted by CNN also effectively en-\nhance the model’s ability to distinguish sequence similarity.\nConclusion\nWe propose Bio-kNN for biological nearest neighbor search,\nwhich includes a clustering-based triplet selection method\nand a CNN-based multi-head network. It also incorporates\nlocal features extracted by CNN. Experimental results show\nthat Bio-kNN outperforms the state-of-the-art.\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n44Acknowledgments\nThis work is supported by the Fundamental Research Funds\nfor the Central Universities(No.226-2022-00028). The au-\nthors would like to thank Zepeng Li for his help with this\nwork, including analysis and discussions.\nReferences\nChen, J.; Yang, L.; Li, L.; Goodison, S.; and Sun, Y . 2022.\nAlignment-free comparison of metagenomics sequences via\napproximate string matching. Bioinformatics Advances,\n2(1): vbac077.\nChothia, C.; and Lesk, A. M. 1986. The relation between\nthe divergence of sequence and structure in proteins. The\nEMBO journal, 5(4): 823–826.\nCorso, G.; Ying, Z.; P ´andy, M.; Velickovic, P.; Leskovec, J.;\nand Li `o, P. 2021. Neural Distance Embeddings for Biologi-\ncal Sequences. In NeurIPS, 18539–18551.\nDai, X.; Yan, X.; Zhou, K.; Wang, Y .; Yang, H.; and Cheng,\nJ. 2020. Convolutional Embedding for Edit Distance. In\nACM SIGIR, 599–608. ACM.\nForgy, E. W. 1965. Cluster analysis of multivariate data: ef-\nficiency versus interpretability of classifications. biometrics,\n21: 768–769.\nFuglede, B.; and Topsøe, F. 2004. Jensen-Shannon diver-\ngence and Hilbert space embedding. In ISIT, 31. IEEE.\nGao, L.; and Qi, J. 2007. Whole genome molecular phy-\nlogeny of large dsDNA viruses using composition vector\nmethod. BMC evolutionary biology, 7(1): 1–7.\nHaubold, B.; Pfaffelhuber, P.; Domazet-Los˘ o, M.; and\nWiehe, T. 2009. Estimating mutation distances from un-\naligned genomes. Journal of Computational Biology,\n16(10): 1487–1500.\nHermans, A.; Beyer, L.; and Leibe, B. 2017. In defense of\nthe triplet loss for person re-identification. arXiv preprint\narXiv:1703.07737.\nKariin, S.; and Burge, C. 1995. Dinucleotide relative abun-\ndance extremes: a genomic signature. Trends in genetics,\n11(7): 283–290.\nKullback, S.; and Leibler, R. A. 1951. On information and\nsufficiency. The annals of mathematical statistics, 22(1):\n79–86.\nLeimeister, C.-A.; and Morgenstern, B. 2014. Kmacs:\nthe k-mismatch average common substring approach to\nalignment-free sequence comparison. Bioinformatics,\n30(14): 2000–2008.\nLi, W.; and Godzik, A. 2021. Cd-hit: a fast program for\nclustering and comparing large sets of protein or nucleotide\nsequences. Bioinformatics 22, 1658–1659 (2006). Scientific\nReports, 11: 3702.\nMurtagh, F.; and Contreras, P. 2012. Algorithms for hierar-\nchical clustering: an overview. WIREs Data Mining Knowl.\nDiscov., 2(1): 86–97.\nNeedleman, S. B.; and Wunsch, C. D. 1970. A general\nmethod applicable to the search for similarities in the amino\nacid sequence of two proteins. Journal of molecular biology,\n48(3): 443–453.Ostrovsky, R.; and Rabani, Y . 2007. Low distortion embed-\ndings for edit distance. J. ACM, 54(5): 23.\nRubner, Y .; Tomasi, C.; and Guibas, L. J. 2000. The Earth\nMover’s Distance as a Metric for Image Retrieval. IJCV,\n40(2): 99–121.\nSander, C.; and Schneider, R. 1991. Database of homology-\nderived protein structures and the structural meaning of se-\nquence alignment. Proteins: Structure, Function, and Bioin-\nformatics, 9(1): 56–68.\nSims, G. E.; Jun, S.-R.; Wu, G. A.; and Kim, S.-H. 2009.\nAlignment-free genome comparison with feature frequency\nprofiles (FFP) and optimal resolutions. Proceedings of the\nNational Academy of Sciences, 106(8): 2677–2682.\nSteinegger, M.; and S ¨oding, J. 2018. Clustering huge protein\nsequence sets in linear time. Nature communications, 9(1):\n1–8.\nUlitsky, I.; Burstein, D.; Tuller, T.; and Chor, B. 2006. The\naverage common substring approach to phylogenomic re-\nconstruction. Journal of Computational Biology, 13(2):\n336–350.\nvon Luxburg, U. 2007. A tutorial on spectral clustering. Stat.\nComput., 17(4): 395–416.\nWeinberger, K. Q.; and Saul, L. K. 2009. Distance met-\nric learning for large margin nearest neighbor classification.\nJournal of machine learning research, 10(2).\nZhang, X.; Yuan, Y .; and Indyk, P. 2019. Neural embeddings\nfor nearest neighbor search under edit distance.\nZheng, W.; Yang, L.; Genco, R. J.; Wactawski-Wende, J.;\nBuck, M.; and Sun, Y . 2019. SENSE: Siamese neural net-\nwork for sequence embedding and alignment-free compari-\nson. Bioinform., 35(11): 1820–1828.\nThe Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)\n45" }, { "content": "Provided proper attribution is provided, Google hereby grants permission to\nreproduce the tables and figures in this paper solely for use in journalistic or\nscholarly works.\nAttention Is All You Need\nAshish Vaswani∗\nGoogle Brain\navaswani@google.comNoam Shazeer∗\nGoogle Brain\nnoam@google.comNiki Parmar∗\nGoogle Research\nnikip@google.comJakob Uszkoreit∗\nGoogle Research\nusz@google.com\nLlion Jones∗\nGoogle Research\nllion@google.comAidan N. Gomez∗ †\nUniversity of Toronto\naidan@cs.toronto.eduŁukasz Kaiser∗\nGoogle Brain\nlukaszkaiser@google.com\nIllia Polosukhin∗ ‡\nillia.polosukhin@gmail.com\nAbstract\nThe dominant sequence transduction models are based on complex recurrent or\nconvolutional neural networks that include an encoder and a decoder. The best\nperforming models also connect the encoder and decoder through an attention\nmechanism. We propose a new simple network architecture, the Transformer,\nbased solely on attention mechanisms, dispensing with recurrence and convolutions\nentirely. Experiments on two machine translation tasks show these models to\nbe superior in quality while being more parallelizable and requiring significantly\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-\nto-German translation task, improving over the existing best results, including\nensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task,\nour model establishes a new single-model state-of-the-art BLEU score of 41.8 after\ntraining for 3.5 days on eight GPUs, a small fraction of the training costs of the\nbest models from the literature. We show that the Transformer generalizes well to\nother tasks by applying it successfully to English constituency parsing both with\nlarge and limited training data.\n∗Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started\nthe effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and\nhas been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head\nattention and the parameter-free position representation and became the other person involved in nearly every\ndetail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and\ntensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and\nefficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and\nimplementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating\nour research.\n†Work performed while at Google Brain.\n‡Work performed while at Google Research.\n31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.arXiv:1706.03762v7 [cs.CL] 2 Aug 20231 Introduction\nRecurrent neural networks, long short-term memory [ 13] and gated recurrent [ 7] neural networks\nin particular, have been firmly established as state of the art approaches in sequence modeling and\ntransduction problems such as language modeling and machine translation [ 35,2,5]. Numerous\nefforts have since continued to push the boundaries of recurrent language models and encoder-decoder\narchitectures [38, 24, 15].\nRecurrent models typically factor computation along the symbol positions of the input and output\nsequences. Aligning the positions to steps in computation time, they generate a sequence of hidden\nstates ht, as a function of the previous hidden state ht−1and the input for position t. This inherently\nsequential nature precludes parallelization within training examples, which becomes critical at longer\nsequence lengths, as memory constraints limit batching across examples. Recent work has achieved\nsignificant improvements in computational efficiency through factorization tricks [ 21] and conditional\ncomputation [ 32], while also improving model performance in case of the latter. The fundamental\nconstraint of sequential computation, however, remains.\nAttention mechanisms have become an integral part of compelling sequence modeling and transduc-\ntion models in various tasks, allowing modeling of dependencies without regard to their distance in\nthe input or output sequences [ 2,19]. In all but a few cases [ 27], however, such attention mechanisms\nare used in conjunction with a recurrent network.\nIn this work we propose the Transformer, a model architecture eschewing recurrence and instead\nrelying entirely on an attention mechanism to draw global dependencies between input and output.\nThe Transformer allows for significantly more parallelization and can reach a new state of the art in\ntranslation quality after being trained for as little as twelve hours on eight P100 GPUs.\n2 Background\nThe goal of reducing sequential computation also forms the foundation of the Extended Neural GPU\n[16], ByteNet [ 18] and ConvS2S [ 9], all of which use convolutional neural networks as basic building\nblock, computing hidden representations in parallel for all input and output positions. In these models,\nthe number of operations required to relate signals from two arbitrary input or output positions grows\nin the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes\nit more difficult to learn dependencies between distant positions [ 12]. In the Transformer this is\nreduced to a constant number of operations, albeit at the cost of reduced effective resolution due\nto averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as\ndescribed in section 3.2.\nSelf-attention, sometimes called intra-attention is an attention mechanism relating different positions\nof a single sequence in order to compute a representation of the sequence. Self-attention has been\nused successfully in a variety of tasks including reading comprehension, abstractive summarization,\ntextual entailment and learning task-independent sentence representations [4, 27, 28, 22].\nEnd-to-end memory networks are based on a recurrent attention mechanism instead of sequence-\naligned recurrence and have been shown to perform well on simple-language question answering and\nlanguage modeling tasks [34].\nTo the best of our knowledge, however, the Transformer is the first transduction model relying\nentirely on self-attention to compute representations of its input and output without using sequence-\naligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate\nself-attention and discuss its advantages over models such as [17, 18] and [9].\n3 Model Architecture\nMost competitive neural sequence transduction models have an encoder-decoder structure [ 5,2,35].\nHere, the encoder maps an input sequence of symbol representations (x1, ..., x n)to a sequence\nof continuous representations z= (z1, ..., z n). Given z, the decoder then generates an output\nsequence (y1, ..., y m)of symbols one element at a time. At each step the model is auto-regressive\n[10], consuming the previously generated symbols as additional input when generating the next.\n2Figure 1: The Transformer - model architecture.\nThe Transformer follows this overall architecture using stacked self-attention and point-wise, fully\nconnected layers for both the encoder and decoder, shown in the left and right halves of Figure 1,\nrespectively.\n3.1 Encoder and Decoder Stacks\nEncoder: The encoder is composed of a stack of N= 6 identical layers. Each layer has two\nsub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-\nwise fully connected feed-forward network. We employ a residual connection [ 11] around each of\nthe two sub-layers, followed by layer normalization [ 1]. That is, the output of each sub-layer is\nLayerNorm( x+ Sublayer( x)), where Sublayer( x)is the function implemented by the sub-layer\nitself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding\nlayers, produce outputs of dimension dmodel = 512 .\nDecoder: The decoder is also composed of a stack of N= 6identical layers. In addition to the two\nsub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head\nattention over the output of the encoder stack. Similar to the encoder, we employ residual connections\naround each of the sub-layers, followed by layer normalization. We also modify the self-attention\nsub-layer in the decoder stack to prevent positions from attending to subsequent positions. This\nmasking, combined with fact that the output embeddings are offset by one position, ensures that the\npredictions for position ican depend only on the known outputs at positions less than i.\n3.2 Attention\nAn attention function can be described as mapping a query and a set of key-value pairs to an output,\nwhere the query, keys, values, and output are all vectors. The output is computed as a weighted sum\n3Scaled Dot-Product Attention\n Multi-Head Attention\nFigure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several\nattention layers running in parallel.\nof the values, where the weight assigned to each value is computed by a compatibility function of the\nquery with the corresponding key.\n3.2.1 Scaled Dot-Product Attention\nWe call our particular attention \"Scaled Dot-Product Attention\" (Figure 2). The input consists of\nqueries and keys of dimension dk, and values of dimension dv. We compute the dot products of the\nquery with all keys, divide each by√dk, and apply a softmax function to obtain the weights on the\nvalues.\nIn practice, we compute the attention function on a set of queries simultaneously, packed together\ninto a matrix Q. The keys and values are also packed together into matrices KandV. We compute\nthe matrix of outputs as:\nAttention( Q, K, V ) = softmax(QKT\n√dk)V (1)\nThe two most commonly used attention functions are additive attention [ 2], and dot-product (multi-\nplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor\nof1√dk. Additive attention computes the compatibility function using a feed-forward network with\na single hidden layer. While the two are similar in theoretical complexity, dot-product attention is\nmuch faster and more space-efficient in practice, since it can be implemented using highly optimized\nmatrix multiplication code.\nWhile for small values of dkthe two mechanisms perform similarly, additive attention outperforms\ndot product attention without scaling for larger values of dk[3]. We suspect that for large values of\ndk, the dot products grow large in magnitude, pushing the softmax function into regions where it has\nextremely small gradients4. To counteract this effect, we scale the dot products by1√dk.\n3.2.2 Multi-Head Attention\nInstead of performing a single attention function with dmodel-dimensional keys, values and queries,\nwe found it beneficial to linearly project the queries, keys and values htimes with different, learned\nlinear projections to dk,dkanddvdimensions, respectively. On each of these projected versions of\nqueries, keys and values we then perform the attention function in parallel, yielding dv-dimensional\n4To illustrate why the dot products get large, assume that the components of qandkare independent random\nvariables with mean 0and variance 1. Then their dot product, q·k=Pdk\ni=1qiki, has mean 0and variance dk.\n4output values. These are concatenated and once again projected, resulting in the final values, as\ndepicted in Figure 2.\nMulti-head attention allows the model to jointly attend to information from different representation\nsubspaces at different positions. With a single attention head, averaging inhibits this.\nMultiHead( Q, K, V ) = Concat(head 1, ...,head h)WO\nwhere head i= Attention( QWQ\ni, KWK\ni, V WV\ni)\nWhere the projections are parameter matrices WQ\ni∈Rdmodel×dk,WK\ni∈Rdmodel×dk,WV\ni∈Rdmodel×dv\nandWO∈Rhdv×dmodel.\nIn this work we employ h= 8 parallel attention layers, or heads. For each of these we use\ndk=dv=dmodel/h= 64 . Due to the reduced dimension of each head, the total computational cost\nis similar to that of single-head attention with full dimensionality.\n3.2.3 Applications of Attention in our Model\nThe Transformer uses multi-head attention in three different ways:\n•In \"encoder-decoder attention\" layers, the queries come from the previous decoder layer,\nand the memory keys and values come from the output of the encoder. This allows every\nposition in the decoder to attend over all positions in the input sequence. This mimics the\ntypical encoder-decoder attention mechanisms in sequence-to-sequence models such as\n[38, 2, 9].\n•The encoder contains self-attention layers. In a self-attention layer all of the keys, values\nand queries come from the same place, in this case, the output of the previous layer in the\nencoder. Each position in the encoder can attend to all positions in the previous layer of the\nencoder.\n•Similarly, self-attention layers in the decoder allow each position in the decoder to attend to\nall positions in the decoder up to and including that position. We need to prevent leftward\ninformation flow in the decoder to preserve the auto-regressive property. We implement this\ninside of scaled dot-product attention by masking out (setting to −∞) all values in the input\nof the softmax which correspond to illegal connections. See Figure 2.\n3.3 Position-wise Feed-Forward Networks\nIn addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully\nconnected feed-forward network, which is applied to each position separately and identically. This\nconsists of two linear transformations with a ReLU activation in between.\nFFN( x) = max(0 , xW 1+b1)W2+b2 (2)\nWhile the linear transformations are the same across different positions, they use different parameters\nfrom layer to layer. Another way of describing this is as two convolutions with kernel size 1.\nThe dimensionality of input and output is dmodel = 512 , and the inner-layer has dimensionality\ndff= 2048 .\n3.4 Embeddings and Softmax\nSimilarly to other sequence transduction models, we use learned embeddings to convert the input\ntokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transfor-\nmation and softmax function to convert the decoder output to predicted next-token probabilities. In\nour model, we share the same weight matrix between the two embedding layers and the pre-softmax\nlinear transformation, similar to [ 30]. In the embedding layers, we multiply those weights by√dmodel.\n5Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations\nfor different layer types. nis the sequence length, dis the representation dimension, kis the kernel\nsize of convolutions and rthe size of the neighborhood in restricted self-attention.\nLayer Type Complexity per Layer Sequential Maximum Path Length\nOperations\nSelf-Attention O(n2·d) O(1) O(1)\nRecurrent O(n·d2) O(n) O(n)\nConvolutional O(k·n·d2) O(1) O(logk(n))\nSelf-Attention (restricted) O(r·n·d) O(1) O(n/r)\n3.5 Positional Encoding\nSince our model contains no recurrence and no convolution, in order for the model to make use of the\norder of the sequence, we must inject some information about the relative or absolute position of the\ntokens in the sequence. To this end, we add \"positional encodings\" to the input embeddings at the\nbottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel\nas the embeddings, so that the two can be summed. There are many choices of positional encodings,\nlearned and fixed [9].\nIn this work, we use sine and cosine functions of different frequencies:\nPE(pos,2i)=sin(pos/100002i/d model)\nPE(pos,2i+1)=cos(pos/100002i/d model)\nwhere posis the position and iis the dimension. That is, each dimension of the positional encoding\ncorresponds to a sinusoid. The wavelengths form a geometric progression from 2πto10000 ·2π. We\nchose this function because we hypothesized it would allow the model to easily learn to attend by\nrelative positions, since for any fixed offset k,PEpos+kcan be represented as a linear function of\nPEpos.\nWe also experimented with using learned positional embeddings [ 9] instead, and found that the two\nversions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version\nbecause it may allow the model to extrapolate to sequence lengths longer than the ones encountered\nduring training.\n4 Why Self-Attention\nIn this section we compare various aspects of self-attention layers to the recurrent and convolu-\ntional layers commonly used for mapping one variable-length sequence of symbol representations\n(x1, ..., x n)to another sequence of equal length (z1, ..., z n), with xi, zi∈Rd, such as a hidden\nlayer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we\nconsider three desiderata.\nOne is the total computational complexity per layer. Another is the amount of computation that can\nbe parallelized, as measured by the minimum number of sequential operations required.\nThe third is the path length between long-range dependencies in the network. Learning long-range\ndependencies is a key challenge in many sequence transduction tasks. One key factor affecting the\nability to learn such dependencies is the length of the paths forward and backward signals have to\ntraverse in the network. The shorter these paths between any combination of positions in the input\nand output sequences, the easier it is to learn long-range dependencies [ 12]. Hence we also compare\nthe maximum path length between any two input and output positions in networks composed of the\ndifferent layer types.\nAs noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially\nexecuted operations, whereas a recurrent layer requires O(n)sequential operations. In terms of\ncomputational complexity, self-attention layers are faster than recurrent layers when the sequence\n6length nis smaller than the representation dimensionality d, which is most often the case with\nsentence representations used by state-of-the-art models in machine translations, such as word-piece\n[38] and byte-pair [ 31] representations. To improve computational performance for tasks involving\nvery long sequences, self-attention could be restricted to considering only a neighborhood of size rin\nthe input sequence centered around the respective output position. This would increase the maximum\npath length to O(n/r). We plan to investigate this approach further in future work.\nA single convolutional layer with kernel width k < n does not connect all pairs of input and output\npositions. Doing so requires a stack of O(n/k)convolutional layers in the case of contiguous kernels,\norO(logk(n))in the case of dilated convolutions [ 18], increasing the length of the longest paths\nbetween any two positions in the network. Convolutional layers are generally more expensive than\nrecurrent layers, by a factor of k. Separable convolutions [ 6], however, decrease the complexity\nconsiderably, to O(k·n·d+n·d2). Even with k=n, however, the complexity of a separable\nconvolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer,\nthe approach we take in our model.\nAs side benefit, self-attention could yield more interpretable models. We inspect attention distributions\nfrom our models and present and discuss examples in the appendix. Not only do individual attention\nheads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic\nand semantic structure of the sentences.\n5 Training\nThis section describes the training regime for our models.\n5.1 Training Data and Batching\nWe trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million\nsentence pairs. Sentences were encoded using byte-pair encoding [ 3], which has a shared source-\ntarget vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT\n2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece\nvocabulary [ 38]. Sentence pairs were batched together by approximate sequence length. Each training\nbatch contained a set of sentence pairs containing approximately 25000 source tokens and 25000\ntarget tokens.\n5.2 Hardware and Schedule\nWe trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using\nthe hyperparameters described throughout the paper, each training step took about 0.4 seconds. We\ntrained the base models for a total of 100,000 steps or 12 hours. For our big models,(described on the\nbottom line of table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps\n(3.5 days).\n5.3 Optimizer\nWe used the Adam optimizer [ 20] with β1= 0.9,β2= 0.98andϵ= 10−9. We varied the learning\nrate over the course of training, according to the formula:\nlrate =d−0.5\nmodel·min(step_num−0.5, step _num·warmup _steps−1.5) (3)\nThis corresponds to increasing the learning rate linearly for the first warmup _steps training steps,\nand decreasing it thereafter proportionally to the inverse square root of the step number. We used\nwarmup _steps = 4000 .\n5.4 Regularization\nWe employ three types of regularization during training:\n7Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the\nEnglish-to-German and English-to-French newstest2014 tests at a fraction of the training cost.\nModelBLEU Training Cost (FLOPs)\nEN-DE EN-FR EN-DE EN-FR\nByteNet [18] 23.75\nDeep-Att + PosUnk [39] 39.2 1.0·1020\nGNMT + RL [38] 24.6 39.92 2.3·10191.4·1020\nConvS2S [9] 25.16 40.46 9.6·10181.5·1020\nMoE [32] 26.03 40.56 2.0·10191.2·1020\nDeep-Att + PosUnk Ensemble [39] 40.4 8.0·1020\nGNMT + RL Ensemble [38] 26.30 41.16 1.8·10201.1·1021\nConvS2S Ensemble [9] 26.36 41.29 7.7·10191.2·1021\nTransformer (base model) 27.3 38.1 3.3·1018\nTransformer (big) 28.4 41.8 2.3·1019\nResidual Dropout We apply dropout [ 33] to the output of each sub-layer, before it is added to the\nsub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the\npositional encodings in both the encoder and decoder stacks. For the base model, we use a rate of\nPdrop= 0.1.\nLabel Smoothing During training, we employed label smoothing of value ϵls= 0.1[36]. This\nhurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.\n6 Results\n6.1 Machine Translation\nOn the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big)\nin Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0\nBLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is\nlisted in the bottom line of Table 3. Training took 3.5days on 8P100 GPUs. Even our base model\nsurpasses all previously published models and ensembles, at a fraction of the training cost of any of\nthe competitive models.\nOn the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0,\noutperforming all of the previously published single models, at less than 1/4the training cost of the\nprevious state-of-the-art model. The Transformer (big) model trained for English-to-French used\ndropout rate Pdrop= 0.1, instead of 0.3.\nFor the base models, we used a single model obtained by averaging the last 5 checkpoints, which\nwere written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We\nused beam search with a beam size of 4and length penalty α= 0.6[38]. These hyperparameters\nwere chosen after experimentation on the development set. We set the maximum output length during\ninference to input length + 50, but terminate early when possible [38].\nTable 2 summarizes our results and compares our translation quality and training costs to other model\narchitectures from the literature. We estimate the number of floating point operations used to train a\nmodel by multiplying the training time, the number of GPUs used, and an estimate of the sustained\nsingle-precision floating-point capacity of each GPU5.\n6.2 Model Variations\nTo evaluate the importance of different components of the Transformer, we varied our base model\nin different ways, measuring the change in performance on English-to-German translation on the\n5We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.\n8Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base\nmodel. All metrics are on the English-to-German translation development set, newstest2013. Listed\nperplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to\nper-word perplexities.\nN d model dff h d k dvPdrop ϵlstrain PPL BLEU params\nsteps (dev) (dev) ×106\nbase 6 512 2048 8 64 64 0.1 0.1 100K 4.92 25.8 65\n(A)1 512 512 5.29 24.9\n4 128 128 5.00 25.5\n16 32 32 4.91 25.8\n32 16 16 5.01 25.4\n(B)16 5.16 25.1 58\n32 5.01 25.4 60\n(C)2 6.11 23.7 36\n4 5.19 25.3 50\n8 4.88 25.5 80\n256 32 32 5.75 24.5 28\n1024 128 128 4.66 26.0 168\n1024 5.12 25.4 53\n4096 4.75 26.2 90\n(D)0.0 5.77 24.6\n0.2 4.95 25.5\n0.0 4.67 25.3\n0.2 5.47 25.7\n(E) positional embedding instead of sinusoids 4.92 25.7\nbig 6 1024 4096 16 0.3 300K 4.33 26.4 213\ndevelopment set, newstest2013. We used beam search as described in the previous section, but no\ncheckpoint averaging. We present these results in Table 3.\nIn Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions,\nkeeping the amount of computation constant, as described in Section 3.2.2. While single-head\nattention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.\nIn Table 3 rows (B), we observe that reducing the attention key size dkhurts model quality. This\nsuggests that determining compatibility is not easy and that a more sophisticated compatibility\nfunction than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected,\nbigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our\nsinusoidal positional encoding with learned positional embeddings [ 9], and observe nearly identical\nresults to the base model.\n6.3 English Constituency Parsing\nTo evaluate if the Transformer can generalize to other tasks we performed experiments on English\nconstituency parsing. This task presents specific challenges: the output is subject to strong structural\nconstraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence\nmodels have not been able to attain state-of-the-art results in small-data regimes [37].\nWe trained a 4-layer transformer with dmodel = 1024 on the Wall Street Journal (WSJ) portion of the\nPenn Treebank [ 25], about 40K training sentences. We also trained it in a semi-supervised setting,\nusing the larger high-confidence and BerkleyParser corpora from with approximately 17M sentences\n[37]. We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens\nfor the semi-supervised setting.\nWe performed only a small number of experiments to select the dropout, both attention and residual\n(section 5.4), learning rates and beam size on the Section 22 development set, all other parameters\nremained unchanged from the English-to-German base translation model. During inference, we\n9Table 4: The Transformer generalizes well to English constituency parsing (Results are on Section 23\nof WSJ)\nParser Training WSJ 23 F1\nVinyals & Kaiser el al. (2014) [37] WSJ only, discriminative 88.3\nPetrov et al. (2006) [29] WSJ only, discriminative 90.4\nZhu et al. (2013) [40] WSJ only, discriminative 90.4\nDyer et al. (2016) [8] WSJ only, discriminative 91.7\nTransformer (4 layers) WSJ only, discriminative 91.3\nZhu et al. (2013) [40] semi-supervised 91.3\nHuang & Harper (2009) [14] semi-supervised 91.3\nMcClosky et al. (2006) [26] semi-supervised 92.1\nVinyals & Kaiser el al. (2014) [37] semi-supervised 92.1\nTransformer (4 layers) semi-supervised 92.7\nLuong et al. (2015) [23] multi-task 93.0\nDyer et al. (2016) [8] generative 93.3\nincreased the maximum output length to input length + 300. We used a beam size of 21andα= 0.3\nfor both WSJ only and the semi-supervised setting.\nOur results in Table 4 show that despite the lack of task-specific tuning our model performs sur-\nprisingly well, yielding better results than all previously reported models with the exception of the\nRecurrent Neural Network Grammar [8].\nIn contrast to RNN sequence-to-sequence models [ 37], the Transformer outperforms the Berkeley-\nParser [29] even when training only on the WSJ training set of 40K sentences.\n7 Conclusion\nIn this work, we presented the Transformer, the first sequence transduction model based entirely on\nattention, replacing the recurrent layers most commonly used in encoder-decoder architectures with\nmulti-headed self-attention.\nFor translation tasks, the Transformer can be trained significantly faster than architectures based\non recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014\nEnglish-to-French translation tasks, we achieve a new state of the art. In the former task our best\nmodel outperforms even all previously reported ensembles.\nWe are excited about the future of attention-based models and plan to apply them to other tasks. We\nplan to extend the Transformer to problems involving input and output modalities other than text and\nto investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs\nsuch as images, audio and video. Making generation less sequential is another research goals of ours.\nThe code we used to train and evaluate our models is available at https://github.com/\ntensorflow/tensor2tensor .\nAcknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful\ncomments, corrections and inspiration.\nReferences\n[1]Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint\narXiv:1607.06450 , 2016.\n[2]Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly\nlearning to align and translate. CoRR , abs/1409.0473, 2014.\n[3]Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V . Le. Massive exploration of neural\nmachine translation architectures. CoRR , abs/1703.03906, 2017.\n[4]Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine\nreading. arXiv preprint arXiv:1601.06733 , 2016.\n10[5]Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk,\nand Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical\nmachine translation. CoRR , abs/1406.1078, 2014.\n[6]Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv\npreprint arXiv:1610.02357 , 2016.\n[7]Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation\nof gated recurrent neural networks on sequence modeling. CoRR , abs/1412.3555, 2014.\n[8]Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural\nnetwork grammars. In Proc. of NAACL , 2016.\n[9]Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolu-\ntional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2 , 2017.\n[10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint\narXiv:1308.0850 , 2013.\n[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for im-\nage recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern\nRecognition , pages 770–778, 2016.\n[12] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in\nrecurrent nets: the difficulty of learning long-term dependencies, 2001.\n[13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation ,\n9(8):1735–1780, 1997.\n[14] Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations\nacross languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural\nLanguage Processing , pages 832–841. ACL, August 2009.\n[15] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring\nthe limits of language modeling. arXiv preprint arXiv:1602.02410 , 2016.\n[16] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural\nInformation Processing Systems, (NIPS) , 2016.\n[17] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference\non Learning Representations (ICLR) , 2016.\n[18] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Ko-\nray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2 ,\n2017.\n[19] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks.\nInInternational Conference on Learning Representations , 2017.\n[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR , 2015.\n[21] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint\narXiv:1703.10722 , 2017.\n[22] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen\nZhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint\narXiv:1703.03130 , 2017.\n[23] Minh-Thang Luong, Quoc V . Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task\nsequence to sequence learning. arXiv preprint arXiv:1511.06114 , 2015.\n[24] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-\nbased neural machine translation. arXiv preprint arXiv:1508.04025 , 2015.\n11[25] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated\ncorpus of english: The penn treebank. Computational linguistics , 19(2):313–330, 1993.\n[26] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In\nProceedings of the Human Language Technology Conference of the NAACL, Main Conference ,\npages 152–159. ACL, June 2006.\n[27] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention\nmodel. In Empirical Methods in Natural Language Processing , 2016.\n[28] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive\nsummarization. arXiv preprint arXiv:1705.04304 , 2017.\n[29] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact,\nand interpretable tree annotation. In Proceedings of the 21st International Conference on\nComputational Linguistics and 44th Annual Meeting of the ACL , pages 433–440. ACL, July\n2006.\n[30] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv\npreprint arXiv:1608.05859 , 2016.\n[31] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words\nwith subword units. arXiv preprint arXiv:1508.07909 , 2015.\n[32] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,\nand Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts\nlayer. arXiv preprint arXiv:1701.06538 , 2017.\n[33] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi-\nnov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine\nLearning Research , 15(1):1929–1958, 2014.\n[34] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory\nnetworks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors,\nAdvances in Neural Information Processing Systems 28 , pages 2440–2448. Curran Associates,\nInc., 2015.\n[35] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural\nnetworks. In Advances in Neural Information Processing Systems , pages 3104–3112, 2014.\n[36] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.\nRethinking the inception architecture for computer vision. CoRR , abs/1512.00567, 2015.\n[37] Vinyals & Kaiser, Koo, Petrov, Sutskever, and Hinton. Grammar as a foreign language. In\nAdvances in Neural Information Processing Systems , 2015.\n[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang\nMacherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine\ntranslation system: Bridging the gap between human and machine translation. arXiv preprint\narXiv:1609.08144 , 2016.\n[39] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with\nfast-forward connections for neural machine translation. CoRR , abs/1606.04199, 2016.\n[40] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate\nshift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the ACL (Volume\n1: Long Papers) , pages 434–443. ACL, August 2013.\n12Attention Visualizations\nInput-Input Layer5\nIt\nis\nin\nthis\nspirit\nthat\na\nmajority\nof\nAmerican\ngovernments\nhave\npassed\nnew\nlaws\nsince\n2009\nmaking\nthe\nregistration\nor\nvoting\nprocess\nmore\ndifficult\n.\n\n\n\n\n\n\n\nIt\nis\nin\nthis\nspirit\nthat\na\nmajority\nof\nAmerican\ngovernments\nhave\npassed\nnew\nlaws\nsince\n2009\nmaking\nthe\nregistration\nor\nvoting\nprocess\nmore\ndifficult\n.\n\n\n\n\n\n\n\nFigure 3: An example of the attention mechanism following long-distance dependencies in the\nencoder self-attention in layer 5 of 6. Many of the attention heads attend to a distant dependency of\nthe verb ‘making’, completing the phrase ‘making...more difficult’. Attentions here shown only for\nthe word ‘making’. Different colors represent different heads. Best viewed in color.\n13Input-Input Layer5\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\n\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\n\nInput-Input Layer5\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\n\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\nFigure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution. Top:\nFull attentions for head 5. Bottom: Isolated attentions from just the word ‘its’ for attention heads 5\nand 6. Note that the attentions are very sharp for this word.\n14Input-Input Layer5\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\n\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\n\nInput-Input Layer5\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\n\nThe\nLaw\nwill\nnever\nbe\nperfect\n,\nbut\nits\napplication\nshould\nbe\njust\n-\nthis\nis\nwhat\nwe\nare\nmissing\n,\nin\nmy\nopinion\n.\n\nFigure 5: Many of the attention heads exhibit behaviour that seems related to the structure of the\nsentence. We give two such examples above, from two different heads from the encoder self-attention\nat layer 5 of 6. The heads clearly learned to perform different tasks.\n15" } ]