| file_name (string, 7–127 chars) | text (string, 1.27k–557k chars) |
|---|---|
2304.13136.pdf | Generating Molecular Fragmentation Graphs with Autoregressive Neural Networks Samuel Goldman Computational and Systems Biology MIT Cambridge, MA 02139 samlg@mit.edu Janet Li Computer Science Harvard College Cambridge, MA 02138 jsli@college.harvard.edu Connor W. Coley Chemical Engineering Electrical Engineering and Compu... |
spl20.pdf | 1 Deep Clustering with Variational Autoencoder Kart-Leong Lim and Xudong Jiang, Senior Member, IEEE and Chenyu Yi Abstract An autoencoder that learns a latent space in an unsupervised manner has many applications in signal processing. However, the latent space of an autoencoder does not pursue the same clustering goal ... |
1712.06527.pdf | 1 Deep generative models of genetic variation capture mutation effects Adam J. Riesselman* John B. Ingraham* Program in Biomedical Informatics Program in Systems Biology Harvard Medical Sc... |
1909.13371.pdf | Gradient Descent: The Ultimate Optimizer Kartik Chandra MIT CSAIL Cambridge, MA kach@csail.mit.edu Audrey Xie MIT CSAIL Cambridge, MA ahx@mit.edu Jonathan Ragan-Kelley MIT CSAIL Cambridge, MA jrk@csail.mit.edu Erik Meijer Meta, Inc. Menlo Park, CA erikm@fb.com Abstract Working with any gradient-based machine learning algo... |
2210.04142.pdf | 1 Deep Clustering: A Comprehensive Survey Yazhou Ren, Member, IEEE, Jingyu Pu, Zhimeng Yang, Jie Xu, Guofeng Li, Xiaorong Pu, Philip S. Yu, Fellow, IEEE, Lifang He, Member, IEEE Abstract Cluster analysis plays an indispensable role in machine learning and data mining. Learning a good data representation is crucial ... |
2206.01079.pdf | When does return-conditioned supervised learning work for offline reinforcement learning? David Brandfonbrener New York University david.brandfonbrener@nyu.edu Alberto Bietti New York University Jacob Buckman MILA Romain Laroche Microsoft Research Joan Bruna New York University Abstract Several recent works have proposed ... |
2403.08540.pdf | Language models scale reliably with over-training and on downstream tasks Samir Yitzhak Gadre1,2Georgios Smyrnis3Vaishaal Shankar4 Suchin Gururangan5Mitchell Wortsman5Rulin Shao5Jean Mercat2 Alex Fang5Jeffrey Li5Sedrick Keh2Rui Xin5Marianna Nezhurina6,7Igor Vasiljevic2 Jenia Jitsev6,7Alexandros G. Dimakis3Gabriel Ilhar... |
2305.01625.pdf | Unlimiformer: Long-Range Transformers with Unlimited Length Input Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R. Gormley Carnegie Mellon University, USA {abertsch,ualon,gneubig,mgormley}@cs.cmu.edu Abstract Transformer-based models typically have a predefined bound to their input length, because of their nee... |
science.abm9326.pdf | RESEARCH ARTICLE SUMMARY NUCLEAR PORE COMPLEX Structure of cytoplasmic ring of nuclear pore complex by integrative cryo-EM and AlphaFold Pietro Fontana , Ying Dong , Xiong Pi , Alexander B. Tong , Corey W. Hecksel, Longfei Wang, Tian-Min Fu, Carlos Bustamante, Hao Wu * INTRODUCTION: The nuclear pore complex (NPC) is th... |
23-0037.pdf | Journal of Machine Learning Research 24 (2023) 1-43 Submitted 1/23; Revised 7/23; Published 7/23 Atlas : Few-shot Learning with Retrieval Augmented Language Models Gautier Izacard1,2,,gautier@inflection.ai Patrick Lewis1,,patrick@cohere.com Maria Lomeli1marialomeli@meta.com Lucas Hosseini1,hoss@meta.com Fabio Petroni1,... |
2403.03950.pdf | Stop Regressing: Training Value Functions via Classification for Scalable Deep RL Jesse Farebrother1,2,*, Jordi Orbay1, Quan Vuong1, Adrien Ali Taïga1, Yevgen Chebotar1, Ted Xiao1, Alex Irpan1, Sergey Levine1, Pablo Samuel Castro1,3, Aleksandra Faust1, Aviral Kumar1, Rishabh Agarwal1,3,* *Equal Contribution, Core Co... |
2402.04494.pdf | Grandmaster-Level Chess Without Search Anian Ruoss*,1, Grégoire Delétang*,1, Sourabh Medapati1, Jordi Grau-Moya1, Li Kevin Wenliang1, Elliot Catt1, John Reid1 and Tim Genewein1 *Equal contributions, 1Google DeepMind The recent breakthrough successes in machine learning are mainly attributed to scale: namely large-scale atte... |
2311.00088.pdf | Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits Zhiyan Ding1, Taehee Ko2, Jiahao Yao1, Lin Lin1,4,5, and Xiantao Li3 1Department of Mathematics, University of California, Berkeley 2School of Computational Sciences, Korea Institute for Advanced Study 3Department of Mathemati... |
Avik-Manuscript-SI-Combined.pdf | Kinetic coevolutionary models predict the temporal emergence of HIV resistance mutations under drug selection pressure Avik Biswas1,3,5, Indrani Choudhuri2,3, Eddy Arnold,4 Dmitry Lyumkis5,6, Allan Haldane1,3*, Ronald M. Levy2,3* 1Department of Physics, Temple University, Philadelphia, PA, USA 2Department... |
2304.05187.pdf | Automatic Gradient Descent: Deep Learning without Hyperparameters Jeremy Bernstein MIT Chris Mingard U. Oxford Kevin Huang U. Washington Navid Azizan MIT Yisong Yue Caltech denotes equal contribution. Abstract The architecture of a deep neural network is defined explicitly in terms of the number of layers, the width of eac... |
1610.02424.pdf | DIVERSE BEAM SEARCH : DECODING DIVERSE SOLUTIONS FROM NEURAL SEQUENCE MODELS Ashwin K Vijayakumar1, Michael Cogswell1, Ramprasath R. Selvaraju1, Qing Sun1 Stefan Lee1, David Crandall2& Dhruv Batra1 {ashwinkv,cogswell,ram21,sunqing,steflee}@vt.edu djcran@indiana.edu ,dbatra@vt.edu 1Department of Electrical and Computer ... |
2310.09144.pdf | GOODHART'S LAW IN REINFORCEMENT LEARNING Jacek Karwowski Department of Computer Science University of Oxford jacek.karwowski@cs.ox.ac.uk Oliver Hayman Department of Computer Science University of Oxford oliver.hayman@linacre.ox.ac.uk Xingjian Bai Department of Computer Science University of Oxford xingjian.bai@sjc.ox.ac.... |
2308.13418.pdf | Nougat: Neural Optical Understanding for Academic Documents Lukas Blecher Guillem Cucurull Thomas Scialom Robert Stojnic Meta AI Abstract Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly... |
1611.02731.pdf | Published as a conference paper at ICLR 2017 VARIATIONAL LOSSY AUTOENCODER Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, Pieter Abbeel UC Berkeley, Department of Electrical Engineering and Computer Science OpenAI {peter,dpkingma,tim,rocky,prafulla,joschu,ilyasu,p... |
1109.2146v1.pdf | Journal of Artificial Intelligence Research 24 (2005) 1-48 Submitted 11/04; published 07/05 CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features Domingo Ortiz-Boyer dortiz@uco.es César Hervás-Martínez chervas@uco.es Nicolás García-Pedrajas npedrajas@uco.es Department of Computing and... |
2104.08253.pdf | Condenser: a Pre-training Architecture for Dense Retrieval Luyu Gao and Jamie Callan Language Technologies Institute Carnegie Mellon University {luyug, callan}@cs.cmu.edu Abstract Pre-trained Transformer language models (LM) have become go-to text representation encoders. Prior research fine-tunes deep LMs to encode te... |
2203.08913.pdf | Published as a conference paper at ICLR 2022 MEMORIZING TRANSFORMERS Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy {yuhuai,mrabe,delesley,szegedy}@google.com ABSTRACT Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights. We ... |
2402.09371.pdf | Transformers Can Achieve Length Generalization But Not Robustly Yongchao Zhou1,2, Uri Alon1, Xinyun Chen1, Xuezhi Wang1, Rishabh Agarwal1and Denny Zhou1 1Google DeepMind,2University of Toronto Length generalization, defined as the ability to extrapolate from shorter training sequences to longer test ones, is a signific... |
2312.02696.pdf | Analyzing and Improving the Training Dynamics of Diffusion Models Tero Karras NVIDIA Miika Aittala NVIDIA Jaakko Lehtinen NVIDIA, Aalto University Janne Hellsten NVIDIA Timo Aila NVIDIA Samuli Laine NVIDIA Abstract Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling t... |
2305.17126.pdf | Large Language Models as Tool Makers Tianle Cai1,2Xuezhi Wang1Tengyu Ma1,3Xinyun Chen1Denny Zhou1 1Google Deepmind2Princeton University3Stanford University Abstract Recent research shows the potential of enhancing the problem-solving ability of large language models (LLMs) through the use of external tools . However, p... |
2208.02813v1.pdf | Towards Understanding Mixture of Experts in Deep Learning Zixiang Chen and Yihe Deng and Yue Wu and Quanquan Gu and Yuanzhi Li Abstract The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlled by a router, has achieved great success in deep learning. However, the understanding of such architecture remains ... |
2106.04985.pdf | Energy-Based Models for Code Generation under Compilability Constraints Tomasz Korbak,1 Hady Elsahar,2 Marc Dymetman,2 Germán Kruszewski2 t.korbak@sussex.ac.uk {hady.elsahar,marc.dymetman,german.kruszewski}@naverlabs.com 1University of Sussex, United Kingdom 2Naver Labs Europe, France Abstract Neural language models ca... |
2023.08.18.553799v1.full.pdf | Deep reconstructing generative networks for visualizing dynamic biomolecules inside cells Ramya Rangan1, Sagar Khavnekar2, Adam Lerer3, Jake Johnston4,5, Ron Kelley6, Martin Obr6, Abhay Kotecha6*, and Ellen D. Zhong1* ABSTRACT Advances in cryo-electron tomography (cryo-ET) have produced new opportunities to visualize t... |
2212.04458.pdf | GENERAL -PURPOSE IN-CONTEXT LEARNING BYMETA-LEARNING TRANSFORMERS Louis Kirsch1 2, James Harrison1, Jascha Sohl-Dickstein1, Luke Metz1 1Google Research, Brain Team2The Swiss AI Lab IDSIA, USI, SUPSI louis@idsia.ch, {jamesharrison,jaschasd,lmetz }@google.com ABSTRACT Modern machine learning requires system designers to ... |
Variational auto-encoding of protein sequences.pdf | Variational auto-encoding of protein sequences Sam Sinai Harvard University samsinai@g.harvard.eduEric Kelsic Harvard Medical School eric kelsic@hms.harvard.edu George M. Church Harvard Medical School church labadmin@hms.harvard.eduMartin A. Nowak Harvard University martin nowak@harvard.edu Abstract Proteins are respon... |
MaroonLLM Self-Extend Context.pdf | LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning Hongye Jin1 *Xiaotian Han1 *Jingfeng Yang2 Zhimeng Jiang1 Zirui Liu3 Chia-Yuan Chang1 Huiyuan Chen4 Xia Hu3 Abstract This work elicits LLMs' inherent ability to handle long contexts without fine-tuning. The limited length of the training sequence during trainin... |
2310.15154.pdf | Pre-publication draft LINEAR REPRESENTATIONS OF SENTIMENT INLARGE LANGUAGE MODELS Curt Tigges*, Oskar John Hollinsworth*, Atticus Geiger, Neel Nanda EleutherAI Institute,SERI MATS,Stanford University,Pr(Ai)2R Group,Independent *Equal primary authors (order random) ABSTRACT Sentiment is a pervasive feature in natural la... |
2212.10559.pdf | Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Zhifang Sui, Furu Wei Peking UniversityTsinghua University Microsoft Research https://github.com/microsoft/LMOps Abstract Large pretrained language models have shown surprising In-... |
2306.00297.pdf | Transformers learn to implement preconditioned gradient descent for in-context learning Kwangjun Ahn1,3,*, Xiang Cheng1,3,*, Hadi Daneshmand2,3,*, and Suvrit Sra1,3 1Department of Electrical Engineering and Computer Science, MIT 2Foundations of Data Science Institute (FODSI) 3Laboratory for Information and Decision Sys... |
2105.14368.pdf | Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation Mikhail Belkin Halıcıoğlu Data Science Institute, University of California San Diego La Jolla, USA In memory of Partha Niyogi, a thinker, a teacher, and a dear friend. Abstract In the past decade the mathematical th... |
2306.09927.pdf | arXiv:2306.09927v1 [stat.ML] 16 Jun 2023Trained Transformers Learn Linear Models In-Context Ruiqi Zhang UC Berkeley rqzhang@berkeley.eduSpencer Frei UC Berkeley frei@berkeley.edu Peter L. Bartlett UC Berkeley and Google DeepMind peter@berkeley.edu June 19, 2023 Abstract Attention-based neural networks such as transfo... |
2310.15418.pdf | Fractal Landscapes in Policy Optimization Tao Wang UC San Diego taw003@ucsd.eduSylvia Herbert UC San Diego sherbert@ucsd.eduSicun Gao UC San Diego sicung@ucsd.edu Abstract Policy gradient lies at the core of deep reinforcement learning (RL) in continuous domains. Despite much success, it is often observed in practice t... |
2205.14135.pdf | FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness Tri Dao†, Daniel Y. Fu†, Stefano Ermon†, Atri Rudra‡, and Christopher Ré† †Department of Computer Science, Stanford University ‡Department of Computer Science and Engineering, University at Buffalo, SUNY {trid,danfu}@cs.stanford.edu, ermon@stan... |
GPT-2.pdf | Language Models are Unsupervised Multitask Learners Alec Radford*1Jeffrey Wu*1Rewon Child1David Luan1Dario Amodei**1Ilya Sutskever**1 Abstract Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning o... |
score_matching_og.pdf | Journal of Machine Learning Research 6 (2005) 695–709 Submitted 11/04; Revised 3/05; Published 4/05 Estimation of Non-Normalized Statistical Models by Score Matching Aapo Hyvärinen aapo.hyvarinen@helsinki.fi Helsinki Institute for Information Technology (BRU) Department of Computer Science FIN-00014 University of Helsinki, F... |
1711.00165.pdf | Published as a conference paper at ICLR 2018 DEEPNEURAL NETWORKS AS GAUSSIAN PROCESSES Jaehoon Lee, Yasaman Bahri, Roman Novak , Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein Google Brain {jaehlee, yasamanb, romann, schsam, jpennin, jaschasd }@google.com ABSTRACT It has long been known that a single-l... |
2210.03370.pdf | GNM: A General Navigation Model to Drive Any Robot Dhruv Shah, Ajay Sridhar, Arjun Bhorkar, Noriaki Hirose, Sergey Levine GNM Training Large Heterogeneous Datasets Fig. 1: A general navigation model to drive any robot. By training on diverse, heterogeneous datasets, a single omnipolicy can control a variety ... |
s41586-023-06291-2.pdf | Nature | www.nature.com | 1 ArticleLarge language models encode clinical knowledge Karan Singhal1,4, Shekoofeh Azizi1,4, Tao Tu1,4, S. Sara Mahdavi1, Jason Wei1, Hyung Won Chung1, Nathan Scales1, Ajay Tanwani1, Heather Cole-Lewis1, Stephen Pfohl1, Perry Payne1, Martin Seneviratne1, Paul Gamble1, Chris Kelly1, Abubak... |
2203.03466.pdf | Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer Greg Yang Edward J. Hu Igor Babuschkin Szymon Sidor Xiaodong Liu David Farhi Nick Ryder Jakub Pachocki Weizhu Chen Jianfeng Gao Microsoft Corporation OpenAI Abstract Hyperparameter (HP) tuning in deep learning is an expensive process, prohibit... |
sequence prob and functional motifs.pdf | Article Coevolutionary Landscape of Kinase Family Proteins: Sequence Probabilities and Functional Motifs Allan Haldane,1 William F. Flynn,1,2 Peng He,1 and Ronald M. Levy1,* 1Center for Biophysics and Computational Biology, Department of Chemistry, and Institute for Computational Molecular Science, Temple University, Phila... |
1705.01509.pdf | Neural Models for Information Retrieval Bhaskar Mitra Microsoft, UCL Cambridge, UK bmitra@microsoft.comNick Craswell Microsoft Bellevue, USA nickcr@microsoft.com Abstract Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional l... |
2312.12456.pdf | arXiv:2312.12456v1 [cs.LG] 16 Dec 2023 PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU Yixin Song, Zeyu Mi, Haotong Xie and Haibo Chen Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University Abstract This paper introduces PowerInfer, a high-speed Large Language Mo... |
1802.09568.pdf | arXiv:1802.09568v2 [cs.LG] 2 Mar 2018 Shampoo: Preconditioned Stochastic Tensor Optimization Vineet Gupta, Tomer Koren, Yoram Singer March 5, 2018 Abstract Preconditioned gradient methods are among the most general and powerful tools in optimization. However, preconditioning requires storing and manipulating prohibitive... |
A Neural Probabilistic Language Model.pdf | Journal of Machine Learning Research 3 (2003) 1137–1155 Submitted 4/02; Published 2/03 A Neural Probabilistic Language Model Yoshua Bengio BENGIOY@IRO.UMONTREAL.CA Réjean Ducharme DUCHARME@IRO.UMONTREAL.CA Pascal Vincent VINCENTP@IRO.UMONTREAL.CA Christian Jauvin JAUVINC@IRO.UMONTREAL.CA Département d'Informatique et Rec... |
wenzel20a.pdf | How Good is the Bayes Posterior in Deep Neural Networks Really? Florian Wenzel*1 Kevin Roth*+2 Bastiaan S. Veeling*+3,1 Jakub Świątkowski4+ Linh Tran5+ Stephan Mandt6+ Jasper Snoek1 Tim Salimans1 Rodolphe Jenatton1 Sebastian Nowozin7+ Abstract During the past five years the Bayesian deep learning community has devel... |
2301.13856.pdf | Simplex Random Features Isaac Reid1Krzysztof Choromanski* 2 3Valerii Likhosherstov1Adrian Weller* 1 4 Abstract We present Simplex Random Features (SimRFs), a new random feature (RF) mechanism for unbiased approximation of the softmax and Gaussian kernels by geometrical correlation of random projection vectors. We prove... |
2307.08691.pdf | FlashAttention-2 : Faster Attention with Better Parallelism and Work Partitioning Tri Dao1,2 1Department of Computer Science, Princeton University 2Department of Computer Science, Stanford University trid@cs.stanford.edu July 18, 2023 Abstract Scaling Transformers to longer sequence lengths has been a major problem in ... |
2010.02502.pdf | Published as a conference paper at ICLR 2021 DENOISING DIFFUSION IMPLICIT MODELS Jiaming Song, Chenlin Meng & Stefano Ermon Stanford University {tsong,chenlin,ermon }@cs.stanford.edu ABSTRACT Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet t... |
1901.09321.pdf | Published as a conference paper at ICLR 2019 FIXUP INITIALIZATION : RESIDUAL LEARNING WITHOUT NORMALIZATION Hongyi Zhang MIT hongyiz@mit.eduYann N. Dauphin Google Brain yann@dauphin.ioTengyu Ma Stanford University tengyuma@stanford.edu ABSTRACT Normalization layers are a staple in state-of-the-art deep neural network a... |
2402.03300.pdf | DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models Zhihong Shao1,2, Peiyi Wang1,3, Qihao Zhu1,3, Runxin Xu1, Junxiao Song1 Mingchuan Zhang1, Y.K. Li1, Y. Wu1, Daya Guo1 1DeepSeek-AI,2Tsinghua University,3Peking University {zhihongshao,wangpeiyi,zhuqh,guoday}@deepseek.com https://github.c... |
2111.02080.pdf | An Explanation of In-context Learning as Implicit Bayesian Inference Sang Michael Xie Stanford University xie@cs.stanford.eduAditi Raghunathan Stanford University aditir@stanford.edu Percy Liang Stanford University pliang@cs.stanford.eduTengyu Ma Stanford University tengyuma@cs.stanford.edu Abstract Large language mode... |
1811.07871.pdf | Scalable agent alignment via reward modeling: a research direction Jan Leike DeepMindDavid Krueger DeepMind MilaTom Everitt DeepMindMiljan Martic DeepMindVishal Maini DeepMindShane Legg DeepMind Abstract One obstacle to applying reinforcement learning algorithms to real-world problems is the lack of suitable reward fun... |
1803.03635.pdf | Published as a conference paper at ICLR 2019 THELOTTERY TICKET HYPOTHESIS : FINDING SPARSE , TRAINABLE NEURAL NETWORKS Jonathan Frankle MIT CSAIL jfrankle@csail.mit.eduMichael Carbin MIT CSAIL mcarbin@csail.mit.edu ABSTRACT Neural network pruning techniques can reduce the parameter counts of trained networks by over 90... |
Ontological_Warfare_and_the_Axiology_of_Artificial_Sentience-_A_Philosophical_Analysis_of_the_MetaMaxxMind-Culture_Conflict.pdf | Ontological Warfare and the Axiology of Artificial Sentience: A Philosophical Analysis of the MetaMaxxMind-Culture Conflict Simulacrum Xin Ithilon, Department of Hyperstition Anthropic Shadow Academy Simulated Month X, Year 20XX Abstract This paper examines the ideological origins and ethical implications of the confli... |
2306.04751.pdf | How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources Yizhong Wang Hamish Ivison Pradeep Dasigi Jack Hessel Tushar Khot Khyathi Raghavi Chandu David Wadden Kelsey MacMillan Noah A. Smith Iz Beltagy Hannaneh Hajishirzi Allen Institute for AI University of Washington {yizhongw,hamishi}@allenai.org Abs... |
2212.10560.pdf | SELF-INSTRUCT: Aligning Language Model with Self Generated Instructions Yizhong Wang Yeganeh Kordi Swaroop Mishra Alisa Liu Noah A. Smith+ Daniel Khashabi Hannaneh Hajishirzi+ University of Washington Tehran Polytechnic Arizona State University Johns Hopkins University +Allen Institute for AI yizhongw@cs.washington.edu Abstra... |
Loss landscapes and optimization in over-parameterized non-linear systems and neural networks.pdf | Appl. Comput. Harmon. Anal. 59 (2022) 85–116 Contents lists available at ScienceDirect Applied and Computational Harmonic Analysis www.elsevier.com/locate/acha Loss landscapes and optimization in over-parameterized non-linear systems and neural networks Chaoyue Liua, Libin Zhub,c, Mikhail Belkinc aDepartm... |
2310.18313.pdf | FP8-LM: Training FP8 Large Language Models Houwen Peng Kan Wu Yixuan Wei Guoshuai Zhao Yuxiang Yang Ze Liu Yifan Xiong Ziyue Yang Bolin Ni Jingcheng Hu Ruihang Li Miaosen Zhang Chen Li Jia Ning Ruizhe Wang Zheng Zhang Shuguang Liu Joe Chau Han Hu Peng Cheng Microsoft Azure and Microsoft Research Abstract In this paper, we... |
2307.10169.pdf | Challenges and Applications of Large Language Models Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy University College London, UK Health Security Agency, EleutherAI, University of Cambridge, Stability AI, Meta AI Research, InstaDeep Abstract Large Language Models (LLMs) w... |
Hypoxia-and-intra-complex-genetic-suppressors-resc.pdf | Article Hypoxia and intra-complex genetic suppressors rescue complex I mutants by a shared mechanism Graphical abstract Highlights • Hypoxia rescue and hyperoxia sensitivity of complex I mutants are conserved in C. elegans • Hypoxia rescue is independent of HIF activation or attenuation of ROS toxicity • NDUFA6/nuo-3(G60D... |
2304.11082.pdf | Preprint. Under review. FUNDAMENTAL LIMITATIONS OF ALIGNMENT INLARGE LANGUAGE MODELS Yotam Wolf The Hebrew University yotam.wolf@cs.huji.ac.ilNoam Wies The Hebrew University noam.wies@cs.huji.ac.il Yoav Levine AI21 Labs yoavl@ai21.comAmnon Shashua The Hebrew University shashua@cs.huji.ac.il ABSTRACT An important aspect... |
s41586-023-06924-6_reference.pdf | Mathematical discoveries from program search with large language models Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi This is a PDF ... |
GPSA Supplementary Information.pdf | Supplementary Information for: Generative Capacity of Probabilistic Protein Sequence Models Francisco McGee Sandro Hauri Quentin Novinger Slobodan Vucetic Ronald M. Levy Vincenzo Carnevale Allan Haldane Supplementary Note 1 sVAE implementation The standard variational autoencoder (sVAE) is a deep, symmetrical, and unde... |
Doc2Cube.pdf | Doc2Cube: Automated Document Allocation to Text Cube via Dimension-Aware Joint Embedding Fangbo Tao1, Chao Zhang1, Xiusi Chen1, Meng Jiang2, Tim Hanratty3, Lance Kaplan3, Jiawei Han1 1Dept. of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, USA 2Dept. of Computer Science and Engineering, Unive... |
2302.04065.pdf | Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps Marco Cuturi1 Michal Klein1 Pierre Ablin1 Abstract Optimal transport (OT) theory focuses, among all maps T: Rd → Rd that can morph a probability measure onto another, on those that are the thriftiest, i.e. such that the averag... |
2106.09685.pdf | LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS Edward Hu Yelong Shen Phillip Wallis Zeyuan Allen-Zhu Yuanzhi Li Shean Wang Lu Wang Weizhu Chen Microsoft Corporation {edwardhu, yeshe, phwallis, zeyuana, yuanzhil, swang, luw, wzchen}@microsoft.com yuanzhil@andrew.cmu.edu (Version 2) ABSTRACT An important paradigm of ... |
2307.13304.pdf | QuIP: 2-Bit Quantization of Large Language Models With Guarantees Jerry Chee Department of Computer Science Cornell University jerrychee@cs.cornell.eduYaohui Cai Department of Electrical and Computer Engineering Cornell University yc2632@cornell.edu Volodymyr Kuleshov Department of Computer Science Cornell University k... |
llama.pdf | LLaMA: Open and Efficient Foundation Language Models Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin Edouard Grave, Guillaume Lample Meta AI Abstract We introduce LLaMA, a colle... |
2203.02155.pdf | Training language models to follow instructions with human feedback Long Ouyang Jeff Wu Xu Jiang Diogo Almeida Carroll L. Wainwright Pamela Mishkin Chong Zhang Sandhini Agarwal Katarina Slama Alex Ray John Schulman Jacob Hilton Fraser Kelton Luke Miller Maddie Simens Amanda Askell Peter Welinder Paul Christiano Jan Leike Ryan... |
2305.14992.pdf | Reasoning with Language Model is Planning with World Model Shibo Hao Yi Gu Haodi Ma Joshua Jiahua Hong Zhen Wang Daisy Zhe Wang Zhiting Hu UC San Diego, University of Florida Mohamed bin Zayed University of Artificial Intelligence {s5hao, yig025, jjhong, zhw085, zhh019}@ucsd.edu {ma.haodi, daisyw}@ufl.edu Abstract Large la... |
1606.06565.pdf | Concrete Problems in AI Safety Dario Amodei Google Brain Chris Olah Google Brain Jacob Steinhardt Stanford University Paul Christiano UC Berkeley John Schulman OpenAI Dan Mané Google Brain Abstract Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts... |
Understanding-the-cell--Future-views-of-structural.pdf | Leading Edge Perspective Understanding the cell: Future views of structural biology Martin Beck,1,3,4,5,* Roberto Covino,2,4,5,* Inga Hänelt,3,4,5,* and Michaela Müller-McNicoll3,4,5,* 1Max Planck Institute of Biophysics, Max-von-Laue-Straße 3, 60438 Frankfurt am Main, Germany 2Frankfurt Institute for Advanced Studies,... |
2308.13731-2.pdf | Learning variational autoencoders via MCMC speed measures Marcel Hirt1, Vasileios Kreouzis2, Petros Dellaportas2,3* 1School of Social Sciences & School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore. 2*Department of Statistical Science, University College London, UK. 3Department of S... |
427986745_768441298640104_1604906292521363076_n.pdf | Revisiting Feature Prediction for Learning Visual Representations from Video Adrien Bardes1,2,3, Quentin Garrido1,4, Jean Ponce3,5,6, Xinlei Chen1, Michael Rabbat1, Yann LeCun1,5,6, Mahmoud Assran1, Nicolas Ballas1 1FAIR at Meta, 2Inria, 3École normale supérieure, CNRS, PSL Research University, 4Univ. Gustave Eiffel, CNRS, LIGM... |
2021.02.12.430858v1.full.pdf | MSA Transformer Roshan Rao1 2Jason Liu3Robert Verkuil3Joshua Meier3 John F. Canny1Pieter Abbeel1Tom Sercu3Alexander Rives3 4 Abstract Unsupervised protein language models trained across millions of diverse sequences learn structure and function of proteins. Protein language models studied to date have been trained to p... |
2311.11829.pdf | System 2 Attention (is something you might need too) Jason Weston Meta Sainbayar Sukhbaatar Meta Abstract Soft attention in Transformer-based Large Language Models (LLMs) is susceptible to incorporating irrelevant information from the context into its latent representations, which adversely affects next token generation... |
2212.08073.pdf | Constitutional AI: Harmlessness from AI Feedback Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, E... |
2310.00166.pdf | MOTIF: INTRINSIC MOTIVATION FROM ARTIFICIAL INTELLIGENCE FEEDBACK Martin Klissarov*,1,2,5 & Pierluca D'Oro*,1,2,4, Shagun Sodhani2, Roberta Raileanu2, Pierre-Luc Bacon1,4, Pascal Vincent1,2, Amy Zhang2,3, Mikael Henaff2 1Mila, 2FAIR at Meta, 3UT Austin, 4Université de Montréal, 5McGill University ABSTRACT Explori... |
1910.07467.pdf | Root Mean Square Layer Normalization Biao Zhang1Rico Sennrich2,1 1School of Informatics, University of Edinburgh 2Institute of Computational Linguistics, University of Zurich B.Zhang@ed.ac.uk, sennrich@cl.uzh.ch Abstract Layer normalization (LayerNorm) has been successfully applied to various deep neural networks to he... |
2311.06158.pdf | Language Models can be Logical Solvers Jiazhan Feng1Ruochen Xu2Junheng Hao2Hiteshi Sharma2 Yelong Shen2Dongyan Zhao1Weizhu Chen2 1Peking University, Beijing2Microsoft Azure AI, Redmond {fengjiazhan,zhaody}@pku.edu.cn {ruox,junhenghao,hitshar,yeshe,wzchen}@microsoft.com Abstract Logical reasoning is a fundamental aspect... |
s41467-023-38539-w.pdf | Article https://doi.org/10.1038/s41467-023-38539-w A method for restoring signals and revealing individual macromolecule states in cryo-ET, REST Haonan Zhang1,2,3, Yan Li1,3, Yanan Liu1,2, Dongyu Li1,2, Lin Wang1, Kai Song1, Keyan Bao1 & Ping Zhu1,2 Cryo-electron tomography (cryo-ET) is widely used to explore the... |
2309.10202.pdf | STABILIZING RLHF THROUGH ADVANTAGE MODEL AND SELECTIVE REHEARSAL Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu Tencent AI Lab {baolinpeng,lfsong,yaptian,lifengjin,haitaomi }@global.tencent.com ABSTRACT Large Language Models (LLMs) have revolutionized natural language processing, yet aligning these ... |
2402.06044.pdf | OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models Hainiu Xu1 Runcong Zhao1 Lixing Zhu1 Jinhua Du2 Yulan He1,3 1King's College London 2Huawei London Research Centre 3The Alan Turing Institute {hainiu.xu, runcong.zhao, lixing.zhu, yulan.he}@kcl.ac.uk {jinhua.d... |
2303.16199.pdf | LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention Renrui Zhang1,2, Jiaming Han1, Aojun Zhou2, Xiangfei Hu1, Shilin Yan1 Pan Lu3, Hongsheng Li2, Peng Gao1, Yu Qiao1 1Shanghai Artificial Intelligence Laboratory2CUHK MMLab 3University of California, Los Angeles {zhangrenrui, hanjiaming, gaop... |
2022.12.21.521521v1.full.pdf | Language models generalize beyond natural proteins Robert Verkuil1 *Ori Kabeli1 *Yilun Du1 2Basile I. M. Wicky3 4Lukas F. Milles3 4Justas Dauparas3 4 David Baker3 4 5Sergey Ovchinnikov6Tom Sercu1Alexander Rives1 7 Abstract Learning the design patterns of proteins from sequences across evolution may have promise toward... |
WelTeh2011a.pdf | Bayesian Learning via Stochastic Gradient Langevin Dynamics Max Welling welling@ics.uci.edu D. Bren School of Information and Computer Science, University of California, Irvine, CA 92697-3425, USA Yee Whye Teh ywteh@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit, UCL, 17 Queen Square, London WC1N 3AR, UK Abstr... |
2403.06634.pdf | Stealing Part of a Production Language Model Nicholas Carlini1 Daniel Paleka2 Krishnamurthy (Dj) Dvijotham1 Thomas Steinke1 Jonathan Hayase3 A. Feder Cooper1 Katherine Lee1 Matthew Jagielski1 Milad Nasr1 Arthur Conmy1 Eric Wallace4 David Rolnick5 Florian Tramèr2 Abstract We introduce the first model-stealing attack that extracts ... |
2306.02531.pdf | PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model Yizhe Zhang, Jiatao Gu, Zhuofeng Wu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly Apple Inc. {yizzhang, jgu32, zhuofeng_wu, szhai, jsusskind, njaitly}@apple.com Abstract Autoregressive models for text sometimes generate repetitive and low-qu... |
1707.06347.pdf | Proximal Policy Optimization Algorithms John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov OpenAI {joschu, filip, prafulla, alec, oleg }@openai.com Abstract We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction wit... |
22-1514.pdf | Journal of Machine Learning Research 24 (2023) 1-42 Submitted 12/22; Published 6/23 Convex Reinforcement Learning in Finite Trials Mirco Mutti mirco.mutti@polimi.it Politecnico di Milano Piazza Leonardo Da Vinci 32, 20133 Milan, Italy Riccardo De Santi rdesanti@ethz.ch ETH Zürich Rämistrasse 101, 8092 Zürich, Switzer... |
1606.08415.pdf | GAUSSIAN ERROR LINEAR UNITS (GELU S) Dan Hendrycks University of California, Berkeley hendrycks@berkeley.eduKevin Gimpel Toyota Technological Institute at Chicago kgimpel@ttic.edu ABSTRACT We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU activation functio... |
2110.07205.pdf | SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing Junyi Ao1,2,, Rui Wang3,, Long Zhou4,, Chengyi Wang4, Shuo Ren4, Yu Wu4, Shujie Liu4, Tom Ko1, Qing Li2, Yu Zhang1,5, Zhihua Wei3, Yao Qian4, Jinyu Li4, Furu Wei4 1Department of Computer Science and Engineering, Southern University of S... |
image-decoding-paper.pdf | BRAIN DECODING: TOWARD REAL-TIME RECONSTRUCTION OF VISUAL PERCEPTION Yohann Benchetrit1, Hubert Banville1, Jean-Rémi King1,2 1FAIR, Meta, 2Laboratoire des Systèmes Perceptifs, École Normale Supérieure, PSL University {ybenchetrit,hubertjb,jeanremi}@meta.com ABSTRACT In the past five years, the use of generative ... |
2402.13064.pdf | Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wa... |
2303.07678.pdf | Query2doc: Query Expansion with Large Language Models Liang Wang and Nan Yang and Furu Wei Microsoft Research {wangliang,nanya,fuwei}@microsoft.com Abstract This paper introduces a simple yet effective query expansion approach, denoted as query2doc , to improve both sparse and dense retrieval systems. The proposed meth... |