file_name | text |
|---|---|
2212.05339.pdf | Elixir: Train a Large Language Model on a Small GPU Cluster Haichen Huang HPC-AI Technology Inc. hhc@hpcaitech.com Jiarui Fang HPC-AI Technology Inc. fangjr@hpcaitech.com Hongxin Liu HPC-AI Technology Inc. liuhongxin@hpcaitech.com Shenggui Li HPC-AI Technology Inc. lisg@hpcaitech.com Yang You National University of Singap... |
2310.12442.pdf | Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer Qingru Zhang, Dhananjay Ram, Cole Hawkins, Sheng Zha, Tuo Zhao Georgia Institute of Technology, Amazon Web Service {qingru.zhang,tourzhao}@gatech.edu {radhna,colehawk,zhasheng}@amazon.com Abstract Pretrained transformer models ... |
1703.03400.pdf | Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks Chelsea Finn1 Pieter Abbeel1,2 Sergey Levine1 Abstract We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning probl... |
2211.03540.pdf | Measuring Progress on Scalable Oversight for Large Language Models Samuel R. Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamilė Lukošiūtė, Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Christopher Olah, Daniela Amodei, Dario Amodei, Dawn Drain, Dust... |
2310.05869.pdf | HyperAttention: Long-context Attention in Near-Linear Time Insu Han Yale University insu.han@yale.edu Rajesh Jayaram Google Research rkjayaram@google.com Amin Karbasi Yale University, Google Research amin.karbasi@yale.edu Vahab Mirrokni Google Research mirrokni@google.com David P. Woodruff CMU, Google Research dwoodruf@cs... |
2310.13548.pdf | TOWARDS UNDERSTANDING SYCOPHANCY IN LANGUAGE MODELS Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud Amanda Askell, Samuel R. Bowman, Newton Cheng, Esin Durmus, Zac Hatfield-Dodds, Scott R. Johnston, Shauna Kravec, Timothy Maxwell, Sam McCandlish, Kamal Ndousse, Oliver Rausch, Nicholas Schiefer, Da Yan, Miranda ... |
2005.11401.pdf | Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela Facebook AI Research; University College London; New York University; plew... |
2304.07313.pdf | M2T: Masking Transformers Twice for Faster Decoding Fabian Mentzer Google Research mentzer@google.com Eirikur Agustsson Google Research eirikur@google.com Michael Tschannen Google Research tschannen@google.com Abstract We show how bidirectional transformers trained for masked token prediction can be applied to neural ima... |
2305.09836.pdf | Revisiting the Minimalist Approach to Offline Reinforcement Learning Denis Tarasov Vladislav Kurenkov Alexander Nikulin Sergey Kolesnikov Tinkoff {den.tarasov, v.kurenkov, a.p.nikulin, s.s.kolesnikov}@tinkoff.ai Abstract Recent years have witnessed significant advancements in offline reinforcement learning (RL), result... |
sutskever10a.pdf | On the Convergence Properties of Contrastive Divergence Ilya Sutskever Tijmen Tieleman University of Toronto University of Toronto Abstract Contrastive Divergence (CD) is a popular method for estimating the parameters of Markov Random Fields (MRFs) by rapidly approximating an intractable term in the gradien... |
2306.02572.pdf | Les Houches Summer School Lecture Notes 2022 Preprint Introduction to Latent Variable Energy-Based Models: A Path Towards Autonomous Machine Intelligence Anna Dawid1,2 and Yann LeCun3,4 1ICFO - Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology, Av. Carl Friedrich Gauss 3, 08860 Castelldefels... |
2306.16922.pdf | THE EXPRESSIVE LEAKY MEMORY NEURON: AN EFFICIENT AND EXPRESSIVE PHENOMENOLOGICAL NEURON MODEL CAN SOLVE LONG-HORIZON TASKS Aaron Spieler1,2, Nasim Rahaman3,2, Georg Martius1,2, Bernhard Schölkopf2, and Anna Levina1,4 1University of Tübingen 2Max Planck Institute for Intelligent Systems, Tübingen 3Mila, Quebec AI Institute ... |
2004.04906.pdf | Dense Passage Retrieval for Open-Domain Question Answering Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih Facebook AI, University of Washington, Princeton University {vladk, barlaso, plewis, ledell, edunov, scottyih}@fb.com sewon@cs.washington.edu danqic@cs.pr... |
IN Tetramer manuscript merged with figures bioRxiv.pdf | 1 Oligomeric HIV-1 Integrase Structures Reveal Functional Plasticity for Intasome Assembly and RNA Binding Tao Jing1, Zelin Shan1, Tung Dinh4, Avik Biswas1, Sooin Jang5,6, Juliet Greenwood5, Min Li7, Zeyuan Zhang1, Gennavieve Gray1, Hye Jeong Shin1, Bo Zhou1, Dario Passos1, Sriram Aiyer1, Zhen Li5, Robert Craigie7, ... |
2206.02326.pdf | arXiv:2206.02326v1 [cs.LG] 6 Jun 2022 Asymptotic Instance-Optimal Algorithms for Interactive Decision Making Kefan Dong Stanford University kefandong@stanford.edu Tengyu Ma Stanford University tengyuma@stanford.edu June 7, 2022 Abstract Past research on interactive decision making problems (bandits, reinforcement lea... |
few_shot_clustering.pdf | Large Language Models Enable Few-Shot Clustering Vijay Viswanathan1, Kiril Gashteovski2, Carolin Lawrence2, Tongshuang Wu1, Graham Neubig1, 3 1Carnegie Mellon University,2NEC Laboratories Europe,3Inspired Cognition Abstract Unlike traditional unsupervised clustering, semi-supervised clustering allows users to provide m... |
2205.05131.pdf | UL2: Unifying Language Learning Paradigms Yi Tay Mostafa Dehghani Vinh Q. Tran Xavier Garcia Jason Wei Xuezhi Wang Hyung Won Chung Siamak Shakeri Dara Bahri Tal Schuster Huaixiu Steven Zheng Denny Zhou Neil Houlsby Donald Metzler Google Brain Abstract Existing pre-trained models are generally geared towards a particular class of... |
2304.01373.pdf | Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling Stella Biderman*1,2 Hailey Schoelkopf*1,3 Quentin Anthony1 Herbie Bradley1,4 Kyle O'Brien1 Eric Hallahan1 Mohammad Aflah Khan5 Shivanshu Purohit6,1 USVSN Sai Prashanth1 Edward Raff2 Aviya Skowron1 Lintang Sutawika1,7 Oskar van der Wal8 Abstract How... |
1-probabilistic protein sequence models.pdf | ARTICLE The generative capacity of probabilistic protein sequence models Francisco McGee1,2,3, Sandro Hauri4,5, Quentin Novinger2,5, Slobodan Vucetic4,5, Ronald M. Levy1,3,6,7, Vincenzo Carnevale2,3& Allan Haldane1,7 Potts models and variational autoencoders (VAEs) have recently gained popularity as generative protein ... |
2306.14846.pdf | ViNT: A Foundation Model for Visual Navigation Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine UC Berkeley Abstract: General-purpose pre-trained models (foundation models) have enabled practitioners to produce generalizable solutions for individual machine learning ... |
2207.08286.pdf | An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods William P Hogan Department of Computer Science & Engineering University of California, San Diego Abstract Relation Extraction (RE) is a foundational task of natural language processing. RE seeks to transform ra... |
2210.10760.pdf | Scaling Laws for Reward Model Overoptimization Leo Gao OpenAI John Schulman OpenAI Jacob Hilton OpenAI Abstract In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much... |
2306.01708.pdf | Resolving Interference When Merging Models Prateek Yadav1 Derek Tam1 Leshem Choshen2 Colin Raffel1 Mohit Bansal1 1University of North Carolina at Chapel Hill 2IBM Research leshem.choshen@il.ibm.com {praty,dtredsox,craffel,mbansal}@cs.unc.edu Abstract Transfer learning, i.e., further fine-tuning a pre-trained model on a dow... |
6593_contrastive_preference_learnin.pdf | Under review as a conference paper at ICLR 2024 CONTRASTIVE PREFERENCE LEARNING: LEARNING FROM HUMAN FEEDBACK WITHOUT RL Anonymous authors Paper under double-blind review ABSTRACT Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically RLHF a... |
2312.10003.pdf | REST MEETS REACT: SELF-IMPROVEMENT FOR MULTI-STEP REASONING LLM AGENT Renat Aksitov1, Sobhan Miryoosefi1, Zonglin Li1, Daliang Li1, Sheila Babayan2, Kavya Kopparapu2, Zachary Fisher1, Ruiqi Guo1, Sushant Prakash1, Pranesh Srinivasan3, Manzil Zaheer2, Felix Yu1, and Sanjiv Kumar1 1Google Research, 2Google DeepMind, 3Goo... |
2310.11564.pdf | PERSONALIZED SOUPS: PERSONALIZED LARGE LANGUAGE MODEL ALIGNMENT VIA POST-HOC PARAMETER MERGING Joel Jang1,2 Seungone Kim3 Bill Yuchen Lin2 Yizhong Wang1 Jack Hessel2 Luke Zettlemoyer1 Hannaneh Hajishirzi1,2 Yejin Choi1,2 Prithviraj Ammanabrolu4 1University of Washington 2Allen Institute for AI 3KAIST AI 4UC San Diego joeljang@c... |
2401.05300.pdf | I am a Strange Dataset: Metalinguistic Tests for Language Models Tristan Thrush, Jared Moore, Miguel Monares, Christopher Potts, Douwe Kiela Stanford University; UC San Diego; Playtest AI; Contextual AI tthrush@stanford.edu Abstract Statements involving metalinguistic self-reference (This paper has six sections.) are pr... |
Accurate transition state generation with an object-aware equivariant elementary reaction diffusion model.pdf | Accurate transition state generation with an object-aware equivariant elementary reaction diffusion model Chenru Duan1, 2, *, Yuanqi Du3, Haojun Jia1, 2, and Heather J. Kulik1, 2 1Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA, 02139 2Department of Chemical Engineering, Massachusetts Inst... |
2306.00238.pdf | Bytes Are All You Need: Transformers Operating Directly On File Bytes Maxwell Horton, Sachin Mehta, Ali Farhadi, Mohammad Rastegari Apple Abstract Modern deep learning approaches usually transform inputs into a modality-specific form. For example, the most common deep learning approach to image classification involves ... |
2305.12387.pdf | Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model Alexander Tyurin KAUST Saudi Arabia alexandertiurin@gmail.com Peter Richtárik KAUST Saudi Arabia richtarik@gmail.com Abstract Parallelization is a popular strategy for improving the performance of iterative algorithms.... |
2104.08821.pdf | SimCSE: Simple Contrastive Learning of Sentence Embeddings Tianyu Gao Xingcheng Yao Danqi Chen Department of Computer Science, Princeton University Institute for Interdisciplinary Information Sciences, Tsinghua University {tianyug,danqic}@cs.princeton.edu yxc18@mails.tsinghua.edu.cn Abstract This paper presents SimCSE, a... |
1912.02292.pdf | DEEP DOUBLE DESCENT: WHERE BIGGER MODELS AND MORE DATA HURT Preetum Nakkiran Harvard University Gal Kaplun Harvard University Yamini Bansal Harvard University Tristan Yang Harvard University Boaz Barak Harvard University Ilya Sutskever OpenAI ABSTRACT We show that a variety of modern deep learning tasks exhibit a double-de... |
2401.10241.pdf | ZERO BUBBLE PIPELINE PARALLELISM Penghui Qi, Xinyi Wan, Guangxing Huang & Min Lin Sea AI Lab {qiph,wanxy,huanggx,linmin }@sea.com ABSTRACT Pipeline parallelism is one of the key components for large-scale distributed training, yet its efficiency suffers from pipeline bubbles which were deemed inevitable. In this work, ... |
2304.09871.pdf | A Theory on Adam Instability in Large-Scale Machine Learning Igor Molybog, Peter Albert, Moya Chen, Zachary DeVito, David Esiobu, Naman Goyal, Punit Singh Koura, Sharan Narang, Andrew Poulton, Ruan Silva, Binh Tang, Diana Liskovich, Puxin Xu, Yuchen Zhang, Melanie Kambadur, Stephen Roller, Susan Zhang Meta AI April 26,... |
1909.05215.pdf | Published as a conference paper at ICLR 2020 RECONSTRUCTING CONTINUOUS DISTRIBUTIONS OF 3D PROTEIN STRUCTURE FROM CRYO-EM IMAGES Ellen D. Zhong MIT zhonge@mit.edu Tristan Bepler MIT tbepler@mit.edu Joseph H. Davis MIT jhdavis@mit.edu Bonnie Berger MIT bab@mit.edu ABSTRACT Cryo-electron microscopy (cryo-EM) is a powerful ... |
2402.08609.pdf | 2024-2-14 Mixtures of Experts Unlock Parameter Scaling for Deep RL Johan Obando-Ceron*,1,2,3, Ghada Sokar*,1, Timon Willi*,4, Clare Lyle1, Jesse Farebrother1,2,5, Jakob Foerster4, Gintare Karolina Dziugaite1,2,5, Doina Precup1,2,5 and Pablo Samuel Castro1,2,3 *Equal contributions, 1Google DeepMind, 2Mila Québec AI Institut... |
2306.17563.pdf | arXiv:2306.17563v1 [cs.IR] 30 Jun 2023 Preprint LARGE LANGUAGE MODELS ARE EFFECTIVE TEXT RANKERS WITH PAIRWISE RANKING PROMPTING Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky Google Research {zhenqin,jagerman,kaihuibj,... |
2310.07096.pdf | Sparse Universal Transformer Shawn Tan1* tanjings@mila.quebec Yikang Shen2* yikang.shen@ibm.com Zhenfang Chen2 zfchen@ibm.com Aaron Courville1 courvila@iro.umontreal.ca Chuang Gan2 chuangg@ibm.com 1Mila, University of Montreal 2MIT-IBM Watson AI Lab Abstract The Universal Transformer (UT) is a variant of the Transformer ... |
1502.05767.pdf | arXiv:1502.05767v4 [cs.SC] 5 Feb 2018 Automatic Differentiation in Machine Learning: a Survey Atılım Güneş Baydin gunes@robots.ox.ac.uk Department of Engineering Science University of Oxford Oxford OX1 3PJ, United Kingdom Barak A. Pearlmutter barak@pearlmutter.net Department of Computer Science National University of ... |
1601.00670.pdf | Variational Inference: A Review for Statisticians David M. Blei Department of Computer Science and Statistics Columbia University Alp Kucukelbir Department of Computer Science Columbia University Jon D. McAuliffe Department of Statistics University of California, Berkeley May 11, 2018 Abstract One of the core problems ... |
2310.17722.pdf | LARGE LANGUAGE MODELS AS GENERALIZABLE POLICIES FOR EMBODIED TASKS Andrew Szot, Max Schwarzer, Harsh Agrawal, Bogdan Mazoure, Walter Talbott Katherine Metcalf, Natalie Mackraz, Devon Hjelm, Alexander Toshev Apple ABSTRACT We show that large language models (LLMs) can be adapted to be generalizable policies for embodied... |
2024.02.06.579080.full.pdf | Direct Coupling Analysis and the Attention Mechanism Francesco Caredda1 and Andrea Pagnani1,2,3 1DISAT, Politecnico di Torino, Corso Duca degli Abruzzi, 24, I-10129, Torino, Italy 2Italian Institute for Genomic Medicine, IRCCS Candiolo, SP-142, I-10060, Candiolo, Italy 3INFN, Sezione di Torino, Torino, Via Pie... |
1509.02971.pdf | Published as a conference paper at ICLR 2016 CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver & Daan Wierstra Google Deepmind London, UK {countzero, jjhunt, apritzel, heess, etom, tassa, davidsilver, wierstr... |
2305.14314.pdf | QLORA: Efficient Finetuning of Quantized LLMs Tim Dettmers Artidoro Pagnoni Ari Holtzman Luke Zettlemoyer University of Washington {dettmers,artidoro,ahai,lsz}@cs.washington.edu Abstract We present QLORA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB... |
2312.16682.pdf | Some things are more CRINGE than others: Preference Optimization with the Pairwise Cringe Loss Jing Xu1 Andrew Lee1 Sainbayar Sukhbaatar1 Jason Weston1 Abstract Practitioners commonly align large language models using pairwise preferences, i.e., given labels of the type response A is preferred to response B for a given in... |
2005.14165.pdf | Language Models are Few-Shot Learners Tom B. Brown Benjamin Mann Nick Ryder Melanie Subbiah Jared Kaplan Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell Sandhini Agarwal Ariel Herbert-Voss Gretchen Krueger Tom Henighan Rewon Child Aditya Ramesh Daniel M. Ziegler Jeffrey Wu Clemens Winter Chris... |
2304.14767.pdf | Dissecting Recall of Factual Associations in Auto-Regressive Language Models Mor Geva1 Jasmijn Bastings1 Katja Filippova1 Amir Globerson2,3 1Google DeepMind 2Tel Aviv University 3Google Research {pipek, bastings, katjaf, amirg}@google.com Abstract Transformer-based language models (LMs) are known to capture factual knowledg... |
2307.00524.pdf | Large Language Models Enable Few-Shot Clustering Vijay Viswanathan1, Kiril Gashteovski2, Carolin Lawrence2, Tongshuang Wu1, Graham Neubig1, 3 1Carnegie Mellon University,2NEC Laboratories Europe,3Inspired Cognition Abstract Unlike traditional unsupervised clustering, semi-supervised clustering allows users to provide m... |
2305.16264.pdf | Scaling Data-Constrained Language Models Niklas Muennighoff1 Alexander M. Rush1 Boaz Barak2 Teven Le Scao1 Aleksandra Piktus1 Nouamane Tazi1 Sampo Pyysalo3 Thomas Wolf1 Colin Raffel1 1Hugging Face 2Harvard University 3University of Turku n.muennighoff@gmail.com Abstract The current trend of scaling language models involves incr... |
103-112.pdf | Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2010) M. Otaduy and Z. Popović (Editors) A Bayesian Interactive Optimization Approach to Procedural Animation Design Eric Brochu Tyson Brochu Nando de Freitas University of British Columbia Abstract The computer graphics and animation fields are filled with ap... |
2303.01469.pdf | Consistency Models Yang Song1 Prafulla Dhariwal1 Mark Chen1 Ilya Sutskever1 Abstract Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a ne... |
2402.08797.pdf | Computing Power and the Governance of Artificial Intelligence Girish Sastry,1 Lennart Heim,2 Haydn Belfield,3 Markus Anderljung,2 Miles Brundage,1 Julian Hazell,2,4 Cullen O'Keefe,1,5 Gillian K. Hadfield,6,7 Richard Ngo,1 Konstantin Pilz,8 George Gor,9 Emma Bluemke,2 Sarah Shoker,1 Janet Egan,10 Robert F. Trager,11 Shahar Avin,12... |
2005.04613.pdf | arXiv:2005.04613v1 [cs.CV] 10 May 2020 Variational Clustering: Leveraging Variational Autoencoders for Image Clustering Vignesh Prasad* TU Darmstadt Germany vignesh.prasad@tu-darmstadt.de Dipanjan Das* Embedded Systems and Robotics TCS Innovation Labs, Kolkata, India dipanjan.da@tcs.com Brojeshwar Bhowmick Embedded Sys... |
2207.05221.pdf | Language Models (Mostly) Know What They Know Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav F... |
2310.17680.pdf | CODEFUSION: A Pre-trained Diffusion Model for Code Generation Mukul Singh Microsoft Delhi, India José Cambronero Sumit Gulwani Vu Le Microsoft Redmond, US Carina Negreanu Microsoft Research Cambridge, UK Gust Verbruggen Microsoft Keerbergen, Belgium Abstract Imagine a developer who can only change their last line of codeh... |
2022.12.21.521526v1.full.pdf | A high-level programming language for generative protein design Brian Hie1,2,* Salvatore Candido1,* Zeming Lin1,3 Ori Kabeli1 Roshan Rao1 Nikita Smetanin1 Tom Sercu1 Alexander Rives1,4 Abstract Combining a basic set of building blocks into more complex forms is a universal design principle. Most protein designs have proceede... |
2209.15634v1.pdf | A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning Zixiang Chen Chris Junchi Li Angela Yuan Quanquan Gu Michael I. Jordan, Department of Computer Sciences, University of California, Los Angeles Department of Electrical Engineering and Computer Sciences, University of California, Berke... |
The generative capacity of probabilistic protein sequence models.pdf | ARTICLE The generative capacity of probabilistic protein sequence models Francisco McGee1,2,3, Sandro Hauri4,5, Quentin Novinger2,5, Slobodan Vucetic4,5, Ronald M. Levy1,3,6,7, Vincenzo Carnevale2,3& Allan Haldane1,7 Potts models and variational autoencoders (VAEs) have recently gained popularity as generative protein ... |
2002.05227.pdf | Variational Autoencoders with Riemannian Brownian Motion Priors Dimitris Kalatzis1 David Eklund2 Georgios Arvanitidis3 Søren Hauberg1 Abstract Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the commo... |
flamholz2024large.pdf | Nature Microbiology nature microbiologyhttps://doi.org/10.1038/s41564-023-01584-8 Analysis Large language models improve annotation of prokaryotic viral proteins Zachary N. Flamholz 1, Steven J. Biller 2 & Libusha Kelly 1,3 Viral genomes are poorly annotated in metagenomic samples, representing an obstacle to und... |
121_Testing_Manifold.pdf | JOURNAL OF THE AMERICAN MATHEMATICAL SOCIETY Volume 29, Number 4, October 2016, Pages 983-1049 http://dx.doi.org/10.1090/jams/852 Article electronically published on February 9, 2016 TESTING THE MANIFOLD HYPOTHESIS CHARLES FEFFERMAN, SANJOY MITTER, AND HARIHARAN NARAYANAN Contents 1. Introduction 984 1.1. Definitions 988... |
optq.pdf | Published as a conference paper at ICLR 2023 OPTQ: ACCURATE POST-TRAINING QUANTIZATION FOR GENERATIVE PRE-TRAINED TRANSFORMERS Elias Frantar IST Austria Saleh Ashkboos ETH Zurich Torsten Hoefler ETH Zurich Dan Alistarh IST Austria & NeuralMagic ABSTRACT Generative Pre-trained Transformer models, known as GPT or OPT, set ... |
2004.10188.pdf | Journal of Machine Learning Research 21 (2020) 1-41 Submitted 4/20; Revised 10/20; Published 11/20 Residual Energy-Based Models for Text Anton Bakhtin Yuntian Deng Sam Gross Myle Ott Marc'Aurelio Ranzato Arthur Szlam {yolo,sgross,myleott,ranzato,aszlam}@fb.com dengyuntian@seas.harvard.edu Facebook AI Research, Harvard Univers... |
2309.03409.pdf | LARGE LANGUAGE MODELS AS OPTIMIZERS Chengrun Yang* Xuezhi Wang Yifeng Lu Hanxiao Liu Quoc V. Le Denny Zhou Xinyun Chen* {chengrun, xuezhiw, yifenglu}@google.com, 6.hanxiao@gmail.com {qvl, dennyzhou, xinyunchen}@google.com Google DeepMind *Equal contribution ABSTRACT Optimization is ubiquitous. While derivative-based alg... |
1810.08575.pdf | Supervising strong learners by amplifying weak experts Paul Christiano OpenAI paul@openai.comBuck Shlegeris bshlegeris@gmail.comDario Amodei OpenAI damodei@openai.com Abstract Many real world learning tasks involve complex or hard-to-specify objectives, and using an easier-to-specify proxy can lead to poor performance ... |
jukebox.pdf | Jukebox: A Generative Model for Music Prafulla Dhariwal*1 Heewoo Jun*1 Christine Payne*1 Jong Wook Kim1 Alec Radford1 Ilya Sutskever1 Abstract We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discr... |
2402.06627.pdf | Feedback Loops With Language Models Drive In-Context Reward Hacking Alexander Pan UC Berkeley aypan.17@berkeley.eduErik Jones UC Berkeley erjones@berkeley.eduMeena Jagadeesan UC Berkeley mjagadeesan@berkeley.edu Jacob Steinhardt UC Berkeley jsteinhardt@berkeley.edu Abstract Language models influence the external world:... |
2308.13731.pdf | Learning variational autoencoders via MCMC speed measures Marcel Hirt1, Vasileios Kreouzis2, Petros Dellaportas2,3* 1School of Social Sciences & School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore. 2*Department of Statistical Science, University College London, UK. 3Department of S... |
2311.11045.pdf | Orca 2: Teaching Small Language Models How to Reason Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas Clarisse Simoes, Sahaj Agrawal, Xuxi Chen, Anastasia Razdaibiedina Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng Corby Rosset, Hamed Khanpour, Ahmed Awadallah Microsoft Research Abstract Orca 1... |
2310.02304.pdf | SELF-TAUGHT OPTIMIZER (STOP): RECURSIVELY SELF-IMPROVING CODE GENERATION Eric Zelikman1,2, Eliana Lorch, Lester Mackey1, Adam Tauman Kalai1 1Microsoft Research, 2Stanford University ABSTRACT Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a sc... |
2022.05.17.492325v2.full.pdf | Inferring Neural Activity Before Plasticity: A Foundation for Learning Beyond Backpropagation Yuhang Song1,2,*, Beren Millidge2, Tommaso Salvatori1, Thomas Lukasiewicz1,*, Zhenghua Xu1,3, and Rafal Bogacz2,* 1Department of Computer Science, University of Oxford, Oxford, United Kingdom 2Medical Research Council... |
2211.15661.pdf | Published as a conference paper at ICLR 2023 WHAT LEARNING ALGORITHM IS IN-CONTEXT LEARNING? INVESTIGATIONS WITH LINEAR MODELS Ekin Akyürek1,2 Dale Schuurmans1 Jacob Andreas2 Tengyu Ma1,3 Denny Zhou1 1Google Research 2MIT CSAIL 3Stanford University (collaborative advising) ABSTRACT Neural sequence models, especially tra... |
2303.04671.pdf | Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models Chenfei Wu Shengming Yin Weizhen Qi Xiaodong Wang Zecheng Tang Nan Duan* Microsoft Research Asia {chewu, v-sheyin, t-weizhenqi, v-xiaodwang, v-zetang, nanduan }@microsoft.com Abstract ChatGPT is attracting a cross-field interest as it provides a... |
2310.03026.pdf | Preprint LANGUAGE MPC: LARGE LANGUAGE MODELS AS DECISION MAKERS FOR AUTONOMOUS DRIVING Hao Sha1, Yao Mu2, Yuxuan Jiang1, Guojian Zhan1, Li Chen2, Chenfeng Xu3, Ping Luo2, Shengbo Eben Li1, Masayoshi Tomizuka3, Wei Zhan3, and Mingyu Ding3, 1Tsinghua University 2The University of Hong Kong 3University of California, Ber... |
2306.04050.pdf | arXiv:2306.04050v2 [cs.IT] 26 Jun 2023 LLMZip: Lossless Text Compression using Large Language Models Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Dileep Kalathil, Jean-Francois Chamberland, Srinivas Shakkottai Department of Electrical and Computer Engineering Texas A&M University Email:{vcskaushik9,krn,di... |
stochastic backprop and approximate inference.pdf | Stochastic Backpropagation and Approximate Inference in Deep Generative Models Danilo J. Rezende, Shakir Mohamed, Daan Wierstra {danilor, shakir, daanw }@google.com Google DeepMind, London Abstract We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directe... |
2312.00752.pdf | Mamba: Linear-Time Sequence Modeling with Selective State Spaces Albert Gu *1and Tri Dao *2 1Machine Learning Department, Carnegie Mellon University 2Department of Computer Science, Princeton University agu@cs.cmu.edu ,tri@tridao.me Abstract Foundation models, now powering most of the exciting applications in deep lear... |
s41467-021-25756-4.pdf | ARTICLE Efficient generative modeling of protein sequences using simple autoregressive models Jeanne Trinquier1,2, Guido Uguzzoni3,4, Andrea Pagnani3,4,5, Francesco Zamponi2& Martin Weigt1 Generative models emerge as promising candidates for novel sequence-data driven approaches to protein design, and for the extractio... |
2310.14189.pdf | IMPROVED TECHNIQUES FOR TRAINING CONSISTENCY MODELS Yang Song & Prafulla Dhariwal OpenAI {songyang,prafulla}@openai.com ABSTRACT Consistency models are a nascent family of generative models that can sample high quality data in one step without the need for adversarial training. Current consistency models achieve optima... |
2309.05858.pdf | Preprint UNCOVERING MESA-OPTIMIZATION ALGORITHMS IN TRANSFORMERS Johannes von Oswald,1 ETH Zürich & Google Research Eyvind Niklasson Google Research Maximilian Schlegel ETH Zürich Seijin Kobayashi ETH Zürich Nicolas Zucchet ETH Zürich Nino Scherrer Independent Researcher Nolan Miller Google Research Mark Sandler Google Research... |
1711.00937.pdf | Neural Discrete Representation Learning Aaron van den Oord DeepMind avdnoord@google.comOriol Vinyals DeepMind vinyals@google.comKoray Kavukcuoglu DeepMind korayk@google.com Abstract Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet po... |
1806.07572.pdf | Neural Tangent Kernel: Convergence and Generalization in Neural Networks Arthur Jacot École Polytechnique Fédérale de Lausanne arthur.jacot@netopera.net Franck Gabriel Imperial College London and École Polytechnique Fédérale de Lausanne franckrgabriel@gmail.com Clément Hongler École Polytechnique Fédérale de Lausann... |
riemann.pdf | A Selbergian Approach to the Riemann Hypothesis via Mochizuki's Inter-universal Teichmüller Theory HOLOQ March 25, 2024 Abstract We present a novel approach to proving the Riemann Hypothesis by exploiting deep connections between the Selberg trace formula, the Euler-Riemann-Siegel theta function, and Mochizuki's interuni... |
2010.11929.pdf | Published as a conference paper at ICLR 2021 AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Ho... |
2305.14699.pdf | Can Transformers Learn to Solve Problems Recursively? Shizhuo Dylan Zhang1 Curt Tigges2 Stella Biderman2,3 Maxim Raginsky1 Talia Ringer1 1University of Illinois Urbana-Champaign 2EleutherAI 3Booz Allen Hamilton {shizhuo2,maxim,tringer}@illinois.edu {curt,stella}@eleuther.ai Abstract Neural networks have in recent years shown... |
1803.03635v5.pdf | Published as a conference paper at ICLR 2019 THE LOTTERY TICKET HYPOTHESIS: FINDING SPARSE, TRAINABLE NEURAL NETWORKS Jonathan Frankle MIT CSAIL jfrankle@csail.mit.edu Michael Carbin MIT CSAIL mcarbin@csail.mit.edu ABSTRACT Neural network pruning techniques can reduce the parameter counts of trained networks by over 90... |
1504.01896.pdf | The Metropolis-Hastings algorithm C.P. Robert1,2,3 1Université Paris-Dauphine, 2University of Warwick, and 3CREST Abstract. This article is a self-contained introduction to the Metropolis-Hastings algorithm, this ubiquitous tool for producing dependent simulations from an arbitrary distribution. The document illustrates t... |
2402.12479.pdf | 2024-2-21 In deep reinforcement learning, a pruned network is a good network Johan Obando-Ceron1,2,3, Aaron Courville2,3 and Pablo Samuel Castro1,2,3 1Google DeepMind, 2Mila Québec AI Institute, 3Université de Montréal Recent work has shown that deep reinforcement learning agents have difficulty in effectively using their ne... |
2203.11171.pdf | Published as a conference paper at ICLR 2023 SELF-CONSISTENCY IMPROVES CHAIN OF THOUGHT REASONING IN LANGUAGE MODELS Xuezhi Wang Jason Wei Dale Schuurmans Quoc Le Ed H. Chi Sharan Narang Aakanksha Chowdhery Denny Zhou Google Research, Brain Team xuezhiw@google.com, dennyzhou@google.com ABSTRACT Chain-of-thought prompting com... |
2308.07037.pdf | Bayesian Flow Networks Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, Faustino Gomez {alex,rupesh,timothy,tino }@nnaisense.com NNAISENSE Abstract This paper introduces Bayesian Flow Networks (BFNs), a new class of generative model in which the parameters of a set of independent distributions are modified with ... |
2310.07269.pdf | Why Does Sharpness-Aware Minimization Generalize Better Than SGD? Zixiang Chen Junkai Zhang Yiwen Kou Xiangning Chen Cho-Jui Hsieh Quanquan Gu Department of Computer Science University of California, Los Angeles Los Angeles, CA 90095 {chenzx19,zhang,evankou,xiangning,chohsieh,qgu}@cs.ucla.edu Abstract The challenge of ov... |
2312.11514.pdf | LLM in a flash : Efficient Large Language Model Inference with Limited Memory Keivan Alizadeh, Iman Mirzadeh, Dmitry Belenko, S. Karen Khatamifard, Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, Mehrdad Farajtabar Apple Abstract Large language models (LLMs) are central to modern natural language processing, deliver... |
2309.16039.pdf | Effective Long-Context Scaling of Foundation Models Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike... |
blei03a.pdf | Journal of Machine Learning Research 3 (2003) 993-1022 Submitted 2/02; Published 1/03 Latent Dirichlet Allocation David M. Blei BLEI@CS.BERKELEY.EDU Computer Science Division University of California, Berkeley, CA 94720, USA Andrew Y. Ng ANG@CS.STANFORD.EDU Computer Science Department Stanford University Stanford, CA 94305... |
2204.06860.pdf | AlphaFold2 can predict single-mutation effects John M. McBride,1,Konstantin Polev,1, 2Amirbek Abdirasulov,3 Vladimir Reinharz,4Bartosz A. Grzybowski,1, 5, and Tsvi Tlusty1, 5, 1Center for Soft and Living Matter, Institute for Basic Science, Ulsan 44919, South Korea 2Department of Biomedical Engineering, Ulsan National... |
2103.04047.pdf | Reinforcement Learning, Bit by Bit Suggested Citation: Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla, Morteza Ibrahimi, Ian Osband and Zheng Wen (2018), Reinforcement Learning, Bit by Bit, : Vol. xx, No. xx, pp 118. DOI: 10.1561/XXXXXXXXX. Xiuyuan Lu DeepMind lxlu@deepmind.com Benjamin Van Roy DeepMind benvanroy@de... |
old_school_contrastive_divergence.pdf | On Contrastive Divergence Learning Miguel Á. Carreira-Perpiñán Geoffrey E. Hinton Dept. of Computer Science, University of Toronto 6 King's College Road. Toronto, ON M5S 3H5, Canada Email: {miguel,hinton}@cs.toronto.edu Abstract Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates... |
s42004-024-01098-2.pdf | ARTICLE Evolution shapes interaction patterns for epistasis and specific protein binding in a two-component signaling system Zhiqiang Yan1 & Jin Wang2 The elegant design of protein sequence/structure/function relationships arises from the interaction patterns between amino acid positions. A central question is how evol... |
2401.16405.pdf | Scaling Sparse Fine-Tuning to Large Language Models Alan Ansell1 Ivan Vulić1 Hannah Sterz1 Anna Korhonen1 Edoardo M. Ponti2 Abstract Large Language Models (LLMs) are difficult to fully fine-tune (e.g., with instructions or human feedback) due to their sheer number of parameters. A family of parameter-efficient sparse f... |
Convo better than transformer for protein lm.pdf | Brief Report Convolutions are competitive with transformers for protein sequence pretraining Highlights: We trained large-scale convolutional protein language models; Convolutions perform as well as transformers across tasks while being more efficient; Convolutions and transformers have different inductive biases; Curren... |