file_name (string, 7–127 chars) | text (string, 1.27k–557k chars)
2305.15717.pdf
The False Promise of Imitating Proprietary LLMs Arnav Gudibande UC Berkeley arnavg@berkeley.edu Eric Wallace UC Berkeley ericwallace@berkeley.edu Charlie Snell UC Berkeley csnell22@berkeley.edu Xinyang Geng UC Berkeley young.geng@berkeley.edu Hao Liu UC Berkeley hao.liu@berkeley.edu Pieter Abbeel UC Berkeley pabbeel@berkel...
2306.02707.pdf
Orca: Progressive Learning from Complex Explanation Traces of GPT-4 Subhabrata Mukherjee, Arindam Mitra Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, Ahmed Awadallah Microsoft Research Abstract Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs g...
Linearizing Transformer with Key-Value Memory.pdf
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 346–359, December 7-11, 2022. ©2022 Association for Computational Linguistics. Linearizing Transformer with Key-Value Memory Yizhe Zhang Meta AI yizhe.zhang@hotmail.com Deng Cai The Chinese University of Hong Kong thisisjcykcd@gma...
2211.06738.pdf
arXiv:2211.06738v1 [cs.AI] 12 Nov 2022 Formalizing the presumption of independence Paul Christiano, Eric Neyman, Mark Xu Alignment Research Center Abstract Mathematical proof aims to deliver confident conclusions, but a very similar process of deduction can be used to make uncertain estimates that are open to revisi...
1805.00899.pdf
AI safety via debate Geoffrey Irving Paul Christiano Dario Amodei OpenAI Abstract To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but ...
2401.10020.pdf
Self-Rewarding Language Models Weizhe Yuan1,2Richard Yuanzhe Pang1,2Kyunghyun Cho2 Xian Li1Sainbayar Sukhbaatar1Jing Xu1Jason Weston1,2 1Meta2NYU Abstract We posit that to achieve superhuman agents, future models require superhuman feedback in order to provide an adequate training signal. Current approaches commonly tr...
2401.12192.pdf
Text Embedding Inversion Attacks on Multilingual Language Models Yiyi Chen Heather Lent Johannes Bjerva Department of Computer Science, Aalborg University, Denmark {yiyic, hcle, jbjerva}@cs.aau.dk Abstract Representing textual information as real-numbered embeddings has become the norm in NLP. Moreover, with the rise of...
2211.07793.pdf
EXTREME GENERATIVE IMAGE COMPRESSION BY LEARNING TEXT EMBEDDING FROM DIFFUSION MODELS A PREPRINT Zhihong Pan, Xin Zhou, Hao Tian Baidu Research (USA) ABSTRACT Transferring large amounts of high-resolution images over limited bandwidth is an important but very challenging task. Compressing images using extremely low bit...
“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors.pdf
Findings of the Association for Computational Linguistics: ACL 2023, pages 6810–6828, July 9-14, 2023. ©2023 Association for Computational Linguistics. Low-Resource Text Classification: A Parameter-Free Classification Method with Compressors Zhiying Jiang1,2, Matthew Y.R. Yang1, Mikhail Tsirlin1, Raphael Tang1, Yiqin Dai2a...
2023.09.11.556673v1.full.pdf
Protein generation with evolutionary diffusion: sequence is all you need Sarah Alamdari1, Nitya Thakkar2,, Rianne van den Berg3, Alex X. Lu1, Nicolo Fusi1, Ava P. Amini1, Kevin K. Yang1, 1Microsoft Research, Cambridge, MA, USA 2Brown University, Providence, RI, USA 3Microsoft Research AI4Science, Amsterdam, Netherlands...
2108.05540.pdf
Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval Luyu Gao and Jamie Callan Language Technologies Institute Carnegie Mellon University {luyug, callan}@cs.cmu.edu Abstract Recent research demonstrates the effectiveness of using fine-tuned language models (LM) for dense retrieval. However,...
1501.05014.pdf
Experimental Simulation of Closed Timelike Curves Martin Ringbauer1,2, Matthew A. Broome1,2, Casey R. Myers1, Andrew G. White1,2and Timothy C. Ralph2 1Centre for Engineered Quantum Systems,2Centre for Quantum Computer and Communication Technology, School of Mathematics and Physics, University of Queensland, Brisbane, Q...
2310.18168.pdf
PERSONAS AS A WAY TO MODEL TRUTHFULNESS IN LANGUAGE MODELS Nitish Joshi1Javier Rando2Abulhair Saparov1Najoung Kim3He He1 1New York University2ETH Zurich3Boston University {nitish}@nyu.edu {jrando}@ethz.ch ABSTRACT Large Language Models (LLMs) are trained on vast amounts of text from the internet, which contains both fa...
noise_contrastive_estimation.pdf
Journal of Machine Learning Research 13 (2012) 307-361. Submitted 12/10; Revised 11/11; Published 2/12. Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics Michael U. Gutmann michael.gutmann@helsinki.fi Aapo Hyvärinen aapo.hyvarinen@helsinki.fi Department of Computer Sc...
2309.16797.pdf
PROMPTBREEDER: SELF-REFERENTIAL SELF-IMPROVEMENT VIA PROMPT EVOLUTION Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel Google DeepMind {chrisantha,dylski,henrykm,osindero,rocktaschel}@google.com ABSTRACT Popular prompt strategies like Chain-of-Thought Prompting can dramatically ...
2005.10242.pdf
Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere Tongzhou Wang1Phillip Isola1 Abstract Contrastive representation learning has been outstandingly successful in practice. In this work, we identify two key properties related to the contrastive loss: (1) alignment (clos...
new_school_how_to_train_ebm.pdf
How to Train Your Energy-Based Models Yang Song yangsong@cs.stanford.edu Stanford University Diederik P. Kingma dpkingma@google.com Google Research Abstract Energy-Based Models (EBMs), also known as non-normalized probabilistic models, specify probability density or mass functions up to an unknown normalizing constant....
2402.09668.pdf
How to Train Data-Efficient LLMs Noveen Sachdeva1 2Benjamin Coleman1Wang-Cheng Kang1Jianmo Ni1Lichan Hong1Ed H. Chi1 James Caverlee1 3Julian McAuley2Derek Zhiyuan Cheng1 Abstract The training of large language models (LLMs) is expensive. In this paper, we study data-efficient approaches for pre-training LLMs, i.e., tec...
laplacian eigenmaps.pdf
LETTER Communicated by Joshua B. Tenenbaum Laplacian Eigenmaps for Dimensionality Reduction and Data Representation Mikhail Belkin misha@math.uchicago.edu Department of Mathematics, University of Chicago, Chicago, IL 60637, U.S.A. Partha Niyogi niyogi@cs.uchicago.edu Department of Computer Science and Statistics, Univer...
2309.06180.pdf
Efficient Memory Management for Large Language Model Serving with PagedAttention Woosuk Kwon1,Zhuohan Li1,Siyuan Zhuang1Ying Sheng1,2Lianmin Zheng1Cody Hao Yu3 Joseph E. Gonzalez1Hao Zhang4Ion Stoica1 1UC Berkeley2Stanford University3Independent Researcher4UC San Diego Abstract High throughput serving of large language...
old_school_sg_langevin_dynamics.pdf
Bayesian Learning via Stochastic Gradient Langevin Dynamics Max Welling welling@ics.uci.edu D. Bren School of Information and Computer Science, University of California, Irvine, CA 92697-3425, USA Yee Whye Teh ywteh@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit, UCL, 17 Queen Square, London WC1N 3AR, UK Abstr...
mapreduce.pdf
MapReduce: Simplified Data Processing on Large Clusters Jeffrey Dean and Sanjay Ghemawat jeff@google.com, sanjay@google.com Google, Inc. Abstract MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set...
2311.00208.pdf
Transformers as Recognizers of Formal Languages: A Survey on Expressivity Lena Strobl Umeå University lena.strobl@umu.se William Merrill New York University willm@nyu.edu Gail Weiss EPFL gail.weiss@epfl.ch David Chiang University of Notre Dame dchiang@nd.edu Dana Angluin Yale University dana.angluin@yale.edu Abstract As tr...
2402.04833.pdf
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning Hao Zhao1Maksym Andriushchenko1Francesco Croce1Nicolas Flammarion1 Abstract There is a consensus that instruction fine-tuning of LLMs requires high-quality data, but what are they? LIMA (NeurIPS 2023) and AlpaGasus (ICLR 2024) a...
2647_elbo_ing_stein_mixtures.pdf
Under review as a conference paper at ICLR 2023 ELBOING STEIN MIXTURES Anonymous authors Paper under double-blind review ABSTRACT Stein variational gradient descent (SVGD) (Liu & Wang, 2016) is a particle-based technique for Bayesian inference. SVGD has recently gained popularity because it combines the ability of varia...
1801.05134.pdf
Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift Xiang Li1Shuo Chen1Xiaolin Hu2Jian Yang1 Abstract This paper first answers the question why do the two most powerful techniques Dropout and Batch Normalization (BN) often lead to a worse performance when they are combined together? i...
plant1-s2.0-S009286742400103X-main.pdf
Article Structure of the plant plastid-encoded RNA polymerase Graphical abstract Highlights • Structure of the chloroplast transcription complex • Fifteen nuclear-encoded subunits encase the plastid-encoded polymerase • Subunits PAP1 and PAP2 interact with the DNA and the mRNA, respectively • Structure-guided insights into e...
2305.13301.pdf
TRAINING DIFFUSION MODELS WITH REINFORCEMENT LEARNING Kevin Black1Michael Janner1Yilun Du2Ilya Kostrikov1Sergey Levine1 1University of California, Berkeley2Massachusetts Institute of Technology {kvablack, janner, kostrikov, sergey.levine}@berkeley.edu yilundu@mit.edu ABSTRACT Diffusion models are a class of flexible ge...
2306.04488.pdf
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards Alexandre Ramé1, Guillaume Couairon1,2, Mustafa Shukor1, Corentin Dancette1, Jean-Baptiste Gaya1,2, Laure Soulier1, Matthieu Cord1,3 1Sorbonne Université, CNRS, ISIR, Paris, France 2Meta AI 3Valeo.ai Abstract Foundation mo...
s41593-023-01304-9.pdf
Nature Neuroscience | Volume 26 | May 2023 | 858–866 nature neuroscience Article https://doi.org/10.1038/s41593-023-01304-9 Semantic reconstruction of continuous language from non-invasive brain recordings Jerry Tang1, Amanda LeBel2, Shailee Jain1 & Alexander G. Huth1,2 A brain-computer interface that decodes ...
2210.03057.pdf
LANGUAGE MODELS ARE MULTILINGUAL CHAIN -OF-THOUGHT REASONERS Freda Shi1,2,Mirac Suzgun1,3,Markus Freitag1Xuezhi Wang1 Suraj Srivats4Soroush Vosoughi4Hyung Won Chung1Yi Tay1 Sebastian Ruder1Denny Zhou1Dipanjan Das1Jason Wei1 1Google Research2Toyota Technological Institute at Chicago 3Stanford University4Dartmouth Colleg...
vae.pdf
Auto-Encoding Variational Bayes Diederik P. Kingma Machine Learning Group Universiteit van Amsterdam dpkingma@gmail.com Max Welling Machine Learning Group Universiteit van Amsterdam welling.max@gmail.com Abstract How can we perform efficient inference and learning in directed probabilistic models, in the presence of con...
2306.17806.pdf
Stay on topic with Classifier-Free Guidance Guillaume V. Sanchez* Hexaglobe EleutherAI gsanchez@hexaglobe.com Honglu Fan* University of Geneva EleutherAI honglu.fan@unige.ch Alexander Spangher* Information Sciences Institute University of Southern California spangher@usc.edu Elad Levi Sightful eladlevico@gmail.com Pawan ...
-em-De-novo--em--protein-design—From-new-st.pdf
Leading Edge Perspective De novo protein design: From new structures to programmable functions Tanja Kortemme1,2,3,* 1Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, San Francisco, CA 94158, USA 2Quantitative Biosciences Institute, University of California, San Francisco, S...
2305.15348.pdf
READ: Recurrent Adaptation of Large Transformers Sid Wang John Nguyen Ke Li Carole-Jean Wu Meta AI {yuwang2020,ngjhn,kli26,carolejeanwu}@meta.com Abstract Fine-tuning large-scale Transformers has led to the explosion of many AI applications across Natural Language Processing and Computer Vision tasks. However, fine-tun...
2309.10668.pdf
Language Modeling Is Compression Grégoire Delétang*1, Anian Ruoss*1, Paul-Ambroise Duquenne2, Elliot Catt1, Tim Genewein1, Christopher Mattern1, Jordi Grau-Moya1, Li Kevin Wenliang1, Matthew Aitchison1, Laurent Orseau1, Marcus Hutter1 and Joel Veness1 *Equal contributions, 1Google DeepMind, 2Meta AI & Inria It has long been...
2212.14024.pdf
DEMONSTRATE-SEARCH-PREDICT: Composing retrieval and language models for knowledge-intensive NLP Omar Khattab1 Keshav Santhanam1 Xiang Lisa Li1 David Hall1 Percy Liang1 Christopher Potts1 Matei Zaharia1 Abstract Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tas...
L08_expressivity.pdf
Expressive Variational Autoencoders John Thickstun The Gaussian VAE parameterizes the prior r(z), conditional likelihood p(x|z), and posterior approximation q(z|x) with Gaussian distributions. The inexpressivity of these Gaussian models can make it difficult to capture the distribution p(x); complaints about the ...
10356_a_path_towards_autonomous_mach.pdf
A Path Towards Autonomous Machine Intelligence Version 0.9.2, 2022-06-27 Yann LeCun Courant Institute of Mathematical Sciences, New York University yann@cs.nyu.edu Meta Fundamental AI Research yann@fb.com June 27, 2022 Abstract How could machines learn as efficiently as humans and animals? How could machines learn to r...
2302.03764.pdf
Sketchy: Memory-efficient Adaptive Regularization with Frequent Directions Vladimir Feinberg1Xinyi Chen1 2Y. Jennifer Sun2Rohan Anil1Elad Hazan1 2 Abstract Adaptive regularization methods that exploit more than the diagonal entries exhibit state of the art performance for many tasks, but can be prohibitive in terms of ...
1608.04471.pdf
Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm Qiang Liu Dilin Wang Department of Computer Science Dartmouth College Hanover, NH 03755 {qiang.liu, dilin.wang.gr}@dartmouth.edu Abstract We propose a general purpose variational inference algorithm that forms a natural counterpart of gr...
1812.11118.pdf
Reconciling modern machine learning practice and the bias-variance trade-off Mikhail Belkina, Daniel Hsub, Siyuan Maa, and Soumik Mandala aThe Ohio State University, Columbus, OH bColumbia University, New York, NY September 12, 2019 Abstract Breakthroughs in machine learning are rapidly changing science and society, ye...
2304.14802.pdf
ResiDual: Transformer with Dual Residual Connections Shufang Xie, Huishuai Zhang, Junliang Guo, Xu Tan, Jiang Bian, Hany Hassan Awadalla, Arul Menezes, Tao Qin, Rui Yan Microsoft Research Microsoft Azure Translation Gaoling School of Artificial Intelligence, Renmin University of China {shufangxie,ruiyan}@ruc.edu.cn, {huzha...
2403.07816.pdf
Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM Sainbayar Sukhbaatar, Olga Golovneva, Vasu Sharma, Hu Xu, Xi Victoria Lin, Baptiste Rozière, Jacob Kahn, Daniel Li, Wen-tau Yih, Jason Weston, Xian Li FAIR at Meta We investigate efficient methods for training Large Language Models (LLMs) to possess capabil...
2205.13147.pdf
Matryoshka Representation Learning Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain and Ali Farhadi University of Washington, Google Research, Harvard University {kusupati,ali}@cs.washington.edu, prajain@google....
2307.15043.pdf
Universal and Transferable Adversarial Attacks on Aligned Language Models Andy Zou1,2, Zifan Wang2, Nicholas Carlini3, Milad Nasr3, J. Zico Kolter1,4, Matt Fredrikson1 1Carnegie Mellon University,2Center for AI Safety, 3Google DeepMind,4Bosch Center for AI Abstract Because out-of-the-box large language models are capab...
2302.12441.pdf
MUX-PLMs: Data Multiplexing for High-throughput Language Models Vishvak Murahari1Ameet Deshpande1Carlos E. Jimenez1 Izhak Shafran2Mingqiu Wang2Yuan Cao2Karthik Narasimhan1 1Princeton University2Google Brain murahari@cs.princeton.edu Abstract The widespread adoption of large language models such as ChatGPT and Bard has ...
2306.03078.pdf
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression Tim Dettmers University of WashingtonRuslan Svirschevski HSE University & YandexVage Egiazarian HSE University & Yandex Denis Kuznedelev Yandex & SkoltechElias Frantar IST AustriaSaleh Ashkboos ETH ZurichAlexander Borzunov HSE University ...
1706.03741.pdf
Deep Reinforcement Learning from Human Preferences Paul F Christiano OpenAI paul@openai.comJan Leike DeepMind leike@google.comTom B Brown nottombrown@gmail.com Miljan Martic DeepMind miljanm@google.comShane Legg DeepMind legg@google.comDario Amodei OpenAI damodei@openai.com Abstract For sophisticated reinforcement lear...
karakida19a.pdf
Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach Ryo Karakida Shotaro Akaho Shun-ichi Amari AIST, Japan AIST, Japan RIKEN CBS, Japan Abstract The Fisher information matrix (FIM) is a fundamental quantity to represent the characteristics of a stochastic model, including deep neural...
2310.06816.pdf
Text Embeddings Reveal (Almost) As Much As Text John X. Morris, Volodymyr Kuleshov, Vitaly Shmatikov, Alexander M. Rush Department of Computer Science Cornell University Abstract How much private information do text embeddings reveal about the original text? We investigate the problem of embedding inversion, reconstruc...
2402.00854.pdf
SymbolicAI: A framework for logic-based approaches combining generative models and solvers Marius-Constantin Dinu, Claudiu Leoveanu-Condrei, Markus Holzleitner, Werner Zellinger, Sepp Hochreiter Abstract We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow ma...
1907.10786.pdf
Interpreting the Latent Space of GANs for Semantic Face Editing Yujun Shen1, Jinjin Gu2, Xiaoou Tang1, Bolei Zhou1 1The Chinese University of Hong Kong2The Chinese University of Hong Kong, Shenzhen {sy116, xtang, bzhou }@ie.cuhk.edu.hk, jinjingu@link.cuhk.edu.cn Original Pose Age Gender Eyeglasses Figure 1: Manipulatin...
2107.13163.pdf
arXiv:2107.13163v3 [cs.LG] 30 Mar 2023 Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers Colin Wei Yining Chen Tengyu Ma Department of Computer Science Stanford University {colinwei,cynnjjs,tengyuma}@cs.stanford.edu March 31, 2023 Abstract A common lens to theoret...
1906.08237.pdf
XLNet: Generalized Autoregressive Pretraining for Language Understanding Zhilin Yang1, Zihang Dai12, Yiming Yang1, Jaime Carbonell1, Ruslan Salakhutdinov1, Quoc V. Le2 1Carnegie Mellon University, 2Google AI Brain Team {zhiliny,dzihang,yiming,jgc,rsalakhu}@cs.cmu.edu, qvl@google.com Abstract With the capability of mode...
2206.05895.pdf
Latent Diffusion Energy-Based Model for Interpretable Text Modeling Peiyu Yu1,2 Sirui Xie1 Xiaojian Ma1,2 Baoxiong Jia1,2 Bo Pang3 Ruiqi Gao4 Yixin Zhu5,6 Song-Chun Zhu1,2,5,6,7,8 Ying Nian Wu7 Abstract Latent space Energy-Based Models (EBMs), also known as energy-based priors, have drawn growing interest in generative mod...
2209.13325.pdf
Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models Xiuying Wei1, 2, Yunchen Zhang2, 4, Xiangguo Zhang2, Ruihao Gong1, 2, Shanghang Zhang3, Qi Zhang2, Fengwei Yu2, Xianglong Liu1 1State Key Lab of Software Development Environment, Beihang University 2SenseTime Research,3Peking University 4Univ...
2312.17227.pdf
Gradient-based Planning with World Models Jyothir S V1Siddhartha Jalagam1Yann LeCun1, 2Vlad Sobal1, 2 1New York University2Meta AI {jyothir, scj9994, us441}@nyu.edu yann@cs.nyu.edu Abstract The enduring challenge in the field of artificial intelligence has been the control of systems to achieve desired behaviours. Whil...
Gradient Estimation Using Stochastic Computation Graphs.pdf
Gradient Estimation Using Stochastic Computation Graphs John Schulman1,2 joschu@eecs.berkeley.eduNicolas Heess1 heess@google.com Theophane Weber1 theophane@google.comPieter Abbeel2 pabbeel@eecs.berkeley.edu 1Google DeepMind2University of California, Berkeley, EECS Department Abstract In a variety of problems originatin...
Schrodinger_AlphaFold_Cell_Dec23_020324.pdf
Leading Edge Commentary Enabling structure-based drug discovery utilizing predicted models Edward B. Miller,1,* Howook Hwang,1 Mee Shelley,2 Andrew Placzek,2 João P.G.L.M. Rodrigues,1 Robert K. Suto,3 Lingle Wang,1 Karen Akinsanya,1 and Robert Abel1 1Schrödinger New York, 1540 Broadway, 24th Floor, New York, NY 10036, USA 2S...
2202.03286.pdf
Red Teaming Language Models with Language Models WARNING: This paper contains model outputs which are offensive in nature. Ethan Perez1 2Saffron Huang1Francis Song1Trevor Cai1Roman Ring1 John Aslanides1Amelia Glaese1Nat McAleese1Geoffrey Irving1 1DeepMind,2New York University perez@nyu.edu Abstract Language Models (LMs...
2401.14196.pdf
DeepSeek-Coder: When the Large Language Model Meets Programming The Rise of Code Intelligence Daya Guo*1, Qihao Zhu1,2, Dejian Yang1, Zhenda Xie1, Kai Dong1, Wentao Zhang1 Guanting Chen1, Xiao Bi1, Y. Wu1, Y.K. Li1, Fuli Luo1, Yingfei Xiong2, Wenfeng Liang1 1DeepSeek-AI 2Key Lab of HCST (PKU), MOE; SCS, Peking Universi...
dubey2022pursuit.pdf
RESEARCH ARTICLE The pursuit of happiness: A reinforcement learning perspective on habituation and comparisons Rachit Dubey1*, Thomas L. Griffiths2, Peter Dayan3,4 1Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America, 2Department of Psychology, Princeton University, Pri...
Structural basis for strand-transfer inhibitor binding to HIV intasomes.pdf
STRUCTURAL BIOLOGY Structural basis for strand-transfer inhibitor binding to HIV intasomes Dario Oliveira Passos1*, Min Li2*, Ilona K. Jóźwik1, Xue Zhi Zhao3, Diogo Santos-Martins4, Renbin Yang2, Steven J. Smith3, Youngmin Jeon1, Stefano Forli4, Stephen H. Hughes3, Terrence R. Burke Jr.3, Robert Craigie2, Dmitry Lyumki...
2310.11589.pdf
ELICITING HUMAN PREFERENCES WITH LANGUAGE MODELS Belinda Z. Li MIT CSAIL bzl@mit.eduAlex Tamkin Anthropic atamkin@cs.stanford.eduNoah Goodman Stanford ndg@stanford.eduJacob Andreas MIT CSAIL jda@mit.edu ABSTRACT Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language ...
2202.04728.pdf
Predicting Human Similarity Judgments Using Large Language Models Raja Marjieh1,*, Ilia Sucholutsky2,*, Theodore R. Sumers2, Nori Jacoby3, Thomas L. Griffiths1,2 1Department of Psychology, Princeton University 2Department of Computer Science, Princeton University 3Computational Auditory Perception Group, Max Planck Ins...
2305.13048.pdf
RWKV: Reinventing RNNs for the Transformer Era Bo Peng1 Eric Alcaide2,3,4 Quentin Anthony2,5 Alon Albalak2,6 Samuel Arcadinho2,7 Huanqi Cao8 Xin Cheng9 Michael Chung10 Matteo Grella11 Kranthi Kiran GV12 Xuzheng He2 Haowen Hou13 Przemysław Kazienko14 Jan Kocoń14 Jiaming Kong15 Bartłomiej Koptyra14 Hayden Lau2 Krishna Sri Ipsit Mantri1...
1511.06349.pdf
Generating Sentences from a Continuous Space Samuel R. Bowman NLP Group and Dept. of Linguistics Stanford University sbowman@stanford.eduLuke Vilnis CICS University of Massachusetts Amherst luke@cs.umass.edu Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz & Samy Bengio Google Brain {vinyals, adai, rafalj, bengio }@googl...
2402.16819.pdf
Nemotron-4 15B Technical Report Jupinder Parmar* Shrimai Prabhumoye Joseph Jennings Mostofa Patwary Sandeep Subramanian Dan Su Chen Zhu Deepak Narayanan Aastha Jhunjhunwala Ayush Dattagupta Vibhu Jawa Jiwei Liu Ameya Mahabaleshwarkar Osvald Nitski Annika Brundyn James Maki Miguel Martinez Jiaxuan You John Kamalu Patrick Le...
2203.05482.pdf
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time Mitchell Wortsman1Gabriel Ilharco1Samir Yitzhak Gadre2Rebecca Roelofs3Raphael Gontijo-Lopes3 Ari S. Morcos4Hongseok Namkoong2Ali Farhadi1Yair Carmon* 5Simon Kornblith* 3Ludwig Schmidt* 1 Abstract The convent...
RL_for_starcraft.pdf
350 | Nature | Vol 575 | 14 November 2019 Article Grandmaster level in StarCraft II using multi-agent reinforcement learning Oriol Vinyals1,3*, Igor Babuschkin1,3, Wojciech M. Czarnecki1,3, Michal Mathieu1,3, Andrew Dudzik1,3, Junyoung Chung1,3, David H. Choi1,3, Richard Powell1,3, Timo Ewalds1,3, Petko Georgiev1,...
1611.03530.pdf
UNDERSTANDING DEEP LEARNING REQUIRES RETHINKING GENERALIZATION Chiyuan Zhang Massachusetts Institute of Technology chiyuan@mit.edu Samy Bengio Google Brain bengio@google.com Moritz Hardt Google Brain mrtz@google.com Benjamin Recht University of California, Berkeley brecht@berkeley.edu Oriol Vinyals Google ...
2310.03214.pdf
Preprint FRESH LLM S: REFRESHING LARGE LANGUAGE MODELS WITH SEARCH ENGINE AUGMENTATION Tu Vu1Mohit Iyyer2Xuezhi Wang1Noah Constant1Jerry Wei1 Jason Wei3Chris Tar1Yun-Hsuan Sung1Denny Zhou1Quoc Le1Thang Luong1 Google1University of Massachusetts Amherst2OpenAI3 freshllms@google.com ABSTRACT Most large language models ( L...
SARS-CoV-2 Protease Inhibitors.pdf
RESEARCH ARTICLE SUMMARY CORONAVIRUS Open science discovery of potent noncovalent SARS-CoV-2 main protease inhibitors Melissa L. Boby, Daren Fearon, Matteo Ferla, Mihajlo Filep, Lizbé Koekemoer, Matthew C. Robinson, The COVID Moonshot Consortium, John D. Chodera*, Alpha A. Lee*, Nir London*, Annette von Delft*...
2110.04374.pdf
A Few More Examples May Be Worth Billions of Parameters Yuval KirstainPatrick LewisSebastian RiedelOmer Levy Tel-Aviv University University College London Facebook AI Research {yuval.kirstain,levyomer }@cs.tau.ac.il ,{patrick.lewis,s.riedel }@cs.ucl.ac.uk Abstract We investigate the dynamics of increasing the number of...
Evolutionary Principles in Self-Referential Learning.pdf
Evolutionary Principles in Self-Referential Learning (Diploma Thesis) Jürgen Schmidhuber Technische Universität München May 14, 1987 Abstract There exists a number of algorithms which encapsulate parts of...
3639_the_effects_of_reward_misspeci.pdf
THE EFFECTS OF REWARD MISSPECIFICATION: MAPPING AND MITIGATING MISALIGNED MODELS Alexander Pan Caltech Kush Bhatia UC Berkeley Jacob Steinhardt UC Berkeley ABSTRACT Reward hacking, where RL agents exploit gaps in misspecified reward functions, has been widely observed, but not yet systematically studied. To understand how re...
DNA-guided-transcription-factor-cooperativity-shap.pdf
Article DNA-guided transcription factor cooperativity shapes face and limb mesenchyme Graphical abstract Highlights • Mutually dependent binding of TWIST1 and homeodomain TFs in embryonic mesenchyme • TF co-binding drives enhancer accessibility and shared transcriptional regulation • Weak TF-TF contacts guided by DNA media...
2402.05120.pdf
More Agents Is All You Need Junyou Li* 1Qin Zhang* 1Yangbin Yu1Qiang Fu1Deheng Ye1 Abstract We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further enh...
cdpo.pdf
A note on DPO with noisy preferences & relationship to IPO Eric Mitchell November 25, 2023 (v1.1) OG RLHF aims for reward maximization with a KL constraint to reference model π_ref (inputs x omitted): π* = argmax_π E_{y∼π}[ r(y) − β log(π(y)/π_ref(y)) ] (1) DPO derives a loss on the current policy (where our dataset says y_w is preferred to y_l,...
Inference of Epistatic Effects Leading to Entrenchment and Drug Resistance in HIV-1 Protease.pdf
Inference of Epistatic Effects Leading to Entrenchment and Drug Resistance in HIV-1 Protease William F. Flynn,1,2 Allan Haldane,2,3 Bruce E. Torbett,4 and Ronald M. Levy*,2,3 1Department of Physics and Astronomy, Rutgers University, New Brunswick, NJ 2Center for Biophysics and Computational Biology, Temple University, P...
Improving Memory Search through Model-Based Cue Selection.pdf
IMPROVING MEMORY SEARCH Improving Memory Search through Model-Based Cue Selection Charlotte A. Cornell1, Kenneth A. Norman2, Thomas L. Griffiths2,3, and Qiong Zhang1,4 1Psychology Department, Rutgers University-New Brunswick 2Psychology Department, Princeton University 3Computer Science Department, Princeton Univers...
2305.18290.pdf
Direct Preference Optimization: Your Language Model is Secretly a Reward Model Rafael Rafailov Archit Sharma Eric Mitchell Stefano Ermon Christopher D. Manning Chelsea Finn Stanford University CZ Biohub {rafailov,architsh,eric.mitchell}@cs.stanford.edu Abstract While large-scale unsupervised language models (LMs) learn broa...
2111.12763.pdf
Sparse is Enough in Scaling Transformers Sebastian Jaszczur University of Warsaw Aakanksha Chowdhery Google Research Afroz Mohiuddin Google Research Łukasz Kaiser OpenAI Wojciech Gajewski Google Research Henryk Michalewski Google Research Jonni Kanerva Google Research Abstract Large Transformer models yield impressive result...
science.aay8015.pdf
STRUCTURAL BIOLOGY Structural basis for strand-transfer inhibitor binding to HIV intasomes Dario Oliveira Passos1*, Min Li2*, Ilona K. Jóźwik1, Xue Zhi Zhao3, Diogo Santos-Martins4, Renbin Yang2, Steven J. Smith3, Youngmin Jeon1, Stefano Forli4, Stephen H. Hughes3, Terrence R. Burke Jr.3, Robert Craigie2, Dmitry Lyumki...
2305.10626.pdf
Language Models Meet World Models: Embodied Experiences Enhance Language Models Jiannan Xiang, Tianhua Tao, Yi Gu, Tianmin Shu, Zirui Wang, Zichao Yang, Zhiting Hu UC San Diego,UIUC,MIT,JHU,CMU Abstract While large language models (LMs) have shown remarkable capabilities across numerous tasks, they often struggle with ...
2306.14892.pdf
Supervised Pretraining Can Learn In-Context Reinforcement Learning Jonathan N. Lee1Annie Xie1Aldo Pacchiano2Yash Chandak1 Chelsea Finn1Ofir Nachum3Emma Brunskill1 1Stanford University,2Microsoft Research,3Google DeepMind Abstract Large transformer models trained on diverse datasets have shown a remarkable ability to le...
2301.13196.pdf
Looped Transformers as Programmable Computers Angeliki Giannouw*, Shashank Rajputw, Jy-yong Sohnw, Kangwook Leew, Jason D. Leep, Dimitris Papailiopoulosw pPrinceton University wUniversity of Wisconsin-Madison January 31, 2023 Abstract We present a framework for using transformer networks as universal computers by progr...
s41586-023-06832-9.pdf
832 | Nature | Vol 625 | 25 January 2024 ArticlePredicting multiple conformations via sequence clustering and AlphaFold2 Hannah K. Wayment-Steele1,7, Adedolapo Ojoawo1,7, Renee Otten1,5, Julia M. Apitz1, Warintra Pitsawong1,6, Marc Hmberger1,5, Sergey Ovchinnikov2, Lucy Colwell3,4 & Dorothee Kern1 AlphaFold2(ref. 1)...
Immune-evasion,-infectivity,-and-fusogenicity-of-S.pdf
Article Immune evasion, infectivity, and fusogenicity of SARS-CoV-2 BA.2.86 and FLip variants Graphical abstract Highlights • BA.2.86 is less immune evasive compared to FLip and other XBB variants • BA.2.86 is antigenically more similar to BA.2 and BA.4/5 than XBB variants • MAb S309 is unable to neutralize BA.2.86 possibl...
s41467-023-37023-9.pdf
Article https://doi.org/10.1038/s41467-023-37023-9 Observation of electron orbital signatures of single atoms within metal-phthalocyanines using atomic force microscopy Pengcheng Chen1,9, Dingxin Fan1,2,9, Annabella Selloni3, Emily A. Carter4,5, Craig B. Arnold1,4, Yunlong Zhang6, Adam S. Gross6, James R. Ch...
2402.07871.pdf
SCALING LAWS FOR FINE-GRAINED MIXTURE OF EXPERTS Jakub Krajewski University of Warsaw IDEAS NCBR Jan Ludziejewski University of Warsaw IDEAS NCBR Kamil Adamczewski IDEAS NCBR Maciej Pióro IPPT PAN IDEAS NCBR Michał Krutul University of Warsaw IDEAS NCBR Szymon Antoniak University of Warsaw IDEAS NCBR Kamil Ciebiera Universi...
2105.14111.pdf
Goal Misgeneralization in Deep Reinforcement Learning Lauro Langosco*1 Jack Koch* Lee Sharkey*2 Jacob Pfau3 Laurent Orseau4 David Krueger1 Abstract We study goal misgeneralization, a type of out-of-distribution generalization failure in reinforcement learning (RL). Goal misgeneralization occurs when an RL agent retains it...
2402.09727.pdf
2024-02-14 A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts Kuang-Huei Lee1, Xinyun Chen1, Hiroki Furuta1, John Canny1and Ian Fischer2 1Google DeepMind,2Google Research Correspond to: {leekh, iansf}@google.com; Author contributions are stated in Appendix J. Website: read-agent.github.io Current Lar...
2305.11841.pdf
How Does Generative Retrieval Scale to Millions of Passages? Ronak Pradeep, Kai Hui, Jai Gupta, Adam D. Lelkes, Honglei Zhuang Jimmy Lin, Donald Metzler, Vinh Q. Tran Google Research,University of Waterloo rpradeep@uwaterloo.ca ,{kaihuibj,vqtran}@google.com Abstract Popularized by the Differentiable Search Index, the e...
6098_contrastive_retrospection_honi.pdf
Contrastive Retrospection: honing in on critical steps for rapid learning and generalization in RL Chen Sun Mila, Université de Montréal sunchipsster@gmail.com Wannan Yang New York University winnieyangwn96@gmail.com Thomas Jiralerspong Mila, Université de Montréal thomas.jiralerspong@mila.quebec Dane Malenfant McGill Unive...
2208.11970.pdf
Understanding Diffusion Models: A Unified Perspective Calvin Luo Google Research, Brain Team calvinluo@google.com August 26, 2022 Contents Introduction: Generative Models; Background: ELBO, VAE, and Hierarchical VAE ...
2303.06296.pdf
STABILIZING TRANSFORMER TRAINING BY PREVENTING ATTENTION ENTROPY COLLAPSE A PREPRINT Shuangfei Zhai, Tatiana Likhomanenko, Etai Littwin, Dan Busbridge, Jason Ramapuram, Yizhe Zhang, Jiatao Gu, Josh Susskind Apple {szhai,antares,elittwin,dbusbridge,jramapuram,yizzhang,jgu32,jsusskind}@apple.com March 14, 2023 ABSTRACT ...
2005.12320.pdf
SCAN: Learning to Classify Images without Labels Wouter Van Gansbeke1Simon Vandenhende1Stamatios Georgoulis2 Marc Proesmans1Luc Van Gool1,2 1KU Leuven/ESAT-PSI2ETH Zurich/CVL, TRACE Abstract. Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task of un...
56_preference_proxies_evaluating_.pdf
Preference Proxies: Evaluating Large Language Models in capturing Human Preferences in Human-AI Tasks Mudit Verma* 1Siddhant Bhambri* 1Subbarao Kambhampati1 Abstract In this work, we investigate the potential of Large Language Models (LLMs) to serve as effective human proxies by capturing human preferences in the conte...