file_name | text |
|---|---|
2203.14263.pdf | 1 A General Survey on Attention Mechanisms in Deep Learning Gianni Brauwers and Flavius Frasincar Abstract Attention is an important mechanism that can be employed for a variety of deep learning models across many different domains and tasks. This survey provides an overview of the most important attention mechanisms p... |
Integrating-cellular-electron-microscopy-with-mult.pdf | Leading Edge Review Integrating cellular electron microscopy with multimodal data to explore biology across space and time Caitlyn L. McCafferty,1,* Sven Klumpe,2,* Rommie E. Amaro,3,* Wanda Kukulski,4,* Lucy Collinson,5,* and Benjamin D. Engel1,* 1Biozentrum, University of Basel, Spitalstrasse 41, 4056 Basel, Switzerland 2... |
2210.00312.pdf | Published as a conference paper at ICLR 2023 MULTIMODAL ANALOGICAL REASONING OVER KNOWLEDGE GRAPHS Ningyu Zhang1Lei Li1Xiang Chen1Xiaozhuan Liang1Shumin Deng2Huajun Chen1 1Zhejiang University, AZFT Joint Lab for Knowledge Engine 2National University of Singapore {zhangningyu,leili21,xiang chen,liangxiaozhuan,231sm,huaj... |
2310.12397.pdf | GPT-4 Doesn't Know It's Wrong: An Analysis of Iterative Prompting for Reasoning Problems Kaya Stechly, Matthew Marquez, Subbarao Kambhampati Abstract There has been considerable divergence of opinion on the reasoning abilities of Large Language Models (LLMs). While the initial optimism that reasoning might emerge automatical... |
2309.14322.pdf | Small-scale proxies for large-scale Transformer training instabilities Mitchell Wortsman Peter J. Liu Lechao Xiao Katie Everett Alex Alemi Ben Adlam John D. Co-Reyes Izzeddin Gur Abhishek Kumar Roman Novak Jeffrey Pennington Jascha Sohl-Dickstein Kelvin Xu Jaehoon Lee* Justin Gilmer* Simon Kornblith* Google DeepMind Abst... |
2308.05660.pdf | Thermodynamic Linear Algebra Maxwell Aifer, Kaelan Donatella, Max Hunter Gordon, Thomas Ahle, Daniel Simpson, Gavin Crooks, Patrick J. Coles Normal Computing Corporation, New York, New York, USA Linear algebraic primitives are at the core of many modern algorithms in engineering, science, and machine learning. Hence, a... |
2309.10150.pdf | Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions Yevgen Chebotar, Quan Vuong, Alex Irpan, Karol Hausman, Fei Xia, Yao Lu, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum, Sumedh Sontakke, Grecia Salazar, Huong T Tran, Jodi... |
2109.01652.pdf | Published as a conference paper at ICLR 2022 FINETUNED LANGUAGE MODELS ARE ZERO-SHOT LEARNERS Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le Google Research ABSTRACT This paper explores a simple method for improving the zero-shot learning abiliti... |
1610.06258.pdf | Using Fast Weights to Attend to the Recent Past Jimmy Ba University of Toronto jimmy@psi.toronto.eduGeoffrey Hinton University of Toronto and Google Brain geoffhinton@google.com Volodymyr Mnih Google DeepMind vmnih@google.comJoel Z. Leibo Google DeepMind jzl@google.comCatalin Ionescu Google DeepMind cdi@google.com Abst... |
sciadv.adn0042.pdf | Hikichi et al., Sci. Adv. 10, eadn0042 (2024) 1 March 2024 Science Advances Research Article 1 of 20 VIROLOGY Epistatic pathways can drive HIV-1 escape from integrase strand transfer inhibitors Yuta Hikichi1, Jonathan R. Grover2, Alicia Schäfer2, Walther Mothes2, Eric O. Freed1* People living with human immunod... |
2310.12036.pdf | A General Theoretical Paradigm to Understand Learning from Human Preferences Mohammad Gheshlaghi Azar Mark Rowland Bilal Piot Daniel Guo Daniele Calandriello Michal Valko Rémi Munos Google DeepMind Abstract The prevalent deployment of learning from human preferences through reinforcement learning (RLHF) relies on two... |
2202.08371.pdf | arXiv:2202.08371v1 [cs.LG] 15 Feb 2022 THE QUARKS OF ATTENTION PIERRE BALDI AND ROMAN VERSHYNIN Abstract. Attention plays a fundamental role in both natural and artificial intelligence systems. In deep learning, attention-based neural architectures, such as transformer architectures, are widely used to tackle proble... |
s41586-023-06924-6.pdf | 468 | Nature | Vol 625 | 18 January 2024 ArticleMathematical discoveries from program search with large language models Bernardino Romera-Paredes1,4, Mohammadamin Barekatain1,4, Alexander Novikov1,4, Matej Balog1,4, M. Pawan Kumar1,4, Emilien Dupont1,4, Francisco J. R. Ruiz1,4, Jordan S. Ellenberg2, Pengming Wang1, ... |
2112.07868.pdf | Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases Shrimai Prabhumoye1, Rafal Kocielnik2, Mohammad Shoeybi1, Anima Anandkumar1,2, Bryan Catanzaro1 1NVIDIA,2California Institute of Technology {sprabhumoye@nvidia.com, rafalko@caltech.edu} Abstract Warning: this paper contains content that... |
2207.06569.pdf | Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting Neil Mallinar UC San Diego nmallina@ucsd.edu James B. Simon UC Berkeley james.simon@berkeley.edu Amirhesam Abedsoltan UC San Diego aabedsoltan@ucsd.edu Parthe Pandit UC San Diego parthepandit@ucsd.edu Mikhail Belkin UC San Diego mbelkin@ucsd.edu Preetum Nakkira... |
1406.2661.pdf | Generative Adversarial Nets Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio Département d'informatique et de recherche opérationnelle, Université de Montréal, Montréal, QC H3C 3J7 Abstract We propose a new framework for estimating generativ... |
2402.10171.pdf | Data Engineering for Scaling Language Models to 128K Context Yao FuRameswar PandaXinyao NiuXiang YueHannaneh HajishirziYoon KimHao Peng University of EdinburghMIT-IBM Watson AI LabUniversity of MelbourneOhio State University University of WashingtonMITUIUC yao.fu@ed.ac.uk yoonkim@mit.edu haopeng@illinois.edu https://gi... |
2402.04845.pdf | AlphaFold Meets Flow Matching for Generating Protein Ensembles Bowen Jing1Bonnie Berger1 2Tommi Jaakkola1 Abstract The biological functions of proteins often depend on dynamic structural ensembles. In this work, we develop a flow-based generative modeling approach for learning and sampling the conformational landscapes... |
1506.00552.pdf | Coordinate Descent Converges Faster with the Gauss-Southwell Rule Than Random Selection Julie Nutini1, Mark Schmidt1, Issam H. Laradji1, Michael Friedlander2, Hoyt Koepke3 1University of British Columbia,2University of California, Davis,3Dato Abstract There has been significant recent work on the theory and application... |
Epistasis and entrenchment of drug resistance in HIV-1 subtype B.pdf | *For correspondence: ronlevy@temple.edu Competing interests: The authors declare that no competing interests exist. Funding: See page 20 Received: 25 July 2019 Accepted: 09 September 2019 Published: 08 October 2019 Reviewing editor: Patricia J Wittkopp, University of Michigan, United States Copyright Biswas et al. This... |
2309.14525.pdf | Preprint ALIGNING LARGE MULTIMODAL MODELS WITH FACTUALLY AUGMENTED RLHF Zhiqing Sun, Sheng Shen, Shengcao Cao Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell UC Berkeley,CMU,UIUC,UWMadison,UMass Amherst Microsoft Research,MIT-IBM Watson AI Lab AB... |
score_matching_sliced.pdf | Sliced Score Matching: A Scalable Approach to Density and Score Estimation Yang Song Stanford UniversitySahaj Garg Stanford UniversityJiaxin Shi Tsinghua UniversityStefano Ermon Stanford University Abstract Score matching is a popular method for estimating unnormalized statistical models. However, it has been so far li... |
2306.12672.pdf | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought Lionel Wong1, Gabriel Grand1, Alexander K. Lew1, Noah D. Goodman2, Vikash K. Mansinghka1, Jacob Andreas1, Joshua B. Tenenbaum1 Equal contribution. 1MIT,2Stanford Abstract How does language inform our downstream ... |
RFeynman_plentySpace.pdf | Plenty of Room at the Bottom Richard P. Feynman (Dated: Dec. 1959) This is the transcript of a talk presented by Richard P. Feynman to the American Physical Society in Pasadena on December 1959, which explores the immense possibilities afforded by miniaturization. I imagine experimental physicists must often look with ... |
2020.12.15.422761v1.full.pdf | TRANSFORMER PROTEIN LANGUAGE MODELS ARE UNSUPERVISED STRUCTURE LEARNERS Roshan Rao UC Berkeley rmrao@berkeley.eduJoshua Meier Facebook AI Research jmeier@fb.comTom Sercu Facebook AI Research tsercu@fb.com Sergey Ovchinnikov Harvard University so@g.harvard.eduAlexander Rives Facebook AI Research & New York University ar... |
2205.11916.pdf | Large Language Models are Zero-Shot Reasoners Takeshi Kojima The University of Tokyo t.kojima@weblab.t.u-tokyo.ac.jpShixiang Shane Gu Google Research, Brain Team Machel Reid Google ResearchYutaka Matsuo The University of TokyoYusuke Iwasawa The University of Tokyo Abstract Pretrained large language models (LLMs) are wi... |
2436_the_usual_suspects_reassessing.pdf | Under review as a conference paper at ICLR 2020 THE USUAL SUSPECTS? REASSESSING BLAME FOR VAE POSTERIOR COLLAPSE Anonymous authors Paper under double-blind review ABSTRACT In narrow asymptotic settings Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distribut... |
2209.12892.pdf | LEARNING TO LEARN WITH GENERATIVE MODELS OF NEURAL NETWORK CHECKPOINTS William Peebles Ilija Radosavovic Tim Brooks Alexei A. Efros Jitendra Malik University of California, Berkeley ABSTRACT We explore a data-driven approach for learning to optimize neural networks. We construct a dataset of neural network checkpoints an... |
1911.00172.pdf | Published as a conference paper at ICLR 2020 GENERALIZATION THROUGH MEMORIZATION : NEAREST NEIGHBOR LANGUAGE MODELS Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer& Mike Lewis Stanford University Facebook AI Research {urvashik,jurafsky }@stanford.edu {omerlevy,lsz,mikelewis }@fb.com ABSTRACT We introduce ... |
2309.03649.pdf | Exploring kinase DFG loop conformational stability with AlphaFold2-RAVE Bodhi P. Vani,Akashnathan Aranganathan,and Pratyush Tiwary,, Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742, USA Biophysics Program and Institute for Physical Science and Technology, University o... |
NIPS-2007-active-preference-learning-with-discrete-choice-data-Paper.pdf | Active Preference Learning with Discrete Choice Data Eric Brochu, Nando de Freitas and Abhijeet Ghosh Department of Computer Science University of British Columbia Vancouver, BC, Canada {ebrochu, nando, ghosh}@cs.ubc.ca Abstract We propose an active learning algorithm that learns a continuous valuation model from discr... |
2206.14858.pdf | Solving Quantitative Reasoning Problems with Language Models Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra Google Research Abstract Language models have... |
s41586-024-07128-2.pdf | Nature | www.nature.com | 1 ArticleSynthetic reversed sequences reveal default genomic states Brendan R. Camellato1, Ran Brosh1, Hannah J. Ashe1, Matthew T . Maurano1,2 & Jef D. Boeke1,3,4 Pervasive transcriptional activity is observed across diverse species. The genomes of extant organisms have undergone billions of... |
1909.12264.pdf | Quantum Graph Neural Networks Guillaume Verdon X, The Moonshot Factory Mountain View, CA gverdon@x.teamTrevor McCourt Google Research Venice, CA trevormccrt@google.com Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, Jack Hidary X, The Moonshot Factory Mountain View, CA {enxhell,singvikash, sleichenauer,hidary}@x.te... |
2403.08763.pdf | Simple and Scalable Strategies to Continually Pre-train Large Language Models Adam Ibrahim ibrahima@mila.quebec Benjamin Thérien benjamin.therien@mila.quebec Kshitij Gupta kshitij.gupta@mila.quebec Mats L. Richter mats.richter@mila.quebec Quentin Anthony qubitquentin@gmail.com Timothée Lesort t.lesort@gmail.com Eugene Belilovs... |
cryo 1-s2.0-S0092867424000631-main.pdf | Article Cryo-EM structures of the plant plastid-encoded RNA polymerase Graphical abstract Highlights • Plant chloroplast RNA polymerase comprises a catalytic core and four peripheral modules • The scaffold module stabilizes the catalytic core and bridges other modules • The protection module has SOD activity, and the RNA mo... |
2310.02226.pdf | Think before you speak: Training Language Models With Pause Tokens Sachin Goyal Machine Learning Department Carnegie Mellon University sachingo@andrew.cmu.eduZiwei Ji Google Research, NY ziweiji@google.comAnkit Singh Rawat Google Research, NY ankitsrawat@google.com Aditya Krishna Menon Google Research, NY adityakmenon@... |
Peebles_Scalable_Diffusion_Models_with_Transformers_ICCV_2023_paper.pdf | Scalable Diffusion Models with Transformers William Peebles* UC Berkeley Saining Xie New York University Figure 1: Diffusion models with transformer backbones achieve state-of-the-art image quality. We show selected samples from two of our class-conditional DiT-XL/2 models trained on ImageNet at 512×512 and 256×256 reso... |
2212.00178.pdf | Open Relation and Event Type Discovery with Type Abstraction Sha Li, Heng Ji, Jiawei Han University of Illinois Urbana-Champaign {shal2, hengji, hanj}@illinois.edu Abstract Conventional "closed-world" information extraction (IE) approaches rely on human ontologies to define the scope for extraction. As a result, such ap... |
2311.17932.pdf | Generating Molecular Conformer Fields Yuyang Wang1Ahmed A. Elhag1Navdeep Jaitly1Joshua M. Susskind1Miguel Angel Bautista1 Abstract In this paper we tackle the problem of generating conformers of a molecule in 3D space given its molecular graph. We parameterize these conformers as continuous functions that map elements ... |
0273.pdf | Variational Deep Embedding: An Unsupervised and Generative Approach to Clustering Zhuxi Jiang1, Yin Zheng2, Huachun Tan1, Bangsheng Tang3, Hanning Zhou3 1Beijing Institute of Technology, Beijing, China 2Tencent AI Lab, Shenzhen, China 3Hulu LLC., Beijing, China {zjiang, tanhc}@bit.edu.cn, yinzheng@tencent.com, bangshen... |
2305.15076.pdf | Meta-Learning Online Adaptation of Language Models Nathan Hu* Eric Mitchell* Christopher D. Manning Chelsea Finn Stanford University Abstract Large language models encode impressively broad world knowledge in their parameters. However, the knowledge in static language models falls out of date, limiting the models effec... |
2102.03902.pdf | Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention Yunyang Xiong1, Zhanpeng Zeng1, Rudrasis Chakraborty2, Mingxing Tan3, Glenn Fung4, Yin Li1, Vikas Singh1 1University of Wisconsin-Madison, 2UC Berkeley, 3Google Brain, 4American Family Insurance yxiong43@wisc.edu, zzeng38@wisc.edu, rudra@berkeley.edu, tanmingxi... |
2310.07820.pdf | Large Language Models Are Zero-Shot Time Series Forecasters Nate Gruver NYUMarc Finzi CMUShikai Qiu NYUAndrew Gordon Wilson NYU Abstract By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text. Developing this approach, we find that large language m... |
s41586-021-03819-2.pdf | Nature | Vol 596 | 26 August 2021 | 583 ArticleHighly accurate protein structure prediction with AlphaFold John Jumper1,4, Richard Evans1,4, Alexander Pritzel1,4, Tim Green1,4, Michael Figurnov1,4, Olaf Ronneberger1,4, Kathryn Tunyasuvunakool1,4, Russ Bates1,4, Augustin dek1,4, Anna Potapenko1,4, Alex Bridgland1,4, ... |
2211.10438.pdf | SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models Guangxuan Xiao*1, Ji Lin*1, Mickael Seznec2, Hao Wu2, Julien Demouth2, Song Han1 Abstract Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. ... |
2009.14794.pdf | Published as a conference paper at ICLR 2021 RETHINKING ATTENTION WITH PERFORMERS Krzysztof Choromanski1, Valerii Likhosherstov2, David Dohan1, Xingyou Song1 Andreea Gane1, Tamas Sarlos1, Peter Hawkins1, Jared Davis3, Afroz Mohiuddin1 Lukasz Kaiser1, David Belanger1, Lucy Colwell1,2, Adrian Weller2,4 1Google2University... |
2305.19466.pdf | The Impact of Positional Encoding on Length Generalization in Transformers Amirhossein Kazemnejad1,2, Inkit Padhi3, Karthikeyan Natesan Ramamurthy3, Payel Das3, Siva Reddy1,2,4 1Mila Québec AI Institute; 2McGill University; 3IBM Research; 4Facebook CIFAR AI Chair {amirhossein.kazemnejad,siva.reddy}@mila.quebec inkpad@ibm.com... |
2202.01169.pdf | UNIFIED SCALING LAWS FOR ROUTED LANGUAGE MODELS Aidan Clark, Diego de las Casas, Aurelia Guy, Arthur Mensch Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, George van den Driessche, Eliza Rutherford, Tom Hennigan, Matthew Johnson, Katie Millican, Albin Cassirer, Chris Jo... |
2304.10970.pdf | Can GPT-4 Perform Neural Architecture Search? Mingkai Zheng1,3Xiu Su1Shan You2Fei Wang2 Chen Qian2Chang Xu1Samuel Albanie3 1The University of Sydney2SenseTime Research3CAML Lab, University of Cambridge mingkaizheng@outlook.com ,xisu5992@uni.sydney.edu.au, {youshan,wangfei,qianchen}@sensetime.com ,c.xu@sydney.edu.au sam... |
2205.11487.pdf | Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi {sahariac,w... |
2310.08118.pdf | Can Large Language Models Really Improve by Self-critiquing Their Own Plans? Karthik Valmeekam School of Computing & AI Arizona State University Tempe. kvalmeek@asu.eduMatthew Marquez School of Computing & AI Arizona State University, Tempe. mmarqu22@asu.edu Subbarao Kambhampati School of Computing & AI Arizona State U... |
Bradley-RankAnalysisIncomplete-1952.pdf | Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons Author(s): Ralph Allan Bradley and Milton E. Terry Source: Biometrika , Dec., 1952 , Vol. 39, No. 3/4 (Dec., 1952), pp. 324-345 Published by: Oxford University Press on behalf of Biometrika Trust Stable URL: http://www.jstor.com/stab... |
2305.14224.pdf | mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations Jonas Pfeiffer Francesco Piccinno Massimo Nicosia Xinyi Wang Machel Reid Sebastian Ruder Google DeepMind Abstract Multilingual sequence-to-sequence models perform poorly with increased language coverage and fail to consistently generate text ... |
Hastings1970.pdf | Monte Carlo Sampling Methods Using Markov Chains and Their Applications W. K. Hastings Biometrika , Vol. 57, No. 1. (Apr., 1970), pp. 97-109. Stable URL: http://links.jstor.org/sici?sici=0006-3444%28197004%2957%3A1%3C97%3AMCSMUM%3E2.0.CO%3B2-C Biometrika is currently published by Biometrika Trust. Your use of the JSTOR... |
RNA recoding in cephalopods tailors microtubule motor protein function.pdf | Article RNA recoding in cephalopods tailors microtubule motor protein function Graphical abstract Highlights • RNA editing in squid specifies unique kinesin protein variants in different tissues • Unique kinesin variants are made acutely in response to seawater temperature • Cold-specific kinesin variants have enhanced sin... |
2303.02535.pdf | Streaming Active Learning with Deep Neural Networks Akanksha Saran1Safoora Yousefi2Akshay Krishnamurthy1John Langford1Jordan T. Ash1 Abstract Active learning is perhaps most naturally posed as an online learning problem. However, prior active learning approaches with deep neural networks assume offline access to the en... |
evad084.pdf | Unsupervised Deep Learning Can Identify Protein Functional Groups from Unaligned Sequences Kyle T. David 1,* and Kenneth M. Halanych 2 1Department of Biological Sciences, Auburn University, Auburn, Alabama, USA 2Center for Marine Sciences, University of North Carolina Wilmington, Wilmington, North Carolina, USA *C... |
relu strikes back.pdf | ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models Iman MirzadehKeivan Alizadeh Sachin Mehta Carlo C Del Mundo Oncel Tuzel Golnoosh Samei Mohammad Rastegari Mehrdad Farajtabar Apple ABSTRACT Large Language Models (LLMs) with billions of parameters have drastically transformed AI applications. Ho... |
2211.17192.pdf | Fast Inference from Transformers via Speculative Decoding Yaniv Leviathan*1, Matan Kalman*1, Yossi Matias1 Abstract Inference from large autoregressive models like Transformers is slow: decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding, an algorithm to sample from autoregressi... |
2112.04426.pdf | Improving language models by retrieving from trillions of tokens Sebastian Borgeaudy, Arthur Menschy, Jordan Hoffmanny, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffr... |
2023.04.30.538439v1.full.pdf | scGPT: Towards Building a Foundation Model for Single-Cell Multi-omics Using Generative AI Haotian Cui1,2,3, Chloe Wang1,2,3, Hassaan Maan1,3,4, Bo Wang1,2,3,4,5 1Peter Munk Cardiac Centre, University Health Network, Toronto, ON, Canada 2Department of Computer Science, University of Toronto, Toronto, ON, Canad... |
2024.01.02.573943v1.full.pdf | De Novo Atomic Protein Structure Modeling for Cryo-EM Density Maps Using 3D Transformer and Hidden Markov Model Nabin Giri1,2and Jianlin Cheng1,2* 1Electrical Engineering and Computer Science, University of Missouri, Columbia, 65211, Missouri, USA. 2NextGen Precision Health Institute, University of Missouri, Columbia, ... |
2401.13660.pdf | MambaByte: Token-free Selective State Space Model Junxiong Wang Tushaar Gangavarapu Jing Nathan Yan Alexander M Rush Cornell University {jw2544,tg352,jy858,arush}@cornell.edu Abstract Token-free language models learn directly from raw bytes and remove the bias of subword tokenization. Operating on bytes, however, resul... |
1905.13678.pdf | Learning Sparse Networks Using Targeted Dropout Aidan N. Gomez1,2,3Ivan Zhang2 Siddhartha Rao Kamalakara2Divyam Madaan2 Kevin Swersky1Yarin Gal3Geoffrey E. Hinton1 1Google Brain2for.ai3Department of Computer Science University of Oxford Abstract Neural networks are easier to optimise when they have many more weights th... |
stein_discrepancy.pdf | Learning the Stein Discrepancy for Training and Evaluating Energy-Based Models without Sampling Will Grathwohl1Kuan-Chieh Wang1Jorn-Henrik Jacobsen1David Duvenaud1Richard Zemel1 Abstract We present a new method for evaluating and training unnormalized density models. Our approach only requires access to the gradient of... |
2309.00754.pdf | EFFICIENT RLHF: REDUCING THE MEMORY USAGE OF PPO Michael Santacroce, Yadong Lu, Han Yu, Yuanzhi Li, Yelong Shen Microsoft {misantac,yadonglu,hanyu,yuanzhili,yelong.shen}@microsoft.com ABSTRACT Reinforcement Learning with Human Feedback (RLHF) has revolutionized language modeling by aligning models with human preferenc... |
Estimation of Entropy and Mutual Information.pdf | ARTICLE Communicated by Jonathan Victor Estimation of Entropy and Mutual Information Liam Paninski liam@cns.nyu.edu Center for Neural Science, New York University, New York, NY 10003, U.S.A. We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expan... |
2303.11366.pdf | Reflexion: Language Agents with Verbal Reinforcement Learning Noah Shinn Northeastern University noahshinn024@gmail.comFederico Cassano Northeastern University cassano.f@northeastern.edu Edward Berman Northeastern University berman.ed@northeastern.eduAshwin Gopinath Massachusetts Institute of Technology agopi@mit.edu K... |
2203.15556.pdf | Training Compute-Optimal Large Language Models Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon ... |
2304.15004.pdf | Are Emergent Abilities of Large Language Models a Mirage? Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo Computer Science, Stanford University Abstract Recent work claims that large language models display emergent abilities , abilities not present in smaller-scale models that are present in larger-scale models. Wha... |
2309.01933.pdf | PROVABLY SAFE SYSTEMS: THE ONLY PATH TO CONTROLLABLE AGI Max Tegmark Department of Physics Institute for AI & Fundamental Interactions Massachusetts Institute of Technology Cambridge, MA 02139 Steve Omohundro Beneficial AI Research Palo Alto, CA 94301 September 6, 2023 ABSTRACT We describe a path to humanity safely thr... |
1-s2.0-S009286742300466X-main.pdf | Article RNA recoding in cephalopods tailors microtubule motor protein function Graphical abstract Highlights dRNA editing in squid specifies unique kinesin protein variants in different tissues dUnique kinesin variants are made acutely in response toseawater temperature dCold-specific kinesin variants have enhanced sin... |
2401.04056.pdf | A Minimaximalist Approach to Reinforcement Learning from Human Feedback Gokul Swamy1*, Christoph Dann2, Rahul Kidambi2, Zhiwei Steven Wu1, Alekh Agarwal2 Abstract We present Self-Play Preference Optimization (SPO), an algorithm for reinforcement learning from human feedback. Our approach is minimalist in that it does not requi... |
41586_2023_6924_MOESM1_ESM.pdf | Mathematical discoveries from program search with large language models In the format provided by the authors and unedited Nature | www.nature.com/natureSupplementary informationhttps://doi.org/10.1038/s41586-023-06924-6 Mathematical discoveries from program search with large language models (Supplementary material) ... |
2301.11325.pdf | MusicLM: Generating Music From Text Andrea Agostinelli* 1Timo I. Denk* 1 Zalan Borsos1Jesse Engel1Mauro Verzetti1Antoine Caillon2Qingqing Huang1Aren Jansen1 Adam Roberts1Marco Tagliasacchi1Matt Sharifi1Neil Zeghidour1Christian Frank1 Abstract We introduce MusicLM, a model for generating high-fidelity music from text de... |
QWEN_TECHNICAL_REPORT.pdf | QWEN TECHNICAL REPORT Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu,... |
2210.13382.pdf | Published as a conference paper at ICLR 2023 EMERGENT WORLD REPRESENTATIONS: EXPLORING A SEQUENCE MODEL TRAINED ON A SYNTHETIC TASK Kenneth Li Harvard University Aspen K. Hopkins Massachusetts Institute of Technology David Bau Northeastern University Fernanda Viégas Harvard University Hanspeter Pfister Harvard Universit... |
1809.04281.pdf | MUSIC TRANSFORMER: GENERATING MUSIC WITH LONG-TERM STRUCTURE Cheng-Zhi Anna Huang Ashish Vaswani Jakob Uszkoreit Noam Shazeer Ian Simon Curtis Hawthorne Andrew M. Dai Matthew D. Hoffman Monica Dinculescu Douglas Eck Google Brain ABSTRACT Music relies heavily on repetition to build structure and meaning. Self-reference... |
More Is Different Anderson.pdf | The reductionist hypothesis may still be a topic for controversy among philosophers, but among the great majority of active scientists I think it is accepted without question. The workings of our minds and bodies, and of all the animate or inanimate matter of which we have any detailed knowledge, are assumed t... |
2305.12132.pdf | Can Public Large Language Models Help Private Cross-device Federated Learning? Boxin Wang3, Yibo Jacky Zhang4, Yuan Cao2, Bo Li3, H. Brendan McMahan1, Sewoong Oh1, Zheng Xu1, Manzil Zaheer2 1Google Research,2Google Deepmind,3UIUC,4Stanford Abstract We study (differentially) private federated learning (FL) of language m... |
De novo protein design—From new structures to programmable functions.pdf | Leading Edge Perspective De novo protein design: From new structures to programmable functions Tanja Kortemme1,2,3,* 1Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, San Francisco, CA 94158, USA 2Quantitative Biosciences Institute, University of California, San Francisco, S... |
Low-rank Optimal Transport.pdf | Low-rank Optimal Transport: Approximation, Statistics and Debiasing Meyer Scetbon CREST, ENSAE meyer.scetbon@ensae.frMarco Cuturi Apple and CREST, ENSAE cuturi@apple.com Abstract The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed ... |
2304.02034.pdf | Effective Theory of Transformers at Initialization Emily Dinan, Sho Yaida, and Susan Zhang Meta AI Meta Platforms, Inc. We perform an effective-theory analysis of forward-backward signal propagation in wide and deep Transformers, i.e., residual neural networks with multi-head self-attention blocks and multilayer perceptro... |
2307.12950.pdf | RLCD: REINFORCEMENT LEARNING FROM CONTRAST DISTILLATION FOR LANGUAGE MODEL ALIGNMENT Kevin Yang1,2, Dan Klein1, Asli Celikyilmaz2, Nanyun Peng3, Yuandong Tian2 1UC Berkeley, 2Meta AI, 3UCLA {yangk,klein}@berkeley.edu,{aslic,yuandong}@meta.com,violetpeng@cs.ucla.edu ABSTRACT We propose Reinforcement Learning from Contrast Distil... |
Notes_and_Transcriptions.pdf | Academic Notes on Prof. George Karniadakis Lecture: Biophysics Informed Neural Nets for Multiscale and Multifidelity Modeling Generated from YouTube Transcript March 2023 1 Introduction Prof. George Karniadakis, a renowned applied mathematician from Brown University, discusses how modern machine learning tools can comp... |
2206.14486.pdf | Beyond neural scaling laws: beating power law scaling via data pruning Ben Sorscher1, Robert Geirhos2, Shashank Shekhar3, Surya Ganguli1,3, Ari S. Morcos3 equal contribution 1Department of Applied Physics, Stanford University 2University of Tübingen 3Meta AI (FAIR) Joint senior authors Abstract Widely observed neural scaling ... |
2305.16381.pdf | DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models Ying Fan,1,2, Olivia Watkins3, Yuqing Du3, Hao Liu3, Moonkyung Ryu1, Craig Boutilier1, Pieter Abbeel3,Mohammad Ghavamzadeh1,Kangwook Lee2,Kimin Lee,1 Equal technical contribution 1Google Research2University of Wisconsin-Madison3UC Berkeley Abst... |
langegabelriedmiller2011chapter.pdf | Batch Reinforcement Learning Sascha Lange, Thomas Gabel, and Martin Riedmiller Abstract Batch reinforcement learning is a subfield of dynamic programming-based reinforcement learning. Originally defined as the task of learning the best possible policy from a fixed set of a priori-known transition samples, the (batch) a... |
2210.15097.pdf | Contrastive Decoding: Open-ended Text Generation as Optimization Xiang Lisa Li1, Ari Holtzman2, Daniel Fried3, Percy Liang1, Jason Eisner4, Tatsunori Hashimoto1, Luke Zettlemoyer2,5, Mike Lewis5 Stanford University1, University of Washington2, Carnegie Mellon University3, Johns Hopkins University4, FAIR5 xlisali@stanfo... |
2401.12187.pdf | WARM: On the Benefits of Weight Averaged Reward Models Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret Google DeepMind Aligning large language models (LLMs) with human preferences through reinforcement learning (RLHF) can lead to reward hacking, where LLMs ... |
2305.16183.pdf | Passive learning of active causal strategies in agents and language models Andrew K. Lampinen Google DeepMind London, UK lampinen@deepmind.comStephanie C. Y. Chan Google DeepMind London, UK scychan@deepmind.comIshita Dasgupta Google DeepMind London, UK idg@deepmind.com Andrew J. Nam Stanford University Stanford, CA ajh... |
2001.08361.pdf | Scaling Laws for Neural Language Models Jared Kaplan Johns Hopkins University, OpenAI jaredk@jhu.eduSam McCandlish OpenAI sam@openai.com Tom Henighan OpenAI henighan@openai.comTom B. Brown OpenAI tom@openai.comBenjamin Chess OpenAI bchess@openai.comRewon Child OpenAI rewon@openai.com Scott Gray OpenAI scott@openai.comA... |
Beyond Human Data- Scaling Self-Training for Problem-Solving with Language Models.pdf | 2023-12-12 Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models Avi Singh1,*, John D Co-Reyes1,*, Rishabh Agarwal1,2,*, Ankesh Anand1, Piyush Patil1, Peter J. Liu1, James Harrison1, Jaehoon Lee1, Kelvin Xu1, Aaron Parisi1, Abhishek Kumar1, Alex Alemi1, Alex Rizkowsky1, Azade Nova1, Ben Adla... |
1801.10198.pdf | Published as a conference paper at ICLR 2018 GENERATING WIKIPEDIA BY SUMMARIZING LONG SEQUENCES Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Łukasz Kaiser, Noam Shazeer Google Brain Mountain View, CA {peterjliu,msaleh,epot,bgoodrich,rsepassi,lukaszkaiser,noam}@google.com ABSTRACT We show that ... |
2306.16410.pdf | Towards Language Models That Can See: Computer Vision Through the LENS of Natural Language William BerriosGautam MittalTristan Thrush Douwe KielaAmanpreet Singh Contextual AI;Stanford University Abstract We propose LENS , a modular approach for tackling computer vision problems by leveraging the power of large langua... |
Xist-ribonucleoproteins-promote-female-sex-biased-.pdf | Article Xist ribonucleoproteins promote female sex-biased autoimmunity Graphical abstract Highlights • Transgenic mouse models inducibly express Xist in male animals • Xist expression in males induces autoantibodies and autoimmune pathology • Xist in males reprograms T and B cell populations to female-like patterns • Autoan... |
2021.07.09.450648v2.full.pdf | Language models enable zero-shot prediction of the effects of mutations on protein function Joshua Meier1 2Roshan Rao3Robert Verkuil1Jason Liu1 Tom Sercu1Alexander Rives1 2 Abstract Modeling the effect of sequence variation on function is a fundamental problem for understanding and designing proteins. Since evolution e... |
supplementary gpsa.pdf | Supplementary Information for: Generative Capacity of Probabilistic Protein Sequence Models Francisco McGee Sandro Hauri Quentin Novinger Slobodan Vucetic Ronald M. Levy Vincenzo Carnevale Allan Haldane Supplementary Note 1 sVAE implementation The standard variational autoencoder (sVAE) is a deep, symmetrical, and unde... |
2401.18079.pdf | KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization Coleman Hooper chooper@berkeley.edu UC BerkeleySehoon Kim sehoonkim@berkeley.edu UC BerkeleyHiva Mohammadzadeh hiva@berkeley.edu UC Berkeley Michael W. Mahoney mmahoney@stat.berkeley.edu ICSI, LBNL, UC BerkeleyYakun Sophia Shao ysshao@b... |