| id | title | authors | year | doi |
|---|---|---|---|---|
1eac87104b72d4ab | Faithfulness vs. plausibility: On the (un)reliability of explanations from large language models | Chirag Agarwal; Sree Harsha Tanneru; Himabindu Lakkaraju | 2024 | |
d917cd028733ed65 | Using contents and containers to investigate problem solving strategies among toddlers | Zaid Alkouri | 2016 | |
119ba272169c311c | Generalized energy based models | Michael Arbel; Liang Zhou; Arthur Gretton | 2020 | |
2e98e4fd0f964692 | Residual energy-based models for text | Anton Bakhtin; Yuntian Deng; Sam Gross; Myle Ott; Marc'aurelio Ranzato; Arthur Szlam | 2021 | |
47b4affde903ebf9 | The reversal curse: LLMs trained on "A is B" fail to learn "B is A" | Lukas Berglund; Meg Tong; Max Kaufmann; Mikita Balesni; Asa Cooper Stickland; Tomasz Korbak; Owain Evans | 2023 | |
9ae49c0d9a21077d | A conceptual introduction to hamiltonian monte carlo | Michael Betancourt | 2017 | |
dc52ea9368ee8d85 | Energy-Based Reranking: Improving Neural Machine Translation Using Energy-Based Models | Sumanta Bhattacharyya; Amirmohammad Rooshenas; Subhajit Naskar; Simeng Sun; Mohit Iyyer; Andrew Mccallum | 2020 | 10.18653/v1/2021.acl-long.349 |
75fef152d6e5750f | Jones, Most Rev. Christopher, (born 3 March 1936), Bishop of Elphin, (RC), 1994–2014, now Bishop Emeritus | M Christopher; Bishop | 1994 | 10.1093/ww/9780199540884.013.14922 |
7e538a59d9429972 | GPT-NeoX-20B: An Open-Source Autoregressive Language Model | Sidney Black; Stella Biderman; Eric Hallahan; Quentin Anthony; Leo Gao; Laurence Golding; Horace He; Connor Leahy; Kyle Mcdonell; Jason Phang; Michael Pieler; Usvsn Sai Prashanth; Shivanshu Purohit; Laria Reynolds; Jonathan Tow; Ben Wang; Samuel Weinbach | 2022 | 10.18653/v1/2022.bigscience-1.9 |
e76f99d5c51d2c25 | AudioLM: A Language Modeling Approach to Audio Generation | Zalán Borsos; Raphaël Marinier; Damien Vincent; Eugene Kharitonov; Olivier Pietquin; Matt Sharifi; Dominik Roblek; Olivier Teboul; David Grangier; Marco Tagliasacchi; Neil Zeghidour | 2023 | 10.1109/taslp.2023.3288409 |
c51c3ff23608f05a | Transformer FLOPs | Adam Casson | 2023 | |
a570e740620f4202 | Neural ordinary differential equations | Ricky T. Q. Chen; Yulia Rubanova; Jesse Bettencourt; David Duvenaud | 2018 | |
a741c40e6bbb31b2 | Learning to stop while learning to predict | Xinshi Chen; Hanjun Dai; Yu Li; Xin Gao; Le Song | 2020 | |
76a24a292d4e682e | Scaling laws for predicting downstream performance in llms | Yangyi Chen; Binxuan Huang; Yifan Gao; Zhengyang Wang; Jingfeng Yang; Heng Ji | 2024 | |
c27f48416ddbd74a | Diffusion Policy: Visuomotor Policy Learning via Action Diffusion | Cheng Chi; Siyuan Feng; Yilun Du; Zhenjia Xu; Eric Cousineau; Benjamin Burchfiel; Shuran Song | 2023 | 10.15607/rss.2023.xix.026 |
0833d4b357c96849 | Training verifiers to solve math word problems | Karl Cobbe; Vineet Kosaraju; Mohammad Bavarian; Mark Chen; Heewoo Jun; Lukasz Kaiser; Matthias Plappert; Jerry Tworek; Jacob Hilton; Reiichiro Nakano | 2021 | |
dd2d242a41ee9f6a | Redpajama: an open dataset for training large language models | | 2023 | |
e051ed02ad600d82 | The complexity of theorem-proving procedures | Stephen A. Cook | 1971 | |
922f39f53dab2b95 | How to compute Hessian-vector products? In ICLR Blogposts | Mathieu Dagréou; Pierre Ablin; Samuel Vaiter; Thomas Moreau | 2024 | |
8304dd6378ccfafb | Introduction to latent variable energy-based models: a path toward autonomous machine intelligence | Anna Dawid; Yann Lecun | 2024 | 10.1088/1742-5468/ad292b |
e974954191a57916 | Universal transformers | Mostafa Dehghani; Stephan Gouws; Oriol Vinyals; Jakob Uszkoreit; Łukasz Kaiser | 2018 | |
a0d95644c329b8e2 | Causal diffusion transformers for generative modeling | Chaorui Deng; Deyao Zhu; Kunchang Li; Shi Guang; Haoqi Fan | 2024 | |
0faf6e244f850803 | Autoregressive Image Generation without Vector Quantization | Tianhong Li; Yonglong Tian; He Li; Mingyang Deng; Kaiming He | 2024 | 10.52202/079017-1797 |
7c1089cfe1e14322 | Bert: Pre-training of deep bidirectional transformers for language understanding | Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova | 2019 | |
818ba10ca0a1023b | Evidence for time‐variant decision making | Jochen Ditterich | 2006 | 10.1111/j.1460-9568.2006.05221.x |
3be13385afa95e80 | Recurrent neuronal circuits in the neocortex | Rodney J. Douglas; Kevan A. C. Martin | 2007 | |
1b8072188b06103f | Implicit generation and modeling with energy based models | Yilun Du; Igor Mordatch | 2019 | |
8256ef3542af02f7 | Improved contrastive divergence training of energy based models | Yilun Du; Shuang Li; Joshua Tenenbaum; Igor Mordatch | 2020 | |
31f6cbceea83bea8 | Learning iterative reasoning through energy minimization | Yilun Du; Shuang Li; Joshua Tenenbaum; Igor Mordatch | 2022 | |
fbfa2b5a68dd0bde | Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc | Yilun Du; Conor Durkan; Robin Strudel; Joshua B Tenenbaum; Sander Dieleman; Rob Fergus; Jascha Sohl-Dickstein; Arnaud Doucet; Will Sussman Grathwohl | 2023 | |
4fe5711e407289ed | Learning iterative reasoning through energy diffusion | Yilun Du; Jiayuan Mao; Joshua B Tenenbaum | 2024 | |
0699a709305b036a | Dual-process theories of reasoning: Contemporary issues and developmental applications | Jonathan St. B. T. Evans | 2011 | |
e9854d0cc8594b72 | Pytorch lightning | William A Falcon | 2019 | |
266c5e9e0b556b04 | Dual-process and dual-system theories of reasoning | Keith Frankish | 2010 | |
107be4cd1b44d62a | Mapping sentence form onto meaning: The syntax-semantic interface | Angela D. Friederici; Jürgen Weissenborn | 2007 | |
f5555c8731afc354 | Scaling up test-time compute with latent reasoning: A recurrent depth approach | Jonas Geiping; Sean Mcleish; Neel Jain; John Kirchenbauer; Siddharth Singh; Brian R Bartoldson; Bhavya Kailkhura; Abhinav Bhatele; Tom Goldstein | 2025 | |
69dbe9d1670d1f8d | Understanding the difficulty of training deep feedforward neural networks | Xavier Glorot; Yoshua Bengio | 2010 | |
aae05b4139e08cbe | Dissociation of Mechanisms Underlying Syllogistic Reasoning | Vinod Goel; Christian Buchel; Chris Frith; Raymond J Dolan | 2000 | 10.1006/nimg.2000.0636 |
f1663246dc62c528 | The knowledge complexity of interactive proof-systems | Shafi Goldwasser; Silvio Micali; Charles Rackoff | 2019 | 10.1145/3335741.3335750 |
bdaa0fd7d70f6b8b | Generative adversarial nets | Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio | 2014 | |
53632141927c5bf7 | The "something something" video database for learning and evaluating visual common sense | Raghav Goyal; Samira Ebrahimi Kahou; Vincent Michalski; Joanna Materzynska; Susanne Westphal; Heuna Kim; Valentin Haenel; Ingo Fruend; Peter Yianilos; Moritz Mueller-Freitag | 2017 | |
e7798aab2bca4e34 | The llama 3 herd of models | Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Ahmad Al-Dahle; Aiesha Letman; Akhil Mathur; Alan Schelten; Alex Vaughan | 2024 | |
1d515b73716952ac | Mamba: Linear-time sequence modeling with selective state spaces | Albert Gu; Tri Dao | 2023 | |
d7b06ad53db33060 | Long-context autoregressive video modeling with next-frame prediction | Yuchao Gu; Weijia Mao; Mike Zheng Shou | 2025 | |
db855cc92a579c77 | Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning | Daya Guo; Dejian Yang; Haowei Zhang; Junxiao Song; Ruoyu Zhang; Runxin Xu; Qihao Zhu; Shirong Ma; Peiyi Wang; Xiao Bi | 2025 | |
a375e8ea3dcf4c65 | Qualifying Restrictions on the Freedom of Expression of Women Who Have Experienced Violence as Gender Discrimination | Ketevan Bakhtadze | 2025 | 10.63410/9789941862557 |
892ec28136a4c099 | Training large language models to reason in a continuous latent space | Shibo Hao; Sainbayar Sukhbaatar; Dijia Su; Xian Li; Zhiting Hu; Jason Weston; Yuandong Tian | 2024 | |
6697b5db79afaae3 | Out-of-Distribution Detection with a Single Unconditional Diffusion Model | Alvin Heng; Harold Soh; Alexandre Thiery | 2024 | 10.52202/079017-1395 |
f95ada3ff042df51 | Denoising diffusion probabilistic models | Jonathan Ho; Ajay Jain; Pieter Abbeel | 2020 | |
9594e1d660521c16 | Training compute-optimal large language models | Jordan Hoffmann; Sebastian Borgeaud; Arthur Mensch; Elena Buchatskaya; Trevor Cai; Eliza Rutherford; Diego de Las Casas; Lisa Anne Hendricks; Johannes Welbl; Aidan Clark | 2022 | |
7ce1a1fd8bc84a6c | Energy transformer | Benjamin Hoover; Yuchen Liang; Bao Pham; Rameswar Panda; Hendrik Strobelt; Duen Horng Chau; Mohammed J. Zaki; Dmitry Krotov | 2024 | |
700c981212db543f | Neural networks and physical systems with emergent collective computational abilities | John J. Hopfield | 1982 | |
ca6c595319fc8a58 | Predicting emergent abilities with infinite resolution evaluation | Shengding Hu; Xin Liu; Xu Han; Xinrong Zhang; Chaoqun He; Weilin Zhao; Yankai Lin; Ning Ding; Zebin Ou; Guoyang Zeng | 2023 | |
b458b4068715f944 | T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation | Kaiyi Huang; Kaiyue Sun; Enze Xie; Zhenguo Li; Xihui Liu | 2023 | |
28a1aa9fb6a0b534 | Diffusion models for video prediction and infilling | Tobias Höppe; Arash Mehrjou; Stefan Bauer; Didrik Nielsen; Andrea Dittadi | 2022 | |
9121c98f8bb0db16 | Time matters: Scaling laws for any budget | Itay Inbar; Luke Sernau | 2024 | |
d09f6d17d4f66a54 | Scaling laws for downstream task performance of large language models | Berivan Isik; Natalia Ponomareva; Hussein Hazimeh; Dimitris Paparas; Sergei Vassilvitskii; Sanmi Koyejo | 2024 | |
19ce611200bee20d | PATRON: Perspective-Aware Multitask Model for Referring Expression Grounding Using Embodied Multimodal Cues | Md Mofijul Islam; Alexi Gladstone; Tariq Iqbal | 2023 | 10.1609/aaai.v37i1.25177 |
d117922890a7e2ce | Openai o1 system card | Aaron Jaech; Adam Kalai; Adam Lerer; Adam Richardson; Ahmed El-Kishky; Aiden Low; Alec Helyar; Aleksander Madry; Alex Beutel; Alex Carney | 2024 | |
735ba92505a794a6 | Planning with diffusion for flexible behavior synthesis | Michael Janner; Yilun Du; Joshua B Tenenbaum; Sergey Levine | 2022 | |
9b6ee1284dc49a77 | Less is More: Recursive Reasoning with Tiny Networks | Alexia Jolicoeur-Martineau | 2025 | 10.21203/rs.3.rs-8148771/v1 |
226fa0fb2c026cd7 | Thinking, fast and slow | Daniel Kahneman | 2011 | |
336a35cdbef2266d | Representativeness revisited: Attribute substitution in intuitive judgment | Daniel Kahneman; Shane Frederick | 2002 | |
ddaf29eaf8251d17 | Position: Llms can't plan, but can help planning in llm-modulo frameworks | Subbarao Kambhampati; Karthik Valmeekam; Lin Guan; Mudit Verma; Kaya Stechly; Siddhant Bhambri; Lucas Paul Saldyt; Anil B Murthy | 2024 | |
075823e757f3a655 | Scaling laws for neural language models | Jared Kaplan; Sam Mccandlish; Tom Henighan; Tom B Brown; Benjamin Chess; Rewon Child; Scott Gray; Alec Radford; Jeffrey Wu; Dario Amodei | 2020 | |
dfb4f0dd4eb2b831 | Auto-encoding variational bayes | Diederik P. Kingma; Max Welling | 2013 | |
2ef3345ace04127c | When can transformers compositionally generalize in-context? | Seijin Kobayashi; Simon Schug; Yassir Akram; Florian Redhardt; Johannes von Oswald; Razvan Pascanu; Guillaume Lajoie; João Sacramento | 2024 | |
11023d0814c04f19 | Transformers in speech processing: Overcoming challenges and paving the future | Siddique Latif; Syed Aun Muhammad Zaidi; Heriberto Cuayáhuitl; Fahad Shamshad; Moazzam Shoukat; Muhammad Usama; Junaid Qadir | 2023 | 10.1016/j.cosrev.2025.100768 |
fe5d4268c1e63d74 | A survey on the applications of zero-knowledge proofs | Ryan Lavin; Xuekai Liu; Hardhik Mohanty; Logan Norman; Giovanni Zaarour; Bhaskar Krishnamachari | 2024 | |
5c415052ac8cfe9c | A path towards autonomous machine intelligence version 0 | Yann Lecun | 2022 | |
24fdd1f9b58d78a2 | Energy-Based Models | Yann Lecun; Sumit Chopra; Raia Hadsell; Marc'aurelio Ranzato; Fu Jie Huang | 2006 | 10.7551/mitpress/7443.003.0014 |
79adba13ad1a0770 | Alias-Free Mamba Neural Operator | Wei Li; Xiaoxu Lin; Ni Xu; Xiaoqin Zhang; Jianwei Zheng; Junwei Zhu | 2024 | 10.52202/079017-1678 |
767aa7c73e8ceb62 | (Mis)fitting: A survey of scaling laws | Margaret Li; Sneha Kudugunta; Luke Zettlemoyer | 2025 | |
a0acffb68ce145b4 | Autoregressive image generation without vector quantization | Tianhong Li; Yonglong Tian; He Li; Mingyang Deng; Kaiming He | 2025 | |
34f8f65eb304090f | Learning Energy-Based Models in High-Dimensional Spaces with Multiscale Denoising-Score Matching | Zengyi Li; Yubei Chen; Friedrich T Sommer | 2023 | 10.3390/e25101367 |
aec689a39766608c | From system 1 to system 2: A survey of reasoning large language models | Zhong-Zhi Li; Duzhen Zhang; Ming-Liang Zhang; Jiaxin Zhang; Zengyan Liu; Yuxuan Yao; Haotian Xu; Junhao Zheng; Pei-Jie Wang; Xiuyi Chen | 2025 | |
0fc9d9ad7d3d9d69 | Let's verify step by step | Hunter Lightman; Vineet Kosaraju; Yura Burda; Harrison Edwards; Bowen Baker; Teddy Lee; Jan Leike; John Schulman; Ilya Sutskever; Karl Cobbe | 2023 | |
8ba3ad322af4dd08 | Implicit Reasoning in Transformers is Reasoning through Shortcuts | Tianhe Lin; Jian Xie; Siyu Yuan; Deqing Yang | 2025 | 10.18653/v1/2025.findings-acl.493 |
4f02b2008b9399bb | Microsoft COCO: Common Objects in Context | Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence Zitnick | 2014 | 10.1007/978-3-319-10602-1_48 |
71d27988cfedc971 | Video-t1: Test-time scaling for video generation | Fangfu Liu; Hanyang Wang; Yimo Cai; Kaiyan Zhang; Xiaohang Zhan; Yueqi Duan | 2025 | |
a992f9eb67b72f81 | Compositional Visual Generation with Composable Diffusion Models | Nan Liu; Shuang Li; Yilun Du; Antonio Torralba; Joshua B Tenenbaum | 2022 | 10.1007/978-3-031-19790-1_26 |
c7a845bf2cfc695f | Paving the Way to Eureka—Introducing “Dira” as an Experimental Paradigm to Observe the Process of Creative Problem Solving | Frank Loesche; Jeremy Goslin; Guido Bugmann | 2018 | 10.3389/fpsyg.2018.01773 |
12f90346fe20a323 | Scaling Inference Time Compute for Diffusion Models | Nanye Ma; Shangyuan Tong; Haolin Jia; Hexiang Hu; Yu-Chuan Su; Mingda Zhang; Xuan Yang; Yandong Li; Tommi Jaakkola; Xuhui Jia; Saining Xie | 2025 | 10.1109/cvpr52734.2025.00241 |
097d9e7eef0cb537 | Adaptive inference-time compute: Llms can predict if they can do better, even mid-generation | Rohin Manvi; Anikait Singh; Stefano Ermon | 2024 | |
1498743f0b1411f5 | Gsm-symbolic: Understanding the limitations of mathematical reasoning in large language models | Iman Mirzadeh; Keivan Alizadeh; Hooman Shahrokhi; Oncel Tuzel; Samy Bengio; Mehrdad Farajtabar | 2024 | |
5d2a7b52f413a2b8 | Do deep generative models know what they don't know? | Eric Nalisnick; Akihiro Matsukawa; Yee Whye Teh; Dilan Gorur; Balaji Lakshminarayanan | 2018 | |
7b2c8c3b73d6931f | Dual processing in reasoning: Two systems but one reasoner | Wim De Neys | 2006 | |
bf9fe954d743eb9f | Learning to reason with llms | OpenAI | 2024 | |
3f874761e46a3eae | Dinov2: Learning robust visual features without supervision | Maxime Oquab; Timothée Darcet; Théo Moutakanni; Huy Vo; Marc Szafraniec; Vasil Khalidov; Pierre Fernandez; Daniel Haziza; Francisco Massa; Alaaeldin El-Nouby; Mahmoud Assran; Nicolas Ballas; Wojciech Galuba; Russell Howes; Po-Yao Huang; Shang-Wen Li; Ishan Misra; Michael Rabbat; Vasu Sharma; Gabriel Synnaeve; Hu Xu; Hervé Jegou; Julien Mairal; Patrick Labatut; Armand Joulin; Piotr Bojanowski | 2023 | |
bf70f23dd3bcb8ab | Recurrent relational networks | Rasmus Palm; Ulrich Paquet; Ole Winther | 2018 | |
31bf76013a0da389 | Active Inference | Thomas Parr; Giovanni Pezzulo; Karl J Friston | 2022 | 10.7551/mitpress/12441.001.0001 |
44c9a1e759ec78a4 | Pytorch: An imperative style, high-performance deep learning library | Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga | 2019 | |
ad40f763a2e82371 | Fast exact multiplication by the hessian | Barak A. Pearlmutter | 1994 | |
f09dace0fc88aee8 | Scalable Diffusion Models with Transformers | William Peebles; Saining Xie | 2023 | 10.1109/iccv51070.2023.00387 |
f7178753fd671bef | Transformer uncertainty estimation with hierarchical stochastic attention | Jiahuan Pei; Cheng Wang; György Szarvas | 2022 | |
24ffd3fc4d1192f5 | The fineweb datasets: Decanting the web for the finest text data at scale | Guilherme Penedo; Hynek Kydlíček; Anton Lozhkov; Margaret Mitchell; Colin A Raffel; Leandro Von Werra; Thomas Wolf | 2024 | |
119da46d6798b66b | RWKV: Reinventing RNNs for the Transformer Era | Bo Peng; Eric Alcaide; Quentin Anthony; Alon Albalak; Samuel Arcadinho; Stella Biderman; Huanqi Cao; Xin Cheng; Michael Chung; Leon Derczynski; Xingjian Du; Matteo Grella; Kranthi Gv; Xuzheng He; Haowen Hou; Przemyslaw Kazienko; Jan Kocon; Jiaming Kong; Bartłomiej Koptyra; Hayden Lau; Jiaju Lin; Krishna Sri Ipsit Mantri; Ferdinand Mom; Atsushi Saito; Guangyu Song; Xiangru Tang; Johan Wind; Stanisław Woźniak; Zhenyuan Zhang; Qinghua Zhou; Jian Zhu; Rui-Jie Zhu | 2023 | 10.18653/v1/2023.findings-emnlp.936 |
59fab88661af3697 | Uncertainty and stress: Why it causes diseases and how it is mastered by the brain | Achim Peters; Bruce S Mcewen; Karl J Friston | 2017 | 10.1016/j.pneurobio.2017.05.004 |
17c2a4cd24811659 | Improving language understanding by generative pre-training | Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever | 2018 | |
578147b8e3900f56 | Language models are unsupervised multitask learners | Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever | 2019 | |